Welcome to today’s CompTIA Security+ practice test!

Today’s practice test is based on subdomain 4.4 (Explain security alerting and monitoring concepts and tools) from the CompTIA Security+ SY0-701 objectives.

This beginner-level practice test is inspired by the CompTIA Security+ (SY0-701) exam and is designed to help you reinforce key cybersecurity concepts on a daily basis.

These questions are not official exam questions, but they reflect topics and scenarios relevant to the Security+ certification. Use them to test your knowledge, identify areas for improvement, and build daily cybersecurity habits.

Click the button below to start today’s practice exam.


#1. A SOC analyst observes repeated failed authentication attempts from multiple geographic regions within a 10-minute period. The SIEM generates a “possible credential stuffing attack” alert. Which action should the analyst take FIRST?

#2. A security team needs continuous traffic flow visibility, including top talkers, application usage, and bandwidth statistics. Which tool BEST provides this capability?

#3. A company is transitioning to an agentless vulnerability scanner for its cloud-based infrastructure. Which advantage does this approach provide?

#4. A DLP system flags outbound email traffic containing customer credit card numbers. What is the PRIMARY goal of using DLP in this case?

#5. An organization wants a single console to collect logs from firewalls, servers, and endpoints for correlation and reporting. Which technology BEST meets this requirement?

#6. During a vulnerability scan, numerous “false positives” are reported. Which action BEST addresses this issue?

#7. A SOC team notices their SIEM is triggering multiple alerts on the same benign event type, causing alert fatigue. Which solution BEST mitigates this?

#8. A company wants to ensure antivirus signatures are up to date across all endpoints. What is the BEST monitoring method?

#9. A cloud environment generates a high volume of logs from ephemeral containers. Which monitoring approach is MOST suitable?

#10. A SOC team needs to store log data for one year for compliance purposes but reduce the cost of SIEM operations. What is the BEST approach?

Note: CompTIA and Security+ are registered trademarks of CompTIA. This content is not affiliated with or endorsed by CompTIA.

To view CompTIA Security+ practice tests on other days, click here. To view answers and explanations for today’s questions, expand the Answers accordion below.

Answers

#1. Correct answer: C

A SOC analyst observes repeated failed authentication attempts from multiple geographic regions within a 10-minute period. The SIEM generates a “possible credential stuffing attack” alert. Which action should the analyst take FIRST?


A. Disable all affected user accounts: This is a containment action. While it might be a necessary step if the attack is confirmed and ongoing, taking it as the first action without validation could lock out legitimate users due to a false positive or misinterpretation of the alert.

B. Increase the password complexity policy: This is a long-term preventative measure that, while good practice, does not address an active or possible credential stuffing attack immediately. It’s a control change, not an incident response step.

C. Validate the alert and confirm the source
In incident response, the first step after receiving an alert is always validation. An analyst needs to confirm if the alert is a true positive (i.e., the attack is actually happening) and gather more context. This involves examining logs, confirming the source IPs, checking the targeted accounts, and looking for other correlated events. Acting without validation could lead to unnecessary disruption (like disabling accounts for a false alarm).

D. Report the incident to law enforcement: Reporting to law enforcement is a step that comes much later in the incident response lifecycle, usually after the incident is confirmed, contained, eradicated, and thoroughly investigated, and if legal or regulatory requirements mandate it. It’s not the first action an analyst takes upon receiving an initial alert.
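
The validation workflow described under option C can be sketched in a few lines. This is an illustrative triage script, not any vendor’s SIEM API; the event fields (`ts`, `user`, `src_country`, `result`) and the thresholds are assumptions:

```python
from collections import defaultdict
from datetime import timedelta

def flag_credential_stuffing(events, window_minutes=10, min_regions=3, min_failures=20):
    """Return accounts whose recent failed logins span many regions.

    `events` is a hypothetical list of dicts: {"ts": datetime, "user": str,
    "src_country": str, "result": "fail" | "success"} - the field names are
    illustrative, not a real SIEM schema.
    """
    failures = [e for e in events if e["result"] == "fail"]
    if not failures:
        return {}
    # Look only at the trailing time window, anchored on the newest failure.
    end = max(e["ts"] for e in failures)
    start = end - timedelta(minutes=window_minutes)
    recent = [e for e in failures if e["ts"] >= start]

    by_user = defaultdict(list)
    for e in recent:
        by_user[e["user"]].append(e)

    suspicious = {}
    for user, evs in by_user.items():
        regions = {e["src_country"] for e in evs}
        # Many failures from many distinct regions in a short window is the
        # pattern the alert claims; confirm it before containing anything.
        if len(evs) >= min_failures and len(regions) >= min_regions:
            suspicious[user] = {"failures": len(evs), "regions": sorted(regions)}
    return suspicious
```

Only accounts that actually match the pattern are surfaced, which is exactly the confirmation step that should precede disabling accounts.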

#2. Correct answer: B

A security team needs continuous traffic flow visibility, including top talkers, application usage, and bandwidth statistics. Which tool BEST provides this capability?

A. SIEM (Security Information and Event Management): A SIEM collects and correlates security logs and event data from various sources to detect and alert on security incidents. While it might ingest some flow data, its primary function is security event correlation and logging, not detailed, continuous traffic flow visibility and statistics in the way a dedicated flow analyzer does.

B. NetFlow analyzer
NetFlow (and similar flow technologies like IPFIX, sFlow) is specifically designed to collect and export IP network traffic flow information. A NetFlow analyzer processes this data to provide detailed insights into traffic flow, top talkers (who is communicating the most), application usage (based on ports), bandwidth statistics, and communication patterns. This directly addresses the need for continuous traffic flow visibility.

C. Antivirus: Antivirus software detects, prevents, and removes malicious software on endpoints. It has no capability for analyzing network traffic flow, top talkers, or bandwidth statistics across the network.

D. SCAP (Security Content Automation Protocol): SCAP is a suite of standards for automating vulnerability management, compliance checking, and security policy enforcement. It’s used for security automation and assessment, not for real-time network traffic flow analysis.
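
To illustrate the kind of aggregation a flow analyzer performs, here is a minimal sketch that derives top talkers and per-port bandwidth from flow records. The record fields (`src`, `dst`, `bytes`, `dport`) are simplified assumptions, not the actual NetFlow v5 or IPFIX export format:

```python
from collections import Counter

def top_talkers(flows, n=3):
    """Rank source hosts by total bytes sent across all their flows."""
    usage = Counter()
    for f in flows:
        usage[f["src"]] += f["bytes"]
    return usage.most_common(n)

def bandwidth_by_port(flows):
    """Approximate application usage by grouping bytes per destination port."""
    usage = Counter()
    for f in flows:
        usage[f["dport"]] += f["bytes"]
    return dict(usage)
```

A real analyzer does the same grouping continuously over exported flow records, which is why it answers “who talks the most, over which applications, using how much bandwidth” directly.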

#3. Correct answer: B

A company is transitioning to an agentless vulnerability scanner for its cloud-based infrastructure. Which advantage does this approach provide?


A. Real-time continuous monitoring without credential use: While agentless scanners can operate without persistent agents, they typically still require credentials (e.g., API keys, SSH keys, or cloud platform roles) to access and scan cloud resources. Real-time continuous monitoring is often better achieved with agents or integrated cloud security posture management (CSPM) tools.

B. Easier deployment with reduced endpoint impact
Agentless scanners do not require software agents to be installed on each target system. This significantly simplifies deployment, especially in large or dynamic cloud environments where spinning up and tearing down instances is frequent. It also reduces the impact on the endpoint (i.e., the cloud instance) as there’s no agent consuming resources or potentially causing conflicts.

C. Detection of zero-day exploits automatically: Agentless scanners (like most other vulnerability scanners) are primarily effective at detecting known vulnerabilities by checking configurations, patch levels, and software versions against a database of known flaws. They are generally not designed to automatically detect previously unknown (zero-day) exploits.

D. Reduced need for SIEM correlation: Vulnerability scan findings, whether from agent-based or agentless scanners, are critical data points for a SIEM (Security Information and Event Management) system. A SIEM uses these findings for correlation with other security events and for overall risk assessment. Agentless scanning does not reduce the need for SIEM; if anything, it provides valuable data for it.

#4. Correct answer: C

A DLP system flags outbound email traffic containing customer credit card numbers. What is the PRIMARY goal of using DLP in this case?


A. Detect and block malware attachments: While some security solutions integrated with email might do this, it’s the primary role of antivirus/anti-malware solutions and email security gateways, not the core function of DLP. DLP focuses on the content of the data for sensitivity, not malware itself.

B. Monitor for unauthorized software installations: This is a function of endpoint security solutions, asset management systems, or patch management tools, not DLP. DLP is concerned with data movement.

C. Prevent data exfiltration of sensitive information
DLP (Data Loss Prevention) systems are specifically designed to prevent sensitive information from leaving the organization’s control. When a DLP system flags outbound email with credit card numbers, its primary goal is to stop that sensitive data (PII, financial data, etc.) from being leaked or “exfiltrated” outside the secure perimeter, either accidentally or maliciously.

D. Improve wireless access control: Wireless access control (e.g., WPA3, 802.1X) manages who can connect to the wireless network. This is completely unrelated to preventing sensitive data from leaving via email.
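
The content inspection described above can be approximated with pattern matching plus a Luhn checksum to weed out random digit strings. This is a minimal sketch assuming plain-text email bodies; real DLP products use far richer detection rules:

```python
import re

def luhn_valid(number: str) -> bool:
    """Luhn checksum, used to cut down false matches on long digit runs."""
    digits = [int(d) for d in number][::-1]
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# 13-16 digits, optionally separated by spaces or hyphens.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def contains_card_number(text: str) -> bool:
    """Flag text that appears to contain a payment card number."""
    for match in CARD_RE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            return True
    return False
```

An outbound message that trips this kind of check is what the DLP system blocks or quarantines to stop exfiltration.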

#5. Correct answer: A

An organization wants a single console to collect logs from firewalls, servers, and endpoints for correlation and reporting. Which technology BEST meets this requirement?

A. SIEM (Security Information and Event Management)
A SIEM system is specifically designed to collect, aggregate, correlate, and analyze security logs and event data from various sources (like firewalls, servers, endpoints, applications, etc.) across an entire organization. It provides a central console for security monitoring, incident detection, and reporting, precisely matching the described requirement.

B. SNMP traps: SNMP (Simple Network Management Protocol) traps are notifications sent by network devices to a management station when specific events occur. While they send some alert data, they are not a comprehensive solution for collecting, correlating, and reporting on all types of logs from diverse sources like a SIEM.

C. NetFlow: NetFlow (or similar technologies) collects network traffic flow data, providing insights into who is communicating with whom, when, and over what ports. While valuable for network visibility, it focuses on traffic flows, not comprehensive security logs from operating systems, applications, or security devices for correlation and reporting across the entire IT landscape.

D. Antivirus: Antivirus software is used to detect, prevent, and remove malicious software on individual endpoints. It is an endpoint security tool and does not provide a centralized log collection, correlation, and reporting console for an entire organization’s diverse IT assets.
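
A core part of the aggregation a SIEM performs is normalizing records from different products into one shared schema before correlation. A minimal sketch; the source names and input field names are invented for illustration:

```python
def normalize(source, raw):
    """Map records from different log producers onto one common schema so a
    single console can correlate them. Field names are illustrative only."""
    if source == "firewall":
        return {"host": raw["device"], "action": raw["action"], "ts": raw["time"]}
    if source == "endpoint":
        return {"host": raw["hostname"], "action": raw["event"], "ts": raw["timestamp"]}
    raise ValueError(f"unknown source: {source}")
```

Once every record carries the same `host`, `action`, and `ts` fields, correlation rules and reports can run over firewall, server, and endpoint data uniformly.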

#6. Correct answer: D

During a vulnerability scan, numerous “false positives” are reported. Which action BEST addresses this issue?

A. Disable scanning on sensitive systems: Disabling scans on sensitive systems is counterproductive and highly insecure. These systems often contain critical data and services, making them prime targets for attackers. Blindly disabling scans increases risk.

B. Rely only on manual penetration testing: Manual penetration testing is invaluable for finding complex, logical vulnerabilities that automated scanners might miss. However, it’s time-consuming, expensive, and not scalable for routine, comprehensive vulnerability identification across a large environment. It complements, but does not replace, automated scanning.

C. Remove the scanner from production: Removing the scanner means abandoning automated vulnerability identification, leaving the organization blind to new or existing flaws. This is a highly irresponsible action that would drastically increase the attack surface and risk.

D. Use SCAP benchmarks for tuning
SCAP (Security Content Automation Protocol) provides standardized methods for expressing security baselines, vulnerability criteria, and configuration settings. By using SCAP benchmarks, you can tune your vulnerability scanner to align with specific, well-defined security policies and configurations relevant to your environment. This helps the scanner understand what truly constitutes a deviation or vulnerability for your systems, significantly reducing the number of irrelevant “false positives” while still accurately identifying real issues.
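
As a loose analogy to how SCAP benchmark tailoring deselects checks that do not apply to an environment, scan findings can be filtered against a maintained exception list. This is only an illustration of the idea; real tailoring is expressed in the benchmark profile itself, and the rule ids below are invented:

```python
def tune_findings(findings, benchmark_exceptions):
    """Drop findings whose rule ids are marked not-applicable for this
    environment (e.g., a compensating control is in place), mimicking how
    benchmark tailoring suppresses checks that would otherwise be noise."""
    return [f for f in findings if f["rule_id"] not in benchmark_exceptions]
```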

#7. Correct answer: B

A SOC team notices their SIEM is triggering multiple alerts on the same benign event type, causing alert fatigue. Which solution BEST mitigates this?


A. Increase SIEM log retention: Increasing log retention means storing more data for longer. While important for forensics, it does not address the issue of too many alerts being generated from that data. It might even exacerbate the problem if more data leads to more unprocessed alerts.

B. Implement alert tuning and correlation rules
Alert fatigue occurs when security analysts are overwhelmed by a high volume of false positive or low-priority alerts, leading to missed real threats. The best way to mitigate this is by tuning the SIEM’s alert rules to be more precise, reducing noise. Correlation rules are also crucial; they combine multiple, seemingly benign individual events into a single, more meaningful alert when a specific pattern or sequence indicative of a true threat is observed. This reduces the sheer volume of alerts while making the remaining ones more actionable.

C. Disable SIEM alerts entirely for that event: This is a dangerous overcorrection. Disabling alerts entirely for a specific event type, even if currently benign, means potentially missing legitimate threats in the future. The goal is to reduce false positives and noise, not to eliminate visibility.

D. Increase analyst staffing levels: While more staff might temporarily alleviate the workload, it’s not a sustainable or efficient solution for alert fatigue caused by poor SIEM configuration. It’s an expense that doesn’t fix the underlying problem of inefficient alert generation; instead, it just throws more people at it. Proper tuning (Option B) makes the existing staff more effective.
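
The tuning idea above can be sketched as a suppression pass that folds repeated alerts of the same type and host into one. Illustrative only; the alert fields and the five-minute window are assumptions:

```python
def suppress_duplicates(alerts, window_seconds=300):
    """Collapse repeated alerts of the same (rule, host) pair within a window.

    `alerts` is a hypothetical time-sorted list of dicts:
    {"ts": epoch_seconds, "rule": str, "host": str}. Returns the deduplicated
    alerts, each annotated with how many raw alerts it represents.
    """
    last_seen = {}
    out = []
    for a in alerts:
        key = (a["rule"], a["host"])
        if key in last_seen and a["ts"] - last_seen[key]["ts"] < window_seconds:
            last_seen[key]["count"] += 1  # fold into the existing alert
            continue
        entry = {**a, "count": 1}
        last_seen[key] = entry
        out.append(entry)
    return out
```

Analysts then see one actionable alert carrying a repetition count instead of a flood of identical ones.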

#8. Correct answer: A

A company wants to ensure antivirus signatures are up to date across all endpoints. What is the BEST monitoring method?


A. SIEM alerting on signature update events
Most enterprise antivirus solutions log events related to signature updates (e.g., successful update, failed update, update status). A SIEM (Security Information and Event Management) system can collect these logs from all endpoints. By configuring the SIEM to alert on missing or failed signature updates, the security team gains centralized and automated real-time visibility into the update status across the entire environment, which is the best monitoring method for ensuring they are up to date.

B. SNMP polling of workstations: While SNMP (Simple Network Management Protocol) could potentially be used to query some basic information from managed devices, relying solely on polling for detailed antivirus signature status across all workstations is often less efficient, less comprehensive, and less granular than collecting specific event logs via a SIEM. Antivirus software is designed to generate specific logs for this purpose.

C. Manual checks on each device: This is highly inefficient, impractical, and prone to human error, especially in environments with more than a handful of endpoints. It does not provide continuous or real-time monitoring.

D. DLP policy enforcement: DLP (Data Loss Prevention) systems are designed to prevent sensitive data from leaving the organization. They have no direct role in monitoring or enforcing antivirus signature updates.
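
The centralized check described under option A can be sketched as a query over the last-update times the SIEM has collected from endpoint logs. The data layout is a simplification; real AV products emit richer update events:

```python
from datetime import datetime, timedelta

def stale_endpoints(statuses, max_age_hours=24, now=None):
    """Return endpoints whose last signature update is older than the threshold.

    `statuses` is a hypothetical mapping of hostname -> last update datetime,
    as a SIEM might materialize it from AV update events; not a product schema.
    """
    now = now or datetime.now()
    cutoff = now - timedelta(hours=max_age_hours)
    return sorted(h for h, ts in statuses.items() if ts < cutoff)
```

An alert rule built on this kind of query gives the team a single, automated view of which machines missed their updates.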

#9. Correct answer: D

A cloud environment generates a high volume of logs from ephemeral containers. Which monitoring approach is MOST suitable?


A. Static log retention on each container: This is unsuitable because containers are ephemeral. When a container is destroyed, its local logs are lost, making it impossible for post-mortem analysis or incident investigation.

B. Only scanning containers quarterly: Scanning quarterly is far too infrequent for a dynamic container environment, especially one generating a high volume of logs. New containers are spun up and down constantly, and vulnerabilities or malicious activity could occur and disappear long before a quarterly scan. This provides very limited real-time visibility or security.

C. Disabling logging for containers to save resources: Disabling logging completely is a severe security risk. Without logs, there’s no way to audit activity, troubleshoot issues, or investigate security incidents. The resource savings would be vastly outweighed by the loss of critical visibility and accountability.

D. Forwarding logs to a centralized SIEM
Ephemeral containers (short-lived, frequently created and destroyed) make it impractical to store logs directly on the container itself or rely on infrequent scanning. The most suitable approach is to forward their logs in real-time to a centralized SIEM (Security Information and Event Management) system. The SIEM can then aggregate, store, analyze, and correlate these logs, providing continuous visibility and enabling detection of security incidents even after the container has been decommissioned.
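
The forwarding pattern can be sketched as a tiny UDP log shipper. The collector address and plain-text format are placeholders; production agents (e.g., syslog forwarders) add TLS, batching, and container metadata:

```python
import socket

def forward_logs(lines, collector=("127.0.0.1", 5514)):
    """Ship log lines off the container to a central collector as UDP
    datagrams, so the records survive after the container is destroyed.
    The address and wire format here are illustrative placeholders."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sent = 0
    for line in lines:
        sock.sendto(line.encode("utf-8"), collector)
        sent += 1
    sock.close()
    return sent
```

Because the logs leave the container as they are produced, the SIEM retains them for correlation and investigation long after the container itself is gone.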

#10. Correct answer: A

A SOC team needs to store log data for one year for compliance purposes but reduce the cost of SIEM operations. What is the BEST approach?

A. Use log archiving solutions
Log archiving solutions (e.g., cold storage in cloud, cheaper storage tiers) allow organizations to move older, less frequently accessed log data from expensive, active SIEM storage to more cost-effective, long-term storage while still meeting compliance retention requirements. This reduces the operational cost of the SIEM while ensuring logs are available if needed for audits or investigations.

B. Delete older logs every month: This directly contradicts the compliance requirement to store logs for one year. Deleting logs would lead to non-compliance.

C. Disable verbose logging: Disabling verbose logging reduces the volume of logs collected, which can save costs. However, it also reduces the detail of information available for security analysis and forensics, potentially hindering incident response or compliance auditing. While it saves cost, it compromises data granularity, and may not be the best approach if full detail is needed.

D. Forward logs only from firewalls: This would severely limit the visibility into the entire IT environment. Logs from servers, endpoints, applications, and other security devices are crucial for comprehensive security monitoring and compliance. Forwarding only firewall logs would leave significant blind spots.
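
The tiering idea can be sketched as a split of a log index into hot and archive sets by age. The batch ids and the 30-day hot window are illustrative assumptions:

```python
from datetime import datetime, timedelta

def partition_logs(log_index, hot_days=30, now=None):
    """Split an index of log batches into hot (active SIEM) and archive tiers.

    `log_index` is a hypothetical mapping of batch id -> datetime written.
    Batches older than `hot_days` go to cheap archive storage while staying
    retrievable for the one-year compliance window.
    """
    now = now or datetime.now()
    cutoff = now - timedelta(days=hot_days)
    hot, archive = {}, {}
    for batch, written in log_index.items():
        (hot if written >= cutoff else archive)[batch] = written
    return hot, archive
```

Recent data stays searchable in the SIEM, while older batches move to low-cost storage that still satisfies the retention requirement.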