Welcome to today’s practice test!
Today’s practice test is based on Domain 3.1 (Compare and contrast security implications of different architecture models) from the CompTIA Security+ SY0-701 objectives.
This beginner-level practice test is inspired by the CompTIA Security+ (SY0-701) exam and is designed to help you reinforce key cybersecurity concepts on a daily basis.
These questions are not official exam questions, but they reflect topics and scenarios relevant to the Security+ certification. Use them to test your knowledge, identify areas for improvement, and build daily cybersecurity habits.
Click the button below to start today’s practice exam.
#1. A security architect is designing a hybrid cloud deployment that includes both on-premises and public cloud resources. Which of the following should be clearly defined in the responsibility matrix?
#2. A company is transitioning to a serverless architecture. What is one primary security implication of this model?
#3. Which model allows applications to be segmented into lightweight, independently deployable services?
#4. A financial firm needs to ensure critical systems are isolated from external networks. What’s the BEST approach?
#5. Which technology abstracts the control of network devices to enable centralized management?
#6. An organization is deploying IoT sensors. Which is the MOST significant security concern?
#7. What is a key security benefit of containerization over traditional virtualization?
#8. An administrator is reviewing infrastructure design with a focus on reducing single points of failure. What should they prioritize?
#9. A SCADA system in a power plant is being evaluated. What’s the PRIMARY security challenge with this architecture?
#10. A development team uses IaC to deploy infrastructure. What is a critical risk of this approach?
Note: CompTIA and Security+ are registered trademarks of CompTIA. This content is not affiliated with or endorsed by CompTIA.
To view CompTIA Security+ practice tests on other days, click here. To view answers and explanations for today’s questions, expand the Answers accordion below.
Answers
| Number | Answer | Explanation |
|---|---|---|
| 1 | C | In a hybrid cloud, it’s crucial to clearly define who is responsible for what security tasks (e.g., cloud provider for infrastructure, customer for data and applications) across both on-premises and cloud environments to avoid security gaps. Data sovereignty is a legal consideration dictating data location, not a definition of security roles in a responsibility matrix. Data classification is the process of categorizing data sensitivity, which informs security controls, but isn’t what a responsibility matrix defines. Authentication mechanisms are security tools/methods. The matrix defines who is responsible for managing them, not the mechanisms themselves. |
| 2 | C | In a serverless model, the underlying infrastructure and runtime environment are fully managed by the cloud provider. This leads to limited visibility for the customer into the actual server, OS, and often the full application execution environment, making traditional monitoring and logging challenging. Serverless reduces the customer’s attack surface related to managing servers because the servers are abstracted away and managed by the cloud provider. Customers have no direct control over patching the underlying OS in a serverless model. This is entirely the cloud provider’s responsibility. Serverless architecture is cloud-native and does not require on-premises hardware firewalls for its security. Instead, cloud-native security controls are used. |
| 3 | C | Microservices is an architectural model where an application is broken down into small, independent, lightweight services, each running in its own process and communicating via APIs. This allows for independent deployment and scaling. While virtual machines can host applications, they are typically heavier and encapsulate an entire OS, not just a lightweight, independently deployable service. A monolithic architecture is the opposite, where an entire application is built as a single, indivisible unit. An RTOS (Real-Time Operating System) is a specialized operating system designed for time-critical applications, unrelated to application segmentation models. |
| 4 | B | Physical air-gapping is the strongest isolation method, involving a complete physical separation of a system or network from all other networks (including external ones). This creates a highly secure environment where data cannot move in or out without manual intervention, making it ideal for extremely critical systems like those in a financial firm. VLANs (Virtual Local Area Networks) segment networks logically, but they still share the same physical infrastructure and rely on routing rules, making them less secure than physical air-gapping for complete isolation from external networks. Software firewalls provide network security at the host or application level, but they still operate within a connected network and are not designed for complete physical isolation from external networks. Containerization isolates applications at the operating system level, providing process and resource isolation, but it does not physically separate systems from networks. |
| 5 | B | SDN (Software-Defined Networking) abstracts the control of network devices (like routers and switches) from their underlying hardware, allowing network administrators to centrally manage and program the network through software, rather than configuring individual devices. A NGFW (Next-Generation Firewall) is a security device that filters traffic, not a technology for abstracting and centralizing overall network device control. A VPN (Virtual Private Network) creates secure, encrypted connections over a network, but it’s unrelated to centralized network device management. A DMZ (Demilitarized Zone) is a network segment used to expose public-facing services while isolating the internal network. It’s not a technology for abstracting control. |
| 6 | A | The MOST significant security concern for IoT sensors is frequently the use of default credentials (e.g., “admin/password”). These are often unchangeable or commonly known, making devices extremely vulnerable to unauthorized access and compromise. IoT sensors typically have low compute capacity, which can limit complex security features but isn’t the primary concern itself. While centralized control can have its own risks if mismanaged, it’s often a necessary and potentially beneficial aspect of managing large IoT deployments, not the most significant security vulnerability of the sensors. Hypervisor exploits target virtualized environments. IoT sensors are usually physical, embedded devices, making this concern irrelevant to their deployment. |
| 7 | D | Containerization’s design (sharing the host OS kernel, bundling only necessary components) inherently leads to reduced attack surface (smaller images, fewer installed packages than a full VM OS). This, combined with the ability to create reproducible environments from definitions (like Dockerfiles), improves consistency, reduces configuration drift, and allows for faster, more reliable deployment of securely built images. Traditional virtualization (VMs) provides stronger OS-level isolation because each VM has its own kernel. Containers, on the other hand, share the host OS kernel, offering less isolation at the OS level compared to VMs. While container orchestration platforms (like Kubernetes) offer robust network controls, the level of granularity isn’t inherently superior to what advanced virtualization platforms can provide, nor is it the primary distinguishing security benefit. Containerization doesn’t necessarily mean a faster patch cycle for the host OS itself. The benefit is typically in the faster “patching” (rebuilding and redeploying) of the application and its dependencies within the container via immutable images. |
| 8 | C | High availability (HA) architecture directly focuses on reducing single points of failure by incorporating redundancy, failover mechanisms, and resilient design across all infrastructure components (hardware, software, network, power) to ensure continuous operation. Air-gapping provides extreme security isolation from external networks but is a specific security control, not the primary method for ensuring overall operational continuity by reducing internal single points of failure across an entire infrastructure. Microservices reduce single points of failure at the application level by breaking down monolithic applications. While beneficial for application resilience, “High availability architecture” is a broader term encompassing all infrastructure layers. Logical segmentation divides networks for security and containment (limiting impact), but it doesn’t inherently build in the redundancy or failover mechanisms primarily aimed at reducing single points of failure for operational uptime. |
| 9 | B | The PRIMARY security challenge for SCADA systems is often limited patch availability. These critical operational technology (OT) systems frequently run on legacy hardware and software that cannot be updated frequently or easily due to stability concerns, vendor support, and the need for continuous operation, leaving them vulnerable. While some SCADA systems may use proprietary protocols, the lack of strong encryption or secure communication overall is a broader issue. “Non-standard” isn’t the most significant concern. SCADA systems are typically designed for very restricted access. “Broad user access” would be a severe misconfiguration rather than an inherent architectural challenge. Real-time needs are a constraint that contributes to limited patching. However, the unpatched systems themselves are the primary security challenge, not the real-time needs directly. |
| 10 | D | With IaC (Infrastructure as Code), infrastructure definitions are written as code. A misconfiguration within this code (e.g., an open firewall port, incorrect access policy, insecure default settings) will be automatically, consistently, and rapidly deployed across all environments, potentially at scale, creating a widespread security vulnerability. IaC inherently promotes scalability by allowing infrastructure to be spun up or down rapidly and consistently. While using a specific cloud provider’s IaC tool (like CloudFormation for AWS) can lead to vendor lock-in, IaC itself (especially with tools like Terraform) aims to reduce vendor lock-in by providing multi-cloud capabilities. IaC is fundamentally designed for enabling and enhancing automation, not limiting it. |
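To make the IaC risk from question 10 concrete, here is a minimal sketch of how a misconfiguration in infrastructure code can be detected before deployment. This example assumes a simplified dict-based representation of an IaC template; real IaC formats (Terraform HCL, CloudFormation YAML) differ, and dedicated scanners exist for them, so treat this only as an illustration of the idea.

```python
# Minimal sketch: scanning a simplified, hypothetical dict-based "IaC
# template" for a common misconfiguration (an ingress rule open to the
# entire internet). Because IaC deploys consistently and at scale, a
# single bad rule like this would be replicated across every environment.

def find_open_ingress(template: dict) -> list[str]:
    """Return names of firewall rules that allow inbound traffic from anywhere."""
    findings = []
    for rule in template.get("firewall_rules", []):
        if rule.get("direction") == "ingress" and rule.get("source") == "0.0.0.0/0":
            findings.append(rule["name"])
    return findings

template = {
    "firewall_rules": [
        {"name": "allow-ssh-anywhere", "direction": "ingress",
         "port": 22, "source": "0.0.0.0/0"},    # misconfiguration: world-open SSH
        {"name": "allow-https-internal", "direction": "ingress",
         "port": 443, "source": "10.0.0.0/8"},  # scoped to an internal range
    ]
}

print(find_open_ingress(template))  # ['allow-ssh-anywhere']
```

Running a check like this in a CI pipeline, before the template is applied, is how teams catch the “misconfiguration deployed at scale” risk that makes answer D correct.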


