Welcome to today’s practice test!
Today’s practice test is based on Domain 3.3 (Compare and contrast concepts and strategies to protect data) from the CompTIA Security+ SY0-701 objectives.
This beginner-level practice test is inspired by the CompTIA Security+ (SY0-701) exam and is designed to help you reinforce key cybersecurity concepts on a daily basis.
These questions are not official exam questions, but they reflect topics and scenarios relevant to the Security+ certification. Use them to test your knowledge, identify areas for improvement, and build daily cybersecurity habits.
Questions
#1. A security administrator is classifying customer data to ensure it receives adequate protection. Which classification is most appropriate for full names, Social Security numbers, and addresses?
#2. An organization uses encryption to protect files stored on servers. What data state is this approach primarily securing?
#3. Which of the following best ensures compliance with geographic data restrictions for a multinational cloud provider?
#4. A company implements hashing on stored passwords. What does this protect against?
#5. A security analyst is tasked with implementing a data protection method that allows partial data use while maintaining privacy. Which method is MOST appropriate?
#6. Which classification level is BEST suited for a company’s trade secrets?
#7. A system must process encrypted financial transactions in real time. What data state is involved during processing?
#8. Which of the following is a benefit of segmenting data by classification?
#9. A financial institution replicates encrypted backups to a secure facility in another region. What is the PRIMARY objective?
#10. Which technique replaces sensitive data with a unique identifier that has no exploitable meaning?
Note: CompTIA and Security+ are registered trademarks of CompTIA. This content is not affiliated with or endorsed by CompTIA.
To view CompTIA Security+ practice tests on other days, click here. To view answers and explanations for today’s questions, see the Answers section below.
Answers
Number | Answer | Explanation |
---|---|---|
1 | C | For highly sensitive personally identifiable information (PII) such as full names, Social Security numbers, and addresses, Restricted is the most appropriate classification. It implies the highest level of protection, limiting access to authorized personnel with a strict need to know, and typically requires strong encryption and rigorous access controls. Public data has no access restrictions and can be freely shared, which is entirely inappropriate for this type of sensitive PII. While “Confidential” implies data that should not be shared outside the organization, “Restricted” usually denotes an even higher level of sensitivity and tighter controls, often reserved for data whose compromise would lead to severe harm (such as financial penalties, identity theft, or reputational damage); PII like SSNs almost always falls under a “Restricted” or “Highly Confidential” category. “Critical” often refers to the availability or importance of data to core business operations (e.g., data needed for transaction processing) rather than its sensitivity or confidentiality, so while this data may be critical to the business, its sensitivity-based classification is best described as Restricted. |
2 | B | Data at rest refers to data stored on physical or digital media, such as files on a server’s disks (a minimal encryption-at-rest sketch appears below the table). “In use” refers to data being actively processed in memory. “In transit” refers to data moving across networks. Data sovereignty is a legal concept about data being subject to the laws of the country in which it is stored. |
3 | B | Geofencing defines virtual geographic boundaries, ensuring that data is stored, processed, or accessed only within specific, compliant regions, which directly addresses geographic data restrictions (a simple region-check sketch appears below the table). Tokenization replaces sensitive data with non-sensitive substitutes for security, not geographic compliance. Data masking creates non-sensitive versions of data for non-production use, primarily for security and privacy, not geographic compliance. Data integrity ensures data accuracy and consistency, a general security principle rather than a mechanism for geographic compliance. |
4 | C | Hashing transforms passwords into fixed-length, one-way digests, so an attacker who steals the database cannot read the passwords directly and must resort to guessing; with unique salts and a deliberately slow hash function, those brute-force attempts become computationally expensive (see the salted-hashing sketch below the table). Replay attacks involve re-sending valid data transmissions to impersonate a legitimate user; hashing stored passwords doesn’t directly prevent this, though other authentication mechanisms (like nonces) do. Hashing is a one-way function and doesn’t involve encryption (which is two-way); it protects the stored password itself from being revealed rather than granting or denying access to encrypted data. Credential reuse is when a user uses the same password across multiple services; hashing doesn’t prevent a user from reusing a password elsewhere. |
5 | C | Data masking obscures parts of a value for privacy while leaving the rest usable, which is exactly the requirement here (a short masking sketch appears below the table). Obfuscation is a broader term for making data unclear and is less precise for achieving partial, usable privacy than data masking. Tokenization replaces the entire value with a token, leaving nothing of the original to use directly. Encryption renders the whole value unreadable without the key, so it doesn’t support partial use of the data. |
6 | A | Confidential is the classification level typically reserved for highly valuable proprietary information, such as trade secrets, whose unauthorized disclosure would cause significant harm to the company. Public data has no access restrictions and would make trade secrets vulnerable. While trade secrets are “sensitive,” “Confidential” is a more precise and commonly accepted classification for this level of proprietary information, often implying stricter controls. “Critical” usually refers to the data’s importance for business operations or availability (e.g., system uptime), not primarily its confidentiality or proprietary nature as a trade secret. |
7 | C | When a system is actively “processing” encrypted financial transactions in real time, the data is loaded into memory (RAM) and being directly manipulated by the CPU. This active processing state is known as data in use. Data in transit refers to data moving across networks (e.g., from a client to a server for the transaction). While the transaction might have been in transit, the question specifies “during processing.” Data at rest refers to data stored on persistent storage (e.g., hard drives, databases) when it’s not being actively accessed or processed. Data lifecycle is a broad term encompassing all stages of data from creation to deletion. It’s a concept, not a specific data state during real-time processing. |
8 | B | Segmenting data by classification directly simplifies regulatory compliance by enabling the precise application of appropriate security controls and policies to specific data sets based on their sensitivity and applicable laws. Data classification doesn’t directly impact data transmission or processing speed. Thus, it doesn’t reduce data latency. It’s a security/governance practice, not primarily a user experience improvement. It mitigates insider threats by restricting access, but does not eliminate them. |
9 | B | Replicating encrypted backups to an off-site, secure facility is a key strategy for high availability. If the primary data center or region experiences a disaster (e.g., fire, flood, cyberattack), these off-site backups ensure that data can be restored, minimizing downtime and maintaining business continuity. Data obfuscation makes data harder to understand but doesn’t relate to its availability for recovery after a disaster. Data tokenization replaces sensitive data with non-sensitive tokens. It’s a data protection method for privacy/security, not for ensuring data is available for disaster recovery. Replicating backups primarily serves disaster recovery and availability, not improving the performance of operational systems. |
10 | D | Tokenization replaces sensitive data with a unique, randomly generated, non-sensitive identifier (a “token”) that carries none of the original data’s meaning or value; the original sensitive data is stored securely elsewhere in a token vault (see the tokenization sketch below the table). Hashing transforms data into a fixed-length, one-way string (a hash) used for integrity verification or password storage, but it’s not designed to stand in for the original data. Encryption transforms data into an unreadable format using a key and is reversible (decryptable) for authorized users, whereas a token cannot be mathematically reversed at all; the original value can be retrieved only by looking it up in the vault. Data masking replaces sensitive data with realistic but false data to preserve format and usability; the masked value is not a unique identifier that maps back to the original. |
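Code sketches
The short sketches below illustrate several of the techniques referenced in the explanations above. They are simplified, illustrative Python examples, not production code. The first, for question 2, shows encryption at rest, assuming the third-party cryptography package (pip install cryptography): the bytes written to disk are ciphertext, so the stored file is protected even if the storage medium is stolen.
```python
from cryptography.fernet import Fernet

# In practice the key would live in a key management system, never beside the data.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt the record before writing it to disk: what is stored is ciphertext,
# so the data is protected while at rest.
plaintext = b"account=12345;balance=100.00"
with open("record.enc", "wb") as fh:
    fh.write(fernet.encrypt(plaintext))

# Reading it back requires the key; without it, the file is unreadable.
with open("record.enc", "rb") as fh:
    assert fernet.decrypt(fh.read()) == plaintext
```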
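For question 3, an illustrative geofencing-style check. The region names and classification labels are hypothetical, not a real API; real deployments would enforce this through the cloud provider’s data-residency and placement policies.
```python
# Hypothetical region allowlist for data subject to EU residency rules.
ALLOWED_EU_REGIONS = {"eu-west-1", "eu-central-1"}

def can_store(classification: str, target_region: str) -> bool:
    """Permit storage only in regions that satisfy the data's residency rules."""
    if classification == "eu-personal-data":
        return target_region in ALLOWED_EU_REGIONS
    return True  # unrestricted data may be stored anywhere

print(can_store("eu-personal-data", "us-east-1"))  # False: outside the fence
print(can_store("eu-personal-data", "eu-west-1"))  # True: inside the fence
```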
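For question 4, a minimal sketch of salted, deliberately slow password hashing using only the Python standard library; production systems would more commonly reach for bcrypt, scrypt, or Argon2.
```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # a deliberately slow hash makes brute force expensive

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # a unique salt per password defeats rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest    # store both; the plaintext password is never stored

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```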
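For question 5, a one-function sketch of static data masking: the format and the last four digits remain usable, while the sensitive digits are obscured.
```python
def mask_ssn(ssn: str) -> str:
    """Obscure all but the last four digits while preserving the format."""
    last_four = ssn.replace("-", "")[-4:]
    return f"***-**-{last_four}"

print(mask_ssn("123-45-6789"))  # ***-**-6789: partially usable, privacy kept
```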
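For question 10, a sketch of tokenization. The in-memory dictionary stands in for a hardened token vault; real systems keep this mapping in a tightly secured store, and only the vault can map a token back to the original value.
```python
import secrets

_vault: dict[str, str] = {}  # stand-in for a secured token vault

def tokenize(card_number: str) -> str:
    token = secrets.token_urlsafe(16)  # random: carries no exploitable meaning
    _vault[token] = card_number        # original value kept only in the vault
    return token

def detokenize(token: str) -> str:
    return _vault[token]  # only an authorized vault lookup reverses the mapping

token = tokenize("4111111111111111")
print(token)  # safe to store in application databases in place of the card number
assert detokenize(token) == "4111111111111111"
```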