AWS Certified Cloud Practitioner (CLF-C02)

A comprehensive study guide with analogies and resources to help you prepare for the AWS Cloud Practitioner certification exam.

AWS Certified Cloud Practitioner Exam Overview

Exam Format

  • Multiple choice and multiple answer questions
  • 65 questions total (50 scored, 15 unscored)
  • 90 minutes duration (120 minutes total seat time)
  • Passing score: 700/1000 points
  • No penalty for wrong answers

Domain Breakdown

  • Cloud Concepts: 24% (~15-16 questions)
  • Security & Compliance: 30% (~19-20 questions)
  • Cloud Technology & Services: 34% (~22 questions)
  • Billing, Pricing & Support: 12% (~8 questions)
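
The per-domain question counts above are just the domain weights applied to the 65 total questions; a quick sketch of the arithmetic (the weights come from the official exam guide, the rounding is mine):

```python
# Approximate question counts per CLF-C02 domain: each domain's
# official weight applied to the 65 total questions.
TOTAL_QUESTIONS = 65

domain_weights = {
    "Cloud Concepts": 0.24,
    "Security & Compliance": 0.30,
    "Cloud Technology & Services": 0.34,
    "Billing, Pricing & Support": 0.12,
}

for domain, weight in domain_weights.items():
    print(f"{domain}: ~{weight * TOTAL_QUESTIONS:.1f} questions")
```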

What This Exam Tests

The AWS Cloud Practitioner certification validates your ability to explain, understand, describe, and identify AWS concepts. It's a knowledge-based exam, not a skills test.

Key Analogy: Think of this exam as a driver's license theory test – it confirms you understand road rules and signs, but doesn't test if you can actually drive the car.

This certification does not validate:

  • Programming skills
  • Technical diagramming
  • Code management
  • Architectural design skills

Recommended Study Path

Average study time: 24 hours

Suggested Study Process:

  1. Watch lecture videos and memorize key information (~50% of your study time)
  2. Complete hands-on labs in your AWS account to cement knowledge
  3. Take practice exams to simulate the real exam experience (~50% of your study time)
  4. Aim for 1-2 hours daily over about 14 days

Study Tips

  • Focus on understanding core AWS services and their use cases
  • Learn key cloud concepts (benefits, economics, shared responsibility)
  • Familiarize yourself with the AWS Well-Architected Framework
  • Understand AWS Global Infrastructure concepts
  • Know the difference between AWS service types (global vs. regional)

Cloud Concepts (24% of exam)

Cloud computing offers several key advantages over traditional on-premises infrastructure:

Analogy: Cloud vs. On-Premises

Think of traditional IT infrastructure like owning a car (high upfront cost, maintenance, limited capacity) versus cloud computing being like a rideshare service (pay-as-you-go, no maintenance, scale as needed).

  • Trade Capital Expense for Variable Expense

    No upfront investment in physical servers and data centers.

    Analogy: Instead of buying a water tank for your home, you pay for water as you use it.

  • Economies of Scale

    Benefit from AWS's massive scale that you couldn't achieve alone.

    Analogy: Buying groceries in bulk is cheaper - AWS buys compute in massive bulk and passes savings to you.

  • Stop Guessing Capacity

    Scale resources up or down based on actual demand.

    Analogy: Rather than buying a stadium-sized parking lot for Black Friday, you can expand your parking as needed.

  • Increase Speed and Agility

    Deploy new resources in minutes, not months.

    Analogy: Getting a new server is like downloading an app versus building a house.

  • Eliminate Operational Burden

    AWS maintains the infrastructure, allowing you to focus on applications.

    Analogy: Like dining at a restaurant versus cooking from scratch - focus on enjoying your meal, not maintaining the kitchen.

  • Go Global in Minutes

    Deploy applications worldwide with low latency.

    Analogy: Instead of building offices worldwide, you can instantly open virtual branches in any country.

Practical Learning Flow:

  1. Read the Six Advantages of Cloud Computing whitepaper
  2. Create a free-tier AWS account and explore the AWS Management Console
  3. Deploy a small web server using EC2 to experience on-demand provisioning
  4. Scale the web server up and down to understand elasticity
  5. Delete the resources when done to see pay-as-you-go in action

Video Resource:

Key topics covered: Six advantages of cloud computing, how AWS implements these benefits, and real-world examples.

The AWS Well-Architected Framework helps cloud architects build secure, high-performing, resilient, and efficient infrastructure for applications. It is based on six pillars:

Analogy: Well-Architected Framework

Think of the Well-Architected Framework as building codes for a house. Just as building codes ensure homes are safe, sturdy, and efficient, the Well-Architected Framework ensures your cloud infrastructure is secure, reliable, and cost-effective.

1. Operational Excellence

Running and monitoring systems to deliver business value.

Analogy: Like having maintenance schedules and monitoring systems for your home.

2. Security

Protecting information, systems, and assets.

Analogy: Like home security systems, locks, and fire alarms.

3. Reliability

Ensuring a system performs its function correctly and consistently.

Analogy: Like having backup generators and water systems in your home.

4. Performance Efficiency

Using resources efficiently to meet requirements.

Analogy: Like energy-efficient appliances and smart home systems.

5. Cost Optimization

Avoiding unnecessary costs.

Analogy: Like insulating your home and using programmable thermostats to reduce utility bills.

6. Sustainability

Minimizing environmental impact of cloud workloads.

Analogy: Like using solar panels or other renewable energy sources for your home.

Practical Learning Flow:

  1. Read the Well-Architected Framework Whitepaper
  2. Try the AWS Well-Architected Tool in the AWS console
  3. Assess a simple architecture (like a web application with database) against the six pillars
  4. Implement one improvement based on your assessment

Video Resource:

Key topics covered: Each pillar explained in depth, with real-world examples and best practices for implementation.

For the Exam:

You need to understand the pillars at a high level, but not the detailed implementation. Know that the Well-Architected Tool is an auditing tool to assess workloads against the framework.

The AWS Cloud Adoption Framework (CAF) provides guidance for coordinating the different parts of organizations that are moving to the cloud.

Analogy: Cloud Adoption Framework

Think of CAF as a relocation plan for moving your company to a new city. It covers everything from business reasons for moving, to training staff, to technical infrastructure, to security, to ongoing operations.

Six Core Perspectives:

  • Business: Ensures cloud investments align with business outcomes
  • People: Focuses on culture, leadership, and skills development
  • Governance: Maximizes benefits while minimizing transformation risks
  • Platform: Delivers cloud solutions, including automation and integration
  • Security: Ensures confidentiality, integrity, and availability
  • Operations: Ensures services are delivered at a level that meets business needs

Practical Learning Flow:

  1. Read the AWS CAF Whitepaper
  2. Complete the CAF Assessment to understand how these perspectives apply to your organization
  3. Create a simple migration plan for a hypothetical application using the CAF as a guide

Video Resource:

Key topics covered: The six CAF perspectives, practical applications, and case studies of successful migrations.

Exam Tip:

The Cloud Adoption Framework appears frequently on the exam. Know the six perspectives and their basic purposes. Expect questions around which perspective is appropriate for different scenarios.

AWS Global Infrastructure

AWS has built a global infrastructure for running enterprise applications that is highly available, fault-tolerant, and scalable. Understanding this infrastructure is crucial both for the exam and for using AWS services effectively.

Regions are geographically isolated areas where AWS has deployed infrastructure.

Analogy: AWS Regions

Think of AWS Regions like separate countries, each with their own laws (compliance), currency (pricing), and available products (services). What happens in one country doesn't directly affect another.

Key Characteristics:

  • Each region is completely independent
  • Data stored in a region stays in that region unless explicitly transferred
  • Regions have different service availability
  • Regions have different pricing
  • Each region has a unique identifier (e.g., us-east-1, eu-west-2)
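
The region identifiers in the last bullet follow a consistent naming pattern: geography, location within it, then a number. A small illustrative parser of that convention (this is a naming-pattern sketch, not an AWS API):

```python
# AWS region identifiers generally follow the pattern
# <geography>-<location>-<number>, e.g. "us-east-1" or "eu-west-2".
# Illustrative parser of that naming convention only.
def parse_region(region_id: str) -> dict:
    parts = region_id.split("-")
    return {
        "geography": parts[0],               # e.g. "us", "eu", "ap"
        "location": "-".join(parts[1:-1]),   # e.g. "east", "southeast"
        "number": int(parts[-1]),
    }

print(parse_region("us-east-1"))   # {'geography': 'us', 'location': 'east', 'number': 1}
print(parse_region("ap-southeast-2"))
```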

Region Selection Factors:

  • Compliance: Data sovereignty and legal requirements
  • Latency: Proximity to users
  • Cost: Pricing varies by region
  • Service availability: Not all services available in all regions

Special Regions:

  • US East (N. Virginia) - us-east-1

    First and oldest AWS region. New services typically launch here first. All billing information appears here regardless of where resources are deployed.

  • AWS GovCloud (US)

    Designed for US government agencies and customers with highly sensitive workloads. Complies with FedRAMP High and other government compliance requirements.

  • China Regions

    Operated by local Chinese partners. Requires a separate account and credentials from standard AWS regions.

Practical Learning Flow:

  1. Visit the AWS Global Infrastructure page
  2. In the AWS Management Console, practice switching between regions using the region selector in the top right
  3. In each region, check which services are available by trying to create different resources
  4. Use the AWS Pricing Calculator to compare costs across different regions

Availability Zones are distinct locations within a region that are engineered to be isolated from failures in other AZs.

Analogy: Availability Zones

Think of Availability Zones like multiple power plants serving a city. Each has independent power, cooling, and networking, and if one goes down, the others continue to provide electricity to the city.

Key Characteristics:

  • Physical data centers with redundant power, networking, and connectivity
  • Connected to other AZs with high-bandwidth, low-latency networking
  • Physically separated by a meaningful distance (miles)
  • Isolated from disasters affecting other AZs
  • Each region generally has three or more AZs

High Availability with AZs:

  • Deploy applications across multiple AZs to achieve high availability
  • Services like RDS and Elastic Beanstalk can automatically use multiple AZs
  • Each AZ has a unique identifier (e.g., us-east-1a, us-east-1b)
  • AZ IDs are mapped independently for each AWS account
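
The last bullet is worth making concrete: AZ names (like us-east-1a) are shuffled per account, while AZ IDs (like use1-az4) refer to the same physical location in every account. The mappings below are hypothetical examples to illustrate the idea, not real account data:

```python
# AZ *names* (us-east-1a) are mapped independently per account, while
# AZ *IDs* (use1-az4) identify the same physical AZ in every account.
# These mappings are hypothetical, for illustration only.
account_a = {"us-east-1a": "use1-az4", "us-east-1b": "use1-az6"}
account_b = {"us-east-1a": "use1-az6", "us-east-1b": "use1-az4"}

# The same AZ *name* can point at different physical AZs...
assert account_a["us-east-1a"] != account_b["us-east-1a"]

# ...so to coordinate one AZ across accounts you compare IDs, not names.
shared = set(account_a.values()) & set(account_b.values())
print(shared)  # both accounts cover the same two physical AZs
```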

Practical Learning Flow:

  1. In the AWS Management Console, create an EC2 instance and note the AZ selection options
  2. Create a VPC with subnets in different AZs
  3. Deploy an RDS database with the Multi-AZ option enabled
  4. Set up an Elastic Load Balancer that distributes traffic across instances in multiple AZs

Video Resource:

Key topics covered: How regions and AZs relate to each other, high availability designs, and best practices for deployment.

Edge Locations are AWS data centers designed to deliver content with the lowest possible latency to end users.

Analogy: Edge Locations

Think of Edge Locations like local distribution centers for a global retailer. They store the most popular products closer to customers for faster delivery, while less frequently requested items come from the main warehouses (regions).

Edge Location Services:

  • Amazon CloudFront: Content delivery network (CDN)
  • AWS Global Accelerator: Improves availability and performance
  • Amazon Route 53: DNS service
  • AWS Shield: DDoS protection
  • AWS WAF: Web application firewall

Other Edge Infrastructure:

  • Local Zones: Extensions of regions placed closer to large population centers
  • Wavelength Zones: AWS infrastructure deployed on 5G networks
  • Direct Connect Locations: Where you can establish dedicated connections to AWS
  • Points of Presence (PoPs): Includes both Edge Locations and Regional Edge Caches

Video Resource:

Key topics covered: How Edge Locations work, CloudFront architecture, and implementing a global content delivery strategy.

AWS services are either global (available across all regions) or regional (specific to the region in which they're created).

Analogy: Global vs. Regional Services

Think of global services like a passport (valid worldwide), while regional services are like a driver's license (only valid in the state/country where it was issued).

Global Services:

  • IAM: Identity and Access Management
  • Route 53: DNS service
  • CloudFront: Content delivery network
  • WAF: Web Application Firewall
  • AWS Organizations: Account management
  • AWS Artifact: Compliance documents

Regional Services:

  • EC2: Compute instances
  • S3: Object storage
  • RDS: Relational database service
  • DynamoDB: NoSQL database
  • Lambda: Serverless functions
  • VPC: Virtual Private Cloud

Important Exam Notes:

  • In the AWS Management Console, global services do not display a region in the region selector
  • Data in regional services stays in that region unless explicitly transferred
  • Some services, such as IAM, operate globally; their events are recorded as occurring in us-east-1 (for example, in CloudTrail logs)
  • Some regional services like S3 have global namespaces even though the data is regional

Security & Compliance (30% of exam)

The Shared Responsibility Model clearly defines which security tasks belong to AWS and which belong to you.

Analogy: Shared Responsibility Model

Think of AWS like an apartment building. AWS is responsible for the building's structure, utilities, and common areas (security OF the cloud), while you're responsible for everything inside your apartment, including locks on your door and who you let in (security IN the cloud).

AWS Responsibilities (Security OF the Cloud)

  • Physical security of data centers
  • Hardware and global infrastructure
  • Cloud network infrastructure
  • Virtualization infrastructure
  • Software for managed services

Think of this as the foundation and structure of a building that the landlord maintains.

Customer Responsibilities (Security IN the Cloud)

  • Customer data
  • Platform, applications, identity & access management
  • Operating system configuration
  • Network security & firewall configuration
  • Client-side encryption & data integrity
  • Server-side encryption

Think of this as everything inside your apartment that you control.

Responsibility Shifts Based on Service Model:

  • Infrastructure as a Service (IaaS)

    Example: EC2

    Customer responsible for: OS, applications, data, middleware, etc.

  • Platform as a Service (PaaS)

    Example: RDS, Elastic Beanstalk

    Customer responsible for: Applications and data

  • Software as a Service (SaaS)

    Example: Amazon WorkMail, Amazon Connect

    Customer responsible for: Data and access management

Practical Learning Flow:

  1. Review the AWS Shared Responsibility Model Documentation
  2. Create a simple categorization exercise: list 10 AWS services and identify customer vs. AWS responsibilities
  3. Practice explaining the model to someone else using the apartment building analogy

Video Resource:

Key topics covered: How responsibility is shared, what shifts with different service types, and common misconceptions.

AWS provides various security services to help you implement the security controls that are your responsibility under the Shared Responsibility Model.

Analogy: AWS Security Services

Think of AWS security services like the security system for your home - with door alarms (GuardDuty), security cameras (Inspector), a safe for valuables (KMS), and a guard service (Shield) - all working together to protect your property.

Amazon Inspector

Automated security assessment service that helps improve security and compliance of applications.

Analogy: Like a home inspector checking for vulnerabilities and code issues.

Amazon GuardDuty

Threat detection service that continuously monitors for malicious activity and unauthorized behavior.

Analogy: Like a security guard watching for suspicious behavior around your property.

AWS Shield

Managed Distributed Denial of Service (DDoS) protection service.

Analogy: Like a bouncer preventing a mob from overwhelming your restaurant.

AWS WAF

Web Application Firewall that helps protect web applications from common web exploits.

Analogy: Like a security checkpoint that inspects visitors before they enter your building.

Video Resource:

Key topics covered: Overview of AWS security services, when to use each service, and how they work together.

AWS provides various tools and programs to help customers meet their compliance requirements in the cloud.

Analogy: AWS Compliance

Think of AWS compliance programs like restaurant health certifications. AWS has been inspected and certified for various standards, but you still need to handle your food (data) properly to maintain compliance.

AWS Compliance Programs:

  • HIPAA: Healthcare (US)
  • GDPR: Data protection (EU)
  • PCI DSS: Payment card industry
  • FedRAMP: US government
  • SOC 1/2/3: Service Organization Controls
  • ISO/IEC 27001: Information security

Compliance Tools:

  • AWS Artifact: Portal for compliance reports
  • AWS Config: Resource configuration and compliance
  • AWS Security Hub: Comprehensive security posture
  • Service Control Policies (SCPs): Centralized permission controls
  • AWS Audit Manager: Continuous auditing of AWS usage

AWS Artifact:

The central resource for compliance-related information.

  • Access AWS security and compliance documents
  • Review and accept agreements with AWS for specific regulations
  • Download AWS certification reports (ISO, PCI, SOC, etc.)
  • Available to all AWS customers at no additional cost

Video Resource:

Key topics covered: AWS compliance programs, using AWS Artifact, and shared compliance responsibilities.

IAM Deep Dive

Identity and Access Management (IAM) is a global service that allows you to manage access to your AWS resources securely. It's one of the most important services to understand for the exam and for working with AWS.

IAM provides the infrastructure to control authentication (who can sign in) and authorization (what they can do) for your AWS account.

Analogy: IAM

Think of IAM as the security system for a large office building. The security desk (IAM) checks IDs (authentication), assigns different access badges (policies) to different people (users) or departments (groups), and issues temporary visitor passes (roles) when needed.

Key IAM Features:

  • Global service (not region-specific)
  • Integrated with all AWS services
  • Shared access to your AWS account
  • Multi-factor authentication (MFA)
  • Identity federation (use your existing identities)
  • Free to use (no additional charge)

IAM Best Practices:

  • Enable MFA for the root user and all IAM users
  • Create individual IAM users instead of sharing credentials
  • Use groups to assign permissions to IAM users
  • Follow the principle of least privilege
  • Use IAM roles for applications on EC2
  • Rotate credentials regularly

Video Resource:

Key topics covered: IAM basics, users, groups, roles, policies, and best practices.

IAM identities are the entities to which you can assign permissions to access AWS resources.

IAM Users

Represents a person or service that interacts with AWS.

Analogy: An employee with a specific ID badge.

  • Has a name and credentials
  • Can have two types of access:
    • Console access: Username and password
    • Programmatic access: Access key ID and secret access key
  • Can belong to multiple groups

IAM Groups

Collection of IAM users that share the same permissions.

Analogy: A department in a company where all employees have similar access needs.

  • Cannot be nested (no groups within groups)
  • A user can be in multiple groups
  • No limit to the number of users in a group
  • Makes permission management easier

IAM Roles

Set of permissions that can be assumed by entities that you trust.

Analogy: A temporary visitor pass or a job function that different people can assume.

  • Used for temporary access
  • No long-term credentials
  • Common use cases:
    • EC2 instances accessing AWS resources
    • Cross-account access
    • AWS service access to other AWS services
    • Identity federation (external users)

Access Keys:

  • Long-term credentials for IAM users
  • Consist of an access key ID and a secret access key
  • Used for programmatic access (CLI, SDK, API)
  • Maximum of two access keys per user
  • Can be created, deleted, made active/inactive
  • Best practice: Rotate access keys regularly
  • Rotation tip: create the second key, switch your applications to it, then deactivate and delete the old key

IAM policies are JSON documents that define permissions and are attached to IAM identities (users, groups, roles).

Analogy: IAM Policies

Think of policies as rulebooks that specify exactly what actions someone can perform. Like a list of rooms someone can enter in a building, actions they can take, and at what times.

Policy Types:

  • Identity-based policies: Attached to IAM identities
  • Resource-based policies: Attached to resources (e.g., S3 buckets)
  • AWS managed policies: Created and managed by AWS
  • Customer managed policies: Created and managed by you
  • Inline policies: Embedded directly in a user, group, or role

Policy Structure:

  • Version: Policy language version
  • Statement: Array of permissions
  • Effect: Allow or Deny
  • Action: API calls that can be made
  • Resource: AWS resources the actions apply to
  • Condition: When the policy is in effect

Example Policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}

This policy allows getting objects and listing the contents of a specific S3 bucket.
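
The logic behind such policies follows a fixed precedence: an explicit Deny always overrides an Allow, and anything not explicitly allowed is implicitly denied. A simplified sketch of that precedence (real IAM evaluation also handles wildcards, conditions, and resource-based policies; this sketch matches actions and resources by exact string):

```python
# Simplified IAM evaluation: explicit Deny > explicit Allow > implicit
# deny. Real IAM also supports wildcards and condition keys.
def evaluate(statements, action, resource):
    decision = "ImplicitDeny"
    for stmt in statements:
        if action in stmt["Action"] and resource in stmt["Resource"]:
            if stmt["Effect"] == "Deny":
                return "ExplicitDeny"  # Deny always wins
            decision = "Allow"
    return decision

policy = [{
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:ListBucket"],
    "Resource": ["arn:aws:s3:::example-bucket",
                 "arn:aws:s3:::example-bucket/*"],
}]

print(evaluate(policy, "s3:GetObject", "arn:aws:s3:::example-bucket/*"))     # Allow
print(evaluate(policy, "s3:DeleteObject", "arn:aws:s3:::example-bucket/*"))  # ImplicitDeny
```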

The AWS root user is the account owner with complete access to all AWS services and resources. It's created when you first set up your AWS account.

Analogy: Root User

Think of the root user like the master key to a building - it can open any door and access any room. While extremely powerful, it should be locked away securely and only used when absolutely necessary.

Root User Tasks:

The following actions can only be performed by the root user:

  • Change account settings (name, email, root password)
  • Close your AWS account
  • Change or cancel AWS Support plans
  • Register as a seller in the Reserved Instance Marketplace
  • Configure S3 buckets to enable MFA Delete
  • Create a CloudFront key pair
  • Sign up for GovCloud

Root User Security Best Practices:

  • Enable MFA for the root user account
  • Never share your root user credentials
  • Create an administrative IAM user for daily administrative tasks
  • Only use the root user for tasks that specifically require it
  • Use a strong, complex password
  • Do not create access keys for the root user

Video Resource:

Key topics covered: Root user capabilities, security best practices, and when to use the root user vs. IAM users.

AWS Services (34% of exam)

AWS offers various compute services to run your applications, from virtual servers to serverless computing.

Analogy: Compute Services

Think of AWS compute options like transportation choices. EC2 is like owning a car (full control but more maintenance), Lambda is like taking a taxi (pay only for the ride, no maintenance), and Elastic Beanstalk is like having a chauffeur (someone else drives your car).

Amazon EC2 (Elastic Compute Cloud)

Virtual servers in the cloud with complete control.

Analogy: Renting an apartment where you control everything inside.

  • On-Demand Instances: Pay by the hour or second with no commitment
  • Reserved Instances: Lower hourly rate in exchange for a 1- or 3-year commitment
  • Spot Instances: Use spare EC2 capacity at discounts of up to 90%; instances can be interrupted with short notice when AWS reclaims the capacity
  • Dedicated Hosts: Physical servers dedicated to your use
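
To see why the pricing models above matter, compare a year of steady On-Demand usage against a Reserved commitment. The hourly rates below are invented for illustration; real prices vary by instance type, region, and term:

```python
# Hypothetical hourly prices for illustration only; real EC2 prices
# vary by instance type, region, and commitment term.
HOURS_PER_YEAR = 24 * 365

on_demand_rate = 0.10  # $/hour, assumed
reserved_rate = 0.06   # $/hour with a 1-year commitment, assumed

on_demand_cost = on_demand_rate * HOURS_PER_YEAR
reserved_cost = reserved_rate * HOURS_PER_YEAR

print(f"On-Demand for a year: ${on_demand_cost:,.2f}")   # → $876.00
print(f"Reserved for a year:  ${reserved_cost:,.2f}")    # → $525.60
print(f"Savings: {1 - reserved_rate / on_demand_rate:.0%}")  # → 40%
```

The same pattern explains Spot pricing: a much lower rate in exchange for accepting possible interruption, which suits fault-tolerant batch workloads.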

AWS Lambda

Serverless compute service that runs code in response to events.

Analogy: Like a vending machine that only runs when you put money in and press a button.

  • Pay only for compute time consumed
  • No servers to manage
  • Scales automatically
  • Integrated with many AWS services

AWS Elastic Beanstalk

Service for deploying and scaling applications without managing infrastructure.

Analogy: Like hiring a property manager for your apartment building who handles maintenance.

  • Upload your code and Beanstalk handles deployment
  • Automatically scales, monitors, and manages your application
  • You retain full control of the underlying resources

Exam Tips:

  • Know the basic purpose and use case for each compute service
  • Understand the pricing models for EC2 (especially the differences between On-Demand, Reserved, and Spot)
  • Know that Lambda is a serverless, event-driven compute service that only charges for usage

AWS offers a variety of storage options for different needs, from object storage to file systems.

Analogy: Storage Services

Think of AWS storage options like different types of physical storage. S3 is like a filing cabinet (organized objects), EBS is like a hard drive attached to your computer, EFS is like a shared network drive, and Glacier is like deep storage in a basement.

Amazon S3 (Simple Storage Service)

Scalable object storage with industry-leading durability and availability.

Analogy: Like a virtually infinite filing cabinet where each document has a unique label.

  • Store any type of file up to 5TB in size
  • 99.999999999% durability
  • Pay only for what you use
  • Different storage classes for different needs (Standard, Intelligent-Tiering, One Zone-IA, Glacier, etc.)
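
Eleven nines of durability is easier to grasp as expected loss. AWS's own framing is that if you store 10,000,000 objects, you can expect to lose one object every 10,000 years on average; the arithmetic behind that claim:

```python
durability = 0.99999999999          # "eleven nines"
annual_loss_rate = 1 - durability   # chance of losing a given object in a year

objects_stored = 10_000_000
expected_losses_per_year = objects_stored * annual_loss_rate

print(f"{expected_losses_per_year:.4f} objects lost per year")         # → 0.0001
print(f"one object every {1 / expected_losses_per_year:,.0f} years")   # → 10,000
```
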

Amazon EBS (Elastic Block Store)

Persistent block storage volumes for EC2 instances.

Analogy: Like an external hard drive for your computer.

  • Automatically replicated within an Availability Zone
  • Different volume types (SSD, HDD) for different workloads
  • Can be attached to a single EC2 instance at a time

Amazon EFS (Elastic File System)

Scalable, elastic file system for use with AWS services and on-premises resources.

Analogy: Like a shared network drive that multiple computers can access simultaneously.

  • Automatically grows and shrinks as you add and remove files
  • Can be mounted to multiple EC2 instances at once
  • Regional service that is available across multiple AZs

Exam Tips:

  • Know the basic use case for each storage service
  • Understand that S3 is object storage, EBS is block storage, and EFS is file storage
  • Remember that EBS volumes are tied to a single AZ, while S3 and EFS are regional services
  • Know the Snow Family devices and their capacity ranges

AWS offers purpose-built database services for different types of applications and workloads.

Analogy: Database Services

Think of AWS database services like different types of stores. RDS is like a traditional department store (organized by sections), DynamoDB is like a convenience store (quick access to items), and Redshift is like a warehouse store (bulk items for analysis).

Amazon RDS (Relational Database Service)

Managed relational database service supporting multiple engines.

Analogy: Like hiring a professional to manage your library of books organized by categories.

Supported database engines: MySQL, PostgreSQL, MariaDB, Oracle, SQL Server, and Aurora.

  • Automated backups and patching
  • Multi-AZ deployment for high availability
  • Read replicas for improved read performance

Amazon DynamoDB

Fast, flexible NoSQL database service for any scale.

Analogy: Like a giant key-value filing system that can retrieve any file instantly regardless of how many you have.

  • Single-digit millisecond response times
  • Serverless with automatic scaling
  • Ideal for applications with large amounts of data and strict latency requirements
  • Supports both document and key-value data models

Amazon Redshift

Fully managed, petabyte-scale data warehouse service.

Analogy: Like a massive research library organized specifically for efficient analysis of large collections.

  • Designed for data analysis and business intelligence
  • 10x faster performance than traditional data warehouses
  • Integrates with data lakes and business intelligence tools

Exam Tips:

  • Know which database service to use for different scenarios
  • Understand that RDS is for relational databases, DynamoDB is for NoSQL
  • Remember that Redshift is for data warehousing and analytics
  • For the exam, focus most on RDS and DynamoDB as they are the most commonly tested

AWS offers a comprehensive suite of networking services to build secure, robust, and scalable applications.

Analogy: Networking Services

Think of AWS networking like a modern city's infrastructure. VPC is like the city layout with neighborhoods (subnets), Direct Connect is like a private highway, and Route 53 is like the city's address system and directions service.

Amazon VPC (Virtual Private Cloud)

Isolated network environment in the AWS cloud.

Analogy: Like owning a private, gated community where you control all access.

  • Complete control over your virtual networking environment
  • Create subnets, route tables, network gateways
  • Configure security with security groups and network ACLs
  • Connect to your on-premises network using VPN or Direct Connect

Amazon Route 53

Highly available and scalable cloud Domain Name System (DNS) service.

Analogy: Like a phone book and map service that translates names into addresses and directs traffic.

  • Register domain names
  • Route internet traffic to your resources
  • Health checks and DNS failover
  • Traffic flow to optimize routing

AWS Direct Connect

Dedicated network connection from your premises to AWS.

Analogy: Like having a private road connecting your office directly to AWS data centers, bypassing public highways.

  • Reduces network costs
  • Increases bandwidth throughput
  • Provides a more consistent network experience

Video Resource:

Key topics covered: VPC architecture, subnets, security groups, NACLs, and connectivity options.

Monitoring & Logging

AWS provides several services for monitoring your resources, collecting and tracking metrics, collecting and analyzing logs, and setting alarms. Understanding these services is crucial for maintaining reliable and well-performing applications.

Amazon CloudWatch is a monitoring and observability service that provides data and actionable insights for AWS resources and applications.

Analogy: CloudWatch

Think of CloudWatch like a comprehensive health monitoring system for your body. It tracks vital signs (metrics), records health events (logs), and can trigger alerts (alarms) when measurements fall outside normal ranges.

CloudWatch Metrics:

  • Time-ordered data points about your resources
  • Automatically collected for many services
  • Custom metrics can be created
  • Stored for 15 months by default
  • Used to visualize performance and health
  • Examples: CPU utilization, network throughput, error rates

CloudWatch Logs:

  • Centralized repository for logs from many sources
  • AWS services, applications, on-premises resources
  • Real-time monitoring of log data
  • Can be filtered, searched, and analyzed
  • Can be stored indefinitely or with retention policies
  • Can be exported to S3 for long-term storage

CloudWatch Alarms:

CloudWatch Alarms monitor metrics and can trigger actions based on thresholds.

  • Evaluate metrics against thresholds you define
  • Can trigger EC2 actions (stop, terminate, reboot, recover)
  • Can send notifications through Amazon SNS
  • Can invoke Auto Scaling actions
  • States: OK, ALARM, INSUFFICIENT_DATA
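
The three alarm states follow from comparing metric datapoints to a threshold. A toy model of that decision (real alarms add evaluation periods, "datapoints to alarm" settings, and missing-data handling):

```python
# Toy model of CloudWatch alarm state resolution: no data means
# INSUFFICIENT_DATA; otherwise the latest datapoint is compared to
# the threshold. Real alarms evaluate over configurable periods.
def alarm_state(datapoints, threshold):
    if not datapoints:
        return "INSUFFICIENT_DATA"
    return "ALARM" if datapoints[-1] > threshold else "OK"

cpu_threshold = 70.0  # percent, as an example
print(alarm_state([], cpu_threshold))            # INSUFFICIENT_DATA
print(alarm_state([42.0, 55.3], cpu_threshold))  # OK
print(alarm_state([65.0, 88.1], cpu_threshold))  # ALARM
```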

CloudWatch Dashboards:

Customizable home pages in the CloudWatch console to monitor resources.

  • Create multiple dashboards for different purposes
  • Include metrics and alarms from multiple regions
  • Can be shared with others
  • Visualize metrics and alarms in various formats

Practical Learning Flow:

  1. Launch an EC2 instance and explore the default CloudWatch metrics
  2. Create a CloudWatch alarm to send an email when CPU utilization exceeds 70%
  3. Configure CloudWatch Logs agent on an EC2 instance
  4. Create a custom dashboard with key metrics from different services

Video Resource:

Key topics covered: CloudWatch metrics, logs, alarms, dashboards, and best practices for monitoring.

AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account.

Analogy: CloudTrail

Think of CloudTrail like security cameras in a building. It records who did what, when, and from where - creating a video history (activity log) of all actions that you can review later if needed.

Key CloudTrail Features:

  • Records AWS API calls for your account
  • Records the identity of the API caller, time of call, source IP, request parameters, and response elements
  • Maintains event history for 90 days by default
  • "Trails" can be created for longer retention in S3
  • Can be configured to deliver logs to CloudWatch Logs
  • Can be set up for all regions or a single region
  • Helps with security analysis, resource change tracking, and compliance auditing

Types of Events:

Management Events

Operations performed on resources in your AWS account (also called control plane operations).

Examples: Creating an EC2 instance, configuring security groups, creating an S3 bucket

Data Events

Resource operations performed on or within a resource (also called data plane operations).

Examples: S3 object-level API activity, Lambda function execution activity

Not logged by default due to high volume
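To make the recorded fields concrete, here is a sketch that summarizes a single CloudTrail-style record. The field names (eventTime, eventName, sourceIPAddress, userIdentity) follow the CloudTrail log format, but the sample event itself is fabricated for illustration:

```python
# Sketch: extract the "who did what, when, from where" fields from a
# CloudTrail-style record. The sample record is made up.

sample_event = {
    "eventTime": "2023-05-01T12:34:56Z",
    "eventName": "RunInstances",
    "eventSource": "ec2.amazonaws.com",
    "sourceIPAddress": "203.0.113.10",
    "userIdentity": {"type": "IAMUser", "userName": "alice"},
}

def summarize(event):
    who = event["userIdentity"].get("userName", "unknown")
    return (f'{event["eventTime"]} {who} called '
            f'{event["eventName"]} from {event["sourceIPAddress"]}')

print(summarize(sample_event))
# 2023-05-01T12:34:56Z alice called RunInstances from 203.0.113.10
```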

Practical Learning Flow:

  1. Enable CloudTrail in your AWS account
  2. Create an S3 bucket to store CloudTrail logs
  3. Perform some API actions (create/modify resources)
  4. View the CloudTrail events in the CloudTrail console
  5. Create a query using Amazon Athena to analyze CloudTrail logs

Video Resource:

Key topics covered: CloudTrail concepts, event types, trails, and using CloudTrail for security and compliance.

AWS Trusted Advisor is an online tool that provides real-time guidance to help you provision your resources following AWS best practices.

Analogy: Trusted Advisor

Think of Trusted Advisor like a consulting firm that regularly audits your business operations and provides recommendations to improve efficiency, security, and cost management.

Trusted Advisor Check Categories:

Cost Optimization

Recommendations to help you save money by eliminating unused or idle resources.

Examples: Idle instances, underutilized EBS volumes, unassociated Elastic IPs

Security

Recommendations to improve the security of your AWS environment.

Examples: Security groups with unrestricted access, IAM use, MFA on root account

Fault Tolerance

Recommendations to help improve the resiliency of your AWS environment.

Examples: RDS backups, EBS snapshots, availability zone balance

Performance

Recommendations to help improve the performance of your services.

Examples: High utilization instances, CloudFront CDN optimization, provisioned IOPS

Service Limits

Recommendations when you are approaching service limits.

Examples: VPC limits, EBS volume limits, EC2 instance limits

Available Checks by Support Plan:

  • Basic Support (Free): Service limit checks plus a set of core security checks only
  • Developer Support: Same as Basic
  • Business Support: All Trusted Advisor checks
  • Enterprise Support: All Trusted Advisor checks plus access to Technical Account Manager (TAM)

Practical Learning Flow:

  1. Access Trusted Advisor in the AWS Management Console
  2. Review available checks based on your support plan
  3. Implement recommendations for the Security category
  4. Set up Trusted Advisor notifications using Amazon SNS

Disaster Recovery

Disaster recovery involves planning for and recovering from events that negatively impact business operations, and it is essential for ensuring business continuity. AWS provides various services and strategies to help you implement effective disaster recovery solutions.

Analogy: Disaster Recovery

Think of disaster recovery like car insurance policies with different levels of coverage. Basic coverage (backup & restore) is cheap but limited, while premium coverage (multi-site) costs more but protects you better.

Recovery Point Objective (RPO)

Maximum acceptable amount of data loss measured in time.

Analogy: If you back up every 24 hours, your RPO is 24 hours - you could lose up to a day's worth of data.

Recovery Time Objective (RTO)

Maximum acceptable delay between service interruption and restoration.

Analogy: How long you can afford to be without your car after an accident.
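The RPO arithmetic can be made concrete with a small worked example (all timestamps are hypothetical):

```python
# Worked example: with backups every 24 hours, the worst-case data loss is the
# time since the last backup, which must stay within the RPO.

from datetime import datetime, timedelta

last_backup = datetime(2023, 5, 1, 0, 0)     # hypothetical backup time
failure_time = datetime(2023, 5, 1, 18, 0)   # hypothetical failure time

data_loss = failure_time - last_backup       # data written since last backup is lost
rpo = timedelta(hours=24)

print(data_loss)        # 18:00:00
print(data_loss <= rpo) # True: within the 24-hour RPO
```

RTO is measured the same way but for the restoration itself: the clock runs from the failure until service is back online.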

Business Continuity Plan (BCP):

  • A document that outlines how a business will continue operating during an unplanned disruption
  • Includes procedures for before, during, and after a disaster
  • Identifies critical business functions and resources
  • Defines roles and responsibilities during a disaster
  • Includes communication plans and contact information

AWS provides multiple disaster recovery strategies with different trade-offs between cost and recovery time/data loss.

Backup & Restore
Lowest Cost

Regular backups stored in another region, restored when needed.

High RPO/RTO (hours/days) | Low cost

  • Back up data to Amazon S3
  • Use AWS Backup for centralized backup management
  • Restore from backups when disaster occurs
  • No ongoing costs for standby resources
Pilot Light
Medium-Low Cost

Core systems kept running, scaled up when needed.

Medium RPO/RTO (hours) | Medium-low cost

Analogy: Like a pilot light on a gas stove - small flame ready to be turned up.

  • Core components (like databases) are always running
  • Other components are turned off but configured and ready
  • Data is continuously replicated to the DR site
  • When disaster occurs, quickly start the application servers
Warm Standby
Medium-High Cost

Scaled-down but fully functional copy of the production environment.

Low RPO/RTO (minutes/hours) | Medium-high cost

Analogy: Like a backup generator that's already running but not at full capacity.

  • Complete system is up and running but at minimal capacity
  • Can handle some production traffic
  • When disaster occurs, scale up to full production capacity
  • Data is continuously replicated
Multi-site Active/Active
Highest Cost

Full production environment running in multiple regions.

Near-zero RPO/RTO | Highest cost

Analogy: Like having two identical houses fully furnished, so if one burns down, you can immediately move to the other.

  • Full production capacity in multiple regions
  • Traffic is distributed across all regions
  • When disaster occurs, traffic is automatically routed away from the affected region
  • Data is continuously synchronized between regions

Video Resource:

Key topics covered: The four main disaster recovery strategies, their pros and cons, and implementation details.

AWS provides several services that can be used in disaster recovery solutions.

AWS Backup

Centralized backup service that automates backup of data across AWS services.

  • Centralized management for backups
  • Policy-based backup solutions
  • Cross-region and cross-account backup
  • Encryption, retention, and lifecycle management
Amazon S3 Cross-Region Replication

Automatically replicates data from one region to another.

  • Asynchronous replication
  • Provides geographic redundancy
  • Helps meet compliance requirements
  • Improves data access times for users in different regions
Amazon RDS Multi-AZ and Read Replicas

High availability and read scaling for database workloads.

  • Multi-AZ: Synchronous replication to a standby in a different AZ
  • Automatic failover in case of infrastructure failure
  • Read replicas: Asynchronous replication for read scalability
  • Can be promoted to primary (manually) if needed
Route 53 DNS Failover

Automatically routes traffic away from unhealthy endpoints.

  • Health checks monitor endpoint health
  • Failover routing policy directs traffic when issues occur
  • Works with global and regional services
  • Can be used to implement active-passive and active-active architectures
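The failover idea can be sketched in a few lines. This is a simplification, not Route 53's implementation, and the hostnames are placeholders:

```python
# Sketch of active-passive failover routing: serve the primary endpoint while
# its health check passes, otherwise fail over to the secondary.

def route(primary_healthy, secondary_healthy,
          primary="primary.example.com", secondary="dr.example.com"):
    if primary_healthy:
        return primary
    if secondary_healthy:
        return secondary
    # When no record is healthy, Route 53 behaves as if all are healthy,
    # so traffic falls back to the primary.
    return primary

print(route(True, True))    # primary.example.com
print(route(False, True))   # dr.example.com
```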

Practical Learning Flow:

  1. Set up cross-region replication for an S3 bucket
  2. Create a Multi-AZ RDS instance and test failover
  3. Configure Route 53 health checks and failover routing for a simple website
  4. Use AWS Backup to create and manage backups of different resources

Billing, Pricing & Support (12% of exam)

Understanding AWS pricing principles and models is essential for cost management.

Analogy: AWS Pricing

Think of AWS pricing like a utility bill. You pay for what you use (like electricity), can get discounts for long-term commitments (like fixed-rate plans), and can sometimes get lower rates during off-peak hours (like Spot Instances).

AWS Pricing Fundamentals:

💰 Pay-as-you-go

Pay only for the resources you use, with no long-term commitments.

Analogy: Like paying for a taxi only for the distance you travel.

📉 Save when you reserve

Get significant discounts by making long-term commitments to certain services (1 or 3 years).

Analogy: Like getting a discount on a gym membership by paying annually instead of monthly.

🔄 Pay less by using more

Volume-based discounts for services like S3 and data transfer.

Analogy: Like bulk discounts at a warehouse store - the more you buy, the less you pay per unit.

Common Pricing Models for AWS Services:

EC2 Pricing Options

On-Demand Instances

Pay by the hour/second with no commitment

Best for: Short-term, unpredictable workloads

Reserved Instances (RI)

Up to 72% discount for 1 or 3-year commitment

Best for: Steady, predictable workloads

Spot Instances

Use spare AWS capacity at up to 90% discount; instances can be interrupted with a two-minute warning

Best for: Flexible start/end times, fault-tolerant workloads

Savings Plans

Commitment to a consistent amount of usage ($/hour) for 1 or 3 years

Best for: Flexibility across instance families, regions, and compute services
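A quick back-of-the-envelope comparison shows why Reserved Instances matter for steady workloads. The $0.10/hour On-Demand rate is made up for illustration; real rates depend on instance type and region:

```python
# Hypothetical comparison: a 24/7 workload for one year, On-Demand vs. a
# Reserved Instance at the maximum advertised 72% discount.

HOURS_PER_YEAR = 24 * 365
on_demand_rate = 0.10                        # $/hour, illustrative only

on_demand_cost = on_demand_rate * HOURS_PER_YEAR
reserved_cost = on_demand_cost * (1 - 0.72)  # up to 72% off for the commitment

print(f"On-Demand: ${on_demand_cost:.2f}")   # On-Demand: $876.00
print(f"Reserved:  ${reserved_cost:.2f}")    # Reserved:  $245.28
```

The same arithmetic explains Spot pricing: at up to 90% off, the multiplier would be (1 - 0.90), at the cost of possible interruption.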

S3 Pricing Components

Storage

Pay for data stored in your S3 buckets

Varies by storage class (Standard, Intelligent-Tiering, Glacier, etc.)

Requests and data retrievals

Pay for requests made to your objects (GET, PUT, COPY, etc.)

Data transfer

Pay for data transferred out of S3 to the internet or other AWS regions

Data transfer in is typically free, and transfer within the same region is usually free or low cost
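The components above combine additively, which a rough estimator makes visible. All rates here are placeholders: real S3 prices vary by region, storage class, and tier, so check the AWS pricing page:

```python
# Rough S3 bill sketch with illustrative rates (NOT current AWS prices):
# storage + requests + data transfer out.

def estimate_s3_bill(gb_stored, put_requests, get_requests, gb_transfer_out,
                     storage_rate=0.023,      # $/GB-month (illustrative)
                     put_rate=0.005 / 1000,   # $ per PUT (illustrative)
                     get_rate=0.0004 / 1000,  # $ per GET (illustrative)
                     transfer_rate=0.09):     # $/GB out to internet (illustrative)
    return (gb_stored * storage_rate
            + put_requests * put_rate
            + get_requests * get_rate
            + gb_transfer_out * transfer_rate)

cost = estimate_s3_bill(gb_stored=100, put_requests=10_000,
                        get_requests=1_000_000, gb_transfer_out=50)
print(f"${cost:.2f}")   # $7.25
```

Note that data transfer in contributes nothing here, matching the rule that ingress is typically free.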

Exam Tips:

  • Know the three fundamental pricing characteristics: pay-as-you-go, save when you reserve, pay less by using more
  • Understand the different EC2 pricing models and when to use each
  • Remember that data transfer IN to AWS is typically free, while data transfer OUT has costs
  • Know that Reserved Instances provide significant savings (up to 72%) compared to On-Demand

The AWS Free Tier enables you to gain hands-on experience with AWS services at no cost within certain limits.

Always Free

Services and features that are always free to use, with no expiration date.

  • Amazon DynamoDB: 25 GB of storage
  • AWS Lambda: 1 million free requests per month
  • Amazon SNS: 1 million publishes
  • Amazon CloudWatch: 10 custom metrics and 10 alarms
  • AWS CloudFormation: No charge for the service itself (pay only for resources created)
12 Months Free

Free for 12 months following your initial AWS sign-up date.

  • Amazon EC2: 750 hours per month of t2.micro or t3.micro instances
  • Amazon S3: 5 GB of standard storage
  • Amazon RDS: 750 hours of db.t2.micro database usage
  • Amazon CloudFront: 50 GB of data transfer out
  • Elastic Load Balancing: 750 hours
Trials

Short-term free trials that start when you begin using the service.

  • Amazon Inspector: 90-day free trial
  • Amazon Lightsail: 1 month free (first month, up to 750 hours)
  • AWS Backup: 1-month free trial
  • Amazon Pinpoint: 30-day free trial

Important Notes:

  • You can still incur charges if you exceed the free tier limits
  • Some services are free to use but may provision other AWS resources that cost money
  • Set up billing alerts to avoid unexpected charges
  • The 12-month free tier starts when you first sign up for an AWS account
  • Some free tier benefits are per account, per month; others are one-time only
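The "watch your limits" advice can be sketched as a small usage check. The allowance figures are taken from this guide; actual limits can change, so verify against the AWS Free Tier page:

```python
# Sketch: compare monthly usage against a few Always Free limits quoted above.
# Limits are hard-coded from this guide and may drift from AWS's current terms.

FREE_TIER = {
    "lambda_requests": 1_000_000,
    "dynamodb_storage_gb": 25,
    "cloudwatch_custom_metrics": 10,
}

def over_free_tier(usage):
    """Return the items where usage exceeds the free allowance, with the overage."""
    return {k: usage[k] - FREE_TIER[k]
            for k in usage if usage[k] > FREE_TIER.get(k, float("inf"))}

usage = {"lambda_requests": 1_250_000, "dynamodb_storage_gb": 20}
print(over_free_tier(usage))   # {'lambda_requests': 250000}
```

In practice you would get this signal from a billing alert rather than your own bookkeeping, which is exactly what the next section's tools are for.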

AWS provides several tools to help you monitor, analyze, and optimize your AWS costs.

Analogy: Billing and Cost Management Tools

Think of AWS billing tools like different financial management tools. The AWS Cost Explorer is like a spending analysis app showing where your money goes, AWS Budgets is like setting spending limits on your credit card, and Cost Allocation Tags are like categorizing expenses in a financial app.

AWS Cost Explorer

Visual tool to view and analyze your AWS costs and usage over time.

Analogy: Like a financial dashboard showing your spending patterns over time.

  • View cost data for the past 13 months
  • Forecast future costs based on historical data
  • Filter and group data by various dimensions (service, region, etc.)
  • Identify cost trends and opportunities for optimization
AWS Budgets

Set custom cost and usage budgets with alerts when thresholds are exceeded.

Analogy: Like setting spending limits and alerts on your credit card.

  • Create budgets for costs, usage, Reserved Instance utilization, and Savings Plans
  • Set alerts based on actual or forecasted spend
  • Receive notifications via email or SNS
  • Track progress throughout the month
Cost Allocation Tags

Label AWS resources to track and allocate costs to specific projects or departments.

Analogy: Like categorizing expenses in a budgeting app by purpose or department.

  • Two types: AWS-generated tags and user-defined tags
  • Must be activated in the Billing console to appear in cost management tools
  • Use with Cost Explorer and other tools to analyze costs by tag
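Grouping costs by tag, which Cost Explorer does for you once tags are activated, can be illustrated with a few fabricated line items and a hypothetical "project" tag:

```python
# Sketch: aggregate line items by a user-defined cost allocation tag.
# All line items and the "project" tag key are made up for illustration.

from collections import defaultdict

line_items = [
    {"service": "EC2", "cost": 120.0, "tags": {"project": "web"}},
    {"service": "S3",  "cost": 30.0,  "tags": {"project": "web"}},
    {"service": "RDS", "cost": 95.0,  "tags": {"project": "analytics"}},
    {"service": "EC2", "cost": 15.0,  "tags": {}},   # untagged resource
]

def cost_by_tag(items, tag_key):
    totals = defaultdict(float)
    for item in items:
        totals[item["tags"].get(tag_key, "(untagged)")] += item["cost"]
    return dict(totals)

print(cost_by_tag(line_items, "project"))
# {'web': 150.0, 'analytics': 95.0, '(untagged)': 15.0}
```

The "(untagged)" bucket is why consistent tagging matters: untagged spend cannot be allocated to any project or department.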

Exam Tips:

  • Know the purpose of each billing and cost management tool
  • Understand that Cost Explorer visualizes costs and forecasts future spending
  • Remember that AWS Budgets allows you to set custom budgets and alerts
  • Know that Cost Allocation Tags must be activated in the Billing console
  • Understand that Consolidated Billing allows you to take advantage of volume pricing discounts

AWS offers different support plans to meet various customer needs, from basic to enterprise-level support.

Basic Support

Free support included with all AWS accounts.

  • 24/7 access to customer service, documentation, whitepapers, and support forums
  • Access to six core Trusted Advisor checks
  • Access to Personal Health Dashboard
  • No technical support via email or phone
Developer Support

Recommended for those experimenting or testing in AWS.

  • All Basic Support features
  • Email access to technical support
  • 24-hour response time for general guidance
  • 12-hour response time for system impaired issues
  • No 24/7 phone support
Business Support

Recommended for production workloads.

  • All Developer Support features
  • 24/7 phone, email, and chat access to technical support
  • 1-hour response time for production system down
  • 4-hour response time for production system impaired
  • Access to all Trusted Advisor checks
  • Access to Infrastructure Event Management (for additional fee)
Enterprise Support

Recommended for business and mission-critical workloads.

  • All Business Support features
  • 15-minute response time for business-critical system down
  • Designated Technical Account Manager (TAM)
  • Concierge Support Team for billing and account inquiries
  • Infrastructure Event Management included
  • Access to online training (self-paced labs)

Trusted Advisor Availability by Support Plan:

Basic & Developer Support

  • Service Limits
  • Security (some checks)

Business & Enterprise Support

  • All Trusted Advisor checks:
  • Cost Optimization, Performance, Fault Tolerance, Service Limits, Security

Hands-On Labs

Hands-on experience is crucial for understanding AWS services and preparing for the exam. These step-by-step labs will help you gain practical experience with key AWS services.

This lab will guide you through launching an EC2 instance, connecting to it, and performing basic operations.

Lab Steps:

  1. Launch an EC2 Instance:
    • Navigate to the EC2 Dashboard in the AWS Management Console
    • Click "Launch Instance"
    • Select an Amazon Linux 2 AMI
    • Choose t2.micro instance type (Free Tier eligible)
    • Configure instance details (use default VPC and subnet)
    • Add storage (use default 8 GB)
    • Add tags (Name = "My First EC2")
    • Configure security group to allow SSH (port 22) from your IP
    • Create a new key pair and download it
    • Launch the instance
  2. Connect to Your Instance:
    • For Windows: Use PuTTY or Windows Subsystem for Linux (WSL)
    • For Mac/Linux: Use Terminal
    • Set permissions on your key file: chmod 400 your-key.pem
    • Connect using: ssh -i your-key.pem ec2-user@your-instance-public-ip
  3. Perform Basic Operations:
    • Update the system: sudo yum update -y
    • Install a web server: sudo yum install -y httpd
    • Start the web server: sudo systemctl start httpd
    • Enable the web server at boot: sudo systemctl enable httpd
    • Create a simple web page: echo "Hello from AWS" | sudo tee /var/www/html/index.html
  4. Update Security Group to Allow Web Traffic:
    • Go to the Security Groups section in the EC2 Dashboard
    • Select the security group attached to your instance
    • Add a rule to allow HTTP (port 80) from anywhere
  5. Access Your Web Server:
    • Copy your instance's public IP address
    • Paste it into a web browser
    • You should see "Hello from AWS"
  6. Clean Up:
    • Return to the EC2 Dashboard
    • Select your instance
    • Click Actions → Instance State → Terminate
    • Confirm termination

Learning Objectives:

  • Launch and connect to an EC2 instance
  • Configure security groups to control access
  • Install and configure software on an EC2 instance
  • Understand the EC2 instance lifecycle
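For reference, the console choices in step 1 map roughly onto the parameters you would pass to boto3's ec2 run_instances call. This sketch only builds the parameter dictionary and never calls AWS; the AMI ID and key name are placeholders:

```python
# Sketch: the lab's console choices expressed as run_instances-style parameters.
# Not executed against AWS; the AMI ID and key name below are placeholders.

launch_params = {
    "ImageId": "ami-0123456789abcdef0",   # placeholder Amazon Linux 2 AMI ID
    "InstanceType": "t2.micro",           # Free Tier eligible
    "MinCount": 1,
    "MaxCount": 1,
    "KeyName": "your-key",                # the key pair created in step 1
    "TagSpecifications": [{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "My First EC2"}],
    }],
}

print(launch_params["InstanceType"])   # t2.micro
```

With credentials configured, `boto3.client("ec2").run_instances(**launch_params)` (plus a real AMI ID and security group) would perform the same launch programmatically.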

This lab will guide you through creating an S3 bucket, uploading objects, and configuring basic settings.

Lab Steps:

  1. Create an S3 Bucket:
    • Navigate to the S3 service in the AWS Management Console
    • Click "Create bucket"
    • Enter a globally unique bucket name (e.g., "your-name-aws-lab-2023")
    • Select a region close to you
    • Leave the default settings for now
    • Click "Create bucket"
  2. Upload Objects to Your Bucket:
    • Click on your new bucket name
    • Click "Upload"
    • Click "Add files" and select a few small files from your computer
    • Click "Upload"
  3. Configure Bucket Properties:
    • Go to the "Properties" tab
    • Explore available options (versioning, server access logging, etc.)
    • Enable versioning by clicking on "Versioning" and selecting "Enable"
  4. Set up Static Website Hosting:
    • Still in the "Properties" tab, scroll down to "Static website hosting"
    • Click "Edit"
    • Select "Enable"
    • For the index document, enter "index.html"
    • Click "Save changes"
  5. Create and Upload an Index Document:
    • Create a simple HTML file on your computer named "index.html" with content like:
      <html>
      <body>
        <h1>Hello from Amazon S3</h1>
      </body>
      </html>
    • Go to the "Objects" tab
    • Click "Upload" and add your index.html file
    • Click "Upload"
  6. Configure Permissions:
    • Go to the "Permissions" tab
    • Under "Block public access", click "Edit"
    • Uncheck "Block all public access" (Note: This is for learning purposes only)
    • Click "Save changes" and confirm
    • Scroll down to "Bucket policy" and click "Edit"
    • Enter a policy that allows public read access to your bucket:
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::your-bucket-name/*"
          }
        ]
      }
    • Replace "your-bucket-name" with your actual bucket name
    • Click "Save changes"
  7. Access Your Static Website:
    • Go back to the "Properties" tab
    • Scroll down to "Static website hosting"
    • Note the "Bucket website endpoint" URL
    • Click the URL or copy and paste it into a new browser tab
    • You should see your "Hello from Amazon S3" message
  8. Clean Up:
    • Delete all objects in your bucket
    • Delete the bucket

Learning Objectives:

  • Create and configure S3 buckets
  • Upload and manage objects
  • Configure bucket properties and permissions
  • Set up static website hosting
  • Understand S3 security and public access controls

This lab will guide you through creating IAM users, groups, and policies, and implementing security best practices.

Lab Steps:

  1. Set Up MFA for the Root User:
    • Sign in to the AWS Management Console as the root user
    • Go to the IAM Dashboard
    • Under "Security Status", click "Activate MFA on your root account"
    • Follow the prompts to set up a virtual MFA device (using Google Authenticator, Authy, etc.)
  2. Create an Administrative IAM User:
    • In the IAM Dashboard, click "Users" and then "Add user"
    • Set user name to "Admin"
    • Select "AWS Management Console access"
    • Choose "Custom password" and enter a strong password
    • Uncheck "Require password reset"
    • Click "Next"
    • Select "Attach policies directly"
    • Search for and select "AdministratorAccess"
    • Click "Next", review, and then "Create user"
  3. Create an IAM Group for Administrators:
    • In the IAM Dashboard, click "User groups" and then "Create group"
    • Name the group "Administrators"
    • Search for and select "AdministratorAccess" policy
    • Click "Create group"
  4. Add the Admin User to the Administrators Group:
    • Click on the "Administrators" group
    • Go to the "Users" tab
    • Click "Add users"
    • Select the "Admin" user
    • Click "Add users"
  5. Create a Custom Policy:
    • In the IAM Dashboard, click "Policies" and then "Create policy"
    • Click the "JSON" tab
    • Enter a policy that allows read-only access to S3:
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Action": [
              "s3:Get*",
              "s3:List*"
            ],
            "Resource": "*"
          }
        ]
      }
    • Click "Next"
    • Name the policy "S3ReadOnlyAccess-Custom"
    • Add a description like "Allows read-only access to all S3 resources"
    • Click "Create policy"
  6. Create a Limited-Access User:
    • Go back to "Users" and click "Add user"
    • Set user name to "S3Reader"
    • Select "AWS Management Console access"
    • Choose "Custom password" and enter a strong password
    • Click "Next"
    • Select "Attach policies directly"
    • Search for and select your "S3ReadOnlyAccess-Custom" policy
    • Click "Next", review, and then "Create user"
  7. Test User Permissions:
    • Sign out and sign in as the "S3Reader" user
    • Navigate to S3 and verify you can view buckets and objects but cannot create or modify them
    • Try to access another service like EC2 and note the access denied message
  8. Generate Access Keys:
    • Sign out and sign in as the "Admin" user
    • Go to the IAM Dashboard and click "Users"
    • Select the "Admin" user
    • Go to the "Security credentials" tab
    • Under "Access keys", click "Create access key"
    • Select "Command Line Interface (CLI)"
    • Click through the confirmation
    • Download the .csv file with the credentials
    • Click "Done"
  9. Clean Up:
    • Delete the access keys
    • Delete the "S3Reader" user
    • Delete the "Admin" user
    • Delete the "Administrators" group
    • Delete the "S3ReadOnlyAccess-Custom" policy

Learning Objectives:

  • Implement AWS security best practices
  • Create and manage IAM users, groups, and policies
  • Understand the principle of least privilege
  • Work with programmatic and console access
  • Test and verify access controls
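The wildcard matching in the custom policy from step 5 can be illustrated with a simplified check. Real IAM evaluation also considers Deny statements, resource ARNs, and conditions, so treat this purely as intuition for why s3:Get* permits reads but not writes:

```python
# Simplified sketch of action matching against the S3ReadOnlyAccess-Custom
# policy's wildcards. Not a real IAM evaluator.

import fnmatch

allowed_actions = ["s3:Get*", "s3:List*"]   # from the custom policy in step 5

def is_allowed(action):
    return any(fnmatch.fnmatch(action, pattern) for pattern in allowed_actions)

print(is_allowed("s3:GetObject"))    # True  - reads are allowed
print(is_allowed("s3:ListBucket"))   # True
print(is_allowed("s3:PutObject"))    # False - writes are implicitly denied
```

This matches what the S3Reader user observes in step 7: listing and viewing objects succeeds, while creating or modifying them is denied.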

Video Resources

Video tutorials can be a great way to learn AWS concepts. Here's a curated list of free YouTube videos covering key topics for the AWS Certified Cloud Practitioner exam.

These comprehensive videos cover all the major topics for the AWS Certified Cloud Practitioner exam.

AWS Certified Cloud Practitioner Training 2023 - Full Course

A complete course covering all exam topics with detailed explanations and examples. Includes practice questions after each section.

Cloud Concepts Security AWS Services Billing

AWS Certified Cloud Practitioner Certification Course (CLF-C02)

An in-depth course specifically for the new CLF-C02 exam version. Includes hands-on demos and real-world examples.

All Domains Hands-on CLF-C02 Updates

These videos focus on specific topics or services that are important for the exam.

AWS IAM Explained | AWS Identity and Access Management | AWS Tutorial

A concise explanation of IAM concepts, including users, groups, roles, and policies.

AWS S3 Tutorial

A detailed overview of Amazon S3, including buckets, objects, storage classes, and security.

AWS EC2 Tutorial For Beginners

A step-by-step guide to EC2, including launching instances, security groups, and instance types.

AWS VPC Tutorial

Explains AWS Virtual Private Cloud concepts, including subnets, route tables, and gateways.

AWS Shared Responsibility Model Explained

Clarifies who is responsible for what in the AWS cloud environment.

AWS Well-Architected Framework Overview

Covers the six pillars of the AWS Well-Architected Framework and how to apply them.

These videos focus specifically on preparing for the AWS Certified Cloud Practitioner exam.

AWS Certified Cloud Practitioner Practice Exam Questions (CLF-C02)

65 practice questions with detailed explanations, in the same format as the actual exam.

Practice Questions Explanations CLF-C02

How I Passed the AWS Cloud Practitioner Exam

Tips and strategies from someone who recently passed the exam, including study resources and time management.

Study Tips Exam Day Resources

AWS Certified Cloud Practitioner Exam Day Experience

A walkthrough of what to expect on exam day, including the check-in process, exam interface, and question types.

Exam Experience Testing Center Online Proctoring

Practice Resources

Practice exams and questions are essential for exam success. Here are resources to help you test your knowledge and identify areas for improvement.

Resources directly from AWS to help you prepare for the exam.

AWS Certified Cloud Practitioner Official Practice Question Set (CLF-C02)

20 questions similar to those on the actual exam. Provides detailed explanations for each answer.

Official Resource Paid

AWS Certified Cloud Practitioner Sample Questions

10 free sample questions provided by AWS, with answers and explanations.

Official Resource Free

AWS Digital Courses and Labs

Official AWS training platform with free digital courses and labs for the Cloud Practitioner certification.

Official Resource Free Options Paid Options

Practice exams and questions from trusted third-party providers.

ExamTopics CLF-C02 Practice Questions

Over 300 practice questions with community discussions. Basic access is free, premium removes ads and limitations.

300+ Questions Free Option Community Discussions

Whizlabs AWS Certified Cloud Practitioner Practice Tests

7 full-length practice tests with detailed explanations. One free test available.

Free Sample Paid Full Access Detailed Explanations

Tutorials Dojo AWS Certified Cloud Practitioner Practice Exams

6 practice exams with detailed explanations. Known for high-quality, exam-like questions.

Paid Exam-like Format Study Mode & Review Mode

A proven approach to using practice resources effectively and maximizing your chances of passing the exam.

Recommended Study Plan:

  1. Learn the Content (Week 1-2):
    • Watch video courses or read study guides
    • Complete hands-on labs for key services
    • Make flashcards for key concepts and services
  2. Initial Self-Assessment (End of Week 2):
    • Take one practice exam without studying for it
    • Review all answers, even the ones you got right
    • Identify knowledge gaps and weak areas
  3. Targeted Study (Week 3):
    • Focus on weak areas identified in the assessment
    • Use multiple resources for difficult concepts
    • Create summaries in your own words
  4. Practice Phase (Week 4):
    • Take 2-3 full practice exams, simulating exam conditions
    • Review incorrect answers thoroughly
    • Take note of any patterns in mistakes
    • Continue to strengthen weak areas
  5. Final Review (Last Few Days):
    • Review all key concepts and services
    • Focus on frequently missed topics
    • Take one final practice exam
    • Get a good night's sleep before the exam

Practice Exam Tips:

  • Always review explanations for all questions, even those you got right
  • Keep a record of topics or questions you find challenging
  • Aim for 80%+ scores consistently on practice exams before taking the real one
  • Use practice exams as a learning tool, not just a testing tool
  • Simulate exam conditions: timed, no distractions, no notes
  • Take practice exams on different days to ensure consistent performance