Cloud Fundamentals
You'll learn what the cloud actually is, how it's structured, who is responsible for what, and take a tour of the services you'll be working with for the next 5 years. No fluff — just the real concepts that show up on every cloud exam and every job interview.
The Problem Cloud Solves
Before cloud computing existed, every company that wanted to run software had to buy physical servers, store them in a dedicated room, hire people to maintain them, pay for power, pay for cooling, and hope nothing broke. This model is called on-premise infrastructure (or "on-prem").
The problem was brutal: a startup couldn't afford $500,000 worth of servers before they had a single customer. A retail company that needed extra computing power for one busy day — like Black Friday — had to buy servers they'd use once a year and leave idle the other 364 days. A company that wanted to expand globally had to fly servers to new countries and wire them up themselves.
Cloud computing solved this entirely. Instead of buying infrastructure, you rent it from a provider on demand — pay only for what you use, scale up in minutes, scale back down when you don't need it.
Cloud computing is the delivery of computing services — servers, storage, databases, networking, software, analytics, and intelligence — over the internet, on a pay-as-you-go basis. You access these resources from a provider's data centers instead of owning them yourself.
The 5 Core Characteristics (NIST Definition)
The US National Institute of Standards and Technology (NIST) defines cloud computing by 5 essential characteristics. These show up directly on the AWS Cloud Practitioner exam, so learn them as a set:
1. On-demand self-service — You provision resources yourself without human involvement from the provider. Log in, click a button, your server is live in 60 seconds. No emails, no waiting for someone to approve your ticket.
2. Broad network access — Resources are accessible over the internet from any device — laptop, phone, office, coffee shop. Location stops being a constraint.
3. Resource pooling — The provider serves multiple customers from the same shared physical infrastructure using multi-tenancy. You don't get your own dedicated server room — you share infrastructure with thousands of others, with logical isolation between you. This is what makes cloud cost-effective.
4. Rapid elasticity — Resources can be scaled up or down almost instantly, often automatically. Need 1,000 servers for a product launch? Done in 5 minutes. Launch is over? Scale back to 2 servers in minutes. Pay accordingly.
5. Measured service — Usage is metered and you pay only for what you consume, like electricity. AWS tracks every CPU hour, every gigabyte stored, every API call, and bills precisely for it.
Think of cloud vs on-premise like renting billboard space vs buying land and building your own billboard. Renting = you pay monthly, scale the size up or down, and the billboard company handles maintenance. Buying = massive upfront cost, you own it forever, you fix it when the lights break. For 99% of companies, renting is smarter — especially when you're starting out and don't know exactly how much space you'll need.
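The "measured service" economics can be made concrete with a quick back-of-envelope calculation for the Black Friday retailer described above. All numbers here — server price, hourly rate, usage pattern — are illustrative assumptions, not real AWS prices:

```python
# Back-of-envelope: on-prem vs pay-as-you-go for a spiky workload.
# Every price below is an illustrative assumption, not a real AWS rate.

HOURS_PER_YEAR = 365 * 24

def on_prem_cost(peak_servers: int, price_per_server: float) -> float:
    """On-prem: you must buy enough hardware for PEAK demand, up front."""
    return peak_servers * price_per_server

def cloud_cost(baseline_servers: int, peak_servers: int,
               peak_hours: int, hourly_rate: float) -> float:
    """Cloud: pay the metered hourly rate only for what actually runs."""
    baseline_hours = HOURS_PER_YEAR - peak_hours
    return (baseline_servers * baseline_hours +
            peak_servers * peak_hours) * hourly_rate

# Retail scenario: 2 servers all year, 50 servers for one busy day.
upfront = on_prem_cost(peak_servers=50, price_per_server=10_000)
metered = cloud_cost(baseline_servers=2, peak_servers=50,
                     peak_hours=24, hourly_rate=0.10)

print(f"on-prem (buy for peak): ${upfront:,.0f}")
print(f"cloud (pay per use):    ${metered:,.0f}")
```

The exact figures don't matter — the shape does: on-prem cost is driven by peak demand, cloud cost by actual usage.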
The 3 Cloud Providers That Matter
There are three major cloud providers. You'll encounter all three in your career, but AWS is the market leader and what you're studying for your first cert.
| Provider | Market Share | Key Strength | You'll See It |
|---|---|---|---|
| AWS (Amazon Web Services) | ~32% | Widest service catalogue, most mature | Your exam + most employer environments |
| Azure (Microsoft) | ~23% | Deep Microsoft/enterprise integration | Companies using Windows, Office 365 |
| GCP (Google Cloud) | ~12% | Data analytics, ML workloads | Data-heavy tech companies |
Every role you're targeting — Growth Engineer, Cloud Security Engineer, Cloud Security Architect — operates entirely inside cloud environments. Understanding WHY cloud exists helps you explain security decisions in business terms. That's what separates architects from technicians: the ability to connect a technical control to a business outcome.
1. Go to `aws.amazon.com/free` and click "Create a Free Account." You'll need: an email address, a phone number, and a credit card. You will not be charged if you stay within Free Tier limits. AWS uses your card only to verify identity.
2. Complete account setup — choose "Personal" account type, fill in your details, verify your phone, and choose the Free support plan (the basic one, no cost).
3. Log into the AWS Management Console at `console.aws.amazon.com`. Take 5 minutes to just look around. Don't click anything that says "Create" or "Launch" yet.
4. In the top navigation, click Services → you'll see categories like Compute, Storage, Security, Database. Browse each category — just read the service names. This is the full catalogue of what AWS offers.
5. Use the search bar to find these three services: `EC2`, `S3`, and `IAM`. Click on each one. Don't do anything — just read the description on the dashboard. Write 1 sentence in your own words about what each service seems to do.
6. In the top-right corner, you'll see your region (probably US East (N. Virginia)). Click it and change it to Canada (Central) — `ca-central-1`. This is where you'll work for compliance reasons (you work at a Canadian investment firm). Notice how the console looks the same but you're now in a different physical location.
7. Write in your notes: What is AWS? What problem does it solve? Explain it as if you're telling a non-technical colleague at Equiton. If you can explain it simply, you understand it.
- A. Resource pooling — shared infrastructure reduces costs
- B. Rapid elasticity — scale up for 2 weeks, scale back down, pay only for those 2 weeks
- C. Broad network access — accessible from anywhere
- D. On-demand self-service — no human approval needed

- A. On-premise is always faster because you own the hardware
- B. Cloud uses the internet, on-premise does not
- C. On-premise requires upfront capital purchase of hardware; cloud uses a pay-as-you-go rental model with no ownership
- D. There is no meaningful difference — they deliver the same outcome

- A. On-demand self-service
- B. Rapid elasticity
- C. Resource pooling
- D. Measured service
Why the 3 Models Matter — Especially for Security
When someone says "we're on cloud," the next question should always be: which layer? The three service models define how much the provider manages versus how much you are responsible for — including security. This directly affects what you need to protect, what you need to configure, and who you call when something breaks.
As a Cloud Security Engineer or Architect, you'll spend your career having conversations that start with: "Are we on IaaS or PaaS for this workload, and therefore who owns the security control here?" Misunderstanding this is how companies get breached.
The cloud provider gives you raw, virtualized infrastructure: virtual machines, storage, and networking. Everything above the hypervisor — operating system, runtime, middleware, application, data — is your responsibility to install, configure, patch, and secure.
The provider manages infrastructure AND the operating system and runtime environment. You bring your application code and data. You don't worry about server patching, OS updates, or runtime installation. You just deploy code.
The provider manages everything — infrastructure, platform, AND the application itself. You access a fully built software product through a browser or app. No servers, no code, no deployment. Just use the software.
On-premise = You make pizza at home. You buy ingredients, own the oven, do everything from scratch. Total control, total responsibility.
IaaS = You rent a commercial kitchen. You have professional equipment, but you supply and cook your own ingredients. Kitchen = AWS's responsibility; what you cook = yours.
PaaS = You order a meal kit. Ingredients are prepped and measured — you just assemble and cook. The hard infrastructure work is done; you bring the skill.
SaaS = You order delivery. You just eat. Someone else handles ingredients, cooking, packaging, and delivery. You consume the end result.
The Responsibility Stack — Visual Breakdown
This table is critical. It shows who manages what at each layer. As a security professional, "you manage it" means "you must secure it." If the provider manages it, they are responsible for security at that layer — you cannot configure or patch it yourself.
| Layer | On-Premise | IaaS | PaaS | SaaS |
|---|---|---|---|---|
| Physical hardware | You | AWS | AWS | AWS |
| Network infrastructure | You | AWS | AWS | AWS |
| Hypervisor / virtualization | You | AWS | AWS | AWS |
| Operating system | You | YOU | AWS | AWS |
| Runtime / middleware | You | YOU | AWS | AWS |
| Application code | You | YOU | YOU | AWS |
| Data | You | YOU | YOU | YOU (mostly) |
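The responsibility stack above can also be expressed as a small lookup structure — useful for self-quizzing. This is a study sketch that mirrors the table, not an official AWS artifact:

```python
# Who manages each layer, per service model (mirrors the table above).
# "provider" = AWS (or the SaaS vendor); "you" = the customer.
RESPONSIBILITY = {
    "physical hardware":  {"on-prem": "you", "iaas": "provider", "paas": "provider", "saas": "provider"},
    "network":            {"on-prem": "you", "iaas": "provider", "paas": "provider", "saas": "provider"},
    "hypervisor":         {"on-prem": "you", "iaas": "provider", "paas": "provider", "saas": "provider"},
    "operating system":   {"on-prem": "you", "iaas": "you",      "paas": "provider", "saas": "provider"},
    "runtime/middleware": {"on-prem": "you", "iaas": "you",      "paas": "provider", "saas": "provider"},
    "application code":   {"on-prem": "you", "iaas": "you",      "paas": "you",      "saas": "provider"},
    "data":               {"on-prem": "you", "iaas": "you",      "paas": "you",      "saas": "you"},
}

def who_secures(layer: str, model: str) -> str:
    """Whoever manages a layer must also secure it."""
    return RESPONSIBILITY[layer][model]

print(who_secures("operating system", "iaas"))  # you patch the OS on EC2
print(who_secures("data", "saas"))              # data governance is always yours
```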
Many companies that use SaaS tools like Salesforce or Google Workspace assume the vendor handles all their data security. Not true. You are always responsible for: who has access to your data (user permissions), how sensitive data is classified, what happens when an employee leaves (offboarding), and compliance with regulations like PIPEDA. The vendor handles the software — you handle the governance of your own data inside it.
Meta Ads Manager, Google Analytics 4, HubSpot, any CRM you use — these are all SaaS. You already operate in the cloud daily. When you move into cloud security, you'll start working at the IaaS layer — the raw virtual machines and networks that sit underneath those polished interfaces. Understanding the stack helps you explain to business stakeholders exactly what the security perimeter looks like and what you're responsible for protecting.
1. Open a fresh document or spreadsheet. Create 4 columns: Tool Name, Service Model (IaaS/PaaS/SaaS), Who Manages Security, Notes.
2. List every software tool you use at Equiton: Meta Ads, Google Ads, GA4, any CRM, email tools, reporting dashboards, internal apps. Aim for 10+ tools.
3. Classify each tool as IaaS, PaaS, or SaaS based on today's definitions. Hint: most tools you use daily are SaaS.
4. For each SaaS tool, write who is responsible for a data breach — the vendor or your company? (Trick: it depends on the layer — vendor handles app security, you handle access control and data governance.)
5. Google "[tool name] data processing agreement" for two of your tools. Read the first page. Notice what the vendor claims responsibility for and what they explicitly exclude. This is real-world security due diligence.
6. Write in your notes: If Equiton moved their investor reporting from a spreadsheet to a cloud database on AWS EC2, what security responsibilities would shift to your team that didn't exist before?
- A. AWS — they own and manage the EC2 infrastructure
- B. Equiton — EC2 is IaaS, the customer owns OS patching and security
- C. Split 50/50 — both AWS and Equiton share responsibility equally
- D. The third-party software vendor whose app was running on EC2

- A. IaaS — they get full control over infrastructure
- B. PaaS — the platform handles infrastructure and runtime; they just deploy code
- C. SaaS — they use someone else's fully built application
- D. On-premise — they need full control for security reasons

- A. Salesforce — they provide the software and are responsible for all data in it
- B. Your company — data classification, access controls, and user behavior are customer responsibilities even in SaaS
- C. The sales rep personally — individual liability
- D. No one — SaaS vendors accept all liability for data incidents
The Cloud Is Physical — Location Matters
The cloud is not a magical invisible thing floating in cyberspace. It runs on physical servers in physical buildings around the world. AWS owns and operates hundreds of data centers across 33+ geographic locations. Understanding how these are organized is critical — especially for security, compliance, and reliability.
At Equiton, this is not abstract. Canadian privacy law (PIPEDA) and financial services regulations (OSFI B-13) can require that certain customer data physically stays inside Canada. Knowing which AWS region is in Canada, and what redundancy it provides, is a real job requirement.
A geographic area containing multiple, separate data centers. Regions are completely independent of each other — a catastrophic failure in one region (power outage, natural disaster, networking failure) does not affect others. You choose which region your resources live in when you create them.
A single data center or cluster of data centers within a region, physically separated from other AZs by meaningful distance. Each AZ has its own power supply, cooling, and network connectivity. They connect to each other via ultra-low-latency private fiber.
Regions = Cities. Toronto, Montreal, and Vancouver are separate cities. A flood in Montreal doesn't affect Toronto.
Availability Zones = Neighbourhoods within a city. If one neighbourhood loses power, the other neighbourhoods in the same city still function. You wouldn't put your only office in a single neighbourhood if you needed 99.99% uptime.
Why Multiple AZs = High Availability
If you deploy your application in only one Availability Zone and that data center has a fire, a power failure, or a networking outage — your application goes completely down. If you deploy across 2–3 AZs, one can fail completely while the others continue serving users without interruption.
This is called high availability (HA) architecture, and it's one of the most fundamental cloud design principles. AWS recommends deploying production workloads across at least 2 AZs.
| Design Choice | What Happens If a Data Centre Fails | Appropriate For |
|---|---|---|
| Single AZ deployment | Application is completely unavailable until restored | Dev/test environments, cost-sensitive non-critical apps |
| Multi-AZ deployment | Traffic automatically routes to surviving AZs — users may not notice | Any production workload, anything handling real customer data |
| Multi-Region deployment | Even a full regional outage (rare) doesn't take you down | Global applications, disaster recovery for critical systems |
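The uptime benefit of multiple AZs can be sketched with basic probability, assuming AZ failures are independent. This is a simplification — real availability engineering involves much more — and the 0.1% outage figure is a made-up illustration:

```python
def downtime_probability(az_outage_prob: float, num_azs: int) -> float:
    """Probability that ALL AZs are down at once, assuming independent failures."""
    return az_outage_prob ** num_azs

# Illustrative assumption: any single AZ is unavailable 0.1% of the time.
p = 0.001
for n in (1, 2, 3):
    print(f"{n} AZ(s): app fully down {downtime_probability(p, n):.7%} of the time")
```

Each additional independent AZ multiplies the full-outage probability by another factor of 0.001 — which is why "at least 2 AZs" is the production baseline.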
AWS also operates hundreds of Edge Locations — smaller distribution points in cities worldwide that cache (store copies of) content closer to end users. These power CloudFront, AWS's Content Delivery Network (CDN). Edge Locations are not full regions — they're delivery outposts.
PIPEDA (Canada's federal privacy law) and OSFI B-13 guidelines affect how financial services companies handle data. If Equiton stores investor personal information on AWS, it should live in ca-central-1 (Montreal) or ca-west-1 (Calgary). Deploying Canadian investor data to us-east-1 (Virginia) without explicit consent and legal review is a compliance risk. As a Cloud Security Architect, data residency decisions will be part of your job — and your ECE + marketing background at an investment firm gives you real context most engineers lack.
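In practice, a data-residency rule like this eventually becomes an automated check rather than a mental note. Here is a minimal sketch — the function name and the allowed-region list are illustrative, not part of any AWS tool:

```python
# Hypothetical residency guard: only Canadian regions may hold investor PII.
CANADIAN_REGIONS = {"ca-central-1", "ca-west-1"}

def residency_ok(resource_region: str, holds_canadian_pii: bool) -> bool:
    """Return True if placing the resource in this region complies with the rule."""
    if not holds_canadian_pii:
        return True  # non-regulated data may live in any region
    return resource_region in CANADIAN_REGIONS

print(residency_ok("ca-central-1", holds_canadian_pii=True))   # compliant
print(residency_ok("us-east-1", holds_canadian_pii=True))      # residency violation
```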
1. Go to `aws.amazon.com/about-aws/global-infrastructure/regions_az/` — the official AWS infrastructure map. Spend 5 minutes exploring it. Find every region and count the AZs.
2. Find the Canadian regions specifically. Note their region codes and cities. How many AZs does `ca-central-1` have?
3. Log into your AWS console. In the top-right corner, click the region name (probably showing US East (N. Virginia)) and switch to Canada (Central) — `ca-central-1`.
4. Navigate to EC2 in the console. Notice the dashboard says your region is now Canada. Any EC2 instance you create here will physically live in Montreal — data won't leave Canada.
5. Now switch to `us-east-1` and go to EC2. Notice you're looking at a completely separate inventory. Resources in one region are invisible from another region's console. This isolation is by design.
6. Switch back to `ca-central-1`. This is your home region for all future labs. Write a note to yourself: "Always confirm I'm in ca-central-1 before creating any resource. Wrong region = data residency risk."
- A. us-east-1 (N. Virginia) — lowest latency from Toronto, fastest performance
- B. ca-central-1 (Montreal) — data physically stays within Canada
- C. eu-west-1 (Ireland) — GDPR-compliant, strongest privacy protection globally
- D. Any region — cloud data has no physical location, only logical location

- A. The other AZs in ca-central-1 automatically take over within minutes
- B. The portal is unavailable for 3 hours — single AZ = single point of failure
- C. AWS provides emergency backup power within 15 minutes
- D. Data is replicated to another region automatically

- A. Edge Locations > Regions > Availability Zones (Edge Locations are the largest)
- B. Availability Zones contain Regions, which contain Edge Locations
- C. Regions contain multiple AZs; Edge Locations are separate content delivery points, not part of the Region/AZ hierarchy
- D. Regions and Availability Zones are the same thing — just different names
The Foundation of All Cloud Security Conversations
The Shared Responsibility Model is the single most important concept in cloud security. It clearly defines what AWS is responsible for securing versus what you, the customer, are responsible for securing.
This is not an abstract concept. Misunderstanding this model is the #1 reason companies suffer preventable cloud security breaches. They assume AWS handles security. AWS handles security OF the cloud. You handle security IN the cloud. The distinction is precise and critical.
You will be asked to explain this model in every cloud security job interview. You should be able to draw it from memory and explain both sides with concrete examples within 60 seconds.
AWS is responsible for protecting the infrastructure that runs all AWS services — the hardware, software, networking, and facilities that make up the AWS cloud itself.
You (the customer) are responsible for everything you put into the cloud and how you configure it. This varies by service model — more responsibility with IaaS, less with SaaS.
The Responsibility Varies By Service — Critical Table
The more managed the service, the less you own. But you always own your data, credentials, and access controls — regardless of service model.
| Security Control | EC2 (IaaS) | RDS (Managed DB) | S3 (Object Store) | Lambda (Serverless) |
|---|---|---|---|---|
| Physical data center security | AWS | AWS | AWS | AWS |
| Hypervisor patching | AWS | AWS | AWS | AWS |
| OS patching | Customer | AWS (managed) | N/A | AWS |
| DB engine patching | Customer | AWS (managed) | N/A | N/A |
| Application code security | Customer | Customer | N/A | Customer |
| IAM and access control | Customer | Customer | Customer | Customer |
| Data encryption | Customer | Customer | Customer | Customer |
| Network firewall config | Customer | Customer | Customer | Shared |
The Golden Rule: You Always Own 3 Things
No matter what AWS service you use — EC2, S3, Lambda, RDS, or any other — you are always responsible for:
1. Your data — how it's classified, encrypted, retained, and deleted. AWS never manages the sensitivity of your data — that judgment always belongs to you.
2. Access control (IAM) — who can access what, under what conditions. If you give an employee admin access to every AWS resource, that's your configuration, not AWS's problem.
3. Client-side encryption and authentication — if you transmit data without encryption, AWS won't stop you. Encryption is always the customer's choice to implement.
AWS is your office building landlord. They ensure the building has working locks on the front door, fire suppression systems, physical security guards, and functional elevators. They are responsible for the physical structure being safe and operational.
But if you leave your office door unlocked, share your access badge with strangers, store confidential investor files on a desk in the lobby, or give everyone admin access to your internal computer system — that's on you. The landlord can't control what you do inside your rented space.
Gartner estimates that through 2025, 99% of cloud security failures will be the customer's fault. The most common causes: publicly accessible S3 buckets with no access controls, over-privileged IAM roles, unpatched EC2 instances, exposed security group rules allowing 0.0.0.0/0. AWS never told anyone to do these things — these are all customer configuration choices. Your future job as Cloud Security Architect is to prevent exactly these failures.
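Each failure in that list is a configuration you can inspect programmatically. As a taste of what that looks like, here is a pure-Python sketch that flags the classic 0.0.0.0/0 security-group rule. The rule dicts loosely imitate what the EC2 API returns, but this runs entirely offline on sample data — no real AWS call is made:

```python
# Flag security-group ingress rules open to the entire internet (0.0.0.0/0).
# Sample data only -- a real scanner would pull rules from the EC2 API.

def open_to_world(rules: list[dict]) -> list[dict]:
    """Return every ingress rule whose source CIDR is 0.0.0.0/0."""
    return [r for r in rules if r.get("cidr") == "0.0.0.0/0"]

sample_rules = [
    {"port": 443,  "cidr": "0.0.0.0/0"},    # public HTTPS: usually intended
    {"port": 22,   "cidr": "0.0.0.0/0"},    # SSH open to the world: a finding
    {"port": 5432, "cidr": "10.0.0.0/16"},  # DB restricted to the VPC: fine
]

for rule in open_to_world(sample_rules):
    print(f"WARNING: port {rule['port']} is reachable from anywhere")
```

Whether a 0.0.0.0/0 rule is a finding depends on the port and intent — which is exactly the judgment call that sits on the customer side of the model.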
1. Close this browser tab for 10 minutes. Seriously.
2. On paper or a whiteboard, draw two boxes side by side. Label one "AWS Responsibility" and the other "Customer Responsibility."
3. From memory, fill in as many items as you can in each box. Don't look at notes. This is retrieval practice — the most effective learning technique.
4. Come back and check your drawing against the tables in today's lesson. What did you miss? Circle those items — they need more attention.
5. Now open your AWS console. Go to IAM (Identity and Access Management). Look at the dashboard. What does it show you? Which of the items you listed under "Customer Responsibility" relate directly to what you see here?
6. Go to S3 in the console. Create an S3 bucket (it's free and takes 30 seconds). During creation, notice the Block Public Access settings — AWS shows these prominently because misconfigured public S3 buckets cause major breaches. Take a screenshot of this screen. This is the shared responsibility model in action.
7. Delete the S3 bucket after. (Select the bucket → Delete. Confirm with the bucket name.)
- A. AWS — they should have prevented public access by default
- B. Equiton — bucket access configuration is the customer's responsibility
- C. The third party who downloaded the data
- D. AWS and Equiton share equal responsibility for this outcome

- A. Patching the operating system on EC2 instances
- B. Encrypting customer data stored in S3
- C. Physical security of the data centers and underlying hardware infrastructure
- D. Managing IAM users and access policies for customer accounts

- A. Their data classification and protection, IAM/access control, and client-side encryption decisions
- B. Physical hardware security, network routing, and hypervisor patching
- C. OS patching, database engine updates, and managed service availability
- D. DDoS protection, hardware replacement, and global network infrastructure
Where Does Your Cloud Actually Live?
Beyond what cloud services you use, organizations must decide where and how those services are deployed. This is the deployment model — and it's a critical architectural decision with major security, compliance, and cost implications.
At a Canadian investment firm like Equiton, the deployment model directly affects regulatory compliance. Some data must stay in a private, controlled environment. Other workloads are fine on public cloud. Understanding this distinction is something you'll explain to executives throughout your career.
Infrastructure owned and operated by a third-party provider (AWS, Azure, GCP), shared with other customers but logically isolated. You access everything over the internet or private connections. Most cloud workloads run here.
Cloud infrastructure deployed exclusively for one organization — either on-premise in your own data center, or hosted by a provider but fully dedicated to you. More control and isolation, but significantly higher cost and operational overhead.
A combination of public and private cloud, connected and orchestrated together. Sensitive workloads stay on private infrastructure; less sensitive workloads scale on public cloud. The two environments communicate securely.
Using multiple public cloud providers simultaneously — not one provider but many. Reduces dependency on a single vendor, allows choosing the best service from each provider, but significantly increases management complexity.
Which Model Should You Recommend? — Decision Framework
As a future Cloud Security Architect, clients will ask you this exact question. Here's how to think through it:
| Factor | Points Toward Public | Points Toward Private/Hybrid |
|---|---|---|
| Data sensitivity | Non-sensitive, anonymized, public data | PII, financial records, health data, regulated data |
| Compliance requirement | Flexible standards, cloud-friendly | Strict data residency, government, financial services |
| Budget | Variable, pay-per-use preferred | Fixed budget, can justify capital expenditure |
| Team capability | Small team, prefer managed services | Large infra team, need deep control |
| Scale requirements | Highly variable, unpredictable demand | Steady, predictable load at scale |
Public cloud = Apartment in a large building. Shared infrastructure, landlord handles maintenance, you have your own locked unit. Economical, scalable, some limitations on customization.
Private cloud = Owning your own house. Full control over everything, expensive to maintain, but you set all the rules.
Hybrid = Your house with a storage unit rental. Sensitive things stay at your house (home safe, important documents). Less critical items go in the rented storage unit. You connect them logically — you know where everything is.
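The decision table above can be compressed into a rough screening function. The thresholds and logic here are invented for illustration — a real recommendation weighs far more factors (team capability, budget model, latency, vendor lock-in):

```python
def recommend_model(has_regulated_data: bool, strict_residency: bool,
                    demand_is_spiky: bool) -> str:
    """Toy screening logic loosely based on the decision table above."""
    if has_regulated_data and strict_residency:
        # Keep regulated workloads on controlled infrastructure;
        # burst the variable, less-sensitive load to public cloud.
        return "hybrid" if demand_is_spiky else "private"
    return "public"

print(recommend_model(True, True, True))     # regulated + spiky -> hybrid
print(recommend_model(False, False, True))   # unregulated -> public
```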
1. Imagine you've been asked by Equiton's CTO to recommend a cloud deployment strategy. Open a document and write a 1-page recommendation.
2. Workloads to categorize: Investor PII and financial records · Marketing analytics dashboards · Meta Ads performance data · Internal employee email · Investor portal (login + portfolio view) · Reporting and compliance documents.
3. For each workload, recommend: Public cloud, Private cloud, or Hybrid? Explain your reasoning in 1–2 sentences using compliance and sensitivity criteria from today's lesson.
4. Write one paragraph explaining what "hybrid cloud" would look like for Equiton specifically — what stays on-prem, what goes to AWS, and how do they connect securely? (VPN? Direct Connect? HTTPS only?)
5. Bonus: Search "AWS Direct Connect vs VPN" and read the AWS documentation page. Direct Connect is how enterprises connect their on-prem networks to AWS securely. You'll use this concept in your architecture designs.
- A. Public cloud only — AWS ca-central-1 satisfies Canadian data residency for everything
- B. Private cloud only — all workloads must remain on-premise for regulatory compliance
- C. Hybrid cloud — regulated transaction data stays on private/on-prem infrastructure; marketing analytics runs on AWS public cloud
- D. Multi-cloud — use Azure for transactions and AWS for analytics to distribute risk

- A. More security control — you manage all the hardware yourself
- B. Better compliance — public cloud is always more regulatory-friendly
- C. Elasticity and economics — scale up/down on demand, pay only for what you use, no upfront hardware cost
- D. Dedicated hardware — your workloads run on servers no one else uses

- A. Hybrid cloud — on-premise combined with public cloud
- B. Private cloud — both providers run dedicated infrastructure
- C. Multi-cloud — using multiple public cloud providers simultaneously
- D. Community cloud — providers sharing infrastructure for a specific industry
AWS Has 200+ Services — Here Are the 15 That Actually Matter
AWS offers over 200 distinct services. You do not need to know all of them. For your Cloud Practitioner exam and your early career, you need to understand what each major category does, and know 1–3 services within each category by name and function.
This tour gives you the vocabulary. Over the next 25 weeks you'll learn each of these in depth. Today's goal: read each one, understand what problem it solves, and be able to describe it in one sentence.
EC2 (Elastic Compute Cloud): Virtual servers. You choose the OS, CPU, memory. The IaaS workhorse — most applications run on EC2.
Lambda: Serverless compute. You upload a function, AWS runs it when triggered, you pay only while it runs. No servers to manage.
ECS/EKS: Container orchestration — running Docker containers at scale.
S3 (Simple Storage Service): Object storage. Store files, images, backups, logs, data lake content. Infinitely scalable. Used by almost every AWS workload.
EBS (Elastic Block Store): Virtual hard drives attached to EC2 instances. Like a disk drive in the cloud.
EFS (Elastic File System): Shared file storage accessible by multiple EC2 instances simultaneously.
RDS (Relational Database Service): Managed SQL databases — PostgreSQL, MySQL, Oracle, SQL Server. AWS handles backups, patching, replication.
DynamoDB: AWS's NoSQL database — key-value and document storage at massive scale.
Redshift: Data warehouse for analytics — querying massive datasets with SQL.
VPC (Virtual Private Cloud): Your private network inside AWS. Define IP ranges, subnets, routing, and security. Everything runs inside a VPC.
Route 53: AWS's DNS service — translates domain names to IP addresses.
CloudFront: CDN — serves content from edge locations close to users for speed.
IAM (Identity and Access Management): Controls who can do what in AWS. Users, groups, roles, policies. The single most important security service — every security decision flows through IAM.
GuardDuty: Intelligent threat detection — watches your AWS environment and flags suspicious activity.
KMS (Key Management Service): Create, store, and manage encryption keys for all your AWS resources.
CloudWatch: Collect metrics, create dashboards, set alarms, store logs from all AWS services. Your operational visibility into everything.
CloudTrail: Records every API call made in your AWS account — who did what, when, from where. Essential for security investigations.
Config: Tracks configuration changes to AWS resources over time — did someone change that security group?
In your Cloud Security Engineer and Architect roles, five services will appear in almost every conversation: IAM (who has access), CloudTrail (audit log of all actions), GuardDuty (threat detection), KMS (encryption keys), and VPC (network security). Learn these five deeply before anything else. They are the foundation of cloud security posture.
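CloudTrail records are JSON documents, and investigations usually start by pulling out who / what / when / from-where. The event below is a simplified, fabricated record shaped like real CloudTrail output (actual events carry many more fields):

```python
import json

# Simplified, made-up CloudTrail-style event for illustration only.
raw_event = """{
  "eventTime": "2024-11-05T14:03:22Z",
  "eventSource": "s3.amazonaws.com",
  "eventName": "PutBucketAcl",
  "sourceIPAddress": "203.0.113.7",
  "userIdentity": {"type": "IAMUser", "userName": "jsmith"}
}"""

def summarize(event_json: str) -> str:
    """One-line who / what / when / from-where summary of a CloudTrail event."""
    e = json.loads(event_json)
    who = e["userIdentity"].get("userName", "unknown")
    return (f'{who} called {e["eventName"]} on {e["eventSource"]} '
            f'at {e["eventTime"]} from {e["sourceIPAddress"]}')

print(summarize(raw_event))
```

A change to a bucket ACL, traced to a specific IAM user, timestamp, and source IP — that is exactly the trail a security investigation follows.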
How Services Work Together — A Real Example
When a user visits Equiton's investor portal: Route 53 resolves the domain name → CloudFront serves static assets from the edge → the request hits a VPC where the application runs on EC2 → the app queries RDS for investor data → files are stored in S3 → IAM roles control what each service can access → CloudTrail logs every API call → GuardDuty watches for anomalies → CloudWatch shows operational metrics.
This is a typical production architecture. Every one of those arrows between services is a potential attack surface — and your future job is to secure each one.
- 1Log into your AWS Console in
ca-central-1. You'll be navigating to each service category — just looking at the dashboards, reading descriptions, not creating anything yet. - 2Open EC2. Look at the dashboard. Note the options for instance types. Don't launch anything — just observe. Where would you configure which OS to install?
- 3Open S3. This is your object storage service. Your existing bucket from Day 4 should be gone (you deleted it). Look at the Create bucket flow — notice the region selector and the Block Public Access checkboxes.
- 4Open IAM. Look at the left navigation: Users, Groups, Roles, Policies. You'll be working with all of these in Week 4. For now, just read the descriptions on the main dashboard. Notice the Security Recommendations — these are the controls AWS considers most critical.
- 5Open CloudTrail. This is your audit log. Look at the Event History. Every action you've taken in the console this week — creating your account, navigating services, creating and deleting that S3 bucket — is recorded here. Click on a few events. Notice what information is captured: time, service, action, source IP, user agent.
- 6Open GuardDuty. Notice it says "Enable GuardDuty." Don't enable it during free tier — it costs money. But read the description: it uses machine learning to detect threats in your account. This is one of the first things you'd enable on a real account.
- 7Write a one-line description of each of the 15 services from today's lesson in your own words. No looking. If you can't write it — that's what you need to review before the quiz.
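The CloudTrail events you browse in step 5 are JSON records. Here is a minimal sketch of pulling out the forensic fields named above; the sample event is invented, but the field names (eventTime, eventSource, eventName, sourceIPAddress, userAgent) are CloudTrail's actual record keys:

```python
import json

# Hypothetical CloudTrail event, trimmed to the fields called out in step 5.
# Field names match CloudTrail's real record schema; the values are made up.
sample_event = json.loads("""
{
  "eventTime": "2024-05-01T14:03:22Z",
  "eventSource": "s3.amazonaws.com",
  "eventName": "CreateBucket",
  "sourceIPAddress": "203.0.113.10",
  "userAgent": "aws-cli/2.15.0"
}
""")

# Extract exactly the forensic details the lesson lists:
# time, service, action, source IP, user agent.
forensics = {k: sample_event[k] for k in
             ("eventTime", "eventSource", "eventName",
              "sourceIPAddress", "userAgent")}

for field, value in forensics.items():
    print(f"{field}: {value}")
```

This is the same information you see when you click an event in Event History; parsing it programmatically is how real incident investigations scale beyond the console.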
- ACloudWatch — operational monitoring and metrics
- BGuardDuty — threat detection and anomaly detection
- CCloudTrail — records every API call including who made it, when, and from where
- DConfig — tracks resource configuration changes
- AGuardDuty — start detecting threats immediately
- BCloudWatch — configure monitoring before anything else
- CIAM — establish proper identity and access controls before any resource is created
- DKMS — encrypt everything from day one
- AEBS (Elastic Block Store) — high-performance block storage
- BS3 with Glacier lifecycle policy — object storage for files, with archival for infrequently accessed data
- CRDS — relational database for structured data
- DEFS — shared file system for multiple EC2 instances
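The "lifecycle policy" mentioned in option B is a concrete, configurable object. As a sketch, here is the shape such a rule takes (this matches the structure boto3's `put_bucket_lifecycle_configuration` accepts; the rule ID and prefix are made-up examples):

```python
# Illustrative S3 lifecycle rule: the rule ID and prefix are hypothetical,
# but the structure and the GLACIER storage class are real S3 concepts.
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-old-files",
            "Status": "Enabled",
            "Filter": {"Prefix": "archive/"},
            # After 90 days, transition objects to Glacier for cheap cold storage.
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
        }
    ]
}

rule = lifecycle_config["Rules"][0]
print(f"{rule['ID']}: move '{rule['Filter']['Prefix']}' objects to "
      f"{rule['Transitions'][0]['StorageClass']} after "
      f"{rule['Transitions'][0]['Days']} days")
```

The point of the rule: frequently accessed data stays in standard S3, while old data automatically moves to cheaper archival storage with no manual migration.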
Complete These 6 Review Prompts Before Looking at Your Notes
Research calls this technique active recall — retrieving information from memory is 2–3x more effective for retention than re-reading. Open a blank document and write an answer to each prompt. Aim for 2–3 sentences minimum. Only look at your notes to check your answers, not to write them.
- 1Explain cloud computing to a non-technical Equiton executive in 3 sentences. Include why it matters for the company.
- 2Draw (or describe in words) the IaaS / PaaS / SaaS responsibility stack. For each layer, give one real example of an AWS service or tool you know.
- 3What is the difference between an AWS Region and an Availability Zone? Why does a Canadian financial services firm care which one they use?
- 4Explain the Shared Responsibility Model in two sentences: what AWS is responsible for, and what the customer is responsible for. Give one example of a breach that would be AWS's fault, and one that would be the customer's fault.
- 5Name 3 cloud deployment models and describe in one sentence each. Which would you recommend for Equiton and why?
- 6Name the 5 security-critical AWS services from Day 6. For each, write what it does in one sentence.
After writing, compare to your notes. Any prompt you got less than 80% right marks a topic that needs a second read before moving to Week 2. This is not a test for grades. It's diagnostic: it tells you exactly where to spend your review time.
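For prompt 3, the Region-to-AZ relationship is easy to see in AWS's naming scheme. A small sketch (ca-central-1 is the real Canada Central region; the letter suffixes follow AWS's actual AZ naming pattern, though the exact count of AZs in a region can grow over time):

```python
# Illustrative Region -> Availability Zone mapping for review prompt 3.
# ca-central-1 is AWS's Canada (Central) region; AZ names are the region
# name plus a letter suffix. The set of suffixes shown is an example and
# may not reflect the current AZ count.
region = "ca-central-1"
availability_zones = [f"{region}{suffix}" for suffix in ("a", "b", "d")]

# Multi-AZ deployment survives a data center failure; a single AZ is a
# single point of failure.
print(f"{region} AZs:", availability_zones)
```

A Canadian financial firm cares because data residency rules point at the Region, while fault tolerance comes from spreading workloads across its AZs.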
Week 1 Key Concepts — Final Summary
| Concept | One-Line Definition | Why It Matters |
|---|---|---|
| Cloud Computing | Renting computing resources over the internet on a pay-per-use basis | Foundation of every role you're targeting |
| 5 NIST Characteristics | On-demand, broad access, pooling, elasticity, measured service | Appears on AWS CCP exam directly |
| IaaS / PaaS / SaaS | 3 layers of cloud service with different responsibility splits | Determines who secures what in your architecture |
| AWS Region | Geographic area containing multiple data centers | Drives data residency and compliance decisions |
| Availability Zone | Separate data center within a region for fault tolerance | Multi-AZ = high availability, single AZ = risk |
| Shared Responsibility | AWS owns security OF the cloud; you own security IN the cloud | #1 security model — every interview question |
| Deployment Models | Public, Private, Hybrid, Multi-Cloud — where cloud lives | Architecture decision you'll make for every client |
| IAM | Controls who can do what in AWS | Most critical security service — everything flows through it |
| CloudTrail | Audit log of every API call in your AWS account | Essential for incident investigation |
- 1Log into your AWS Console in ca-central-1. First task: check your Free Tier usage. Click your account name (top right) → Billing Dashboard → Free Tier. Look at how much of your monthly free allocation you've used. The goal: stay at $0 this entire week.
- 2Enable MFA on your root account — this is critical. Click your account name → Security Credentials → Assign MFA device. Use Google Authenticator or Authy on your phone. Scan the QR code. This is the single most important security action you can take. Root account compromise = full AWS account compromise.
- 3Go to IAM. Click "Create User" and create a user called admin-ankit. Give it the AdministratorAccess policy. Enable console access. This is your day-to-day working user — you should never use the root account for regular work. Write down the credentials somewhere safe.
- 4Sign out of the root account. Sign back in using your new admin-ankit IAM user. Verify you can still access the console. From this point on, always work as this user — never as root.
- 5Go to CloudTrail → Event History. Find the event where you created your IAM user. Click on it and read the full event details. You'll see: the user who performed the action (root), the exact API call, the timestamp, and the source IP. This is real forensic data.
- 6Go to IAM → Security Recommendations. What does AWS recommend you do? How many of those recommendations have you now completed? (Root MFA and creating an admin user should clear 2–3 recommendations.)
- 7Final note: Go to CloudWatch. Look at the main dashboard. You'll see some metrics from today's activity. These are your first operational metrics in AWS — server health, API call rates, and resource utilization will all show up here as you build more infrastructure.
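It's worth seeing what the AdministratorAccess policy from step 3 actually grants. Its managed policy document is equivalent to the dictionary below (the Version/Statement/Effect/Action/Resource structure is IAM's real policy grammar), which is exactly why MFA and the never-work-as-root rule matter:

```python
# The AdministratorAccess managed policy attached in step 3 is equivalent
# to this document: allow every action on every resource.
admin_access = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}],
}

stmt = admin_access["Statement"][0]
# A "*" action on a "*" resource means this user can do almost anything the
# root account can (minus a few root-only tasks), so protect it accordingly.
print("allows everything:", stmt["Action"] == "*" and stmt["Resource"] == "*")
```

In Week 4 you'll replace broad grants like this with least-privilege policies that name specific actions and resources.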
- ALaunch an EC2 instance and configure it securely as the first workload
- BEnable MFA on the root account, then create IAM users so no one works as root
- CEnable GuardDuty and CloudWatch to start monitoring immediately
- DConfigure VPC network settings to prevent unauthorized access
- ACloudWatch Logs — operational logs from running services
- BCloudTrail Event History — records every API call with user identity, time, and source IP
- CAWS Config — configuration state snapshots of resources
- DGuardDuty findings — threat detection alerts
- A"AWS handles all security for cloud workloads — that's the point of using a managed provider"
- B"Security is entirely the customer's responsibility in the cloud — AWS just provides raw compute"
- C"AWS and the customer split security responsibilities 50/50 for all services"
- D"AWS secures the physical infrastructure and core services; the customer secures their data, identities, applications, and configurations — and the split varies by service model"