
Introducing SharpSocks v2.0

It has been over a year since we released the first version of SharpSocks, our proxy-aware reverse HTTP tunnelling SOCKS proxy. This post aims to provide a State of the Nation update for users. It details some of our experiences using it, how its usability and performance have been massively improved, and some of our observations along that journey for those interested in the juicy technical details under the hood.

TLDR – SharpSocks performance has been massively improved and the tool is now fully integrated into PoshC2, so you can type sharpsocks to get it going straight away.

What’s the point of a SOCKS proxy?

When performing a simulated attack (red team), or indeed many other types of test, SOCKS proxies become an integral tool during the “Act on Objectives” phase of the kill chain. For example, if the target is behind the network perimeter, whether a system accessible via RDP, an intranet site or a Citrix hosted application, the SOCKS proxy becomes an essential tool. It can allow access to those “internal only” resources through an established implant’s C2 (Command and Control) channel. The diagram below shows a typical scenario in a red team:

Currently, there are plenty of SOCKS proxy implementations that can be used in this scenario (and they are very good), but these are usually built into the various C2 frameworks. The initial goal of SharpSocks was to be “framework agnostic”, such that it could be deployed to a machine regardless of whether an implant was operating on it or not.

When a SOCKS proxy is deployed via other C2 frameworks, the beacon frequency must be increased to provide quick feedback (usually 1-5 seconds), because the volume of data being sent and received increases considerably. In addition, there are connections to potentially sensitive services, which can generate events or Indicators of Compromise (IOCs), thus increasing the risk of detection.

By separating the SOCKS proxy from your main C2 Implant comms:

  • You can deploy the SOCKS Proxy to an unrelated machine and communicate through separate infrastructure.
  • Even if the SOCKS Proxy activity gets detected, it will not result in losing your initial foothold. If the SOCKS and C2 Implant are combined then there is a higher chance of detection.

Downsides to the “Separation”:

  • Additional setup, C2 infrastructure, separate commands and tools.
  • The actual code gets larger.
  • The configuration becomes a lot more complicated.

It also does not help that we like developing in C#, while everyone else wants to pwn stuff with Linux. To get around this, the use of SharpSocks with PoshC2 was supported via Wine. The SharpSocks client would work on a Linux OS this way, but it was definitely less than optimal. Sooooo…

SharpSocks Server is now cross platform .Net Core

The main announcement is that the SharpSocks server has been ported to .Net Core (no more Wine, thankfully; more on this below) and that its traffic is routed through PoshC2. In terms of PoshC2, which is detailed here: Introducing PoshC2 v5.0, major improvements have been made to its usability in order to make SharpSocks a primary component rather than something tacked on to the side.

Considering the risks we mentioned above of deploying the SOCKS proxy and the implant on the same host and going back through the same infrastructure, is it still possible to deploy SharpSocks separately?

The answer is yes, absolutely; the primary design goal of SharpSocks as a standalone SOCKS tool that can be deployed separately still stands.

State of the Nation

Some of the other project goals were to deliver a tool with the following features:

  1. Performance: ensure that the user experience of any protocol being tunnelled, such as RDP, is good.
  2. Proxy awareness when sending traffic out of the network.
  3. Easy customisation of the URLs used, the encryption mechanism and the data transport locations.

Over the last year, SharpSocks has seen regular usage on engagements where we have deployed PoshC2. One theme of threat intelligence-led simulated attacks (CREST STAR, CBEST, TIBER, etc.) is the use of a range of tools (i.e. C2 frameworks) in order to simulate different levels of threat actor sophistication. PoshC2 is used in conjunction with commercial offerings and our own private Remote Access Tool (RAT) to simulate different types of adversaries.

Feedback from our team on SharpSocks has generally been positive; however, the consistent themes of frustration have always been ease of use and performance. By far one of the most embarrassing situations you can get yourself into as a developer is completely forgetting how to set up your own tool. SOCKS proxying is slow by nature, but compared to other similar tools SharpSocks still wasn’t the fastest.

This is where this updated release comes in. It is designed to make SharpSocks easier to use, fix some long-standing bugs and generally make SOCKS proxying better and faster.

.Net Core

As mentioned above, the SharpSocks Server component is now built as a .Net Core project. Why .Net Core? Well, when SharpSocks was originally built, it was a purely “C# on Windows” solution. This was primarily due to my background as a professional developer before my career in InfoSec. Creating a Windows-only tool really does, in InfoSec terms, limit your potential user base. Eventually I came to the conclusion that yes, it had to become cross platform. Some of the options included Go or Python, but porting to these would have required a complete rewrite of the underlying code base, something I was really not sure was worth the effort considering that .Net Core is available.

By porting to .Net Core, it is now possible to run SharpSocks on Windows, Linux or MacOS with limited changes. The process of porting involved creating a new .Net Core project and moving the source into it. The only change that had to be made was to the encryption code, which tried to limit exposure of the key in memory by using DPAPI; this does not appear to be supported in .Net Core, as I suspect there is no direct analogue on Linux or MacOS.
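
For context, the kind of call that had to go is sketched below; this is illustrative rather than the actual SharpSocks code. On Windows under the full .NET Framework, DPAPI is exposed through the ProtectedData class, which lets the key sit encrypted in memory except at the moment of use:

    using System.Security.Cryptography;   // ProtectedData is the managed DPAPI wrapper (Windows only)

    static class KeyProtection
    {
        // Keep the key encrypted at rest in memory...
        public static byte[] Protect(byte[] key) =>
            ProtectedData.Protect(key, null, DataProtectionScope.CurrentUser);

        // ...and unprotect it only for the brief moment it is actually needed.
        public static byte[] Unprotect(byte[] protectedKey) =>
            ProtectedData.Unprotect(protectedKey, null, DataProtectionScope.CurrentUser);
    }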

The main advantage of running on .Net Core rather than under Wine is performance; tunnelling your traffic through SharpSocks feels so much faster. This is entirely logical, as it now runs on a small, optimised platform runtime.

Other Changes

Other fixes and changes include an overhaul of the error handling, particularly when the server has stopped responding. A particularly nasty issue was found that occurred when the redirector returned a 404 and the server had been stopped.

There is also a new option on the Server called socketTimeout (pass either -st or -socketTimeout and an integer value in seconds, e.g. 240).

  • This controls how long a SOCKS connection to the server will be held open without any data having been sent over it. This is not something that you should normally have to tweak; however, if the application being tunnelled is timing out, this is a value that may help.

How to use (SharpSocks in Posh)

  1. Get a shell (PowerShell or C# implant) on the target host using your favourite execution method.
  2. Within the implant handler, select the implant.
  3. Type sharpsocks within the implant handler. A line will immediately be printed out with the details you need to start the server. Copy this and paste it into a terminal (NOT into the PoshC2 implant handler).
  4. Once the server has started, type ‘Y’ and press enter in the implant handler window. The output should now say that the implant has started. You can confirm this by looking at the output from the server you started in the previous step. If it has started, you should see the port that the SOCKS proxy is now listening on (most likely 43334).
  5. If you haven’t already done so, now is the time to configure proxychains, your browser or the SOCKS aware application to point to localhost:43334.
  6. With this you can start SOCKS Proxying away.

SharpSocks Deep Dive

As stated, it had been a while since the initial release of SharpSocks and it was time to do a little work on it. The following is a bit of a deep dive on how it works, some of the changes that have been made recently, and issues that we had to work through. If there is anything you would like to know more about, hit me up on twitter @rbmaslen or on the PoshC2 Slack channel (poshc2.slack.com – email us @ labs at nettitude dot com for an invite).

Long Polling

Having taken Silent Break Security’s excellent Dark Side Ops 2 course (https://silentbreaksecurity.com/training/dark-side-ops-2-adversary-simulation/ ) at DerbyCon last year, before smashing the 2018 CTF with SpicyWeasel, I was struck by the usefulness of HTTP Long Poll. I had come across the concept many years ago in another life as a developer but hadn’t really grasped the relevance to the Red Team toolkit until I took the course.

The premise, essentially, is that you are inverting the beacon. In traditional implants:

  • The main thread is on a timer (with an inbuilt jitter value).
  • When this interval elapses, a callback is made to the C2 Server.

One of the main problems with running on beacon time is that once you have queued your task, there is plenty of time to get distracted before the results arrive. Long Polling means that when a request is made to an HTTP server, the connection will be held open until a result set is ready (or a much longer timeout elapses server side).

How this works in the implant context is that when the request comes into the C2 Server from the implant, the connection (HTTP request) is held open until tasks have been queued; if tasks are already queued and ready to go, it returns immediately. This puts the wait on the Server side and makes implant tasking feel very snappy, much more like a command line.

It also means shorter beacon times when being actively used and longer ones when not in use, so the operator needs to spend less time adjusting the sleep time. It also probably makes it a little harder for tools looking for “regular” HTTP requests in order to identify beacons.
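
As a rough sketch of the idea (illustrative only, not the PoshC2 or SharpSocks code itself), a long-polled tasking endpoint boils down to blocking on a task queue with a timeout:

    using System;
    using System.Collections.Concurrent;

    class TaskingQueue
    {
        private readonly BlockingCollection<string> _tasks = new BlockingCollection<string>();

        // Operator side: queue a task for the implant.
        public void Queue(string task) => _tasks.Add(task);

        // Called when the implant's request arrives: hold the request open until a task
        // is ready or the long-poll window expires, then answer it.
        public string WaitForTask(TimeSpan longPollTimeout)
        {
            return _tasks.TryTake(out string task, longPollTimeout)
                ? task      // a task was queued while the request was held open
                : null;     // nothing to do; the implant re-polls almost immediately
        }
    }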

So how does that relate to SharpSocks? Well, I’m glad you asked; first it’s probably best to understand how SharpSocks works as a concept.

So how does it all work then?

Very good question; if you do know, answers on a postcard please 🙂

Breaking it down, there are at a bare minimum two components: the “Server” and the “Implant” (both separate Visual Studio projects). Within the Visual Studio solution there are also two test applications that you can use if you are debugging the libraries. The way SharpSocks is normally deployed, particularly on the Implant side, is either via a PowerShell script that loads the library or via a C# implant that loads the Assembly (.dll).

The Server at its core is an HttpListener. When the server starts up, it starts listening for connections on the URI that has been passed to it.
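
In sketch form (heavily simplified, with an example prefix rather than the real configuration), that start-up amounts to:

    using System;
    using System.Net;

    class ServerSketch
    {
        static void Main()
        {
            var listener = new HttpListener();
            listener.Prefixes.Add("http://127.0.0.1:8080/");   // example prefix; the real URI is passed in at start-up
            listener.Start();

            while (listener.IsListening)
            {
                // Blocks until a request arrives from the Implant.
                HttpListenerContext ctx = listener.GetContext();

                // ... decrypt the message, look up the Session Id, queue or return data ...
                ctx.Response.StatusCode = 200;
                ctx.Response.Close();
            }
        }
    }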

An important point is that the Implant and the Server need to share a few secrets:

  1. One is an Encryption Key.
  2. The other is a Command Channel Session Id.

The Implant communicates with the Server via XML messages in regular web requests (encrypted, of course). Each request needs to identify the connection, or the Command Channel, that its messages relate to via a randomly generated Session Id.

The Command Channel has a special Session Id that is shared between the “Server” and the “Implant” at start-up. Messages about connections opening and closing are sent over the Command Channel (encrypted, of course, with the key); actual traffic and a status are sent with the Session Ids that identify each connection.

I need to admit that XML is not a great format. It was used for simplicity when SharpSocks was first built, and yes, it will be replaced at some point, but it works for the moment. A point could also be made about the lack of multiplexing; at the moment each request will only carry information about one Session Id (the Command Channel can have multiple open/close messages). This is on the road map, as I would really like to reduce the number of web requests.

The first time a Command Channel request is sent by the Implant, another port is opened on the localhost interface. This is the SOCKS port where you can point your browser, proxychains or any other SOCKS-aware app (the port defaults to 43334, or whatever you pass on the command line). The app that wants to proxy first needs to send a request to this port identifying the host and the port it wants a connection opened to (an example of the structure, for SOCKS4, is sketched below) before it can send any data. The details of the connection to open are then queued as a Command Channel task, which will be picked up by the implant’s Command Channel web request. Once the implant receives this message, the connection to the specified host is attempted and a message based on the status of that connection is returned to the Server.
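
For reference, and purely as an illustration rather than SharpSocks’ actual parser, a SOCKS4 CONNECT request is just a version byte, a command byte, a two-byte port, a four-byte IPv4 address and a null-terminated user ID, and the proxy answers with an eight-byte reply:

    using System;
    using System.Net;

    static class Socks4
    {
        // Request layout: VER(0x04) CMD(0x01=CONNECT) DSTPORT(2 bytes, big-endian) DSTIP(4 bytes) USERID... 0x00
        public static (IPAddress Host, int Port) ParseConnect(byte[] request)
        {
            if (request.Length < 9 || request[0] != 0x04 || request[1] != 0x01)
                throw new ArgumentException("Not a SOCKS4 CONNECT request");

            int port = (request[2] << 8) | request[3];
            var host = new IPAddress(new[] { request[4], request[5], request[6], request[7] });
            return (host, port);
        }

        // Reply granting the request: VN(0x00) CD(0x5A = granted), then two port bytes and four address bytes.
        public static byte[] GrantedReply() =>
            new byte[] { 0x00, 0x5A, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 };
    }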

Back to Long Polling

So, thinking about Long Polling, you can probably immediately see where this might benefit the Command Channel, particularly if you have an application that opens connections infrequently. A traditional beacon means there will be a delay in opening the connection on the implant side if the proxying application didn’t make the open request right before the beacon arrived.

For this reason the implant now uses Long Polling for the Command Channel. If there are no connections to open or close (or none arrive during the wait), the Server will hold the connection open for 40 seconds before returning to the implant and telling it there is no work to do (the Implant will then wait a very small amount of time before initiating the connection again). This Long Poll interval can be tweaked on the Server at runtime by typing:

LPoll=<seconds> e.g. LPoll=5 or LPoll=60

Heading back to the implant side, when it receives the Open message it will then attempt to create a TCP connection to the specified host:port. In the previous version, all of the work to be performed was started asynchronously within a Task (one was created per connection). The code running in this Task would asynchronously poll the connection to the target application and then send/receive data to and from the Server via web requests. The Server itself also had a polling loop checking for any data on the SOCKS connection.

This kind of worked in theory, but using some real-world applications like RDP (and then trying to debug why connections were really slow) showed that this was a less than optimal design. There were too many Tasks being created (far too many resources being used), it was next to impossible to get an idea of any connection’s state when debugging, and it had been coded around the overly simplistic notion that traffic would follow a pattern of:

client->server
server->client
client->server
etc.

The change in this release is that, instead of both the Server and the Implant having a Task per open connection, there is now only a single loop that polls each of the connections regularly (very regularly) to check whether the connection is still open and to read any data that is available. Moving to a single loop has shown performance gains rather than losses (so the lesson here is to beware of unnecessary parallelism, particularly in terms of complexity).
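
Conceptually the loop looks something like the sketch below (simplified, with assumed names; the real implementation carries per-connection state and error handling):

    using System;
    using System.Collections.Generic;
    using System.Net.Sockets;
    using System.Threading;

    static class PollLoop
    {
        // One loop services every open SOCKS connection instead of a Task per connection.
        public static void Run(List<Socket> connections, Action<Socket, byte[]> onData, Action<Socket> onClosed)
        {
            var buffer = new byte[65536];
            while (true)
            {
                foreach (var sock in connections.ToArray())
                {
                    // Poll for readability; readable with zero bytes available means the peer has closed.
                    if (!sock.Poll(1000 /* microseconds */, SelectMode.SelectRead))
                        continue;

                    if (sock.Available == 0)
                    {
                        onClosed(sock);
                        connections.Remove(sock);
                        continue;
                    }

                    int read = sock.Receive(buffer);
                    var data = new byte[read];
                    Array.Copy(buffer, data, read);
                    onData(sock, data);   // hand the data to the send queue described below
                }
                Thread.Sleep(10);   // poll "very regularly"
            }
        }
    }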

Any data that has been read from a polled connection is written into a queue and then an event is triggered (specifically a ManualResetEvent), notifying another thread that there is data to send back to the Server. This thread’s sole job is to wait until there is data to send via a web request back to the Server (it is important to note this is fire and forget, NOT long polled). It’s best to think of this as a standard producer/consumer loop. To implement it here I am using ConcurrentQueues; these are a really cool class that helps avoid locks in concurrent code.
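
In sketch form (assumed names, not the exact SharpSocks classes), that hand-off between the polling loop and the sender thread looks like this:

    using System;
    using System.Collections.Concurrent;
    using System.Threading;

    class SendQueue
    {
        private readonly ConcurrentQueue<byte[]> _outbound = new ConcurrentQueue<byte[]>();
        private readonly ManualResetEvent _dataReady = new ManualResetEvent(false);

        // Producer: the polling loop drops data here and signals the event.
        public void Enqueue(byte[] data)
        {
            _outbound.Enqueue(data);
            _dataReady.Set();
        }

        // Consumer: a dedicated thread waits for the signal, then ships everything queued
        // back to the Server in a fire-and-forget web request.
        public void SendLoop(Action<byte[]> sendToServer)
        {
            while (true)
            {
                _dataReady.WaitOne();
                _dataReady.Reset();
                while (_outbound.TryDequeue(out byte[] chunk))
                    sendToServer(chunk);
            }
        }
    }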

So how does the data come back from the Server? The Implant also makes a long polling request per open connection to poll the Server’s queue for any data that needs to be sent down the SOCKS connection.

It’s really important to understand how this works as it leads to one of the more frustrating limits I’ve found in .Net (I’ve been coding C# since 2007 and this is the first time I have seen it).

Summing up, in terms of web requests back to the Server we have at least:

  • 1 Long Polled Command Channel back to the Server.
  • 1 Long Polled Data request per open connection.
  • 1 Normal request when the Target application has sent data to the implant.

I’ve got all the code ready, it should work really nicely, but NOPE, it’s gone really slow again. What?!

Limiting limits

If you have done any coding using WebClient, C#’s wrapper for making Web Requests, then you might have also come across ServicePoint & ServicePointManager. These classes control the underlying connection, the connection pool and the TLS connection to the target site.

In order to do a system test of everything, I configured the test apps to point to each other (on Windows 10, using Firefox with its proxy configured to use the SOCKS proxy port) and tried to load the BBC website; the results were not great. First of all, it’s insane how much gets loaded by most modern websites and how many connections to different sites there are.

I’ve probably ignored this in Burp by hiding what is not in scope… Both test applications have the option of passing verbose for logging, yielding a load of information about the connections being opened, the data being sent back and forth, and timings. I could see a delay in various places, but having so many connections made it really difficult to pin down, so, thinking that having only one connection might make life easier, I resorted to curl.

Sure enough, with curl I still kept getting an artificial delay, so there was no other option but to go back into Visual Studio and debug it. Did this yield any info? Not really. I could see the connection being made and everything was fine until data needed to be sent back, at which point there was a delay that appeared consistent with the Long Poll, but not always.

Is there a happy ending?

Well, yes. After an inordinate amount of testing, thinking about trying to use Kestrel instead of HttpListener, and much reading of documentation, I came across an utterly insane limit within the ServicePointManager. Each domain that you make a connection to gets a ServicePoint instance, and by default each ServicePoint allows a maximum of two (yes, 2!) concurrent connections to that site. This value can be modified, but it must be done before any connection is made to the host, otherwise the ServicePoint object will already have been created and the limit will stay at two.

This behaviour is documented at:

https://docs.microsoft.com/en-us/dotnet/api/system.net.servicepointmanager.defaultconnectionlimit?view=netframework-4.8
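
The work-around is a one-liner, but it has to run before the first request is ever made to a host, otherwise that host’s ServicePoint will already exist with the old limit (the value of 50 below is just an example):

    using System.Net;

    static class Bootstrap
    {
        public static void RaiseConnectionLimit()
        {
            // The default is 2 concurrent connections per ServicePoint (i.e. per host);
            // raise it before any WebClient/HttpWebRequest call is made.
            ServicePointManager.DefaultConnectionLimit = 50;
        }
    }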

So, we know that at least two connections are being long polled, and any time data is read from the connection to the target application a third web request will be made. Either the Command Channel or the Implant’s other connection has to return before this can be sent. What can I say? Massively frustrating, and I had never seen this before (or maybe I had in the past, forgotten it and am just getting old, who knows).

Conclusion

So, to wrap up, what have we learnt here? To be honest, it just reinforces lessons that I have learnt continually throughout my career:

  • RTFM – there is incredible value in the MSDN.
  • Avoid unnecessary complexity and too much parallelism.

With this release I really feel that SharpSocks is getting a lot closer to the initial goals of being a very easy to use SOCKS proxy with solid performance. The upcoming features that are still in development are:

  • Reduce some of the web traffic mainly by multiplexing some messages.
  • Stop using XML and use a better message format.
  • Tighter integration with PoshC2.

Oh – I almost forgot: SOCKS5 (TCP only for the moment) and thus Burp are now supported. SOCKS away my friends!

Introducing PoshC2 v5.0

PoshC2 v5.0 is here and there are significant changes and improvements that we’re very excited to reveal!  There’s been a move to Python3, much improved documentation, significant functionality and quality of life improvements, and more.  Read on for a detailed description of it all!

Repositories

We have had a bit of a change around with repository names, as described here.

This brings us back to PoshC2 being the latest and greatest repository, and PoshC2_old being the obsolete repository.

Documentation

The documentation for PoshC2 has been completely re-written and updated, with emphasis on red team trade-craft when using PoshC2 and on extending and customising it to suit any situation or environment.

It will still be updated regularly, but there will be less focus on what all the different techniques are, and more on how to use PoshC2 itself. Check it out at https://poshc2.readthedocs.io.

Python3

PoshC2 has been completely updated to use Python3. With Python2 becoming End-of-Life in January 2020, it was quite an important change to make.

Posh Commands

Efforts have been made to abstract the potentially difficult nuances of setting up and using PoshC2 away from the user.  Remember having to change into the PoshC2 directory, then set up and use a Python virtual environment to avoid dependency version clashes with the rest of your system?  That’s a thing of the past.

The install script now installs a number of posh-* commands to your /usr/bin directory and sets up the virtual environment ahead of time. These scripts encapsulate the use of that environment for the user, allowing PoshC2 to be run and configured from any directory while seamlessly using the virtual environment and other configuration behind the scenes.

For example, PoshC2 can be configured from any directory using your editor of choice by issuing:

posh-config

It can then be installed and run as a service simply by executing:

posh-service

Conversely, it can be stopped using:

posh-stop-service

The Implant-Handler can now be started using posh with a username, which is used for logging purposes:

posh -u crashoverride

…and you’re good to go.

Use of these scripts is recommended, as any further logic will also go into them and we will work to ensure that PoshC2 works seamlessly when they are used, while other techniques may not be maintained.

Prompt

PoshC2 now has an intelligent new prompt that can aid users when performing red teaming!

https://poshc2.readthedocs.io/en/latest/_images/autocompletions.png

This prompt has context-sensitive auto-completions, intelligently suggesting commands based on the current prompt, such as PowerShell commands in a PowerShell prompt and so on. A unique command history is stored per context, so you won’t have to scroll back through PowerShell commands on a C# implant.

https://poshc2.readthedocs.io/en/latest/_images/autosuggestions.png

In addition to this, the prompt features fish-shell-like auto-suggestions in dark grey text, based on commands from that prompt’s history, so if you want to repeat or edit logged commands you can complete the suggestion by pressing the right-arrow key or ctrl+e.

Combined with the help and searchhelp commands, we hope that this will allow users of PoshC2 to quickly explore its functionality, and give veteran users efficiency and speed of operation for those crucial engagements.

SharpSocks

SharpSocks has undergone significant improvements and tighter integration into PoshC2. It started out as a standalone cmdlet that could be initiated from any PowerShell session, but it is now fully integrated so that users can be in any implant and simply type sharpsocks to get it going. A separate blog post has been written covering all of the SharpSocks changes and can be found here:

Introducing SharpSocks v2.0

New Payloads

Several new payloads have been added out-of-the-box, including a DotNet2JS payload using James Forshaw’s DotNet2JScript technique, and a new C# payload for PowerShell Implants.

Sharp Implant Improvements

AMSI Bypass

.NET 4.8 introduced Antimalware scanning for all assemblies, including those loaded using Assembly.Load(byte[] bytes), which previously went unchecked and is used by our PowerShell-less C# implant to load C# executables and run them in memory.

This release features a new bypass-amsi command for the C# implant which will patch AMSI in memory and allow flagged modules to be loaded and executed.

Aliases

We fully appreciate that nobody has time to type out long commands over and over. To that end, we’ve streamlined the run-exe commands for the C# Implant. Now, instead of run-exe Seatbelt.Program Seatbelt all, it’s just seatbelt all.

We’ve added aliases for all our favourite commands, but you can add your own in Alias.py. See the documentation for more details.

Lateral Movement Methods

In addition to integrating SharpSocks, we’ve integrated PBind, another great tool written by Doug McLeod. This is now fully integrated into PoshC2, and users can type invoke-wmijspbindpayload with the relevant arguments to attempt execution on another endpoint using SMB named pipes. The default SMB named pipe for PoshC2 is jaccdpqnvbrrxlaf.

invoke-wmijspbindpayload -target <target> -domain <domain> -user <user> -pass '<password>'

In addition to the lateral movement command, PoshC2 will automatically create several payloads that are named PBind payloads. These, like the normal payloads, can be executed against a remote host using whichever technique you prefer: dcom, wmi, psexec, etc.

Once you have initiated the payload on a remote host, it will automatically open an SMB named pipe ready for connection. You must know the secret key, the encryption key and the pipe name to communicate, but we already have you covered here, as PoshC2 tells you what commands to run to interact with the PBind implant.

If you want any other information on PBind, a previous blog post covers the techniques in more detail, including the comms methods and a diagram:

https://labs.nettitude.com/blog/extending-c2-lateral-movement-invoke-pbind/

FPC script

Amongst the various scripts that are added to /usr/bin is the fpc (Find PoshC2 Command) script, added to aid in reporting. This script allows you to search for PoshC2 commands, filtering on keywords in the command, the command output and by user.

Credential management

We’ve improved the credential management in PoshC2, and now when you run Mimikatz’s logonpasswords, the credentials will automatically be parsed and stored in the database, and can be displayed or manipulated using the creds command.

Remainder

There are also a myriad of other improvements, such as to logging, tracking of file hashes on upload, UX improvements, internal modularisation and refactoring, and module updates, with a lot more in the pipeline. Stay tuned on Twitter (@Nettitude_Labs, @benpturner, @m0rv4i) for the latest updates or join the PoshC2 Slack channel by emailing labs at nettitude dot com.

Roadmap

There’s a lot planned for PoshC2 but the main upcoming features are detailed below.

Docker

The bad news is that support for any platform other than Debian flavours of Linux has been retired. While PoshC2 is written in Python3 and could be maintained to work across multiple platforms, we have instead elected to only support Debian-based distributions.  However, we are working to bring Docker support to PoshC2, adding the ability to run PoshC2 from a Docker container and allowing full and stable execution on any platform that supports Docker, including Windows and Mac OSX.

A docker branch has been created with a Dockerfile, in addition to a number of Docker commands that can be used to manage the containers, but this is still very much unstable.

OpSec

One of the biggest concerns when running a red team engagement is operational security, and to that end we’re working on improving both the offensive side, by making more of the inner workings of PoshC2 configurable, and the reporting side, by improving the tracking of file uploads, hosts and users compromised, and so on.

This will complement improvements to the HTML report and improve the OpSec and reporting side of things considerably.

Full changelist

The full changelist is below:

  • Added Harmj0y’s KeeThief to modules
  • Added RastaMouse’s Watson to modules
  • Added SafetyDump for minidumping in memory
  • Added file hashing when a file is uploaded
  • Rework imports to improve dependency management
  • Break up ImplantHandler into PSHandler.py, PyHandler.py and SharpHandler.py
  • Add ability to upload a file to an ADS
  • Update BloodHound
  • Pull out unpatched payloads into file for easy management
  • Add base64 encoded versions of the shellcode to the payloads directory
  • Add a configurable jitter to all implants
  • Update the notifications config if it is changed in the Config.py
  • Add NotificationsProjectName in Config.py which is displayed in notifications message
  • Add fpc script which searches the Posh DB for a particular command
  • Modify InjectShellcode logged command to remove base64 encoded shellcode and instead just log loaded filename
  • Add aliases for common sharp modules
  • Fix Shellcode_migrate payload creation
  • Start randomuris with a letter then proceed with random letters and numbers, as e.g. msbuild errors if the task name starts with a number
  • Fix issue with cred-popper and keylogger using same variable and conflicting
  • Add SCF files to the opsec command
  • Add posh commands on *nix for abstracting the use of pipenv and set up the env in the install script.
  • Move to python3 as python2 is EoL in 2020
  • Misc performance improvements, such as reduced cyclomatic complexity and only processing the command once per input
  • Store creds in the DB, add ability to add creds/have them automatically parsed from mimikatz and add ability to specify use of a credId for some commands
  • Add create-shortcut command
  • Don’t show powershell one liner when domain fronting as it tries to connect directly
  • Added set-killdate command
  • Added posh-stop-service linux command
  • Updated Mimikatz
  • Added SafetyKatz
  • posh-config uses $EDITOR variable
  • New Sharp_Powershell_Runner payload compilation with mono
  • Added DotNet2JS Payload generation
  • Added get-computerinfo and get-dodgyprocesses to c# core
  • Misc small fixes
  • Updated help and readme
  • Add new prompt with intelligent autocompletions, autosuggestions and smart history
  • Log when a user logs on or logs off in the C2 server output
  • Add ability to broadcast messages in the C2 server output using the ‘message’ command
  • Add quit commands to Sharp and Python handlers
  • output-to-html renamed to generate-reports
  • Force setting of username for logging
  • Show Implant type at Implant prompt
  • Update User Agent string to match current Chrome
  • Added New SharpSocks
  • Added * suffix for usernames for High Integrity processes
  • Added lateral movements to Sharp Implant (SMB/DCOM/WMI)
  • Add AMSI bypass for C# Implant
  • Updated PBIND hosts in opsec
  • Added invoke-mimikatz output from PBIND to be parsed
  • Updated PBIND to send stderror if the command does not work
  • Added remove-label for implants
  • Updated PSHandler to add DomainFrontURL to sharpsocks if in use
  • Updated RunAs-NetOnly
  • Added RunAs-NetOnly which uses SimpleImpersonation to create a token
  • Removed print statement on pbind-loadmodule
  • Updated pbind-command and pbind-module
  • Updated the pbind commands
  • Updated invoke-wmijspbindpayload
  • Added PBind payloads to PoshC2
  • Updated opsec error when username is None
  • Add output-to-html retired message
  • Add Matterpreter’s Shhmon
  • Updated ‘tasks’ command to add ImplantID
  • List PowerShell modules pretty like C# modules
  • Add new Sharp modules
  • Port the .NET AMSI Bypass to PowerShell
  • Updated Get-ScreenshotMulti to Continue if Screen is Locked
  • Correct Jitter time equation in PS implant
  • Print last commit and timestamp in header
  • Prompt improvements
  • Updated C2Server / SharpSocks Error responses
  • Updated HTML output
  • Update fpc.py to use virtualenv and only copy fpc to /usr/bin
  • Updated help
  • Expand on opsec no-nos
  • Update inject-shellcode help

Operational Security with PoshC2 Framework

This post describes a new capability that has been deployed within PoshC2, which is designed to assist with revealing a wider set of target environment variables at the dropper stage, as part of operational security controls.

Imagine the following scenario.  You’ve deployed IP address white-listing on a proxy in order to limit implant installation to a specific environment.  A connection arrives back from a host that’s not associated with your target environment, so the connection is dropped and PoshC2 never sees the incoming connection attempt.  By adding some additional logging on the proxy, and utilising the new CookieDecrypter.py script, you can now reveal specific environment variables about where the dropper was executed.  This information allows for better decision making on what to do next in this scenario.  It yields greater situational awareness.

The background

When carrying out red team testing, a number of safety checks are required to:

  1. Environmentally key the implant to the target environment variables.
  2. Make sure only those targeted assets are compromised.

The purpose of this post is not to discuss these safety measures in depth.  Rather, it focuses on the problem wherein a target environment has been IP white-listed on the attacking infrastructure, but a connection has come back from an IP address that is not on that approved list.  Large organisations do not always readily or thoroughly understand their full external IP address range, or do not force tunnelling of all traffic across a VPN.  It is therefore not uncommon to receive legitimate connect-backs from non-whitelisted sources.

As a simulated attacker, you now have a decision to make; you have sent unique malicious links to a target employee, those links have been verified as followed and certain local checks have been satisfied, but the connection is coming from an IP address that’s not on your white-list. The dropper must have been executed and has consequently called back to your attacking infrastructure to download the core implant from the PoshC2 server.  Now what?

Further detail

PoshC2’s implant is a staged implant and utilises a dropper to satisfy certain criteria before calling back to the C2 server to pick up a core implant. The dropper uses a unique encryption key specific to the PoshC2 instance, but not unique to each dropper. Each dropper shares the same encryption key; it is only when the core implant is sent down that each implant is uniquely keyed for all further communication.

Generally, the infrastructure set up to support a PoshC2 instance includes a number of redirectors or Internet proxies that forward C2 communications back to the PoshC2 instance, limiting exposure of the C2 servers and allowing a flexible approach of building many redirectors to combat take-downs or blocked URLs.

The redirectors can be made up of any infrastructure or technology, whether it be cloud based redirectors such as CloudFront, or applications running on a host such as Socat, python or Apache, to name a few.

Apache is a preference for its rewrite rules, which can be used to make smart decisions on traffic traversing the proxy. White-listing the X-Forwarded-For address is one common method and can be written into rewrite rules using a RewriteMap.

Something you might not know

The configuration of the dropper has now been modified to retry multiple times, after a timeout, if the connection returns an HTTP 404 response. This provides extra time to make a decision on whether to add the IP address to the white-list, without losing the implant. If the IP address is added to the white-list, then when the dropper connects back it will be allowed through to the PoshC2 server instance and will download the core implant.

Extra help

The CookieDecrypter.py script was created to assist in decision making by providing more information. The connection from the dropper includes a cookie value that details useful environment variables, such as those listed below; however, it is encrypted with the key created on initialisation of the PoshC2 instance.

  • Domain
  • User Account
  • Hostname
  • Implant Architecture
  • Process ID
  • URL

Capturing the cookie

By default the cookie value will not be stored anywhere on the proxy server. As previously mentioned, Apache is powerful and includes a number of options to assist, one of which is ModSecurity.

Installing ModSecurity:

apt install libapache2-modsecurity

ModSecurity can then be configured to capture cookies for requests that are returned an HTTP 404 response.

Now, if a dropper connects and gets a 404 response, a log entry containing the cookie will be created in “/var/log/apache2/sec.log”.

The script

The CookieDecrypter.py script ingests the log file, parses the ‘SessionID’ value from it, then connects to the PoshC2 database to retrieve the encryption keys and uses the encryption functions to reverse the cookie value. It has to be executed on the PoshC2 server.

python CookieDecrypter.py log.txt

Conclusion

With increased visibility comes better decision making.  Knowledge of the workstation’s domain and the user generally provides most, if not all, of the information you need to assess whether an asset is in scope.  We have multiple layers of protection in our approach to risk management to ensure that we stay on the right side of the rules of engagement, and this is just one of them.  We recommend that you develop your own set of risk management procedures too.


Introducing PoshC2 v4.8 – includes C# dropper, task management and more! – Part One

We recently released version 4.8 of PoshC2, which includes a number of fixes and improvements that help facilitate simulated attacks. This is the first post in a series that will cover some of the details around the fixes and updates, as well as some of the other cool features we have been working on in the background.

C Sharp (#)

As of PoshC2 version 4.6, a C# implant has been available. The main driver behind this implementation was to stay clear of System.Management.Automation.dll in environments that are heavily monitored and where an EDR product can detect modules loaded inside a running process. Granted, not all EDR products currently do this, as it can create a performance hit at the endpoint, but it’s important to understand the OPSEC implications of running different C2 droppers.

This has been a work in progress since the release and is continually improving, and we believe this will be the way forward in the months to come against advanced blue teams with a good detection and response capability across the organisation. Currently the implant is fully functional and allows an operator to load any C# assembly and execute it in the running process. This allows the user to extend the functionality massively, because they’re able to load all the great modules out in the wild created by other infosec authors. The loading uses the System.Reflection namespace. The code can then be called using .NET reflection, which searches inside the current AppDomain for the assembly name and attempts to either run the entry point given or the main method of the executable. Example usage is as follows, for both run-exe and run-dll:

run-exe:

run-dll:
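
Under the hood, the reflection-based loading described above boils down to something like the following sketch (illustrative only, not the exact PoshC2 implementation):

    using System;
    using System.Linq;
    using System.Reflection;

    static class AssemblyRunner
    {
        // Load an assembly (if it is not already in the current AppDomain) and run its entry point.
        public static void RunExe(string assemblyName, byte[] assemblyBytes, string[] args)
        {
            Assembly asm = AppDomain.CurrentDomain.GetAssemblies()
                               .FirstOrDefault(a => a.GetName().Name == assemblyName)
                           ?? Assembly.Load(assemblyBytes);

            MethodInfo entry = asm.EntryPoint;

            // Console apps take string[] args; some entry points take no parameters at all.
            object[] parameters = entry.GetParameters().Length == 0
                ? null
                : new object[] { args };

            entry.Invoke(null, parameters);
        }
    }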

Task Management

One of the issues we’ve addressed in this release was around tracking tasks; there was no way to determine which output related to which issued command. This was largely because the implant did not use task IDs that were tracked throughout the entire command process flow.

Typically this was fine, because you know what command you’re running, but when multiple people are working on the same instance, or if multiple similar commands are run, it could be difficult to figure out which output came from which command. This also made failed commands fairly difficult, if not impossible, to track down. The following screenshots show the output inside the C2Server and the CompletedTasks HTML file:

Figure 1: How commands were issued and returned against an implant

Figure 2: The old format of the tasks report

Furthermore, tasks were only logged in the database when the implant responded with some output. Now, tasks are inserted as soon as they are picked up by the implant with a start time, and updated with a completed time and the desired output when they return. This allows us to track tasks even if they kill the implant or error and never return, and to see how long they took. It also allows us to reference tasks by ID, allowing us to match them in the C2Server log and to only refer to the task by its ID in the response, decreasing message length and improving operational security. An example of the output is shown below:

Figure 3: The new task logging

The generated report then looks like this:

Figure 4: The new report format

User Logging

The astute amongst you will have noticed the new User column in the report above. Another improvement that has been made in relation to tracking tasks is user logging. Now, when you start the ImplantHandler, you are prompted for a username; it is possible to leave this blank if required, but when PoshC2 is being used as a centralised C2Server with multiple users it’s important to track which user ran which task, as shown in the examples below:

Figure 5: You are now prompted for a username when you start the ImplantHandler

All tasks issued from that ImplantHandler instance will be logged as that user, both in the C2Server log and in the report.

Figure 6: If a username is set it is logged in the task output

Figure 7: The username is also logged for the task in the report

For scripting and/or ease of use, the ImplantHandler can also be started with the -u or --user option, which sets the username, avoiding the prompt:

python ImplantHandler.py --user "bobby b"

Beacon Timing

The way beacon sleep times were handled was inconsistent between implant types, so we’ve now standardised it. All beacon times must be in the format of value and unit, such as 5m, 10s or 2h, and are displayed as such for all implant types in the ImplantHandler. Previously, as seen below, the fourth column stated the current beacon time in seconds, whereas now it is shown only in the newer format.

Figure 8: The old beacon time format

Figure 9: The new beacon time format

Validation has also been added for these, so attempting to set an invalid beacon time will print a suitable error message and do nothing.

Figure 10: The validation message if an invalid format is set

We’ve also changed the implant colour coding so that they are only flagged as timing out if they haven’t checked in for a multiple of their beacon time, as opposed to a hard coded value.

Previously the implants would be coloured as yellow if they hadn’t checked in for 10 minutes or more, and red for 60 minutes or more. Now they are coloured yellow if they have not checked in for 3x beacon time, and red for 10x beacon time, granting far more accurate and timely feedback to the operator.

Figure 11: Implant colour coding has been improved so that the colour is dependent on the beacon time

C2Viewer

The C2Viewer was a legacy script used to just print the C2Server log, useful when multiple people want to be able to view and manipulate the output independently.

There were a few issues with the implementation, however, and there was a possibility that it would miss output as it polled the database. Additionally, as it was an extra script, it added maintenance headaches for updates to task output.

This file has now been removed; instead, if you want to view the output in the same way, we recommend that you run the C2Server and pipe its output to a log file. You can print the log to stdout and to a log file using tee:

python -u C2Server.py | tee -a /var/log/poshc2_server.log

This output can then be viewed and manipulated by anyone, such as by using tail:

tail -f -n 50 /var/log/poshc2_server.log

This method has the added benefit of storing all server output. While all relevant data is stored in the database, having a backup of the output actually seen in the log during usage can be extremely useful.

Further details can be found in the README.md.

Internal Refactoring

We’re also making strides to improve the internals of PoshC2, refactoring files for clarity and cutting cyclic dependencies. We aim to modularise the entire code base in order to make it more accessible and easier to maintain and change, but as this is a sizeable piece of work we’ll be doing it incrementally to limit the impact.

Conclusion

There have been quite a few changes made, and we’re aiming to not only improve the technical capabilities of PoshC2, but also the usability and maintainability.

Naturally, any changes come with a risk of breaking things no matter how thorough the testing, so please report any issues found on the GitHub page at: https://github.com/nettitude/PoshC2.

The full list of changes is below, but as always keep an eye out on the changelog as we update this with any changes for each version to make tracking easier. This is the first blog of a series of blogs on some additional features and capability within PoshC2. Stay tuned for more information.

  • Insert tasks when first picked up by the implant with start time
  • Update task when response returned with output and completed time
  • Log task ID in task sent/received
  • Add ability to set username and associate username to tasks issued
  • Print user in task information when the username is not empty
  • Improved error handling and logging
  • Rename CompletedTasks table to Tasks table
  • Method name refactoring around above changes
  • Pull out implant cores into Implant-Core.py/.cs/.ps1
  • Rename 2nd stage cores into Stage2-Core.py/.ps1
  • Stage2-Core.ps1 (previously Implant-Core.ps1) is no longer flagged by AMSI
  • Use prepared statements in the DB
  • Refactoring work to start to break up dependency cycle
  • Rename DB to Database in Config.py to avoid name clashes
  • Pull some dependency-less functions into Utils.py to aid dependency management
  • Fix download-file so that if the same file is downloaded multiple times it gets downloaded to name-1.ext name-2.ext etc
  • Adjust user/host printing to always be domain\username @ hostname in implants & logs
  • Fix CreateRawBase payload creation, used in gzip powershell stager and commands like get-system
  • Added ImplantID to Tasks table as a foreign key, so it’s logged in the Tasks report
  • Added Testing.md for testing checklist/methodology
  • Fix Get-ScreenshotAllWindows to return correct file extension
  • Fix searchhelp for commands with caps
  • Implant timeout highlighting is now based on beacon time – yellow if it’s not checked in for 3x beacon time and red if not checked in for 10x beacon time
  • Setting and viewing beacon time is now consistent across config and implant types – always 50s/10m/1h format
  • Added validation for beacon time that it matches the correct format
  • Fix StartAnotherImplant command for python implant
  • Rename RandomURI column in html output to Context, and print it as domain\username @ hostname
  • Move service instructions to readme so that poshc2.service can just be copied to /lib/systemd/system
  • Removed C2Viewer.py and added instructions for same functionality to readme just using system commands