DLL Injection: Part Two

In a previous blog post I gave a high level overview of DLL injection, what it is used for and how it might be achieved.

More than one method exists to get our code into a process and have it execute.  A quick scan around the web gives us quite a few ideas.  It boils down to two steps:

  • The first step is to get our code into the memory of the target process.
  • The second step is to run that code.

I’ve written this post assuming that the reader has some C or C++ Windows API programming expertise.

Opening a process
All of the methods presented in this blog will require a handle to another process in order to perform the injection.

The executable that performs the DLL injection, the “injector”, usually requires debug privileges in order to successfully open a handle to another process.  This can be achieved by enabling the debug privilege (SeDebugPrivilege) on the injector’s own token:

[cpp]
BOOL Inject_SetDebugPrivilege()
{
    BOOL bRet = FALSE;
    HANDLE hToken = NULL;
    LUID luid = { 0 };

    if (OpenProcessToken(GetCurrentProcess(), TOKEN_ADJUST_PRIVILEGES, &hToken))
    {
        if (LookupPrivilegeValue(NULL, SE_DEBUG_NAME, &luid))
        {
            TOKEN_PRIVILEGES tokenPriv = { 0 };
            tokenPriv.PrivilegeCount = 1;
            tokenPriv.Privileges[0].Luid = luid;
            tokenPriv.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;

            bRet = AdjustTokenPrivileges(hToken, FALSE, &tokenPriv, sizeof(TOKEN_PRIVILEGES), NULL, NULL);
        }
    }

    return bRet;
}
[/cpp]

To obtain a handle to a process we need to call the OpenProcess function with the process id of the target.  The process id can be obtained from Windows Task Manager and passed to the injector application on the command line, or the Microsoft Tool Help library can be used to enumerate all processes and locate a process by name.  There are most likely other applicable methods too.

[cpp]
HANDLE hProcess = OpenProcess( PROCESS_CREATE_THREAD |
                               PROCESS_QUERY_INFORMATION |
                               PROCESS_VM_OPERATION |
                               PROCESS_VM_WRITE |
                               PROCESS_VM_READ,
                               FALSE,
                               ProcessId );
[/cpp]
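If you would rather locate the target by executable name, a minimal sketch using the Tool Help library might look like the following.  FindProcessIdByName is an illustrative helper (not a Windows API), and an ANSI build is assumed to match the other snippets in this post:

[cpp]
#include <windows.h>
#include <tlhelp32.h>
#include <string.h>

// Sketch: walk the process list and return the id of the first process
// whose executable name matches (0 if not found).
DWORD FindProcessIdByName(const char* pszExeName)
{
    DWORD dwPid = 0;
    HANDLE hSnapshot = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);

    if (hSnapshot != INVALID_HANDLE_VALUE)
    {
        PROCESSENTRY32 pe = { 0 };
        pe.dwSize = sizeof(PROCESSENTRY32);

        if (Process32First(hSnapshot, &pe))
        {
            do
            {
                if (_stricmp(pe.szExeFile, pszExeName) == 0)
                {
                    dwPid = pe.th32ProcessID;
                    break;
                }
            } while (Process32Next(hSnapshot, &pe));
        }

        CloseHandle(hSnapshot);
    }

    return dwPid;
}
[/cpp]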

In the next sections, I will assume that we already know the process id and have enabled the debug privilege.

LoadLibrary remote thread
The simplest method of injecting a DLL is to make the target process use the Windows API LoadLibrary call to load the DLL from disk for us.

LoadLibrary calls your DLL’s DllMain function once the DLL has been loaded, so this is ideal for bootstrapping your own code.  (Note that it is inadvisable to call any thread synchronisation functions from DllMain because the loader lock can cause a deadlock.)

The LoadLibrary function takes a pointer to a filename as the parameter:

[cpp]
HMODULE WINAPI LoadLibrary(
    _In_ LPCTSTR lpFileName
);
[/cpp]

And by a stroke of luck, the thread entry point that you have to provide when creating a thread, LPTHREAD_START_ROUTINE, also takes a single parameter:

[cpp]
DWORD WINAPI ThreadProc(
    _In_ LPVOID lpParameter
);
[/cpp]

We can exploit this happy coincidence by setting the entry point of our remote thread to be LoadLibrary instead of a ThreadProc and then pass in the DLL filename pointer as the thread parameter.

First we need to open a handle to the process; then we can allocate some memory in the target process to hold the filename of the DLL we want to inject:

[cpp]
const char* pszFileName = "C:\\inject\\inject.dll";

//add one to the length for the NULL terminator
const size_t fileNameLength = strlen(pszFileName) + 1;

void* pProcessMem = VirtualAllocEx( hProcess,
                                    NULL,
                                    fileNameLength,
                                    MEM_COMMIT,
                                    PAGE_READWRITE );

WriteProcessMemory( hProcess,
                    pProcessMem,
                    pszFileName,
                    fileNameLength,
                    NULL );
[/cpp]

We now need to get the address of the LoadLibrary function in the target process.  Another happy coincidence helps us out here: for most (but not all) processes, Kernel32.dll is loaded at the same virtual address, even when ASLR is on.  We can therefore get the address of the function in the injector and it will map across to the same virtual address in the target process.

This of course is not always the case, so a more robust approach is to use the Microsoft Tool Help library to enumerate the DLLs loaded in the target process and obtain the Kernel32 base address there; we could then use an offset from this base address to get the address of LoadLibrary.  I’ve left most of this as an exercise for the reader, but a sketch of the module enumeration follows the next listing.  Here is how we create the thread in the target process:

[cpp]
HMODULE hKernel32 = GetModuleHandle( "Kernel32.dll" );
void* pLoadLib = (void*)GetProcAddress( hKernel32, "LoadLibraryA" );

//
// Create a remote thread starting at LoadLibrary
//
DWORD dwThreadId = 0;
HANDLE hThread = CreateRemoteThread( hProcess,
                                     NULL,
                                     0,
                                     (LPTHREAD_START_ROUTINE)pLoadLib, //entry point (LoadLibrary)
                                     pProcessMem,                      //filename
                                     0,
                                     &dwThreadId );
[/cpp]
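For reference, here is a minimal sketch of the module enumeration mentioned above.  GetRemoteModuleBase is an illustrative helper that returns the base address of a named module in the target process; the remote address of LoadLibraryA can then be computed as (local LoadLibraryA address minus local Kernel32 base) plus the remote base:

[cpp]
#include <windows.h>
#include <tlhelp32.h>
#include <string.h>

// Sketch: enumerate the modules loaded in dwProcessId and return the base
// address of the module named pszModuleName (NULL if not found).
void* GetRemoteModuleBase(DWORD dwProcessId, const char* pszModuleName)
{
    void* pBase = NULL;
    HANDLE hSnapshot = CreateToolhelp32Snapshot(TH32CS_SNAPMODULE | TH32CS_SNAPMODULE32, dwProcessId);

    if (hSnapshot != INVALID_HANDLE_VALUE)
    {
        MODULEENTRY32 me = { 0 };
        me.dwSize = sizeof(MODULEENTRY32);

        if (Module32First(hSnapshot, &me))
        {
            do
            {
                if (_stricmp(me.szModule, pszModuleName) == 0)
                {
                    pBase = me.modBaseAddr;
                    break;
                }
            } while (Module32Next(hSnapshot, &me));
        }

        CloseHandle(hSnapshot);
    }

    return pBase;
}
[/cpp]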

LoadLibrary NtCreateThreadEx variation
A variation on the LoadLibrary technique is to use the undocumented function NtCreateThreadEx.  This allows injection across session boundaries, so it’s possible to inject into a process running in a different user’s session.

An example of NtCreateThreadEx usage is given on securityxploded.com.

We can simply replace the call to CreateRemoteThread in the previous example with a call to NtCreateThreadEx:

[cpp]
HMODULE hKernel32 = GetModuleHandle( "Kernel32.dll" );
void* pLoadLib = (void*)GetProcAddress( hKernel32, "LoadLibraryA" );

struct NtCreateThreadExBuffer
{
    ULONG  Size;
    ULONG  Unknown1;
    ULONG  Unknown2;
    PULONG Unknown3;
    ULONG  Unknown4;
    ULONG  Unknown5;
    ULONG  Unknown6;
    PULONG Unknown7;
    ULONG  Unknown8;
};

//
// Obtain NtCreateThreadEx function pointer
//
typedef NTSTATUS (WINAPI *fpNtCreateThreadEx)
(
    OUT PHANDLE hThread,
    IN  ACCESS_MASK DesiredAccess,
    IN  LPVOID ObjectAttributes,
    IN  HANDLE ProcessHandle,
    IN  LPTHREAD_START_ROUTINE lpStartAddress,
    IN  LPVOID lpParameter,
    IN  BOOL CreateSuspended,
    IN  ULONG StackZeroBits,
    IN  ULONG SizeOfStackCommit,
    IN  ULONG SizeOfStackReserve,
    OUT LPVOID lpBytesBuffer
);

HMODULE hNtDLL = GetModuleHandle("Ntdll.dll");

fpNtCreateThreadEx pNtCreateThreadEx =
    (fpNtCreateThreadEx)GetProcAddress(hNtDLL, "NtCreateThreadEx");

NtCreateThreadExBuffer ntbuffer = { 0 };
DWORD temp1 = 0;
DWORD temp2 = 0;

ntbuffer.Size = sizeof(NtCreateThreadExBuffer);
ntbuffer.Unknown1 = 0x10003;
ntbuffer.Unknown2 = 0x8;
ntbuffer.Unknown3 = &temp2;
ntbuffer.Unknown4 = 0;
ntbuffer.Unknown5 = 0x10004;
ntbuffer.Unknown6 = 4;
ntbuffer.Unknown7 = &temp1;
ntbuffer.Unknown8 = 0;

HANDLE hThread = NULL;

pNtCreateThreadEx( &hThread,
                   0x1FFFFF,
                   NULL,
                   hProcess,
                   (LPTHREAD_START_ROUTINE)pLoadLib,
                   pProcessMem,
                   FALSE,
                   0,
                   0,
                   0,
                   &ntbuffer );
[/cpp]


The downside to this method is that the function is undocumented so it may change in the future.

LoadLibrary QueueUserAPC variation
If we don’t want to start our own thread, we can hijack an existing thread in the target process by using the QueueUserAPC function.

[cpp]
DWORD WINAPI QueueUserAPC(
    _In_ PAPCFUNC pfnAPC,
    _In_ HANDLE hThread,
    _In_ ULONG_PTR dwData
);
[/cpp]

Calling this function will queue an asynchronous procedure call on the specified thread.  As with the previous methods, it just so happens that the APC callback function prototype more or less matches that of LoadLibrary:

[cpp]
VOID CALLBACK APCProc(
    _In_ ULONG_PTR dwParam
);
[/cpp]

So we can simply substitute LoadLibrary for a real APC callback function, and the parameter can be a pointer to the filename of the DLL we wish to inject.  One issue with this method revolves around how Windows executes APCs.  Windows has no overarching scheduler looking at the APC queue, so the queue is only examined when the thread becomes alertable.  This happens when the thread makes an alertable wait call such as SleepEx or WaitForSingleObjectEx with the alertable flag set (among others).

So a ‘hack’ we can employ is to queue the APC on every single thread and hope that at least one of them will become alertable; a prime candidate is the Windows message queue thread of a UI application.  Another potential ‘hack’ would be to use SetThreadContext to point EIP at SleepEx, however we may crash the thread by doing this.

The CreateRemoteThread call from previous examples can be replaced by the following:

[cpp]
HANDLE hSnapshot = CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, 0);

if (hSnapshot != INVALID_HANDLE_VALUE)
{
    THREADENTRY32 thEntry = { 0 };
    thEntry.dwSize = sizeof(THREADENTRY32);
    DWORD processId = GetProcessId(hProcess);
    BOOL bEntry = Thread32First(hSnapshot, &thEntry);

    //try and open every thread belonging to the target process
    while (bEntry)
    {
        if (processId == thEntry.th32OwnerProcessID)
        {
            HANDLE hThread = OpenThread(THREAD_ALL_ACCESS, FALSE, thEntry.th32ThreadID);

            if (hThread)
            {
                QueueUserAPC((PAPCFUNC)pLoadLib, hThread, (ULONG_PTR)pProcessMem);

                CloseHandle(hThread);
            }
        }

        bEntry = Thread32Next(hSnapshot, &thEntry);
    }

    CloseHandle(hSnapshot);
}
[/cpp]

 

SetWindowsHookEx

Another method, SetWindowsHookEx, can be used in two ways.  It can either inject a DLL into every running process or be targeted at a specific thread in a process.

[cpp]
HHOOK WINAPI SetWindowsHookEx(
    int idHook,
    HOOKPROC lpfn,
    HINSTANCE hMod,
    DWORD dwThreadId
);
[/cpp]

As Microsoft notes in its documentation, if you call this function from a 32-bit application then it only injects into other 32-bit applications.  Conversely, a 64-bit application calling this function only injects into other 64-bit applications.

It may be possible to work around this limitation by crafting your 32-bit injector code to switch into 64-bit mode and then call SetWindowsHookEx; the technique is detailed in ReWolf’s blog.

Or simply create both a 64-bit and a 32-bit injector application.  Note though that you would need a 64-bit DLL to inject into a 64-bit process.

Another downside of using SetWindowsHookEx is that, if you want to play nicely with Windows, you need to call UnhookWindowsHookEx from the injector when you have finished with the hook.  This requires that your injector application continues to run after hooking and sets up some sort of communication with the hook DLL, for example a named pipe or mutex, so that hook removal can be negotiated.
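As a rough sketch of the mechanics (error handling omitted), the injector loads its hook DLL, passes an exported hook procedure to SetWindowsHookEx, keeps running while the hook is active, and removes it when done.  The DLL path and the exported “HookProc” name are illustrative assumptions:

[cpp]
#include <windows.h>
#include <stdio.h>

int main()
{
    // The hook DLL must export a real HOOKPROC; "HookProc" is an assumed name.
    HMODULE hHookDll = LoadLibrary("C:\\inject\\inject.dll");
    HOOKPROC pHookProc = (HOOKPROC)GetProcAddress(hHookDll, "HookProc");

    // dwThreadId = 0 hooks all GUI threads on the current desktop;
    // pass a specific thread id instead to target a single process.
    HHOOK hHook = SetWindowsHookEx(WH_GETMESSAGE, pHookProc, hHookDll, 0);

    printf("Hook installed, press return to remove it\n");
    getchar();

    // Play nicely with Windows and remove the hook when finished.
    UnhookWindowsHookEx(hHook);

    return 0;
}
[/cpp]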

Other methods
There are still further injection methods to investigate.  An interesting one exploits shared sections, an example of which is the System Tray injection method used by the Win32.Gapz virus; a Metasploit version of this can be found here:

https://github.com/0vercl0k/stuffz/blob/master/gapz_code_injection.cpp

This involves writing some shellcode and exploiting a security weakness in Windows, so it is not as legitimate as the other techniques discussed.

Conclusion

It’s relatively simple to load a DLL into another process by causing LoadLibrary to be invoked on a remote thread, as shown above.  The QueueUserAPC technique is an interesting one; however, it suffers from two problems: the first is that it requires an alertable thread, and the second is that it’s difficult to determine whether the APC has been called so that the memory can be released.  A possible solution would be to use a named event in the injected DLL.
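A minimal sketch of that named-event idea follows; the event name is illustrative.  The injector creates the event before injecting and waits on it before freeing the remote memory, while the DLL signals it from DllMain once it has loaded:

[cpp]
#include <windows.h>

// Injector side (sketch):
//   HANDLE hLoaded = CreateEvent(NULL, TRUE, FALSE, "Global\\InjectDllLoaded");
//   ... perform the injection as shown earlier ...
//   if (WaitForSingleObject(hLoaded, 30000) == WAIT_OBJECT_0)
//   {
//       VirtualFreeEx(hProcess, pProcessMem, 0, MEM_RELEASE);
//   }

// Injected DLL side: signal the event once we are loaded.
BOOL WINAPI DllMain(HINSTANCE hinstDLL, DWORD fdwReason, LPVOID lpvReserved)
{
    if (fdwReason == DLL_PROCESS_ATTACH)
    {
        HANDLE hLoaded = OpenEvent(EVENT_MODIFY_STATE, FALSE, "Global\\InjectDllLoaded");
        if (hLoaded)
        {
            SetEvent(hLoaded);
            CloseHandle(hLoaded);
        }
    }

    return TRUE;
}
[/cpp]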

Using SetWindowsHookEx has the downside that it can only inject into either 32-bit or 64-bit processes, depending on which type of process it’s being called from.

Download the  source code

To contact Nettitude’s editor, please email media@nettitude.com.

DLL Injection: Part One

A High Level Overview

DLL injection is a technique that can be used by legitimate software to add functionality, aid with debugging, or reverse engineer software running on a Windows PC.  It is also often used by malware to subvert applications running on target systems, so from a security point of view, it’s useful to know how DLL injection works.
This blog post will attempt to explain code injection with a very simple, high level overview.  I am not planning on delving into the technical details here, which will follow in a subsequent post.

Why place our code into another process?

Virus checkers and personal security products largely use these techniques to place their own software routines into all processes running on the system.  This allows them to monitor each process while it’s running and install hooks into critical functions.  For example, you may want to monitor calls to the “CreateProcess” function in your web browser.  A call to this function could be an indicator that the browser has been compromised, as you don’t generally expect the browser to be running additional processes.
Another legitimate reason would be for reverse engineering purposes.  We may have a malware executable whose behaviour we want to monitor, and loading our code into it would enable us to do this.
There are also nefarious reasons. From a penetration testing point of view, we may want to retrieve private information from the memory of an application, for example, password hashes from the lsass process.
Malware will also use injection techniques to place shell-code, executable or DLL code into the memory of another process in order to hide itself.

How is injection achieved?

Broadly speaking, user mode processes running on Windows are ‘compartmentalised’, in that they have their own memory space and cannot usually read or write the memory of other running processes.  Furthermore, the basic unit of execution in Windows is called a Thread, and each process must have at least one Thread in order to be “running”.  In other words, all code execution happens in Threads.

So, in order to “inject” our code and get it to run inside another process, we are going to need to be able to write our code from our injector application into the target process memory and then create a Thread of execution in the target process which will then execute the memory we have injected.

Injection Process

Luckily for us, Microsoft provides some functionality that allows us to do this.
There are several different methods we can use to get our code to run; the simplest is a combination of the functions “OpenProcess”, “VirtualAllocEx”, “WriteProcessMemory”, “ReadProcessMemory” and “CreateRemoteThread”.
I will cover the technical details of how these, and other techniques are used in a follow-up post.

Do we always need to inject our code?

To load our own code into a process, we don’t always need to use injection.  Another simpler method is available for legitimate purposes.
Microsoft provides some registry keys:

HKLM\Software\Microsoft\Windows NT\CurrentVersion\Windows\AppInit_DLLs
HKLM\Software\Microsoft\Windows NT\CurrentVersion\Windows\LoadAppInit_DLLs

On a 64-bit machine the following keys are also provided for 32-bit executables:

HKLM\Software\Wow6432Node\Microsoft\Windows NT\CurrentVersion\Windows\AppInit_DLLs
HKLM\Software\Wow6432Node\Microsoft\Windows NT\CurrentVersion\Windows\LoadAppInit_DLLs

Placing a DLL filename into the AppInit_DLLs value will cause the DLL to be loaded whenever any application on the system loads User32.dll (which is, broadly speaking, every application on the system).  This works provided that the LoadAppInit_DLLs value is set to 0x00000001.
We would then need to create our own thread as the DLL loads, from the DllMain entry point, being careful not to attempt any thread synchronisation; a minimal sketch follows below.
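Something along these lines, where WorkerThread is an illustrative name for whatever the payload does (only a bare CreateThread is safe here; waiting on the new thread from DllMain risks deadlocking on the loader lock):

[cpp]
#include <windows.h>

static DWORD WINAPI WorkerThread(LPVOID lpParameter)
{
    // ... do the real work here ...
    return 0;
}

BOOL WINAPI DllMain(HINSTANCE hinstDLL, DWORD fdwReason, LPVOID lpvReserved)
{
    if (fdwReason == DLL_PROCESS_ATTACH)
    {
        DisableThreadLibraryCalls(hinstDLL);

        HANDLE hThread = CreateThread(NULL, 0, WorkerThread, NULL, 0, NULL);
        if (hThread)
        {
            CloseHandle(hThread); // do not wait on the thread from DllMain
        }
    }

    return TRUE;
}
[/cpp]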

Conclusion

Using the AppInit_DLLs registry key has a downside, which is that it will load our DLL into every application on the system.  If we want a more targeted approach, then we need to use an injection method to target a single application.

 

 

To contact Nettitude’s editor, please email media@nettitude.com.

A Beginners’ Guide to Obfuscation

Obfuscation is a technique used to change software code in order to make it harder for a human to understand. There are several reasons one might obfuscate code:

  • To make it harder for unauthorised parties to copy the code
  • To reduce the size of the code in order to improve performance. For example a browser can download a javascript file quicker
  • For fun! There are code obfuscation competitions
  • To avoid detection from security products, such as Intrusion Detection systems
  • To make any analysis of the code more difficult. For instance, reverse engineering a malicious executable

I will focus on the last two, which are of most interest to a security researcher. I  will use JavaScript as an example, but the techniques are mostly transferable to other languages.

For example, take the following exploit code, written in JavaScript:

var launcher = new ActiveXObject("WScript.Shell");
launcher.Run("C:\\malware.exe");

One could easily write a snort signature to detect this activity on a network level, for example:

alert tcp any any -> any 80 (content: "WScript.Shell"; content: ".Run|22|malware.exe"; distance: 0; within: 100; msg: "malware.exe executed by javascript";)

 By looking at the code it is quite obvious that it is trying to run an executable called ‘malware.exe’.

I will now demonstrate and rate nine common obfuscation techniques which an attacker could utilise to avoid detection from security products and make the understanding of what the code does difficult for a security analyst.

String Concatenation

The author can split the strings which could be signatured, or which give an indication of what the code does, into substrings which can then be concatenated to get the desired result:

var a = "cript.Sh";
var b = ".e";
var launcher = new ActiveXObject("WS" + a + "ell");
launcher.Run("C:\\mal" + "ware" + b + "xe");

 Rating: 2 (out of 10) – It would be relatively easy to figure out the true intentions of the code by looking at it, but this method helps the code avoid static signature checks when used with an interpreted language. It is likely to be removed during the optimisation process if used in a compiled language.

String replacement methods

This method uses a particular language’s char or string replacement methods to replace certain character sequences in strings in order to get the desired result:

var launcher = new ActiveXObject("WxxSxcxrxixpxtx.xSxhxexlxlx".replace(/x/g, ""));
launcher.Run("C:\\mGlwGrP.PxP".replace(/G/g, "a").replace(/P/g, "e"));

Rating: 3 – This method is similar to the previous example but slightly more difficult to figure out by sight alone. The find and replace functions of text editors would help.

String Encoding

There are various techniques that can be used to encode the string, which can result in the same string when it is evaluated:

var hexString = "\x57\x53\x63\x72\x69\x70\x74\x2e\x53\x68\x65\x6c\x6c";
var octString = "\103\72\134\134\155\141\154\167\141\162\145\56\145\170\145";
var launcher = new ActiveXObject(hexString);
launcher.Run(octString);

Rating: 4 – Similar to the previous example, but more difficult to figure out by sight alone unless you know your encodings very well. A debugger, e.g. Firebug, would be useful to reveal the ASCII representations of the variables.

 Custom Encoding

The author encodes strings using a custom algorithm and provides a decoder function to get back to the originals:

//this is a simple function to decode a string which has been xored with 0x0C
function decode(encoded) {
    var decoded = '';
    for (i = 0; i < encoded.length; i += 2) {
        //var hex = encoded.substring(i, i+2)
        var s = String.fromCharCode(parseInt(encoded.substr(i, 2), 16) ^ 0x0C);
        decoded = decoded + s;
    }
    return decoded;
}
var launcher = new ActiveXObject(decode("5b5f6f7e657c78225f64696060"));
launcher.Run(decode("4f3650616d607b6d7e6922697469"));

Rating: 5 – This method is better than the previous example as the analyst would need access to the decoder function to reveal the ASCII strings. Again, a debugger would be useful in this case.

 Name Substitution

Replace all variable, constant and function names with non-meaningful names, which often are very similar to each other in order to confuse analysts:

var lllll = "WScript.Shell";
var lll1l = "C:\\malware.exe";
var l1lll = new ActiveXObject(lllll);
var ll1ll = function (llllll) {
    llllll.Run(lll1l);
};
ll1ll(l1lll);

Rating: 1 – This method is arguably more of a hindrance to analysts than anything else and does not help avoid signature detection. It can be overcome with find and replace in your text editor if need be. It is also not applicable to compiled languages like C++.

 White Space reduction

Remove all unnecessary white space and compress code into as little space as possible:

var lllll="WScript.Shell";var lll1l="C:\\malware.exe";var l1lll=new ActiveXObject(lllll);var ll1ll=function(llllll){llllll.Run(lll1l);};ll1ll(l1lll);

 Rating: 1 – Again, this method is arguably a hindrance to analysts and does not help avoid signature detection. It can be overcome with a code formatter. It is also not applicable to compiled code.

Dead Code Insertion

This method inserts code that is never called and does nothing to increase confusion:

var a = 1;
var b = 2;
var g = "WScript.Shell";
var c = "asasdsaxzzkxj2222";
var d = "g";
var e = 5;
var f = 6;
var h = "C:\\malware.exe";
var i = new ActiveXObject(g);
var j = function (p, q) {
    p.Run(q);
    return 1;
};
var k = function (l1) {
    var ll1 = 0;
    for (var i = 0; i < 100; i++) {
        var ttt = g + h + d;
        for (var j = 0; j < 200; j++) {
            ttt += j;
        }
    }
    return ll1;
};
var m = a + k(b) + j(i, h);

Meaningless loops also have the added advantage of being able to trick emulators into halting analysis of code if it is taking too long. Please see here for an example.

Rating: 5 – In my view, this method wastes an analyst’s valuable time trying to find what code is actually executed, and if used correctly it can bypass code emulation checks. It can also be useful in compiled code; for example, if used in conjunction with a packer, it can make it more difficult to find the real entry point of the code.

 Pass Arguments at Runtime

The author can write the code to expect critical values to be passed into the programme at runtime; for example, a Java applet’s variables could all be encrypted inside the code and require a decryption key to be passed in. The analyst may have the applet code but may not have access to the packet capture which might reveal what the key is:

/* HTML code which the analyst may not have access to */
<html>
<object type="application/x-java-applet" width="0" height="0">
<param name="archive" value="badjar.jar"/>
<param name="key" value="123456789abcdef"/>
</object>
</html>

/* Java code which would be found in the badjar.jar applet */
private String decrypt(String encryptedString) throws Exception {
    String key = getParameter("key");
    SecretKeySpec skeySpec = new SecretKeySpec(key.getBytes(), "AES");
    Cipher cipher = Cipher.getInstance("AES");
    cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(skeySpec.getEncoded(), "AES"));
    byte[] original = cipher.doFinal(encryptedString.getBytes());
    return new String(original);
}

Please see here for an example of this kind of technique used in actual malware.

Rating: 7 – The analyst must have access to both the malware code and the key which was used to decrypt the strings within it.

 Packing

It’s not just the constant strings which can be obfuscated. The javascript eval function enables a programmer to pass in javascript code which is then evaluated and executed at runtime. So we can pack our whole programme into a variable and decode and evaluate it at run time:

//this is a simple function to decode a string which has been xored with 0x0C
function decode(encoded) {
    var decoded = '';
    for (i = 0; i < encoded.length; i += 2) {
        //var hex = encoded.substring(i, i+2)
        var s = String.fromCharCode(parseInt(encoded.substr(i, 2), 16) ^ 0x0C);
        decoded = decoded + s;
    }
    return decoded;
}
eval(decode("7a6d7e2c606d79626f64697e2c312c62697b2c4d6f78657a6954436e66696f78242e5b5f2e2c272c68696f636869242e3a6a3b693a393b6f3b343e3e396a3a382e252c272c2e69602e2c272c5f787e65626b226a7e63614f646d7e4f636869243d3c342525372c606d79626f64697e225e7962242e4f365050616d602e2c272c68696f636869242e3b6e3a683b693a353e3e3a353b383a352e252537"));

JavaScript is not the only language where this can be done. Java’s reflection API allows classes to be defined by a string which can then be loaded and executed at runtime. Packing an executable is a technique which compresses the whole executable and provides a single unpacking function to uncompress the actual code and run it. This is probably the most common technique used to obfuscate code. Below are some examples using various programming languages:

Executable packing malware

Packed Java exploit

Packed javascript evaluation

Rating: 9 – While it is possible to use debuggers to set breakpoints and examine the contents of the strings, it is time consuming and can further complicate matters if the values have been packed multiple times. In the case of executable packing, an analyst must find the point in the assembly code where the unpacking routine finishes, which is not trivial. It is advisable, in this case, to use an emulator in a safe environment and examine system behaviour, such as file system changes or network activity

 Commercial Tools

Commercial obfuscators combine all of these techniques, which makes life very difficult for analysts and also makes it virtually impossible for signature based detection to work on malicious code. Fortunately there are also commercial de-obfuscators which can help us. The table below lists some of these tools:

 

Javascript
Obfuscators: Dean Edwards Packer, Free Javascript Obfuscator, JS Minifier, Stunnix
Analyst tools:
  • JSBeautifier – Copy and paste your script into their website – does code formatting and unpacking, and is able to detect certain obfuscators
  • JSUnpack – Python source code available, or use their website.  Able to detect hidden HTTP connections being created, among other things
  • JSDetox – Offers a web application where analysts can upload javascripts
  • SpiderMonkey – Firefox’s javascript engine; the command line version can be used to evaluate scripts outside of the browser
  • Firebug – Javascript debugger for use within Firefox

Java
Obfuscators: Allatori, CafeBabe, JBCO, ProGuard
Analyst tools:
  • JDO – Decompiles and deobfuscates class files
  • Procyon – Decompiles class files into java files

Assembly
Obfuscators/packers: UPX, CExe, RLPack, FSG, Themida
Analyst tools:
  • UPX – Able to unpack UPX packed executables
  • OllyDbg – Debugger which enables memory dumping of packed executables and import table reconstruction
  • ChimpREC – Allows a process to be dumped and its import table to be fixed

 

To contact Nettitude’s editor, please email media@nettitude.com.

 

Programmable Logic Controller (PLC) Security

Industrial Control Systems (ICS) are very important components of our critical infrastructure. Programmable logic controllers (PLC) are some of the well-known types of control system components. PLCs are computers used for the automation of typically industrial electromechanical processes, such as the control of machinery on factory assembly lines, amusement rides, light fixtures, power stations, power distribution systems, power generation systems, and gas turbines, to name a few.

There are different types of PLC, which can be classified into three major categories:

  • Logic controller – Sometimes called a ‘smart’ or programmable relay. Simple to program and cost effective for low Input/Output (I/O), slower speed applications
  • Compact PLC – An intermediate level offering increased instruction sets and higher I/O capacity than a logic controller
  • Advanced PLC – Offering greater processing power, larger memory capacity and even higher I/O expandability and networking options

There is no doubt that protecting PLCs from cyber-attacks is very important, as they directly control machinery. In critical infrastructure, any successful attack on a PLC could be as serious as the Siberian gas pipeline explosion in 1982.

What exactly is the attack surface of PLCs? What can an attacker do against PLCs? Why should we give PLCs a higher priority when protecting critical infrastructure?

The people

I will start by setting aside the supply chain problem, which is one of the main problems with any device purchased from a foreign country; I would barely scratch the surface if I started discussing supply chain problems here. There have been many cases where governments got their supply chain completely wrong and were sold the wrong products.

Beyond the supply chain problems, many statistics show that human error rates are still high enough to be a serious concern for critical infrastructure. In addition to human errors, insider attacks remain one of the top security concerns for critical infrastructures and critical environments. Too many questions remain unanswered as to what would be an effective solution to tackle the insider threat.

It is generally agreed that training is very important. How many people consider security to be their problem? How many people would still use a USB drive even when not authorised? I recently led a policy review for an organisation. When we started discussing access to removable media, the room was split into two sides. Some people confirmed that there was a zero-USB policy in the organisation, whilst others argued that there was another policy that allowed certain people to use USB drives.

It is good practice for critical environments to have a “need to know” policy by default. Imagine the case where a picture of a party, a visit to a plant, or a picture of someone working in the plant is posted online showing the software and hardware names and versions used in their work environment. How invaluable could that be to an attacker?  Likewise, social engineering can be used against people working in critical environments to reveal information about their systems.

Communications: Most PLCs support a wide range of communication interfaces. I am only going to focus on the main security issues.

Network topologies: Certain network topologies are more prone to attacks than others. It is important that the right topology is in place to allow strong security to be built around it. When choosing a topology, the following issues should be considered: security, bandwidth, redundancy and convergence, disruption during network upgrade, readiness for network convergence. 

Network communication protocols: Many communication protocols are proprietary and only well known to the manufacturer. This means that the security of such protocols is only as good as their team. Despite many of the protocols being proprietary, many open source tools could be used to determine the nature of this. Also, these protocols have not been built with security in mind. Many efforts are ongoing to secure protocols used in PLC communications, which is good news. 

VPN: Many people consider a VPN to be the ultimate security control. During an audit at a fairly large plant, the computer operator was, during his break, listening to music from a USB drive on a computer that was used to VPN into the plant. The security implications here are clear.

Fuzzing, in experimental settings, has caused serious problems for PLCs. It is still the case that a large ping request will cause some disruption of communication between the PLC and any other device communicating with it via Ethernet.

PLC web interfaces can be reachable via search engines. This again is another security problem that could cause serious disruption to a plant’s operations.

Logic inside the PLC

Once attackers are within reach of a PLC, because they have managed to gain access to computer systems that lead them to the control system network, there are a number of things they can achieve:

  • Send inaccurate information to system operators, either to disguise unauthorised changes or to cause the operators to initiate inappropriate actions
  • Change alarm thresholds or disable them
  • Interfere with the operation of plant equipment, which can cause modification to safety settings, by sending malformed packets
  • Block or delay the flow of information
  • Block data or send false information to operators to prevent them from being aware of certain conditions or to make them initiate inappropriate actions
  • Overtax staff resources through simultaneous failures of multiple systems
  • Steal sensitive data (using open source command line tools, i.e. with no installation required, an attacker can download the logic running on a PLC)
  • Upload new firmware, which would not necessarily require a reboot of the PLC
  • Execute exploits
  • Activate the website on the PLC if not already active
  • Modify the website to allow remote access

It is very important that attackers do not have access to the logic inside the PLC. Stuxnet and the Siberian gas pipeline explosion are two real-world cases that show that the malicious use of PLCs can have serious consequences.

Application layer

PLCs are increasingly designed to integrate networking functionality. Consequently, a good number of PLCs offer a web interface. A large number of SCADA web interfaces have been discovered through the Shodan search engine. It is still the case that Google and other search engines index folders that were not meant to be indexed. Folders on computers available online via a DMZ that are not meant to be indexed by search engines need to be marked in robots.txt. This lack of understanding has made Google hacking commands very successful. Once the website on the PLC is available to the attackers, they have an opportunity to carry out a full unauthorised penetration test to find ways to get into the system. Using a brute force attack, or any other method to discover the password, the attackers can then gain full access to the website. Once authenticated, the attackers can:

  • Make unauthorised changes to instructions in local processors to take control of master-slave relationships between Master Terminal Unit (MTU) and Remote Terminal Unit (RTU) systems
  • Prevent access to legitimate users
  • Modify the ICS software or configuration settings, or infect the ICS software with malware
  • Modify Tag values

The Master Terminal Unit (MTU) in a SCADA system is the device that issues commands to the Remote Terminal Units (RTUs), which are located at sites remote from the control centre. The MTU gathers the required data, stores and processes the information, displays it in the form of pictures, curves and tables on a human interface, and helps operators to take control decisions.

Attackers are also able to cause serious damage to a plant’s operation without gaining full access to the PLC web interface (CVE-2014-2259, CVE-2014-2254, and CVE-2014-2255).

Software is rarely bug free. Over the last few years, the security community has been very interested in finding vulnerabilities in ICS hardware and software. Digitalbond has previously run an exercise dedicated to finding bugs in PLCs. The results found far more bugs and vulnerabilities than expected.

Operating systems

Long gone are the days when Mac OS and Linux were considered very secure. PLCs, just like any other computers, have an operating system (Microware OS-9, VxWorks). Vulnerabilities and bugs exist in OS-9 and VxWorks just as they exist in Microsoft Windows, Linux, Mac OS, Android, etc. Unlike regular computer OSs, patching the operating systems in PLCs against known bugs and vulnerabilities is very challenging. Many things need to be considered before deciding to update a PLC’s operating system. Even though patching is very important for any computer system’s security, malware such as the Havex Remote Access Trojan (RAT) has infected update installers from various ICS vendors. Such malware leaves ICS users baffled as to whether they should update and get infected, or not update and get infected anyway.

The concept of the ‘zero day’ attack is another challenge for the security of the operating system. When a vulnerability is not published, it can be exploited by attackers without being detected. There is very little chance that current security mechanisms will detect such attacks.

Hardware in PLCs is built to very specific specifications. One of their limitations is their ability to handle complicated, concurrent tasks. Traditionally, PLCs do not offer a great deal of memory, which implies that any logging capabilities have to be built outside of the PLC. PLCs should allow complex logging capabilities in order to allow in-depth forensic capabilities.

Hardware

One of the biggest weaknesses of most PLCs is that security is not built in by design. Generally, any compatible code can run on a PLC regardless of its origin (legitimate or malicious). Open source tools allow the organisation blocks (OBs) to be downloaded and uploaded without authentication. OB1, for instance, is loaded and executed without even a simple hash check.  If attackers have knowledge of the different tags used in a project that may control critical infrastructure, they can then substitute completely different logic, which could have catastrophic consequences. Attackers can develop their knowledge of an internal system using different elements (the HMI, the PLC’s website, documentation, and source code, to name a few).

In all fairness, certain organisation blocks (OBs) will require a reboot of the PLC for the code to take effect. However, it takes less than 3 seconds to reboot a PLC remotely, and it is quite likely that the operators will not notice any difference on the control screen. In some other environments, three seconds of inactivity would cause serious alerts, but this is not the case everywhere. When modifying the logic, if only timers are modified, the PLC will not generally require a reboot for the new timers to take effect. Built-in security is necessary in a PLC.

Good security by design for PLCs should provide device authentication, access control, auditing and logging, data integrity controls, secure booting, ladder logic execution control and encryption, at the very least.

Policies 

PLCs are as important in control system networks as any other asset would be in any other network environment. It is essential that they are managed with the highest priority. Any access, maintenance, upgrade, test, modification or downtime of PLCs needs to be accounted for, and these policies need to be enforced.

Programmable Logic Controller security can be summarised as shown in Figure 1.

 

[Figure 1: PLC Security – PLC Threat Landscape]

 

Why should we care about PLC security? 

If we follow the diagram by NERC, it is very likely that most PLCs will either host a critical service, operate a critical system or service, or be used in a critical system. If we care about any of our critical processes, functions or systems, we should definitely care about the various components upon which they depend. Figure 2 describes the process advocated by NERC to identify critical assets.

[Figure 2: Critical Asset Identification]

In conclusion, securing PLCs is paramount to securing critical infrastructure. PLCs will generally support a critical function in a plant, or they will be used in a critical path of production, making them a critical component. Many layers of protection need to be in place for a PLC to be secure: human risk factors, the protection of the logic inside the PLC, secure communications, application layer security, operating system security, hardware security and, last but not least, the management of all aspects of the above security requirements.

A simple picture taken in a work environment could provide an attacker with the last piece of information missing to be successful in his/her operations. PLCs are very important components of critical infrastructure and should be protected at all costs.

Protecting PLCs alone would not solve the problem of cyber-attacks. General governance should be in place to ensure that all aspects of security within the organisation are properly addressed. A holistic approach to security is highly recommended. Please read more about Nettitude’s holistic security at Cyber breaches response in-depth.

To contact Nettitude’s editor, please email media@nettitude.com.