DLL Injection: Part One

A High Level Overview

DLL injection is a technique that can be used by legitimate software to add functionality, aid with debugging, or reverse engineer software running on a Windows PC.  It is also often used by malware to subvert applications running on target systems, so from a security point of view, it’s useful to know how DLL injection works.
This blog post will attempt to explain code injection with a very simple, high-level overview.  I am not planning on delving into the technical details here; those will follow in a subsequent post.

Why place our code into another process?

Virus checkers and personal security products largely use these techniques to place their own software routines into all processes running on the system.  This allows them to monitor each process while it’s running and install hooks into critical functions.  For example, you may want to monitor calls to the “CreateProcess” function in your web browser.  A call to this function could be an indicator that the browser has been compromised, as you don’t generally expect the browser to be running additional processes.
Another legitimate reason would be for reverse engineering purposes.  We may have a malware executable whose behaviour we want to monitor, and loading our code into the malware would enable us to do this.
There are also nefarious reasons. From a penetration testing point of view, we may want to retrieve private information from the memory of an application, for example, password hashes from the lsass process.
Malware will also use injection techniques to place shell-code, executable or DLL code into the memory of another process in order to hide itself.

How is injection achieved?

Broadly speaking, user mode processes running on Windows are ‘compartmentalised’, in that they have their own memory space and cannot usually read or write the memory of other running processes.  Furthermore, the basic unit of execution in Windows is called a Thread, and each process must have at least one Thread in order to be “running”.  In other words, all code execution happens in Threads.

So, in order to “inject” our code and get it to run inside another process, we are going to need to be able to write our code from our injector application into the target process memory and then create a Thread of execution in the target process which will then execute the memory we have injected.

Injection Process

Luckily for us, Microsoft provides some functionality that allows us to do this.
There are several different methods we can use to get our code to run; the simplest is a combination of the functions "OpenProcess", "VirtualAllocEx", "WriteProcessMemory", "ReadProcessMemory" and "CreateRemoteThread".
I will cover the technical details of how these, and other techniques are used in a follow-up post.

Do we always need to inject our code?

To load our own code into a process, we don’t always need to use injection.  Another simpler method is available for legitimate purposes.
Microsoft provides some registry keys:

HKLM\Software\Microsoft\Windows NT\CurrentVersion\Windows\AppInit_DLLs
HKLM\Software\Microsoft\Windows NT\CurrentVersion\Windows\LoadAppInit_DLLs

On a 64-bit machine, the following keys are also provided for 32-bit executables:

HKLM\Software\Wow6432Node\Microsoft\Windows NT\CurrentVersion\Windows\AppInit_DLLs
HKLM\Software\Wow6432Node\Microsoft\Windows NT\CurrentVersion\Windows\LoadAppInit_DLLs

Placing a DLL filename into the AppInit_DLLs value will cause the DLL to be loaded whenever an application on the system loads User32.dll (which is, broadly speaking, every application on the system).  This works provided that the LoadAppInit_DLLs value is set to 0x00000001.
We would then need to create our own thread from the DLL’s “DllMain” entry point as it loads, being careful not to attempt any thread synchronisation, since the loader lock is held while DllMain runs.


Using the AppInit_DLLs registry key has a downside: it loads our DLL into every application on the system.  If we want a more targeted approach, then we need to use an injection method aimed at a single application.



To contact Nettitude’s editor, please email media@nettitude.com.

A Beginners’ Guide to Obfuscation

Obfuscation is a technique used to change software code in order to make it harder for a human to understand. There are several reasons one might obfuscate code:

  • To make it harder for unauthorised parties to copy the code
  • To reduce the size of the code in order to improve performance. For example, a browser can download a minified JavaScript file more quickly
  • For fun! There are code obfuscation competitions
  • To avoid detection by security products, such as Intrusion Detection Systems
  • To make any analysis of the code more difficult. For instance, reverse engineering a malicious executable

I will focus on the last two, which are of most interest to a security researcher. I will use JavaScript as an example, but the techniques are mostly transferable to other languages.

For example, take the following exploit code, written in JavaScript:

var launcher = new ActiveXObject("WScript.Shell");
launcher.Run("C:\\malware.exe");

One could easily write a Snort signature to detect this activity at the network level, for example:

alert tcp any any -> any 80 (content:"WScript.Shell"; content:".Run|22|malware.exe"; distance:0; within:100; msg:"malware.exe executed by javascript";)

By looking at the code, it is quite obvious that it is trying to run an executable called ‘malware.exe’.

I will now demonstrate and rate nine common obfuscation techniques which an attacker could utilise to avoid detection by security products and to make it difficult for a security analyst to understand what the code does.

String Concatenation

The author can split the strings which could be signatured, or which give an indication of what the code does, into substrings which can then be concatenated to get the desired result:

var a = "cript.Sh";
var b = ".e";
var launcher = new ActiveXObject("WS" + a + "ell");
launcher.Run("C:\\mal" + "ware" + b + "xe");

 Rating: 2 (out of 10) – It would be relatively easy to figure out the true intentions of the code by looking at it, but this method helps the code avoid static signature checks when used with an interpreted language. It is likely to be removed during the optimisation process if used in a compiled language.

String replacement methods

This method uses a particular language’s char or string replacement methods to replace certain character sequences in strings in order to get the desired result:

var launcher = new ActiveXObject("WxxSxcxrxixpxtx.xSxhxexlxlx".replace(/x/g, ""));
launcher.Run("C:\\mGlwGrP.PxP".replace(/G/g, "a").replace(/P/g, "e"));

Rating: 3 – This method is similar to the previous example but slightly more difficult to figure out by sight alone. The find-and-replace functions of text editors would help.
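Since the obfuscation relies on the language's own string methods, the interpreter itself is often the quickest deobfuscator. The sketch below (with the dangerous ActiveXObject and Run calls deliberately left out) shows how pasting just the string expressions into a JavaScript console reveals the hidden values:

```javascript
// Evaluate only the string expressions, never the code that uses them.
var progName = "WxxSxcxrxixpxtx.xSxhxexlxlx".replace(/x/g, "");
var filePath = "C:\\mGlwGrP.PxP".replace(/G/g, "a").replace(/P/g, "e");

console.log(progName); // "WScript.Shell"
console.log(filePath); // "C:\malware.exe"
```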

String Encoding

There are various techniques that can be used to encode the string, which can result in the same string when it is evaluated:

var hexString = "\x57\x53\x63\x72\x69\x70\x74\x2e\x53\x68\x65\x6c\x6c"; // "WScript.Shell" in hex escapes
var octString = "\103\72\134\134\155\141\154\167\141\162\145\56\145\170\145"; // the malware path in octal escapes
var launcher = new ActiveXObject(hexString);

Rating: 4 – Similar to the previous example, but more difficult to figure out by sight unless you know your encodings very well. A debugger, e.g. Firebug, would be useful to reveal the ASCII representations of the variables.
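When the escapes appear in a captured script's raw source text (i.e. as literal backslash sequences rather than an already-evaluated string), a small helper can expand them without ever executing the sample. This is a hypothetical analyst-side sketch, not part of the original exploit:

```javascript
// Hypothetical analyst helper: mechanically expands \xNN escape
// sequences found in an obfuscated script's raw source text.
function decodeHexEscapes(source) {
  return source.replace(/\\x([0-9a-fA-F]{2})/g, function (match, hex) {
    return String.fromCharCode(parseInt(hex, 16));
  });
}

// The obfuscated literal as it appears on disk (literal backslashes):
var raw = "\\x57\\x53\\x63\\x72\\x69\\x70\\x74\\x2e\\x53\\x68\\x65\\x6c\\x6c";
console.log(decodeHexEscapes(raw)); // "WScript.Shell"
```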

 Custom Encoding

The author encodes strings using a custom algorithm and provides a decoder function to get back to the originals:

//this is a simple function to decode a string which has been XORed with 0x0C
function decode(encoded) {
    var decoded = "";
    for (var i = 0; i < encoded.length; i += 2) {
        var s = String.fromCharCode(parseInt(encoded.substr(i, 2), 16) ^ 0x0C);
        decoded = decoded + s;
    }
    return decoded;
}
var launcher = new ActiveXObject(decode("5b5f6f7e657c78225f64696060"));

Rating: 5 – This method is better than the previous example, as the analyst would need access to the decoder function to reveal the ASCII strings. Again, a debugger would be useful in this case.
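For completeness, here is a sketch of the matching encoder an author might use to produce strings for the decoder above; XOR with the same key (0x0C) plus hex encoding round-trips exactly:

```javascript
// Hypothetical encoder matching the XOR-with-0x0C decoder: each character
// is XORed with the key and written out as two lowercase hex digits.
function encode(plain) {
  var out = "";
  for (var i = 0; i < plain.length; i++) {
    var hex = (plain.charCodeAt(i) ^ 0x0C).toString(16);
    if (hex.length < 2) hex = "0" + hex; // pad single-digit codes
    out += hex;
  }
  return out;
}

console.log(encode("WScript.Shell")); // "5b5f6f7e657c78225f64696060"
```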

 Name Substitution

Replace all variable, constant and function names with non-meaningful names, which often are very similar to each other in order to confuse analysts:

var lllll = "WScript.Shell";
var lll1l = "C:\\malware.exe";
var l1lll = new ActiveXObject(lllll);
function ll1ll(llllll) {
    llllll.Run(lll1l);
}
ll1ll(l1lll);

Rating: 1 – This method is arguably more of a hindrance to analysts than anything else and does not help avoid signature detection. It can be overcome with find and replace in your text editor if need be. It is also not applicable to compiled languages like C++.

 White Space reduction

Remove all unnecessary white space and compress code into as little space as possible:

var lllll="WScript.Shell";var lll1l="C:\\malware.exe";var l1lll=new ActiveXObject(lllll);function ll1ll(llllll){llllll.Run(lll1l);}ll1ll(l1lll);

 Rating: 1 – Again, this method is arguably a hindrance to analysts and does not help avoid signature detection. It can be overcome with a code formatter. It is also not applicable to compiled code.

Dead Code Insertion

This method inserts code that is never called, or that does nothing, in order to increase confusion:

var a = 1;
var b = 2;
var g = "WScript.Shell";
var c = "asasdsaxzzkxj2222";
var d = "g";
var e = 5;
var f = 6;
var h = "C:\\malware.exe";
var i = new ActiveXObject(g);
function j(p, q) {
    return 1;
}
function k(l1) {
    var ll1 = 0;
    for (var x = 0; x < 100; x++) {
        var ttt = g + h + d;
        for (var y = 0; y < 200; y++) {
            ttt += y;
        }
    }
    return ll1;
}
var m = a + k(b) + j(i, h);

Meaningless loops also have the added advantage of being able to trick emulators into halting analysis of the code if it takes too long. Please see here for an example.

Rating: 5 – In my view, this method wastes valuable analyst time in finding which code is actually executed, and if used correctly it can bypass code emulation checks. It can also be useful in compiled code; for example, used in conjunction with a packer, it can make it more difficult to find the real entry point of the code.

 Pass Arguments at Runtime

The author can write the code to expect critical values to be passed into the programme at runtime. For example, a Java applet's variables could all be encrypted inside the code and require a decryption key to be passed in. The analyst may have the applet code but may not have access to the packet capture which would reveal the key:

/* HTML code which the analyst may not have access to */
<object type="application/x-java-applet" width="0" height="0">
<param name="archive" value="badjar.jar"/>
<param name="key" value="123456789abcdef"/>
</object>

/* Java code which would be found in the badjar.jar applet */
private String decrypt(String encryptedString) throws Exception {
    String key = getParameter("key");
    SecretKeySpec skeySpec = new SecretKeySpec(key.getBytes(), "AES");
    Cipher cipher = Cipher.getInstance("AES");
    cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(skeySpec.getEncoded(), "AES"));
    byte[] original = cipher.doFinal(encryptedString.getBytes());
    return new String(original);
}

Please see here for an example of this kind of technique used in actual malware.

Rating: 7 – The analyst must have access to both the malware code and the key which was used to decrypt the strings within it.


Code Packing

It’s not just the constant strings which can be obfuscated. The JavaScript eval function enables a programmer to pass in JavaScript code which is then evaluated and executed at runtime, so we can pack our whole programme into a variable and decode and evaluate it at run time:

//this is a simple function to decode a string which has been XORed with 0x0C
function decode(encoded) {
    var decoded = "";
    for (var i = 0; i < encoded.length; i += 2) {
        var s = String.fromCharCode(parseInt(encoded.substr(i, 2), 16) ^ 0x0C);
        decoded = decoded + s;
    }
    return decoded;
}
//the packed programme can then be executed with eval(decode(encodedProgramme));

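To make the decode-and-evaluate step concrete, here is a self-contained sketch using the same XOR-with-0x0C scheme; the encoded string is a harmless hypothetical payload (the source text "1+1"), not real malware:

```javascript
// Self-contained demonstration of packing: decode() reverses the
// XOR-with-0x0C hex encoding, and eval() runs the recovered source text.
function decode(encoded) {
  var decoded = "";
  for (var i = 0; i < encoded.length; i += 2) {
    decoded += String.fromCharCode(parseInt(encoded.substr(i, 2), 16) ^ 0x0C);
  }
  return decoded;
}

var packed = "3d273d";             // hypothetical payload: decodes to "1+1"
var result = eval(decode(packed)); // the decoded text is executed at runtime
console.log(result);               // 2
```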
It’s not just JavaScript which can be used in this way. Java’s reflection API allows classes to be defined by a string which can then be loaded and executed at runtime. Packing an executable is a technique which compresses the whole executable and provides a single unpacking function to decompress the actual code and run it. This is probably the most common technique used to obfuscate code. Below are some examples using various programming languages:

Executable packing malware

Packed Java exploit

Packed javascript evaluation

Rating: 9 – While it is possible to use debuggers to set breakpoints and examine the contents of the strings, this is time consuming and can be further complicated if the values have been packed multiple times. In the case of executable packing, an analyst must find the point in the assembly code where the unpacking routine finishes, which is not trivial. It is advisable, in this case, to run the sample in an emulator in a safe environment and examine system behaviour, such as file system changes or network activity.

 Commercial Tools

Commercial obfuscators combine all of these techniques, making life very difficult for analysts and also making it virtually impossible for signature-based detection to work on malicious code. Fortunately, there are also commercial de-obfuscators which can help us. The table below lists some of these tools:


JavaScript
Obfuscators: Dean Edwards Packer, Free Javascript Obfuscator, JS Minifier, Stunnix
Analyst tools:
  • JSBeautifier – copy and paste your script into their website; it does code formatting and unpacking, and is able to detect certain obfuscators
  • JSUnpack – Python source code available, or use their website; able to detect hidden HTTP connections being created, among other things
  • JSDetox – offers a web application where analysts can upload JavaScript
  • SpiderMonkey – Firefox’s JavaScript engine; the command line version can be used to evaluate scripts outside of the browser
  • Firebug – JavaScript debugger for use within Firefox

Java
Obfuscators: Allatori, CafeBabe, JBCO, ProGuard
Analyst tools:
  • JDO – decompiles and deobfuscates class files
  • Procyon – decompiles class files into Java source files

Assembly
Obfuscators: UPX, CExe, RLPack, FSG, Themida
Analyst tools:
  • UPX – able to unpack UPX-packed executables
  • OllyDbg – debugger which enables memory dumping of packed executables and import table reconstruction
  • ChimpREC – allows a process to be dumped and its import table to be fixed




Programmable Logic Controller (PLC) Security

Industrial Control Systems (ICS) are very important components of our critical infrastructure. Programmable logic controllers (PLC) are some of the well-known types of control system components. PLCs are computers used for the automation of typically industrial electromechanical processes, such as the control of machinery on factory assembly lines, amusement rides, light fixtures, power stations, power distribution systems, power generation systems, and gas turbines, to name a few.

There are different types of PLC, which can be classified into three major categories:

  • Logic controller – Sometimes called a ‘smart or programmable’ relay. Simple to program and cost effective for low Input/Output (I/O), slower speed, applications
  • Compact PLC – An intermediate level offering increased instruction sets and higher I/O capacity than a logic controller
  • Advanced PLC – Offering greater processing power, larger memory capacity and even higher I/O expandability and networking options

There is no doubt that protecting PLCs from cyber-attacks is very important, as they directly control machinery. In critical infrastructure, a successful attack on a PLC could be as serious as the Siberian gas pipeline explosion of 1982.

What exactly is the attack surface of PLCs? What can an attacker do against them? Why should we give PLCs a higher priority when protecting critical infrastructure?

The people

I will start by ruling out the supply chain problem, which is a concern with any device purchased from a foreign country.  I would barely scratch the surface if I started discussing supply chain problems here; there have been many cases where even governments got their supply chain completely wrong and were sold the wrong products.

Beyond the supply chain, many statistics show that human error rates are still high enough to be a serious concern for critical infrastructure. In addition to human error, insider attacks remain one of the top security concerns for critical environments. Too many questions remain unanswered as to what would be an effective solution to tackle the insider threat.

It is generally agreed that training is very important. How many people consider security to be their problem? How many people would still use a USB drive even when not authorised to? I recently led a policy review for an organisation; when we started discussing access to removable media, the room split into two sides. Some people insisted that the organisation had a zero-USB policy, whilst others argued that another policy allowed certain people to use USB drives.

It is good practice for critical environments to have a “need to know” policy by default. Imagine the case where a picture of a party, a site visit, or someone working in the plant is posted online, showing the names and versions of the software and hardware used in the work environment. Such information would be invaluable to an attacker. Likewise, social engineering can be used against people working in critical environments to reveal information about their systems.

Communications: Most PLCs support a wide range of communication interfaces. I am only going to focus on the main security issues.

Network topologies: Certain network topologies are more prone to attack than others. It is important that the right topology is in place to allow strong security to be built around it. When choosing a topology, the following issues should be considered: security, bandwidth, redundancy, disruption during network upgrades, and readiness for network convergence.

Network communication protocols: Many communication protocols are proprietary and well known only to the manufacturer, which means the security of such protocols is only as good as the manufacturer’s team. Despite many of the protocols being proprietary, many open source tools can be used to determine their nature. Moreover, these protocols were not built with security in mind. Many efforts are ongoing to secure the protocols used in PLC communications, which is good news.

VPN: Many people consider a VPN to be the ultimate security measure. During an audit at a fairly large plant, I observed a computer operator, during his break, listening to music from a USB drive on a computer that was used to VPN into the plant. The security implications here are clear.

Fuzzing, in experimental settings, has caused serious problems for PLCs. It is still the case that a large ping request will cause some disruption of communication between a PLC and any other device communicating with it via Ethernet.

PLC web interfaces can be reachable via search engines. This is another security problem that could cause serious disruption to a plant’s operations.

Logic inside the PLC

Once attackers are within reach of a PLC, because they have managed to get access to computer systems that lead them to the control system network, there are a number of things they can achieve:

  • Send inaccurate information to system operators, either to disguise unauthorised changes or to cause the operators to initiate inappropriate actions
  • Change alarm thresholds, or disable alarms entirely
  • Interfere with the operation of plant equipment, for example by sending malformed packets that modify safety settings
  • Block or delay the flow of information
  • Block data, or send false information to operators, to prevent them from becoming aware of certain conditions or to make them initiate inappropriate actions
  • Overtax staff resources by causing simultaneous failures of multiple systems
  • Steal sensitive data (using open source command line software, i.e. with no installation required, an attacker can download the logic running on a PLC)
  • Upload new firmware, which would not necessarily require a reboot of the PLC
  • Execute exploits
  • Activate the website on the PLC, if not already active
  • Modify the website to allow remote access

It is very important that attackers do not gain access to the logic inside the PLC. Stuxnet and the Siberian gas pipeline explosion are two real-world cases showing that the malicious use of PLCs can have serious consequences.

Application layer

PLCs are increasingly designed to integrate networking functionality; consequently, a good number of PLCs offer a web interface. A large number of SCADA web interfaces have been discovered through the Shodan search engine. It is still the case that Google and other search engines index folders that were not meant to be indexed: folders on computers reachable online via a DMZ that are not meant to be indexed need to be marked in robots.txt. This lack of understanding has made Google hacking commands very successful. Once the website on the PLC is available to attackers, they have an opportunity to carry out a full unauthorised penetration test to find ways into the system. Using brute force or any other password discovery method, the attackers can then gain full access to the website. Once authenticated, the attackers can:

  • Make unauthorised changes to instructions in local processors to take control of master-slave relationships between Master Terminal Unit (MTU) and Remote Terminal Unit (RTU) systems
  • Prevent access to legitimate users
  • Modify the ICS software or configuration settings, or infect ICS software with malware
  • Modify Tag values

The Master Terminal Unit (MTU) in a SCADA system is a device that issues commands to the Remote Terminal Units (RTUs), which are located at remote sites; it gathers the required data, stores and processes the information, and displays it in the form of pictures, curves and tables on a human interface to support control decisions.

Attackers are also able to cause serious damage to plant operations without gaining full access to the PLC web interface (CVE-2014-2259, CVE-2014-2254 and CVE-2014-2255).

Software is rarely bug free. Over the last few years, the security community has been very interested in finding vulnerabilities in ICS hardware and software. Digitalbond has previously run an exercise dedicated to finding bugs in PLCs. The results found far more bugs and vulnerabilities than expected.

Operating systems

Long gone are the days when Mac OS and Linux were considered inherently secure. PLCs, just like any other computers, have an operating system (e.g. Microware OS-9, VxWorks). Vulnerabilities and bugs exist in OS-9 and VxWorks just as they exist in Microsoft Windows, Linux, Mac OS, Android, etc. Unlike regular computer OSs, however, patching the operating system of a PLC against known bugs and vulnerabilities is very challenging, and many things need to be considered before deciding to update it. Even though patching is very important for any computer system’s security, malware such as the Havex Remote Access Trojan (RAT) has infected update installers from various ICS vendors. Such malware leaves ICS users baffled as to whether they should update and get infected, or not update and get infected anyway.

The concept of the ‘zero day’ attack is another challenge for the security of the operating system. When a vulnerability is not published, it can be exploited by attackers without being detected. There is very little chance that current security mechanisms will detect such attacks.

Hardware in PLCs is built to very specific specifications. One of their limitations is their ability to handle complicated, concurrent tasks. Traditionally, PLCs do not offer a great deal of memory, which implies that any logging capability has to be built outside the PLC. PLCs should allow complex logging capabilities in order to enable in-depth forensics.


One of the biggest weaknesses of most PLCs is that security is not built in by design. Generally, any compatible code can run on a PLC regardless of its origin (legitimate or malicious). Open source tools allow organisation blocks (OB) to be downloaded and uploaded without authentication; OB1, for instance, is loaded and executed without even a simple hash check.  If attackers have knowledge of the different Tags used in a project that may control critical infrastructure, they can then substitute a completely different logic, which could have catastrophic consequences. An attacker can develop knowledge of an internal system using different elements, such as the HMI, the PLC’s website, documentation and source code, to name a few.

In all fairness, certain organisation blocks (OBs) will require a reboot of the PLC for the code to take effect. However, it takes less than 3 seconds to reboot a PLC remotely, and it is quite likely that the operators will not notice any difference on the control screen. In some environments, three seconds of inactivity would cause serious alerts, but this is not the case everywhere. When modifying the logic, if only timers are modified, the PLC will generally not require a reboot for the new timers to take effect. Built-in security is necessary in a PLC.

Good security by design for PLCs should allow device authentication, access control, auditing and logging, data integrity control, secure booting, ladder logic execution control and encryption, at the very least.


PLCs are as important in control system networks as they would be in any other network environment. It is essential that they are managed with the highest priority. Any access, maintenance, upgrade, test, modification or downtime of PLCs needs to be accounted for, and these policies need to be enforced.

Programmable Logic Controller security can be summarised as shown in Figure 1.


PLC Security

PLC Threat Landscape


Why should we care about PLC security? 

If we follow the diagram by NERC, it is very likely that most PLCs will either provide a critical service, operate a critical system or service, or be used in a critical system. If we care about any of our critical processes, functions or systems, we should definitely care about the various components upon which they depend. Figure 1 describes the process advocated by NERC to identify critical assets.

Critical Asset Identification

In conclusion, securing PLCs is paramount to securing critical infrastructure. PLCs will generally support a critical function in a plant, or will be used in a critical path of production, making them critical components. Many layers of protection need to be in place for a PLC to be secure: human risk factors, protection of the logic inside the PLC, secure communications, application layer security, operating system security, hardware security and, last but not least, the management of all aspects of the above security requirements.

A simple picture taken in a work environment could provide an attacker with the last piece of information needed for a successful operation. PLCs are very important components of critical infrastructure and should be protected at all costs.

Protecting PLCs alone will not solve the problem of cyber-attacks. General governance should be in place to ensure that all aspects of security within the organisation are properly addressed; a holistic approach to security is highly recommended. Please read more about Nettitude’s holistic security approach at Cyber breaches response in-depth.
