
Who owns your runtime?

Can mobile applications trust their own runtime environment? The answer to this burning question that has no doubt kept you awake at night is: nope.


Attacks against mobile applications’ runtime environments have been around for almost a decade now (i.e. since mobile applications became a thing). Attaching a debugger to an application or hooking its code is extremely useful when reverse engineering it. It provides insight into the internal workings of an application and allows an attacker to modify the control flow or internal code structures to influence application behaviour. This can have significant consequences for a security-conscious application; example use cases where debugging might be applied include extracting cryptographic key material from the application, manipulating its runtime by invoking methods on existing objects, or understanding the significance of an attacker-generated fault.

Additionally, there are free and widely available tools that allow attackers to instrument the runtime environment of mobile applications. These tools make it a relatively straightforward process and can often be leveraged to modify application behaviour, bypass security controls or steal sensitive data. In some cases they have also been abused by malware that targets rooted/jailbroken devices.

Although not necessarily the same thing, debugging and runtime hooking have a very similar goal: instrumenting the application’s runtime environment. Ultimately, this instrumentation leads to a situation whereby an application cannot trust its own runtime environment. There are many techniques to achieve this, but it is generally performed by either attaching a debugger to the application’s process or injecting dynamic libraries into its runtime.



Android applications can prevent themselves from being debugged by declaring the android:debuggable="false" attribute in the AndroidManifest.xml file. However, a reverse engineer may have modified the application’s manifest to include android:debuggable="true" or used a runtime manipulation tool that makes the process debuggable in order to circumvent this.

Fortunately, there are ways the application can detect at runtime if it’s being debugged or if any of its methods have been hooked.

To verify that the application is not set as debuggable, the following code can be used:
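A minimal sketch of this check follows. The constant value 0x2 mirrors AOSP’s ApplicationInfo.FLAG_DEBUGGABLE so the logic can also be exercised off-device; on Android you would pass context.getApplicationInfo().flags directly:

```java
// Sketch: detecting whether the application is marked debuggable.
// FLAG_DEBUGGABLE reproduces the AOSP value of
// android.content.pm.ApplicationInfo.FLAG_DEBUGGABLE (1 << 1).
public class DebuggableCheck {
    static final int FLAG_DEBUGGABLE = 1 << 1;

    public static boolean isDebuggable(int applicationInfoFlags) {
        return (applicationInfoFlags & FLAG_DEBUGGABLE) != 0;
    }

    // On Android:
    // if (DebuggableCheck.isDebuggable(context.getApplicationInfo().flags)) {
    //     /* react: the manifest has been tampered with or the process was made debuggable */
    // }
}
```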

It is also good practice to periodically check whether the application has a debugger attached to it by using the isDebuggerConnected() method provided in the android.os.Debug class (for Java/JDB debuggers), as well as System.Diagnostics.Debugger.IsAttached (for Mono debuggers).
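A sketch of such a periodic check is shown below. The debugger probe is passed in as a supplier so the scaffold runs on any JVM; on Android the supplier would simply be android.os.Debug::isDebuggerConnected:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

// Sketch: re-run a debugger check on a fixed schedule rather than only at start-up.
public class PeriodicDebugCheck {
    public static ScheduledFuture<?> schedule(ScheduledExecutorService pool,
                                              BooleanSupplier debuggerAttached,
                                              Runnable onDetected) {
        // First check runs immediately, then every 5 seconds.
        return pool.scheduleAtFixedRate(() -> {
            if (debuggerAttached.getAsBoolean()) {
                onDetected.run();
            }
        }, 0, 5, TimeUnit.SECONDS);
    }
}
```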


Regarding hooking detection, various techniques can be implemented to raise the bar of skill necessary to attack the application’s runtime. We suggest a few below, but feel free to get creative!

Look for common hooking frameworks

Cydia Substrate and Xposed are the two most common hooking frameworks used by attackers targeting Android applications’ runtime. The PackageManager can provide a list of installed packages, and any suspicious packages can be flagged:
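A sketch of this check, with the matching logic separated out so it can run off-device. The two package names are the installer packages of Xposed and Cydia Substrate; on Android the installed list would come from packageManager.getInstalledApplications(PackageManager.GET_META_DATA):

```java
import java.util.Arrays;
import java.util.List;

// Sketch: flag known hooking-framework packages among the installed applications.
public class HookingPackageCheck {
    static final List<String> SUSPICIOUS = Arrays.asList(
            "de.robv.android.xposed.installer",  // Xposed Installer
            "com.saurik.substrate");             // Cydia Substrate

    public static boolean containsSuspicious(List<String> installedPackages) {
        for (String pkg : installedPackages) {
            if (SUSPICIOUS.contains(pkg)) {
                return true;
            }
        }
        return false;
    }

    // On Android:
    // List<ApplicationInfo> apps =
    //         packageManager.getInstalledApplications(PackageManager.GET_META_DATA);
    // then pass the collected packageName values to containsSuspicious().
}
```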

Suspicious shared objects or JARs

Within Linux, the /proc/{pid}/maps file contains the process’ currently mapped memory regions and their access permissions. So by investigating the maps file, the application can look for any suspicious pathnames associated with the Xposed framework or Cydia Substrate. As an example, the following IOCs are provided:

  • Cydia Substrate
    • /data/app-lib/com.saurik.substrate-1/
  • Xposed
    • /data/data/

The following is a proof-of-concept code snippet that uses this technique to detect such IOCs:
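The sketch below scans /proc/self/maps for a couple of illustrative substrings; a real implementation would carry the full IOC list above. It runs on any Linux-based system, including Android:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

// Sketch: look for hooking-framework pathnames among this process' mapped regions.
public class MapsScanner {
    // Illustrative IOC substrings; extend with the full list of known paths.
    static final String[] IOCS = {"com.saurik.substrate", "XposedBridge.jar"};

    public static boolean suspiciousMappingPresent() throws IOException {
        try (BufferedReader maps = new BufferedReader(new FileReader("/proc/self/maps"))) {
            String line;
            while ((line = maps.readLine()) != null) {
                for (String ioc : IOCS) {
                    if (line.contains(ioc)) {
                        return true;
                    }
                }
            }
        }
        return false;
    }
}
```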

Unexpected native methods

The Xposed framework works by changing the method type of hooked methods to “native” and replacing the method within its own code (it calls hookedMethodCallback instead). Given that the Xposed framework (and Cydia Substrate, which works in a somewhat similar fashion) changes the properties of the hooked method, an application can leverage this to detect the presence of hooking[1]. This can be achieved by:

  • Finding the location of the application’s DEX file;
  • Enumerating all the classes within the DEX file; and
    • For each class in the DEX file, using reflection to check for the existence of native methods that shouldn’t be native.

The following is a proof-of-concept code snippet that leverages this technique to identify native methods. If the application has native methods itself, then those should (obviously) be whitelisted.
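A sketch of the reflection step, applied to a single class; on Android the class names would come from enumerating the application’s DEX file (e.g. via dalvik.system.DexFile):

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;
import java.util.ArrayList;
import java.util.List;

// Sketch: list the native methods of a class that are not on the application's
// own whitelist. A hooked method that was turned native by the framework will
// show up here.
public class NativeMethodCheck {
    public static List<String> unexpectedNativeMethods(Class<?> clazz, List<String> whitelist) {
        List<String> suspicious = new ArrayList<>();
        for (Method m : clazz.getDeclaredMethods()) {
            if (Modifier.isNative(m.getModifiers()) && !whitelist.contains(m.getName())) {
                suspicious.add(clazz.getName() + "." + m.getName());
            }
        }
        return suspicious;
    }
}
```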

Stack trace inspection

If a hooked method throws an exception, artefacts left by the hooking procedure might be present in the stack trace. For Xposed and Cydia Substrate, the following IOCs might be useful:

  • Xposed
    • If the Xposed framework is active the method will show up in the stack trace after the dalvik.system.NativeStart.main method;
    • When the Xposed framework hooks a specific method included in the stack trace, it will contain calls to the de.robv.android.xposed.XposedBridge.handleHookedMethod and de.robv.android.xposed.XposedBridge.invokeOriginalMethodNative methods; and
    • Finally, the hooked method will appear twice in the stack trace.
  • Cydia Substrate
    • If Cydia Substrate is active two calls to the com.android.internal.os.ZygoteInit.main method will be present after the dalvik.system.NativeStart.main method – as opposed to the usual single call;
    • When Cydia Substrate hooks a method included in the stack trace, it will call com.saurik.substrate.MS$2.invoked, com.saurik.substrate.MS$MethodPointer.invoke and a third method associated with the Substrate extension (this might vary on a case-by-case basis, but it will be evident that it is not from the application itself); and
    • Finally, as with Xposed, the hooked method appears twice in the stack trace.

The following is a proof-of-concept code snippet that can detect Cydia Substrate or the Xposed framework based on a stack trace:
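A sketch of this technique: throw and catch a probe exception, then walk its stack trace looking for the framework classes discussed above (the prefix list is illustrative):

```java
import java.util.Arrays;
import java.util.List;

// Sketch: inspect a freshly generated stack trace for hooking-framework artefacts.
public class StackTraceCheck {
    static final List<String> HOOK_CLASS_PREFIXES = Arrays.asList(
            "de.robv.android.xposed.XposedBridge",  // Xposed bridge methods
            "com.saurik.substrate.MS$");            // Cydia Substrate MS$2 / MS$MethodPointer

    public static boolean hookingArtefactsPresent() {
        try {
            throw new Exception("probe");
        } catch (Exception e) {
            for (StackTraceElement frame : e.getStackTrace()) {
                for (String prefix : HOOK_CLASS_PREFIXES) {
                    if (frame.getClassName().startsWith(prefix)) {
                        return true;
                    }
                }
            }
        }
        return false;
    }
}
```

A fuller implementation would also apply the ordering checks above (duplicate frames, calls after dalvik.system.NativeStart.main).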



On iOS, debugging is usually achieved using the ptrace() system call (if using GDB or LLDB). However, this function can be called from within third-party applications and provides a specific operation, PT_DENY_ATTACH, that tells the system to prevent tracing from a debugger. If the process is currently being traced then it will exit with the ENOTSUP status. The following is a simple implementation of this technique. It should be implemented not only throughout the application’s codebase but also as close to the process start (such as in the main function) as possible:
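A sketch of the check (PT_DENY_ATTACH is Apple-specific, so the call is guarded to keep the sketch portable; on other platforms the function is a stub):

```c
#ifdef __APPLE__
#include <sys/types.h>
#include <sys/ptrace.h>   /* declares ptrace() and PT_DENY_ATTACH */
#endif

/* Sketch: call as early as possible, e.g. first thing in main().
 * If a debugger is already attached, the process exits with ENOTSUP. */
int deny_debugger_attach(void) {
#ifdef __APPLE__
    return ptrace(PT_DENY_ATTACH, 0, 0, 0);
#else
    return 0; /* not applicable outside Apple platforms */
#endif
}
```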

GDB fails to attach to the process because of PT_DENY_ATTACH


Although this does provide an additional hurdle to overcome, it is unlikely to thwart a skilled adversary. Despite ptrace() usually being referred to as a system call, the term is used loosely. The reality is that most of the time when “system calls” are used, developers are in fact invoking higher-level library functions that wrap the actual system call. The previous example invoked the ptrace() wrapper function exposed via the <sys/ptrace.h> API. Therefore, a more experienced attacker will know that all they need to do is attach a debugger to the process and manipulate the wrapper ptrace() function before it is invoked.

Bypass with well-placed breakpoints using LLDB


A more robust solution would be:
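The sketch below targets arm64 and is guarded so it compiles to a stub elsewhere. The register convention shown is Apple’s arm64 one: the syscall number goes in x16 and the kernel is entered with svc #0x80:

```c
/* Sketch: perform the ptrace system call directly, bypassing the libc wrapper.
 * On iOS/arm64: syscall number 26 (ptrace) in x16, PT_DENY_ATTACH (31) in x0. */
void deny_attach_syscall(void) {
#if defined(__APPLE__) && defined(__arm64__)
    __asm__ volatile(
        "mov x0, #31\n"   /* PT_DENY_ATTACH */
        "mov x1, #0\n"
        "mov x2, #0\n"
        "mov x3, #0\n"
        "mov x16, #26\n"  /* SYS_ptrace */
        "svc #0x80\n"     /* enter the kernel */
        ::: "x0", "x1", "x2", "x3", "x16", "memory");
#endif
}
```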

The code snippet above bypasses calling the wrapper ptrace() function and instead performs the system call directly in assembly.  For reference, the syscall number for ptrace is 26 and the value of the PT_DENY_ATTACH flag is 31. The call is made to the iOS kernel directly, making it much harder to instrument.

The sysctl() function can be used to get an indication that the process might be being debugged. It could be used as a secondary measure of detecting whether the application is being debugged, and to add further resilience in the event that the PT_DENY_ATTACH operation has been overcome. It does not explicitly prevent a debugger from being attached to the application, but returns sufficient information about the process to determine whether it is being debugged. When invoked with the appropriate arguments, the sysctl() function returns a structure with a kp_proc.p_flag flag that indicates the status of the process and whether or not it is being debugged. The following is a simple example of how to implement this:
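A sketch of this check. The Apple path is the standard sysctl(CTL_KERN, KERN_PROC, KERN_PROC_PID) query; purely so the logic can be exercised off-device, a Linux analogue that reads the TracerPid field of /proc/self/status is included as the fallback branch:

```c
#include <string.h>

#ifdef __APPLE__
#include <sys/types.h>
#include <sys/sysctl.h>
#include <unistd.h>

/* Sketch: ask the kernel for this process' info and test the P_TRACED flag. */
int being_debugged(void) {
    struct kinfo_proc info;
    size_t size = sizeof(info);
    int mib[4] = {CTL_KERN, KERN_PROC, KERN_PROC_PID, getpid()};
    memset(&info, 0, sizeof(info));
    if (sysctl(mib, 4, &info, &size, NULL, 0) != 0)
        return 0; /* query failed: no information either way */
    return (info.kp_proc.p_flag & P_TRACED) != 0;
}
#else
#include <stdio.h>
#include <stdlib.h>

/* Linux analogue for testing the sketch: TracerPid is 0 when no tracer is attached. */
int being_debugged(void) {
    FILE *status = fopen("/proc/self/status", "r");
    char line[256];
    int traced = 0;
    if (!status) return 0;
    while (fgets(line, sizeof(line), status)) {
        if (strncmp(line, "TracerPid:", 10) == 0) {
            traced = atoi(line + 10) != 0;
            break;
        }
    }
    fclose(status);
    return traced;
}
#endif
```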

These are just two examples of debugger-detection strategies; others exist. There is scope for more convoluted approaches, such as checking execution timings: the application records how long a set of operations takes to complete, and if that time falls outside an acceptable margin it has some assurance that it is being debugged.


For a more hardened application, additional validation of the runtime environment is recommended. As discussed before, the typical approach for runtime hooking is to inject a dynamic library into the application’s address space and replace the implementation of a method that the attacker wants to instrument. This typically leaves behind a trail that can be used to gain some confidence as to whether the application is being instrumented.

Suspicious dylibs

Looking through each of the modules loaded into the application’s memory for known signatures or image names can help you determine whether a library has been injected. Consider the following example that iterates the list of currently loaded images, retrieves the image name using _dyld_get_image_name(), and looks for sub-strings of known injection libraries:
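A sketch of this walk. On Apple platforms it uses _dyld_image_count()/_dyld_get_image_name(); so the matching logic can be tested off-device, the fallback branch performs the analogous walk with dl_iterate_phdr(). The IOC substrings are illustrative:

```c
#define _GNU_SOURCE   /* exposes dl_iterate_phdr() on glibc; harmless elsewhere */
#include <string.h>
#ifdef __APPLE__
#include <mach-o/dyld.h>
#else
#include <link.h>
#endif

/* Illustrative substrings of known injection libraries. */
static const char *iocs[] = {"MobileSubstrate", "libsubstrate", "frida", "cynject"};

static int name_is_suspicious(const char *image_name) {
    for (size_t i = 0; i < sizeof(iocs) / sizeof(iocs[0]); i++)
        if (image_name && strstr(image_name, iocs[i]))
            return 1;
    return 0;
}

#ifdef __APPLE__
/* Sketch: iterate the images currently loaded by dyld. */
int suspicious_image_loaded(void) {
    for (uint32_t i = 0; i < _dyld_image_count(); i++)
        if (name_is_suspicious(_dyld_get_image_name(i)))
            return 1;
    return 0;
}
#else
static int phdr_cb(struct dl_phdr_info *info, size_t size, void *data) {
    (void)size;
    if (name_is_suspicious(info->dlpi_name)) { *(int *)data = 1; return 1; }
    return 0;
}

/* Linux analogue: walk the loaded shared objects. */
int suspicious_image_loaded(void) {
    int found = 0;
    dl_iterate_phdr(phdr_cb, &found);
    return found;
}
#endif
```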

Where do your methods come from?

Methods from Apple SDKs will typically originate from a finite set of locations, specifically:

  • /System/Library/TextInput
  • /System/Library/Accessibility
  • /System/Library/PrivateFrameworks/
  • /System/Library/Frameworks/
  • /usr/lib/

Furthermore, methods internal to the application should reside within the application’s binary itself. One can verify the source location of a method using the dladdr() function, which takes a function pointer to the target function. The following is a simple implementation that iterates a given class’ methods and checks the source location of the image against a set of known possible image locations. Finally, it checks whether the function resides within a path relative to the application itself:
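A C-level sketch of the dladdr() step. On iOS the function pointers would come from iterating a class’ methods with class_copyMethodList()/method_getImplementation(), and the trusted prefixes would be the SDK paths above plus the application’s own bundle path; the prefixes here are only for illustration:

```c
#define _GNU_SOURCE   /* exposes dladdr() on glibc; harmless elsewhere */
#include <dlfcn.h>
#include <string.h>
#include <stdio.h>

/* Sketch: resolve the image a function pointer resides in and compare it
 * against a set of trusted path prefixes. */
int image_is_trusted(void *fn, const char *const *trusted_prefixes, int n) {
    Dl_info info;
    if (dladdr(fn, &info) == 0 || info.dli_fname == NULL)
        return 0; /* unresolvable: treat as untrusted */
    for (int i = 0; i < n; i++)
        if (strncmp(info.dli_fname, trusted_prefixes[i],
                    strlen(trusted_prefixes[i])) == 0)
            return 1;
    return 0;
}
```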

Common signatures

Further checks can be implemented, such as identifying signatures of common frameworks inside the application’s functions. One could write a routine that would detect these signatures inside the application’s methods before calling them. For example, it is common for the Cydia Substrate framework to insert indirect jump vectors (a.k.a. trampolines) at the beginning of hooked functions. An illustrative example of how this could be detected would be:
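An illustrative sketch follows, matching the classic 32-bit ARM “LDR PC, [PC, #-4]” trampoline encoding; a real implementation would cover the encodings relevant to the target architecture (e.g. the arm64 LDR/BR pairs). Note the use of memcpy rather than string functions when reading code bytes[2]:

```c
#include <stdint.h>
#include <string.h>

/* Sketch: check whether a function begins with a known trampoline instruction
 * before trusting it. Code pages are readable, so the first instruction word
 * can simply be copied out and compared. */
int starts_with_trampoline(const void *fn) {
    const uint32_t arm_ldr_pc = 0xE51FF004u; /* ARM: LDR PC, [PC, #-4] */
    uint32_t first_word;
    memcpy(&first_word, fn, sizeof(first_word));
    return first_word == arm_ldr_pc;
}
```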

Final Thoughts

Upon detection of debugging or hooking, the application should enforce reactive counter-measures, for example:

  • Warning users and asking them to accept liability;
  • Preventing the application from running by gracefully exiting or crashing;
  • Wiping any sensitive stored data on the device; or
  • Reporting back to a management server to, for example, flag the user as a fraud risk.

Furthermore, although we only covered Java hooking detection techniques for Android, similar techniques to the ones described for iOS can also be used to detect hooking of native code in Android functions.

All of these protections should be implemented both in Java (Android) or Objective-C (iOS), and C (both) – using the respective SDK and POSIX APIs. The protective checks should run periodically throughout the application lifecycle (i.e., not only when the application starts!), invoked from different locations and inlined at all times if written in native code. Inlining the detection functions will force attackers to find all occurrences of each protection should they want to patch the application’s binary.

Note that if an application is being hooked, then it is very likely running on a rooted/jailbroken device. Therefore, these protections also “double” as root/jailbreak detection mechanisms. However, they should not be relied on for that purpose, since the application may be running on a rooted device without being hooked. Root and jailbreak detection will be a topic for another time.

The examples we suggest in this post are not nearly exhaustive, but they establish a good baseline to start with. There is plenty of room to get more creative and develop more obscure detection mechanisms. Tamper-proofing the code of relevant functions is also a good idea since it would protect against hooking and debugging protections being patched as well. Tamper-proofing might also be a topic for a future post.

Finally, securing mobile applications (and endpoint security in general) is always a game of “cat and mouse”. These approaches do not provide an infallible way of preventing application debugging nor runtime hooking but, as far as binary protections go, pairing different protections and checks will certainly slow down a reverse engineer since the protections work together to increase the complexity further than they would individually. Namely, when paired with strong obfuscation, root/jailbreak detection, and tamper protection, these checks will be a considerably greater nuisance to bypass, requiring more experienced and dedicated attackers.

[1] This hooking detection technique will not work for the ART version of Xposed since altering the method type to native is unnecessary.

[2] Do not use strcmp at home! 🙂