How Rooting Works -- A Technical Explanation of the Android Rooting Process

I have always been curious how rooting actually works behind the scenes. After I recently acquired a new Eee Pad Slider, a Honeycomb tablet that so far no one has been able to root, the frustration of being locked out of this amazing piece of hardware with so much potential finally led me to sit down and figure out what exactly rooting means, what it entails from a technical perspective, and how hackers out in the wild approach the rooting of a new device. Although all this information is out there, I have not been able to find an article that offers both the level of technical detail I wanted and a proper introduction to the big picture, so I decided to write my own.

This is NOT a noob-friendly guide to rooting a particular Android device. Rather, it is a general explanation of how stock Android ROMs try to prevent unprivileged access, how hackers attack this problem, and how rooting software leverages various exploits to defeat these security mechanisms.

I. The Goal

Let us first take a step back and consider exactly what we mean by rooting. Forget flashing custom ROMs, enabling WiFi tethering or installing Superuser.apk; fundamentally, rooting is about obtaining root access to the underlying Linux system beneath Android and thus gaining absolute control over the software that is running on the device. Things that require root access on a typical Linux system — mounting and unmounting file systems, starting your favorite SSH or HTTP or DHCP or DNS or proxy servers, killing system processes, chroot-ing, etc. — require root access on Android as well. Being able to run arbitrary commands as the root user allows you to do absolutely anything on a Linux / Android system, and this is the real goal of rooting.
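If you want to see this restriction first-hand, the following little C program attempts two of the operations listed above; run from an unprivileged shell (for example a terminal emulator on a non-rooted device), both calls fail with EPERM:

/* Demonstrate that root-only operations fail for unprivileged users. */
#include <stdio.h>
#include <unistd.h>
#include <sys/mount.h>

int main(void)
{
    printf("running as uid %d\n", (int) getuid());

    /* Both of these require root; as a normal user they fail with EPERM. */
    if (mount("tmpfs", "/mnt", "tmpfs", 0, NULL) != 0)
        perror("mount");
    if (chroot("/mnt") != 0)
        perror("chroot");
    return 0;
}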

Stock OEM Android builds typically do not allow users to execute arbitrary code as root. This essentially means that you as a user are granted only limited control over your own device; you can make your device do task X only if the manufacturer explicitly decided to allow it and shipped a program to do it. You will not be able to use third-party apps to accomplish a task that your manufacturer does not wish you to perform. WiFi tethering is a good example of this. Cell phone carriers obviously do not want you to tether your phone without paying them additional charges. Therefore, many phones come pre-packaged with proprietary WiFi tethering apps that demand extra fees, but without root access you will not be able to install a free alternative like Wireless Tether For Root Users. Why this is accepted practice in the industry is a mystery to me. The only difference between cell phones, tablets and computers is their form factor; yet while a PC vendor would fail spectacularly if it tried to prevent users from running arbitrary programs on their machines, cell phone vendors are clearly not judged along the same lines. But such arguments belong in another article.

II. The Enemy: Protection Mechanisms On A Stock OEM Android ROM

1. Bootloader and Recovery

The bootloader, the first piece of code executed when your device is powered on, is responsible for loading the Android OS and the recovery system, and for flashing a new ROM. People refer to some bootloaders as "unlocked" if a user can flash and boot arbitrary ROMs without hacking; unfortunately, many Android devices have locked bootloaders that you would have to hack around in order to make them do anything other than boot the stock ROM. A Samsung smartphone I had used some months ago had an unlocked bootloader; I could press a certain combination of hardware keys on the phone, connect it to my computer, and flash any custom ROM onto it using Samsung’s utilities without having to circumvent any protection mechanisms. The same is not true for my Motorola Droid 2 Global; the bootloader, as far as I know, cannot be hacked. The Eee Pad Slider, on the other hand, is an interesting beast; as with other nVidia Tegra 2 based devices, its bootloader is controllable through the nvflash utility, but only if you know the secure boot key (SBK) of the device. (The SBK is a private AES key used to encrypt the commands sent to the bootloader; the bootloader will only accept a command if it has been encrypted with the particular key of the device.) Currently, as the SBK of the Eee Pad Slider is not publicly known, the bootloader remains inaccessible.

System recovery is the second piece of low-level code on board any Android device. It is separate from the Android userland and is typically located on its own partition; it is usually booted by the bootloader when you press a certain combination of hardware keys. It is important to understand that it is a totally independent program; neither Linux nor the Android userland is loaded when you boot into recovery, and high-level concepts such as root do not exist here. It is a simple program that is really a very primitive OS; it has absolute control over the system and will do anything you want as long as the code to do it is built in. Stock recovery varies with the manufacturer, but often includes functionality like reformatting the /data partition (factory reset) and flashing an update ROM (update.zip, located at the root of the external microSD card) signed by the manufacturer. Note I said signed by the manufacturer; typically it is not possible to flash custom update files unless you obtain the private key of the manufacturer and sign your custom update with it, which is impossible for most people and illegal in certain jurisdictions. However, since recovery is stored in a partition just like /system, /data and /cache (more about that later), you can replace it with a custom recovery if you have root access in Linux / Android. Most people do just that upon rooting their device; ClockworkMod Recovery is a popular third-party recovery image that allows you to flash arbitrary ROMs, back up and restore partitions, and perform lots of other magic.

2. ADB

ADB (see the official documentation for ADB) allows a PC or a Mac to connect to an Android device and perform certain operations. One such operation is to launch a simple shell on the device, using the command adb shell. The real question is which user the commands executed by that shell process run as. It turns out that this depends on the value of an Android system property named ro.secure. (You can view the value of this property by typing getprop ro.secure either through an ADB shell or in a terminal emulator on the device.) If ro.secure=0, an ADB shell will run commands as the root user on the device; if ro.secure=1, it will run commands as an unprivileged user. Guess what ro.secure is set to on almost every stock OEM Android build. But can we change the value of ro.secure on a system? The answer is no, as implied by the ro in the name of the property. The value of this property is set at boot time from the default.prop file in the root directory. The contents of the root directory are essentially copied from a partition in internal storage on boot, but you cannot write to that partition unless you are already root. In other words, this property denies root access via ADB, and the only way you could change it is by gaining root access in the first place. Thus, it is secure.
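To make this concrete, the decision happens inside the ADB daemon (adbd) when it starts on the device. The following is a simplified C sketch of that check, not the actual adbd source (whose details vary between releases and also consult properties such as ro.debuggable); property_get(…) and AID_SHELL come from Android’s libcutils and android_filesystem_config.h:

/* Simplified sketch of adbd's startup privilege check (illustrative,
 * not the real source). */
#include <string.h>
#include <unistd.h>
#include <cutils/properties.h>                   /* property_get() */
#include <private/android_filesystem_config.h>   /* AID_SHELL */

static void drop_privileges_if_secure(void)
{
    char value[PROPERTY_VALUE_MAX];

    property_get("ro.secure", value, "1");   /* assume secure by default */
    if (strcmp(value, "1") == 0) {
        /* ro.secure=1: permanently become the unprivileged "shell"
         * user before serving any client; "adb shell" spawns
         * unprivileged shells from here on. */
        setgid(AID_SHELL);
        setuid(AID_SHELL);
    }
    /* ro.secure=0: adbd stays root, so "adb shell" gives root shells. */
}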

3. Android UI

On an Android system, all Android applications that you can see or interact with directly run as unprivileged users in sandboxes. Logically, a program running as an unprivileged user cannot start another program that runs as the privileged user; otherwise any program could simply start another copy of itself in privileged mode and gain privileged access to everything. On the other hand, a program running as root can start another program either as root or as an unprivileged user. On Linux, privilege escalation is usually accomplished via the su and sudo programs; they are often the only programs in the system able to make the system call setuid(0), which switches the current program from running as an unprivileged user to running as root. Apps that label themselves as requiring root are in reality just executing other programs (often just native binaries packaged with the app) through su. Unsurprisingly, stock OEM ROMs never ship with su. You cannot just download it or copy it over either; it needs to be owned by root and have its SUID bit set, which tells the system that the program is allowed to escalate its runtime privileges to root. But of course, if you are not root, you cannot set the SUID bit on a root-owned program. To summarize, any program that you can interact with on Android (and which hence runs in unprivileged mode) is unable to either 1) gain privileged access and execute in privileged mode, or 2) start another program that executes in privileged mode. If this holds, the Android system by itself is pretty much immune to privilege escalation attempts. We will see the loophole exploited by on-device rooting applications in the next section.
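To see what su actually does, here is a minimal sketch of such a binary. This is illustrative only; a real Superuser-style su additionally checks with its companion app whether the calling app has been granted root:

/* Minimal sketch of su. The binary must be owned by root with the
 * SUID bit set (chmod 06755) so that the kernel starts it with root
 * privileges; all real-world permission checks are omitted here. */
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    /* Thanks to the SUID bit we already run with effective uid 0;
     * make the switch to root permanent for the program we exec. */
    if (setgid(0) != 0 || setuid(0) != 0) {
        perror("su: cannot become root");
        return 1;
    }
    if (argc > 1)
        execvp(argv[1], &argv[1]);   /* run the requested program as root */
    else
        execl("/system/bin/sh", "sh", (char *) NULL);   /* or a root shell */
    perror("su: exec failed");
    return 1;
}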

III. Fighting the System

So how the hell do you root an Android device? Well, from the security mechanisms described above, we can figure out how to attack each component in turn.

If your device happens to have an unlocked bootloader, you’re pretty much done. An example is the Samsung phone I mentioned earlier. Since the bootloader allowed the flashing of arbitrary ROMs, somebody essentially pulled the stock ROM from the phone (using dd), added su, and repackaged it into a modified ROM. All I as a user needed to do was power off the phone, press a certain combination of hardware keys to start it in flashing mode, and use Samsung’s utilities to flash the modified ROM onto it.

Believe it or not, certain manufacturers don’t actually set ro.secure to 1. If that is the case, rooting is even easier: just plug the phone into your computer and start an ADB shell, and you now have a shell that can execute any program as root. You can then mount /system as read-write, install su, and all your dreams have come true.

But many other Android devices have locked bootloaders and ro.secure set. As explained above, they should not be root-able, because you can only interact with unprivileged programs on the system, and those cannot help you execute any privileged code. So what’s the solution?

We know that a number of important programs, including low-level system services, must run as root even on Android in order to access hardware resources. Typing ps in an Android shell (either via ADB or a terminal emulator on the device) will give you an idea. These programs are started by the init process, the first process started by the kernel (I often feel that the kernel and the init process are kind of analogous to Adam and Eve — the kernel spawns init in a particular fashion, and init then goes on to spawn all other processes), which has to run as root because it needs to start other privileged system processes.

Now here’s the key insight: if you can hack / trick one of these system processes running in privileged mode into executing your arbitrary code, you have just gained privileged access to the system. This is how all one-click-root methods work, including z4root, gingerbreak, and so on. If you are truly curious, I highly recommend this excellent presentation on the various exploits used by current rooting tools, but the details are not as relevant here as the simple idea behind them. That idea is that there are vulnerabilities in the system processes running as root in the background that, if exploited, will allow us to execute arbitrary code as root. That "arbitrary code" is almost certainly a piece of code that mounts /system in read-write mode and installs a copy of su permanently on the system, so that from then on we don’t need to jump through hoops to run the programs we really wanted to run in the first place.
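To make the payload concrete, here is a hypothetical C sketch of what such code typically does once it is running as root; the paths are assumptions and the real remount details vary per device:

/* Hypothetical post-exploit payload: remount /system read-write and
 * install su permanently. Illustrative only; not portable. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/mount.h>
#include <sys/stat.h>

int main(void)
{
    /* We are root at this point; remount /system read-write. */
    if (mount(NULL, "/system", NULL, MS_REMOUNT, NULL) != 0) {
        perror("remount /system");
        return 1;
    }
    /* Copy a bundled su binary into place (system() used for brevity;
     * the staging path is an assumption). */
    system("cat /data/local/tmp/su > /system/bin/su");
    /* Make it root-owned and SUID so it can grant root from now on. */
    chown("/system/bin/su", 0, 0);
    chmod("/system/bin/su", 06755);
    /* Put /system back to read-only. */
    mount(NULL, "/system", NULL, MS_REMOUNT | MS_RDONLY, NULL);
    return 0;
}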

Since Android, like Linux, is open source, what people have done is scrutinize and reason about the source code of the various system services until they find a security hole they can leverage. This becomes increasingly hard as Google and the maintainers of the various pieces of code fix those particular vulnerabilities once they are discovered and published, which means that the exploits eventually become obsolete on newer devices. The good news is that manufacturers rarely push OTA updates just to patch a vulnerability and prevent rooting, as updates are very expensive for them; in addition, devices on the market always lag behind the newest software releases. Thus, it takes quite some time before these rooting tools are rendered useless by new patches, and by then other exploits will hopefully have been discovered.

IV. See It In Action!

To see all of this in action, you are invited to check out my follow-up article: Android Rooting: A Developer’s Guide, which explains how I applied this stuff to figure out how to root an actual device.

Internal input event handling in the Linux kernel and the Android userspace

While figuring out hardware buttons for my NITDroid project, I had the opportunity to explore the way Linux and Android handle input events internally before passing them through to the user application. This post traces the propagation of an input event from the Linux kernel through the Android userspace, as far as I understand it. Although the principles are likely the same for essentially any input device, I will be drawing on my investigation of the drivers for the LM8323 hardware keyboard (drivers/input/keyboard/lm8323.c) and the TSC2005 touchscreen (drivers/input/touchscreen/tsc2005.c), both found inside the Nokia N810.

I. Inside the Linux kernel

First, Linux exposes a uniform input event interface for each device as /dev/input/eventX, where X is an integer. This means these "devices" can all be polled in the same way and the events they produce share the same uniform format. To accomplish this, Linux has a standard set of routines that every device driver uses to register / unregister the hardware it manages and to publish the input events it receives.
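You can observe this uniform interface directly with a few lines of C. The program below (run as root; the device number event0 is an assumption, check /dev/input/ on your device) prints the raw events a device produces:

/* Dump raw input events from one evdev node. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <linux/input.h>

int main(void)
{
    struct input_event ev;
    int fd = open("/dev/input/event0", O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    /* Every device speaks the same format: a type / code / value triple. */
    while (read(fd, &ev, sizeof(ev)) == sizeof(ev))
        printf("type=%d code=%d value=%d\n", ev.type, ev.code, ev.value);
    close(fd);
    return 0;
}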

When the driver module of an input device is first loaded into the kernel, its initialization routine usually sets up some sort of probing to detect the presence of the types of hardware it is supposed to manage. This probing is of course device-specific; if it is successful, however, the module will eventually invoke the function input_register_device(…) from include/linux/input.h, which sets up a file representing the physical device as /dev/input/eventX. The module will also register a function to handle IRQs originating from the hardware it manages via request_irq(…) (include/linux/interrupt.h), so that the module will be notified whenever the user interacts with the physical device.
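Put together, a probe routine ends up looking roughly like the following sketch; all mydev_* names are hypothetical, and error handling is kept to a minimum:

/* Skeleton of an input driver probe routine, modeled on the pattern
 * described above (hypothetical device). */
#include <linux/errno.h>
#include <linux/input.h>
#include <linux/interrupt.h>

static irqreturn_t mydev_irq_handler(int irq, void *data);  /* discussed below */

static int mydev_probe(void *chip_data, int irq)
{
    int err;
    struct input_dev *idev = input_allocate_device();

    if (!idev)
        return -ENOMEM;
    idev->name = "mydev keypad";
    __set_bit(EV_KEY, idev->evbit);      /* this device reports key events */

    err = input_register_device(idev);   /* creates /dev/input/eventX */
    if (err) {
        input_free_device(idev);
        return err;
    }
    /* Have the kernel invoke our handler when the hardware interrupts. */
    return request_irq(irq, mydev_irq_handler, IRQF_TRIGGER_FALLING,
                       "mydev", chip_data);
}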

When the user physically interacts with the hardware (for instance by pushing / releasing a key or exerting / lifting pressure on the touchscreen), an IRQ is fired and Linux invokes the IRQ handler registered by the corresponding device driver. However, IRQ handlers by convention must return quickly; they essentially block the entire system while executing and thus cannot perform any lengthy processing. Typically, therefore, an IRQ handler will merely 1) save the data carried by the IRQ, 2) ask the kernel to schedule a routine that will process the event later on, after we have exited IRQ mode, and 3) tell the kernel we have handled the IRQ and exit. This can be very straightforward, as in the IRQ handler in the driver for the LM8323 keyboard inside the N810:

/*
 * We cannot use I2C in interrupt context, so we just schedule work.
 */
static irqreturn_t lm8323_irq(int irq, void *data)
{
        struct lm8323_chip *lm = data;

        schedule_work(&lm->work);

        return IRQ_HANDLED;
}

It can also be more complex, like the one in the driver of the TSC2005 touchscreen controller (tsc2005_ts_irq_handler(…)), which integrates into the SPI framework (which I have never looked into…).

Some time later, the kernel executes the scheduled method to process the recently saved event. Invariably, this method would report the event in a standard format by calling one or more of the input_* functions in include/linux/input.h; these include input_event(…) (general purpose), input_report_key(…) (for key down and key up events), input_report_abs(…) (for position events e.g. from a touchscreen) among others. Note that the input_report_*(…) functions are really just convenience functions that call input_event(…) internally, as defined in include/linux/input.h. It is likely that a lot of processing happens before the event is published via these methods; the LM8323 driver for instance does an internal key code mapping step and the TSC2005 driver goes through this crazy arithmetic involving Ohms (to calculate a pressure index from resistance data?). Furthermore, one physical IRQ could correspond to multiple published input events, and vice versa. Finally, when all event publishing is finished, the event processing method calls input_sync(…) to flush the event out. The event is now ready to be accessed by the userspace at /dev/input/eventX.
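Continuing the hypothetical mydev driver sketched earlier, its deferred work function would follow exactly this pattern, ending in input_report_key(…) and input_sync(…):

/* Deferred work function of the hypothetical mydev driver: decode the
 * state saved by the IRQ handler and publish it as standard events. */
#include <linux/kernel.h>
#include <linux/input.h>
#include <linux/workqueue.h>

struct mydev_chip {
    struct input_dev *idev;
    struct work_struct work;
    int last_key;   /* key code saved by the IRQ handler */
    int pressed;    /* 1 = key down, 0 = key up */
};

static void mydev_work(struct work_struct *work)
{
    struct mydev_chip *chip = container_of(work, struct mydev_chip, work);

    /* Device-specific decoding / key mapping would happen here. */
    input_report_key(chip->idev, chip->last_key, chip->pressed);
    input_sync(chip->idev);   /* flush: now visible at /dev/input/eventX */
}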

II. Inside the Android userspace

When the Android GUI starts up, an instance of the class WindowManagerService (frameworks/base/services/java/com/android/server/WindowManagerService.java) is created. This class, when constructed, initializes the member field

final KeyQ mQueue;

where KeyQ, defined as a private class inside the same file, extends Android’s basic input handling class, the abstract class KeyInputQueue (frameworks/base/services/java/com/android/server/KeyInputQueue.java and frameworks/base/services/jni/com_android_server_KeyInputQueue.cpp). As mQueue is instantiated, it of course calls the constructor of KeyInputQueue; the latter, inconspicuously, starts an anonymous thread it owns that is at the heart of the event handling system in Android:

Thread mThread = new Thread("InputDeviceReader") {
    public void run() {
        ...
        RawInputEvent ev = new RawInputEvent();
        while (true) {
            try {
                readEvent(ev);  // block, doesn't release the monitor

                boolean send = false;
                ...
                if (ev.type == RawInputEvent.EV_DEVICE_ADDED) {
                    ...
                } else if (ev.type == RawInputEvent.EV_DEVICE_REMOVED) {
                    ...
                } else {
                    di = getInputDevice(ev.deviceId);
                    ...
                    // first crack at it
                    send = preprocessEvent(di, ev);
                }
                ...
                if (!send) {
                    continue;
                }
                synchronized (mFirst) {
                    ...
                    // Is it a key event?
                    if (type == RawInputEvent.EV_KEY &&
                            (classes&RawInputEvent.CLASS_KEYBOARD) != 0 &&
                            (scancode < RawInputEvent.BTN_FIRST ||
                                    scancode > RawInputEvent.BTN_LAST)) {
                        boolean down;
                        if (ev.value != 0) {
                            down = true;
                            di.mKeyDownTime = curTime;
                        } else {
                            down = false;
                        }
                        int keycode = rotateKeyCodeLocked(ev.keycode);
                        addLocked(di, curTimeNano, ev.flags,
                                RawInputEvent.CLASS_KEYBOARD,
                                newKeyEvent(di, di.mKeyDownTime, curTime, down,
                                        keycode, 0, scancode,
                                        ((ev.flags & WindowManagerPolicy.FLAG_WOKE_HERE) != 0)
                                         ? KeyEvent.FLAG_WOKE_HERE : 0));
                    } else if (ev.type == RawInputEvent.EV_KEY) {
                        ...
                    } else if (ev.type == RawInputEvent.EV_ABS &&
                            (classes&RawInputEvent.CLASS_TOUCHSCREEN_MT) != 0) {
                        // Process position events from multitouch protocol.
                        ...
                    } else if (ev.type == RawInputEvent.EV_ABS &&
                            (classes&RawInputEvent.CLASS_TOUCHSCREEN) != 0) {
                        // Process position events from single touch protocol.
                        ...
                    } else if (ev.type == RawInputEvent.EV_REL &&
                            (classes&RawInputEvent.CLASS_TRACKBALL) != 0) {
                        // Process movement events from trackball (mouse) protocol.
                        ...
                    }
                    ...
                }

            } catch (RuntimeException exc) {
                Slog.e(TAG, "InputReaderThread uncaught exception", exc);
            }
        }
    }
};

I have removed most of this ~350-line function that is irrelevant to our discussion and reformatted the code for easier reading. The key idea is that this independent thread will

  1. Read an event

  2. Call the preprocessEvent(…) method of its derived class, offering the latter a chance to prevent the event from being propagated further

  3. Add it to the event queue owned by the class

This InputDeviceReader thread started by WindowManagerService (indirectly via KeyInputQueue’s constructor) is thus THE event loop of the Android UI.

But we are still missing the link from the kernel to this InputDeviceReader. What exactly is this magical readEvent(…)? It turns out that this is actually a native method implemented in the C++ half of KeyInputQueue:

static Mutex gLock;
static sp<EventHub> gHub;

static jboolean
android_server_KeyInputQueue_readEvent(JNIEnv* env, jobject clazz,
                                          jobject event)
{
    gLock.lock();
    sp<EventHub> hub = gHub;
    if (hub == NULL) {
        hub = new EventHub;
        gHub = hub;
    }
    gLock.unlock();

    ...
    bool res = hub->getEvent(&deviceId, &type, &scancode, &keycode,
            &flags, &value, &when);
    ...

    return res;
}

Ah, so readEvent is really just a proxy for EventHub::getEvent(…). If we proceed to look up EventHub in frameworks/base/libs/ui/EventHub.cpp, we find

int EventHub::scan_dir(const char *dirname)
{
    ...
    dir = opendir(dirname);
    ...
    while((de = readdir(dir))) {
        ...
        open_device(devname);
    }
    closedir(dir);
    return 0;
}
...

static const char *device_path = "/dev/input";
...

bool EventHub::openPlatformInput(void)
{
    ...
    res = scan_dir(device_path);
    ...
    return true;
}

bool EventHub::getEvent(int32_t* outDeviceId, int32_t* outType,
        int32_t* outScancode, int32_t* outKeycode, uint32_t *outFlags,
        int32_t* outValue, nsecs_t* outWhen)
{
    ...
    if (!mOpened) {
        mError = openPlatformInput() ? NO_ERROR : UNKNOWN_ERROR;
        mOpened = true;
    }

    while(1) {
        // First, report any devices that had last been added/removed.
        if (mClosingDevices != NULL) {
            ...
            *outType = DEVICE_REMOVED;
            delete device;
            return true;
        }
        if (mOpeningDevices != NULL) {
            ...
            *outType = DEVICE_ADDED;
            return true;
        }

        ...
        pollres = poll(mFDs, mFDCount, -1);
        ...

        // mFDs[0] is used for inotify, so process regular events starting at mFDs[1]
        for(i = 1; i < mFDCount; i++) {
            if(mFDs[i].revents) {
                if(mFDs[i].revents & POLLIN) {
                    res = read(mFDs[i].fd, &iev, sizeof(iev));
                    if (res == sizeof(iev)) {
                        ...
                        *outType = iev.type;
                        *outScancode = iev.code;
                        if (iev.type == EV_KEY) {
                            err = mDevices[i]->layoutMap->map(iev.code, outKeycode, outFlags);
                            ...
                        } else {
                            *outKeycode = iev.code;
                        }
                        ...
                        return true;
                    } else {
                        // Error handling
                        ...
                        continue;
                    }
                }
            }
        }
        ...
    }
}

Again, most of the details have been stripped out of the above code, but we now see how readEvent() in KeyInputQueue gets these events from Linux: on the first call, EventHub::getEvent scans the directory /dev/input for input devices, opens them and saves their file descriptors in an array called mFDs. On every call it then poll(2)s these file descriptors and reads any pending event from the ready device with a plain read(2) Linux system call.
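Stripped of the Android-specific bookkeeping, the heart of that loop fits in a few lines of C. Here is a self-contained sketch that polls two hard-coded device nodes (the paths are assumptions) instead of scanning the directory and handling inotify:

/* Userspace sketch of EventHub's core loop: poll several evdev file
 * descriptors, then read() from whichever one has data. */
#include <stdio.h>
#include <fcntl.h>
#include <poll.h>
#include <unistd.h>
#include <linux/input.h>

int main(void)
{
    const char *paths[] = { "/dev/input/event0", "/dev/input/event1" };
    struct pollfd fds[2];
    struct input_event iev;
    int i;

    for (i = 0; i < 2; i++) {
        fds[i].fd = open(paths[i], O_RDONLY);
        fds[i].events = POLLIN;
    }
    for (;;) {
        poll(fds, 2, -1);   /* block until some device has data */
        for (i = 0; i < 2; i++) {
            if ((fds[i].revents & POLLIN) &&
                read(fds[i].fd, &iev, sizeof(iev)) == sizeof(iev))
                printf("dev%d: type=%d code=%d value=%d\n",
                       i, iev.type, iev.code, iev.value);
        }
    }
    return 0;
}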

OK, so now we know how an event propagates through EventHub::getEvent(…) to KeyInputQueue::readEvent(…) and then to InputDeviceReader.run(…), where it gets queued inside WindowManagerService.mQueue (which, as a reminder, extends the otherwise abstract KeyInputQueue). But what happens then? How does the event get to the client application?

Well, it turns out that WindowManagerService has yet another private member class that handles just that:

private final class InputDispatcherThread extends Thread {
    ...
    @Override public void run() {
        while (true) {
            try {
                process();
            } catch (Exception e) {
                Slog.e(TAG, "Exception in input dispatcher", e);
            }
        }
    }

    private void process() {
        android.os.Process.setThreadPriority(
                android.os.Process.THREAD_PRIORITY_URGENT_DISPLAY);
        ...
        while (true) {
            ...
            // Retrieve next event, waiting only as long as the next
            // repeat timeout.  If the configuration has changed, then
            // don't wait at all -- we'll report the change as soon as
            // we have processed all events.
            QueuedEvent ev = mQueue.getEvent(
                (int)((!configChanged && curTime < nextKeyTime)
                        ? (nextKeyTime-curTime) : 0));
            ...
            try {
                if (ev != null) {
                    curTime = SystemClock.uptimeMillis();
                    int eventType;
                    if (ev.classType == RawInputEvent.CLASS_TOUCHSCREEN) {
                        eventType = eventType((MotionEvent)ev.event);
                    } else if (ev.classType == RawInputEvent.CLASS_KEYBOARD ||
                                ev.classType == RawInputEvent.CLASS_TRACKBALL) {
                        eventType = LocalPowerManager.BUTTON_EVENT;
                    } else {
                        eventType = LocalPowerManager.OTHER_EVENT;
                    }
                    ...
                    switch (ev.classType) {
                        case RawInputEvent.CLASS_KEYBOARD:
                            KeyEvent ke = (KeyEvent)ev.event;
                            if (ke.isDown()) {
                                lastKey = ke;
                                downTime = curTime;
                                keyRepeatCount = 0;
                                lastKeyTime = curTime;
                                nextKeyTime = lastKeyTime
                                        + ViewConfiguration.getLongPressTimeout();
                            } else {
                                lastKey = null;
                                downTime = 0;
                                // Arbitrary long timeout.
                                lastKeyTime = curTime;
                                nextKeyTime = curTime + LONG_WAIT;
                            }
                            dispatchKey((KeyEvent)ev.event, 0, 0);
                            mQueue.recycleEvent(ev);
                            break;
                        case RawInputEvent.CLASS_TOUCHSCREEN:
                            dispatchPointer(ev, (MotionEvent)ev.event, 0, 0);
                            break;
                        case RawInputEvent.CLASS_TRACKBALL:
                            dispatchTrackball(ev, (MotionEvent)ev.event, 0, 0);
                            break;
                        case RawInputEvent.CLASS_CONFIGURATION_CHANGED:
                            configChanged = true;
                            break;
                        default:
                            mQueue.recycleEvent(ev);
                        break;
                    }
                } else if (configChanged) {
                    ...
                } else if (lastKey != null) {
                    ...
                } else {
                    ...
                }
            } catch (Exception e) {
                Slog.e(TAG,
                    "Input thread received uncaught exception: " + e, e);
            }
        }
    }
}

As we can see, this thread started by WindowManagerService is very simple; all it does is

  1. Grab events queued in WindowManagerService.mQueue

  2. Call WindowManagerService.dispatchKey(…) (or its dispatchPointer(…) / dispatchTrackball(…) siblings) as appropriate.

If we next inspect WindowManagerService.dispatchKey(…), we see that it determines the currently focused window and calls android.view.IWindow.dispatchKey(…) on that window. The event has now reached the application.

I put together some nice diagrams that illustrate these interactions. The conceptual model:

Event propagation flow on Android - simplified version

The full model:

Event propagation flow on Android

The yellow boxes are Java implementations; the blue boxes are native or other components.

Prevent Android app from restarting on rotate / hardware keyboard state change

I came across a puzzling phenomenon while working on an Android application today.

Problem

When I rotate the device (change orientation) or pop out the hardware keyboard, my Activity gets paused (onPause()), stopped (onStop()), destroyed (onDestroy()), then recreated (onCreate()), started (onStart()) and resumed (onResume()). It is as if the user had quit the application, killed it, and launched it again. This was crashing my application, as nowhere in the documentation had I ever read that rotating the device would cause such a strange chain of events.

Although my app implemented its onSurfaceChanged() method, which according to the documentation should be called when the device is rotated (among other things), my logs showed it was never invoked.

Solution

It turns out that the <activity> tags in AndroidManifest.xml have a configChanges attribute that specifies which of a number of configuration-change events the application is set up to handle itself. Any such event, if not explicitly listed in the <activity> tag, will cause Android to destroy and then restart the Activity anew; if the event is listed, Android instead delivers it to the running Activity through its onConfigurationChanged(Configuration) callback. (See official documentation.)

These include notably the orientation change (rotation) and the hardware keyboard state. To handle these, the <activity> tag in your AndroidManifest.xml must contain at least the following:

<activity
    ...
    android:configChanges="keyboard|keyboardHidden|orientation|screenSize">

Note that screenSize was added in API level 13 (Honeycomb 3.2).

I guess in a way this makes sense; if an application does not include adequate event handlers, any of these events could potentially cause it to crash if left running on its own. Therefore, Android just kills it and restarts it so that it will presumably initialize itself correctly in the new environment when it starts up again. But this attribute really ought to be heavily publicized in any introductory text, as it is likely to cause noobs like me quite some bewilderment.

EGL Context Preservation on Android

This is really a little note to myself, but I came across the issue of preserving an application’s OpenGL context (or rather, its EGL context, as we’re dealing with OpenGL ES). On desktop systems, G3D assumes its GL context is always preserved, either by the GPU or by the system. On Android, however, it turns out there are two scenarios that will cause a running application to lose its EGL context. The first is when the user leaves the application, e.g., by pressing the Home button or upon an incoming call. There is in fact a function called setPreserveEGLContextOnPause(boolean) in GLSurfaceView, which does exactly what the name suggests; however, in the words of the API documentation, "whether the EGL context is actually preserved or not depends upon whether the Android device that the program is running on can support an arbitrary number of EGL contexts or not. Devices that can only support a limited number of EGL contexts must release the EGL context in order to allow multiple applications to share the GPU." There is also a getPreserveEGLContextOnPause() function, but the documentation does not specify whether it returns the actual preservation policy of the machine or only whatever value we set earlier via setPreserveEGLContextOnPause(boolean). I looked up the source file in the Android source tree, but these two functions were newly added in Android 3.0 (whose code, being in beta, is apparently not released to the public) and do not show up in the published code base.

The second way we can lose the EGL context is when the user locks the screen and the device goes to sleep. Again quoting the API documentation, "[t]he EGL context will typically be lost when the Android device awakes after going to sleep… Note that when the EGL context is lost, all OpenGL resources associated with that context will be automatically deleted." There does not appear to be a way to prevent this.

Whenever we have lost a context and need to render again, the onSurfaceCreated function in our renderer class is called with a newly created context; this is where a Java app would re-send all its resources to the GPU. In the case of the G3D library, I think the best we can do is to require our client application on Android to always provide some sort of callback that we will invoke to resend resources to the GPU, for instance in their GApp subclass.