Tuesday 9 December 2008

Power Management on the Sun SPOT

It seems to me that a driving principle for all software tools should be: "Let the developer concentrate on the problem, and let the tool take care of the incidentals." I call this principle let-the-tool-take-the-strain (L4TS). The application of this principle to power management on the Sun SPOT, a small wireless sensor network device programmed in Java, was far from straightforward. In this article I want to discuss the principle, how it was applied, and why the result isn't universally liked.

It is the L4TS principle that led to the development of high-level programming languages, and to the gradual acceptance of automatic memory management in languages such as Java. The same principle has applied in the development of operating systems: long ago, programmers wanting to store data on disks had to worry about sector allocation; now we take it for granted that the operating system will manage disk space for us.

I was determined to apply the L4TS principle ruthlessly in the design of the SPOT libraries and system software because our brief was to make the SPOT easy to program. One area where I felt the principle should be applied was power management. The SPOT is a pretty powerful computing device, and it can use up its battery in a few hours if it runs continuously. Since we wanted SPOTs to be able to go for weeks or months between charges it was clear that some fairly sophisticated power management would be needed.

Hardware support for power management
The SPOT uses the Atmel AT91RM9200 processor package, which is based around an ARM9 core. It's possible to put this package into a low-power mode - called shallow sleep - in which the processor clock is stopped; it restarts when an interrupt occurs. The problem is that the power consumed in this mode is still too great for the battery to last for months. A typical SPOT application that requires long battery life is actually only active for very short periods. To support applications like this the SPOT has some clever hardware tricks. It uses a separate low-power processor, the power controller, as an alarm clock. Code running on the main ARM9 processor can request a wake-up call and then ask the power controller to turn off power to most of the SPOT (but not the RAM). Consumption then drops to a level that the battery can sustain for months. This mode is called deep sleep.
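To make the sequence concrete, here's a minimal Java sketch of the deep sleep dance just described. The IPowerController interface and its methods are invented for illustration; the real system code talks to the power controller at a much lower level.

public class DeepSleepSketch {

    /** Hypothetical stand-in for the separate low-power processor that acts as an alarm clock. */
    public interface IPowerController {
        void setWakeUpAlarm(long wakeUpTimeMillis);  // program the wake-up call
        void powerDownAllButRam();                   // cut power to everything except the RAM
    }

    private final IPowerController powerController;

    public DeepSleepSketch(IPowerController powerController) {
        this.powerController = powerController;
    }

    /** Enter deep sleep for roughly 'millis' milliseconds. */
    public void deepSleep(long millis) {
        // 1. Ask the power controller for a wake-up call.
        powerController.setWakeUpAlarm(System.currentTimeMillis() + millis);
        // 2. Ask it to turn off power to most of the SPOT. RAM stays powered,
        //    so the Java heap - and hence all object state - survives the sleep.
        powerController.powerDownAllButRam();
        // 3. Execution resumes here once the power controller restores power.
    }
}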

Automatic or manual?
Should we try to automate power management or should we put it under the direct control of the application developer? To automate it implies that the system software - on the SPOT that means the Java VM, since the VM is implemented directly on the hardware with no intervening operating system - must determine when there is no work to do and select the best possible power-saving mode. Since the only processes running on the SPOT are Java threads, the VM is in an excellent position to determine when there is no work to do: when there are no threads ready to run. But it's a lot trickier for it to determine which power-saving mode to use, shallow sleep or deep sleep. To do that it must understand which I/O devices are active, because a SPOT in deep sleep cannot respond to most external stimuli (for example, the radio is powered down in deep sleep).

By contrast, letting the application manage power is simple. The application knows what it is doing and whether it is interested in receiving interrupts while sleeping, so it can simply request the appropriate sleep mode. There are three drawbacks to this:
  1. The application writer needs to learn about the power management mechanisms and write code to use them.
  2. The result might not be optimal. The application writer might miss opportunities to use deep sleep.
  3. It doesn't work if there are two or more separate applications running on the SPOT. Early releases of the SPOT SDK supported only one active application at a time, but more recent releases support multiple applications. There is a way around this: the system software could arbitrate sleep requests from the applications and only enter a mode that is compatible with them all, as sketched below.
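Here's a rough sketch of what such an arbiter might look like; the class and its constants are hypothetical, not part of the SDK. Each application registers the deepest sleep mode it is prepared to tolerate, and the system only ever enters a mode acceptable to all of them.

public class SleepArbiter {

    public static final int NO_SLEEP      = 0;
    public static final int SHALLOW_SLEEP = 1;
    public static final int DEEP_SLEEP    = 2;

    // Start optimistic: deep sleep is allowed until some application objects.
    private int agreedMode = DEEP_SLEEP;

    /** Called by each application to declare the deepest sleep mode it can accept. */
    public synchronized void requestAtMost(int mode) {
        if (mode < agreedMode) {
            agreedMode = mode;   // any one application can force a shallower mode
        }
    }

    /** Called by the system software when all threads are idle. */
    public synchronized int modeToUse() {
        return agreedMode;
    }
}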
It was clear to me that there were many advantages to automatic power management, and that's what is in the current SPOT SDK, but it did result in significant complexity and, somewhat unfortunately, has been a source of confusion among application developers.

Automatic power management implementation
Here's how it works. Whenever the VM thread scheduler (written in Java) determines that no thread is ready to run in the near future, it invokes the sleep manager. The sleep manager then checks whether any hardware devices are active; if none are, it forces the SPOT into deep sleep, otherwise it just uses shallow sleep.
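In outline the decision looks like this (the names are illustrative, not the actual SDK classes); the driver check is explained in more detail below.

public class SleepManagerSketch {

    /** Each registered device driver gets a say in whether deep sleep is allowed. */
    public interface Driver {
        boolean tearDown();   // return false to veto deep sleep; otherwise save device state
        void setUp();         // restore device state after a deep sleep
    }

    private final Driver[] registeredDrivers;

    public SleepManagerSketch(Driver[] registeredDrivers) {
        this.registeredDrivers = registeredDrivers;
    }

    /** Called by the thread scheduler when no thread is runnable until 'wakeUpTime'. */
    public void sleepUntil(long wakeUpTime) {
        if (allDriversAgreeToDeepSleep()) {
            enterDeepSleep(wakeUpTime);              // power off everything except RAM
            for (int i = 0; i < registeredDrivers.length; i++) {
                registeredDrivers[i].setUp();        // reconfigure devices on wake-up
            }
        } else {
            enterShallowSleep();                     // just stop the processor clock
        }
    }

    private boolean allDriversAgreeToDeepSleep() {
        for (int i = 0; i < registeredDrivers.length; i++) {
            if (!registeredDrivers[i].tearDown()) {
                // One active device inhibits deep sleep: undo the tear-downs done so far.
                for (int j = 0; j < i; j++) {
                    registeredDrivers[j].setUp();
                }
                return false;
            }
        }
        return true;
    }

    private void enterDeepSleep(long wakeUpTime) { /* hardware-specific, as sketched earlier */ }
    private void enterShallowSleep()             { /* hardware-specific */ }
}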

Every device driver on the SPOT - they are all written in Java - must register with the system. To determine whether any devices are active the sleep manager iterates over all the registered drivers asking each in turn whether it is okay to enter deep sleep. If all the drivers agree then the sleep manager uses deep sleep. This call also gives the driver the opportunity to copy any transient device state, such as the contents of device registers, into Java variables. Since RAM remains powered in deep sleep the state of Java objects is preserved. The sleep manager iterates over all the drivers again as the SPOT leaves deep sleep so that the devices can be reconfigured.
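As a concrete (and entirely hypothetical) illustration, a driver following this pattern might look like the one below. It implements the Driver interface from the sketch above, and the register accessors are stand-ins for real hardware access.

public class ExampleSensorDriver implements SleepManagerSketch.Driver {

    private boolean busy;              // true while the device is doing work that must not be interrupted
    private int savedControlRegister;  // Java fields survive deep sleep because RAM stays powered

    /** Called when the device starts an operation that must complete before power is removed. */
    public void startOperation()    { busy = true; }

    /** Called when that operation has finished. */
    public void operationFinished() { busy = false; }

    public boolean tearDown() {
        if (busy) {
            return false;                              // veto deep sleep while the device is active
        }
        savedControlRegister = readControlRegister();  // copy transient hardware state into RAM
        return true;                                   // okay to remove power from the device
    }

    public void setUp() {
        writeControlRegister(savedControlRegister);    // reprogram the device after wake-up
    }

    // Hypothetical register accessors standing in for real hardware access.
    private int readControlRegister()            { return 0; }
    private void writeControlRegister(int value) { }
}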

This implementation provides automatic selection of the best sleep mode based on the state of the I/O devices. So although the application writer doesn't need to worry about explicitly requesting power saving, he or she does need to make sure that devices not in use are turned off; otherwise they may inhibit deep sleep.

The biggest issue is with the radio receiver because applications rarely access it directly; instead applications open virtual connections and read and write data from and to them. These connections transmit and receive data over the radio. In line with the standard approach, the radio driver inhibits deep sleep whenever the radio receiver is on, so the application developer needs to manage connections carefully by attaching the appropriate policy ("receiver always on", "prefer off" or "prefer on") to each connection.
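For example, a connection that shouldn't keep the receiver switched on while the application is idle is opened and configured roughly as follows. The policy class and constant names here are from memory and may not match your SDK release exactly; treat them as assumptions.

import javax.microedition.io.Connector;

import com.sun.spot.io.j2me.radiogram.RadiogramConnection;
import com.sun.spot.peripheral.radio.RadioPolicy;

public class RadioPolicyExample {

    /** Open a server connection that will not, by itself, inhibit deep sleep. */
    public void openLowPowerConnection() throws java.io.IOException {
        RadiogramConnection conn =
                (RadiogramConnection) Connector.open("radiogram://:37");
        // Roughly the "prefer off" policy: the receiver need not stay on for
        // this connection, so the radio driver can agree to deep sleep.
        conn.setRadioPolicy(RadioPolicy.OFF);
        try {
            // ... send and receive datagrams here while the application has work to do ...
        } finally {
            conn.close();   // a closed connection never inhibits deep sleep
        }
    }
}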

You don't like it?
At this point you may be asking "Why would anyone have a problem with such an elegant approach?" One problem stems from the implicit nature of the approach: the application doesn't request deep sleep; it merely has to ensure that everything is in the right state for it to occur. Some application writers feel a loss of control, and feel their lives would be simpler if they could just tell the device what to do rather than having to coax it.

There are two ways to address this concern. First, it's important that the defaults are chosen so that, for most applications, optimal power management happens with no extra effort on the part of the developer. To a large extent I think we achieved this, but as the complexity of the SPOT system code increases with each release, it's vital that SPOTs remain configured out of the box in a way that ensures a simple application of the form
public void startApp() throws MIDletStateChangeException {
    Utils.sleep(10000);
    notifyDestroyed();
}
will always cause the SPOT to deep sleep.

The other way to address application writers' feelings of a loss of control is to provide tools and instrumentation that make it easy to tell that the SPOT is doing what you want and expect. We provided APIs that allow applications to check that deep sleep is happening, but that didn't really help. The problem is that, having used the API to check, the application needs to communicate the result to the developer, and the obvious ways of doing this - displaying a message over the USB cable to the host PC or sending a radio message - will themselves inhibit deep sleep! So we have to fall back on telling developers to watch the power LED, which flashes in a distinctive pattern as the SPOT enters and leaves deep sleep. Hardly a foolproof reassurance. This isn't a problem we anticipated, but we should have done.
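For what it's worth, the sort of check we provided can be sketched like this. The sleep-manager and LED interfaces below are hypothetical stand-ins for the real accessors; the point is that the result has to be reported through something - an LED - that doesn't itself keep the SPOT awake.

public class DeepSleepCheck {

    /** Hypothetical stand-ins for the real sleep-manager and LED accessors. */
    public interface ISleepManagerLike { long getDeepSleepCount(); }
    public interface ILedLike          { void setOn(); void setOff(); }

    /**
     * Idle for a while, then report whether a deep sleep actually happened.
     * The report goes to an LED because printing over USB or sending a radio
     * message would itself inhibit deep sleep.
     */
    public void verifyDeepSleep(ISleepManagerLike sleepManager, ILedLike led)
            throws InterruptedException {
        long before = sleepManager.getDeepSleepCount();
        Thread.sleep(10000);                // nothing to do: the SPOT should deep sleep
        long after = sleepManager.getDeepSleepCount();
        if (after > before) {
            led.setOn();                    // at least one deep sleep occurred
        } else {
            led.setOff();                   // still only shallow sleeping: some device is active
        }
    }
}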
