This is a summary of a few issues related to waking up from the U1, U2 or U3 low-power states on a SuperSpeed USB link, with emphasis on the details that need to be decided upon when implementing a USB device.

A similar discussion on invoking these low-power states can be found on this page. There is also a page with a broader view of power states.

Waking up from a low-power state is essentially a simple procedure: The side requesting the wakeup (the “initiator”) starts transmitting a continuous LFPS signal and waits for the other side to acknowledge the wakeup request by transmitting an LFPS signal as well. When each side has detected the other side’s LFPS burst (subject to a minimal LFPS burst length), it turns on its Gigabit transceiver in the Recovery state. Both sides reach Recovery, and move on to U0 just like after any transition to Recovery.


The USB 3.0 spec defines two timeouts related to exiting a low-power state:

  • tNoLFPSResponseTimeout, for successfully completing the LFPS handshake: How long the initiator may wait for the other side to transmit an LFPS signal in response. It’s 2 ms for U1 and U2, and 10 ms for U3.
  • Ux_EXIT_TIMER, for reaching U0 (the link active state): The total time from initiation until the link is properly up in the U0 state. It’s 6 ms for U1 and U2, and not applied for U3. Note that this includes the LFPS handshake and the Recovery state. Also note that a plain Recovery from U0 (due to some link error) has a 12 ms timeout.

The port shall move to the SS.Inactive state when either of these timeouts expires while waking up from U1 or U2. Timed-out attempts to wake up from U3 are repeated at 100 ms intervals. These repeated attempts stop only if the reason for the wakeup is disabled, or due to a physical disconnection between the link partners, in which case the ports enter the SS.Disabled state.
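As a sketch of the decision logic described above, the timeouts and their outcomes can be put into a few lines of Python. Only the timeout figures come from the spec; the function, its interface and the state names in the return values are my own illustration:

```python
# Sketch of the low-power exit outcomes, with the spec's timeout figures.
# All times are in microseconds. Function and outcome names are illustrative.

T_NO_LFPS_RESPONSE = {"U1": 2000.0, "U2": 2000.0, "U3": 10000.0}
UX_EXIT_TIMER = {"U1": 6000.0, "U2": 6000.0}  # not applied for U3

def exit_outcome(state, partner_lfps_delay, time_to_u0):
    """Classify a wakeup attempt from the initiator's point of view.

    partner_lfps_delay: time until the link partner's LFPS is detected.
    time_to_u0: total time from initiation until U0 is reached.
    """
    if partner_lfps_delay > T_NO_LFPS_RESPONSE[state]:
        # The LFPS handshake failed
        return "SS.Inactive" if state in ("U1", "U2") else "retry in 100 ms"
    if state in ("U1", "U2") and time_to_u0 > UX_EXIT_TIMER[state]:
        # The handshake passed, but reaching U0 took too long overall
        return "SS.Inactive"
    return "U0"
```

For example, a U1 wakeup with a 100 μs LFPS response but 7 ms until U0 still ends up in SS.Inactive, since Ux_EXIT_TIMER covers the entire exit, not just the handshake.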

By and large, the LFPS handshake is defined as successful (in section 6.9.2 of the USB 3.0 spec, which defines the handshake and its timing in general) if the handshake’s initiator detects its link partner’s LFPS signal within tNoLFPSResponseTimeout. The transition to Recovery isn’t included in this timeout period.

Another timing restriction, applying to both sides, is that the gap between the end of the LFPS burst and the beginning of the Gigabit stream is at most 20 ns.

Timing parameters

When implementing the power management module for a device, there are in fact only three timing parameters that need to be decided upon, separately for U1, U2 and U3:

  1. When initiating a wakeup, the minimal length of the transmitted LFPS burst (i.e. when to move on to turning on the Gigabit transceiver, if an LFPS burst is sensed).
  2. When responding to a wakeup LFPS burst, the length of the burst to transmit before switching to the Gigabit transceiver.
  3. The minimal length of a detected LFPS burst that is considered a wakeup request. In other words, how fast the detector needs to be. This may influence the complexity of the detection mechanism, in particular for the short bursts possibly used to wake up from U1.
Note that it’s pointless to define a maximal burst time for item 1, as the burst should run until a response arrives or tNoLFPSResponseTimeout expires. As for a maximum for item 3, it’s unlikely that the signal will be longer than tNoLFPSResponseTimeout, and besides, if it reaches 80 ms, that burst can be considered a Warm Reset.

Section 6.9.2 defines a significant number of timing restrictions in Table 6-22 for each of the low-power modes. This table is the reference in the following analysis of these three items.

Item #1: Minimal initiating burst

It may seem odd that a minimal time for the initiating burst is defined at all: If the link partner responds with an LFPS, why not jump into the Gigabit phase immediately?

The purpose of this limit is to cover the case where both sides initiate a wakeup LFPS burst at the same time: Even though it senses the other side’s LFPS burst, each side must continue transmitting its own burst long enough for the other side to be guaranteed to detect it.

This time period is given as t12-t10 in Table 6-22: At least 0.6 μs for U1, and at least 80 μs for U2 and U3. Table 6-21 in the spec details, among other things, the minimal transmitted burst times for all types of LFPS bursts. For the low-power exit bursts, the same numbers are given.

One may wonder why there is a difference between the power states. After all, if the link partner already transmits an LFPS, surely it’s ready to detect one. The answer is (see note 6 to Table 6-21) that a port isn’t required to maintain its common-mode voltage on the differential wires when in U2 and U3. As a result, a certain time segment at the beginning of the LFPS signal may not be properly transmitted when coming out of these states. These 80 μs ensure that at least the end of the burst can be properly detected.

Hence 0.6 μs is sufficient when the LFPS transmitter was in U1, even if the receiver might be in U2 due to an automatic transition (this is what Note 2 under Table 6-22 says).

However, be sure to read the “Changes in USB 3.1” part below for a longer U1 burst.

Item #2: Time of responding burst

In a given design, this is simply how long the LFPS burst is generated before moving on to transmitting the Gigabit stream. It should be no longer than the minimum required by the spec: that minimum already ensures detection by the link partner, so a longer burst would only slow down the return to U0 unnecessarily. And of course it can’t be shorter.

This time period is given as t13-t11 in Table 6-22: At least 0.6 μs for U1, and at least 80 μs for U2 and U3. It’s the same figures as for item #1, for the exact same reasons.
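For reference, the minima for items #1 and #2 can be collected into a small table. This is a plain restatement of the figures above; the dictionary layout is mine:

```python
# USB 3.0 minimal LFPS burst lengths for low-power exit, in microseconds.
# Per Table 6-22, the same figures apply to t12 - t10 (minimal initiating
# burst, item #1) and t13 - t11 (minimal responding burst, item #2).
MIN_BURST_US = {
    "U1": 0.6,   # raised to 0.9 us in a late revision of USB 3.1
    "U2": 80.0,  # covers the common-mode voltage settling time
    "U3": 80.0,
}
```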

Once again, be sure to read the “Changes in USB 3.1” part below for a longer U1 burst.

Item #3: Minimal burst time for detection (part I)

A quick detection of the LFPS signal is important for not wasting time on the handshake procedure. On the other hand, every electrical engineer knows the connection between a fast detector and false alarms.

Table 6-22 seems to suggest 0.3 μs as the minimal LFPS burst that must be detected, as this is the given minimal t11-t10. This parameter can’t be interpreted as a restriction (both ports may start transmitting their LFPS bursts at the same time, for example); it can only be interpreted as an expectation from the detector.

However, in section 6.9.2’s discussion of both sides starting their LFPS bursts simultaneously, an LFPS burst of 0.6 μs is required to ensure the other side has detected it properly.

The difference is explained in note 7 of Table 6-21, stating that the receiver must detect a burst of 0.3 μs, but the transmission must last 0.6 μs, with the extra 0.3 μs being a “guard band”. This is important in particular for a downstream facing port, which may need to distinguish between a U1 state’s Ping.LFPS signal of up to 200 ns and a wakeup LFPS burst, possibly as short as 600 ns. There is plenty of hardware out there that doesn’t manage this correctly.
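The distinction a downstream facing port must make can be expressed as a simple threshold check. The figures (200 ns and 300 ns) are from the spec; the function itself and its return values are only an illustration:

```python
# Classify a detected LFPS burst while in U1, by its measured length in
# nanoseconds. Ping.LFPS is at most 200 ns; a wakeup burst must be detected
# at 300 ns (the transmitter sends at least 600 ns, a 300 ns guard band).
def classify_lfps_u1(burst_ns):
    if burst_ns <= 200:
        return "Ping.LFPS"  # keepalive, no wakeup should follow
    if burst_ns >= 300:
        return "wakeup"     # start the exit handshake
    return "undefined"      # 200-300 ns: the spec assigns no meaning
```

The hardware failures mentioned above amount to taking the first branch’s input down the second branch’s path: waking up on a burst that is short enough to be a Ping.LFPS.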

But what is the real minimum? Consider for example a USB device that never initiates any wakeup. Seemingly, it only needs to detect initiating LFPS bursts from its link partner. And since the initiating LFPS burst continues until the handshake is completed, can’t the detection be as slow as needed, as long as the handshake stays below tNoLFPSResponseTimeout? That’s at least 2 ms, a completely different order of magnitude.

The answer is that a so-so designed USB device that only responds to wakeups can indeed have a very slow LFPS detector, but it may have practical reliability issues, because of insufficient handling of false wakeups. So before concluding on the minimal burst time for detection, a slight detour.

False wakeups

In a perfect world, the hardware’s LFPS detector works without producing any false alarms. Unfortunately, in some real-life Gigabit transceiver hardware, there are many possible sources for a false alarm:

  • The other side’s Gigabit transmission may cause spurious LFPS detections. This is problematic in particular when a Gigabit transmission is received a few microseconds after going into a low power state. Such a false detection would cause an unnecessary wakeup immediately upon entering the low power state.
  • The link partner’s shutdown of the Gigabit transceiver (and possibly the common-mode bias voltage) may generate spurious ringing on the wires that may be mistaken for an LFPS burst.
  • The link partner’s invocation of U2 or U3 states, which may involve turning off the common-mode voltage of the physical wires, can be detected as activity. In particular, an FPGA’s transceiver has been observed detecting the link partner’s U1 to U2 transition as an LFPS signal.
  • If the U1 to U2 transition is disabled, and the U1 state lasts longer than 200 ms, the LFPS.Ping keepalive signal from the device may be misinterpreted as a wakeup LFPS burst, even though the former is 200 ns at most, and the minimum for wakeup burst detection is 300 ns. Still, a fairly decent USB 3.0 hub (Genesys Logic 05e3:0626) has been observed to repeatedly wake up on 100 ns LFPS.Ping bursts.
  • Electromagnetic noise from the environment can cause momentary activity on the wires that can be mistaken for an LFPS signal. As a device may be in a low power state for hours and days, this is a possibility to take into account even if the probability seems small.

If properly handled, a false wakeup is fairly harmless from the U1 or U2 states: The device starts its LFPS signal in response to the false signal, but the link partner considers this a wakeup request, and responds with a real LFPS signal. Eventually, both sides start their Gigabit transceivers, and invoke Recovery and U0. A short while later, the link should go back to a low power state due to lack of activity, but not all hardware does that (for example, Intel’s 8086:a12f USB controller does, but Renesas’ 1912:0015 waits in U0 until there’s traffic, and only then acts on link inactivity). The worst case scenario is probably energy wasted on a link that is held in U0 for no reason.

False wakeups from U3 are a different story, however: They can power up a computer from a suspended state. On the other hand, the timing requirements are much more relaxed, so the detection of LFPS signals from U3 can be made very safe.

Item #3: Minimal burst time for detection (part II: Conclusion)

The false wakeup scenario boils down to a reversal of roles: The port that falsely detected the LFPS signal thinks it’s responding to a wakeup request, but is actually initiating one. The link partner indeed responds to a wakeup request. This reversal is pretty harmless from the U1 and U2 states if both sides receive LFPS bursts that are long enough for their detection.

Recall from items #1 and #2 above that the burst time for responding to a wakeup request is the same as the minimal time for the initiating LFPS burst. Hence it’s guaranteed that the link partner detects the burst that is transmitted as a response, but is actually an initiation.

So back to the minimal detection time. The relevant scenario is that the opposite port has falsely detected an LFPS burst, and now responds with an LFPS burst that it considers a response.

To cope with this, any port must be able to detect an LFPS burst that is transmitted as a response. The minimal length of such a burst is given as t13-t11 in Table 6-22: 0.6 μs for U1, and at least 80 μs for U2 and U3. Note that these power states relate to the link partner, so if the U1-to-U2 inactivity timer is enabled, it’s the link partner’s power state that needs to be taken into consideration.

So all in all, it boils down to detecting a 0.6 μs burst for U1 and U2, even for a device that never initiates any wakeup requests. But recall the requirement in note 7 of Table 6-21, which calls for detecting a 0.3 μs burst, so the latter is the preferred value. If the link partner is known to be in U2 (e.g. due to a direct transition from U0 to U2), 80 μs is fine as well, but such a burst may include a time segment during which the LFPS isn’t properly transmitted. So there’s no need to push as low as 0.3 μs, but surely not 80 μs either.
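The analysis can be condensed into detection thresholds per state. These picks are one possible reading of the text above, not values mandated by the spec:

```python
# Suggested minimal detected burst lengths, in microseconds, condensing
# the analysis above. One possible reading, not spec-mandated values.
DETECT_THRESHOLD_US = {
    "U1": 0.3,     # note 7 of Table 6-21: the detector should catch 0.3 us
    "U2": 0.3,     # same, since the link partner may actually be in U1
    "U3": 1000.0,  # ~1 ms: be very sure before waking a suspended computer
}
```

For U1 and U2, anything up to 0.6 μs (the minimal responding burst) is still guaranteed to be detected, so a detector triggering somewhere between 0.3 μs and 0.6 μs is a reasonable design point.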

As for U3, maybe as much as 1 ms for detecting initiations. Waking up a computer from suspend because of a falsely detected LFPS is really bad, and if the computer made such a false detection, it might as well ditch the port that caused this.

Response time

As the LFPS detector must, in the worst case, detect the LFPS signal within 80 μs of the beginning of its transmission, one may wonder why the timeout for the handshake is 2 ms. This isn’t discussed in the spec, but presumably the reason is that the control of the LFPS and Gigabit transmitters may be in the hands of software, which may add latencies.

Table 6-22, and its footnotes in particular, address this issue for the case of waking up from U1 when the U2 Inactivity Timer is disabled, referring to a set of constraints labeled “short timing requirement”. This means that both ports are known to be in U1 (the timeout mechanism for slipping down to U2 is disabled).

In this case, two extra restrictions are added:

  • The responding port must start its LFPS burst no later than 0.9 μs after the initiating port started its burst (t11 - t10 < 0.9 μs).
  • Both ports are required to turn on their Gigabit transceivers no later than 0.9 μs after the responding LFPS burst begins (t12 - t11 < 0.9 μs and t13 - t11 < 0.9 μs).

Recall that the conclusion from the analysis for U1 was that the detector should detect a burst after 0.3 μs, so extra delays aside, t11 - t10 = 0.3 μs.

Also, based on the conclusions above, the LFPS burst length on both sides will be 0.6 μs: The responding burst starts 0.3 μs after the initiator’s, and then it takes 0.3 μs for the initiator to detect the responding burst. So 0.6 μs after the initiator began transmitting its burst, it has detected the responding burst, and it switches to Gigabit transmission, having also met the minimal time for its own LFPS burst. This gives t12 - t11 = 0.3 μs and t13 - t11 = 0.6 μs.
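This arithmetic can be checked directly. The sketch below assumes a 0.3 μs detector and zero processing delays; the variable names follow the t10..t13 event times of Table 6-22:

```python
# Verify the 0.9 us "short timing requirement" margins for U1, assuming a
# 0.3 us LFPS detector and no extra processing delays. Times in microseconds.
DETECT = 0.3     # time needed to detect an LFPS burst
MIN_BURST = 0.6  # minimal U1 exit burst length (USB 3.0)

t10 = 0.0              # initiator starts its LFPS burst
t11 = t10 + DETECT     # responder detects it and starts its own burst
t12 = t10 + MIN_BURST  # initiator has detected the response (0.3 us after
                       # t11) and completed its own minimal burst
t13 = t11 + MIN_BURST  # responder completes its minimal burst

assert t11 - t10 < 0.9  # responder started in time
assert t12 - t11 < 0.9  # initiator within the short timing requirement
assert t13 - t11 < 0.9  # likewise for the responding burst
```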

So if there are no extra delays, these 0.9 μs limits are met easily. In an FPGA / ASIC design, the delay from detection to action is typically a few tens of nanoseconds, so there’s no problem there. If the control is implemented in software, however, it might be a problem: Even an interrupt handler can be delayed by several microseconds.

So what happens if the 0.9 μs timing requirements aren’t met? The spec doesn’t say, but the answer is most likely nothing, except for the added latency. So if a device’s power state is controlled by software, and these requirements aren’t met, it’s unlikely that any problems will arise, in particular as the device is aware of the latencies it imposes.

Apparently, these requirements are intended for hubs and the USB controllers in computers, so that devices that are attached to them can rely on a quick wakeup from U1 if needed. As these are always implemented as logic in a chip, these restrictions make perfect sense.

And a practical note: This “short timing requirement” set is relevant when U2 is disabled, and both sides remain in U1. This situation is quite rare in real-life hardware, because it requires the upstream port to send an LFPS.Ping burst every 200 ms, which is mistaken for a wakeup burst by some hardware out there. Consequently, this setting is generally avoided.

Changes in USB 3.1

In a late revision of USB 3.1, the minimal LFPS burst time for U1 was raised from 0.6 μs to 0.9 μs, with several timing parameters adjusted accordingly in Table 6-30 (Table 6-21 in USB 3.0) and Table 6-31 (Table 6-22 in USB 3.0).

Among other things, the “short timing requirement” set was adjusted to allow these longer bursts.

The later USB specs recognize that some ports may transmit LFPS bursts of only 0.6 μs. However, for new designs it’s advisable to transmit 0.9 μs for U1, even if it breaks the “short timing requirement” set.

To prevent an immediate wakeup from U1, an additional restriction is added: The USB 3.1 spec defines a U1_MIN_RESIDENCY_TIMER, requiring either port to wait 3 μs in U1 after sending or receiving an LPMA (or after timing out waiting for it), before generating an LFPS wakeup burst, if so desired.
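This rule amounts to a simple guard before initiating a wakeup. Only the 3 μs figure comes from the spec; the function name and interface below are illustrative:

```python
# Sketch of the USB 3.1 U1_MIN_RESIDENCY_TIMER rule: after entering U1
# (upon sending or receiving an LPMA, or timing out waiting for it), a
# port must wait at least 3 us before initiating a wakeup LFPS burst.
U1_MIN_RESIDENCY_US = 3.0

def may_initiate_wakeup(now_us, u1_entry_us):
    """True if the port has resided in U1 long enough to start a wakeup."""
    return now_us - u1_entry_us >= U1_MIN_RESIDENCY_US
```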