Wait for It: A Blueprint for Sporadic Tasks on Linux
Introduction
Welcome back to the second hands-on blog post in which we explore the implementation of classic real-time concepts in POSIX operating systems.
In a previous post, we examined the classic periodic task abstraction and how to implement it in Linux. This time around, let’s explore its equally well-known counterpart: the sporadic task model for event-driven workloads. As we’ll compare and contrast the two models, I recommend reading the prior blog post first if you are not already familiar with periodic tasks.
An important point worth mentioning up front is that periodic and sporadic tasks differ fundamentally with regard to overload control. Whereas a periodic task inherently limits its own rate of activations, a sporadic task is, for reasons that I’ll explain below, largely “at the mercy of its environment.” Therefore, the template we’ll look at below will not implement overload control. But before we get there, let’s start with an overview of the model to implement.
The Sporadic Task Model
The sporadic task model is originally due to Al Mok, who introduced it in 1983 in his seminal dissertation. It is difficult to overstate the model’s importance: together with Liu and Layland’s periodic task model, it forms the canonical foundation for schedulability analysis. There are countless refinements and extensions of the model but few (if any) serious alternatives.
Recall from the previous post that a periodic task’s defining feature is its predictable arrival times: since they are determined solely by the task’s known offset and period, all future arrival times are known a priori. Now, obviously, not all workloads in the world are so regular. Fortunately, for the rest, we have the sporadic task model, which complements the periodic task model by expressing uncertain future arrival times.
In a nutshell, a sporadic task is simply a process that monitors an external event source of some kind. Whenever such an event occurs, the sporadic task is activated to process it. When the event has been handled, the task continues to wait for the next one, and so on. Each task activation is commonly called a job of the task. A sporadic task is said to release one job for each event that it processes.
Now, what are these “events”? In fact, they could be anything. For example, they often represent actual events in the physical world, such as
- a user pressing a button,
- a light sensor triggering as it is obstructed by a passing physical object, or
- a gyroscope indicating a sudden deceleration.
Furthermore, they can represent events internal to the computing system but external to the process, such as
- the arrival of a network packet,
- another process writing data to a pipe, or
- another process increasing the counter of a semaphore.
From a timeliness point of view, it does not matter what the events truly are; the key aspect is that we cannot predict when future events will occur. The sporadic task model captures exactly this uncertainty.
To summarize, the difference between periodic and sporadic tasks is that a periodic task is fundamentally time-driven — it is regularly activated in response to the passage of physical time. In contrast, a sporadic task is an event-driven process that is irregularly activated in response to (unpredictable) events that occur externally to the task.
Model Parameters
In the literature on real-time systems, a sporadic task is usually characterized by three parameters:
- a worst-case execution cost C, which upper-bounds the processor time required to process a single event,
- a relative deadline D, which specifies how quickly the system must finish handling each event (i.e., respond to the event), and
- a minimum inter-arrival separation T, which lower-bounds the time between the occurrence of any two events.

The first two parameters, C and D, express the task’s resource needs and urgency in a straightforward manner. They have the same semantics as in the periodic task model and are hence not surprising in any way.
The third parameter, the minimum inter-arrival separation T, looks suspiciously similar to the period parameter in the periodic task model, and the two are frequently even treated as interchangeable. However, as discussed above, there is a crucial difference: whereas two consecutive jobs (i.e., activations) of a periodic task are separated by exactly T time units, any two jobs of a sporadic task are separated by at least T time units. This seemingly minor change profoundly impacts the space of possible analyses, especially in the context of multiprocessor systems, but that is a topic for another time. What’s important to realize is that the parameters C and T together characterize the maximum load on the system induced by the task, which enables analysis in the first place.
Arrival Curves: A Practical Model Extension
So far, so good. However, a reasonable objection to the sporadic task model as described so far is that, in practice, the minimum inter-arrival time T can actually be zero. For instance, if the external event that a sporadic task reacts to is the arrival of network packets originating from multiple uncoordinated senders, two packets may arrive at virtually the same time. Unfortunately, setting T = 0 prevents any meaningful analysis, as then the maximum load is “infinite,” i.e., no longer meaningfully bounded.
Luckily, there is a simple generalization that immediately restores practicality. Instead of characterizing the activation behavior of a sporadic task with a single scalar minimum inter-arrival separation T, a much more flexible and versatile approach is to describe it with an arrival curve α(Δ), with the following semantics: in any interval [t, t+Δ) of length Δ, the task is activated at most α(Δ) times. That is, instead of lower-bounding the separation in time of any two events, the arrival curve upper-bounds the number of events in any given interval.
Remark: There exist several notational conventions for arrival curves. In the literature on compositional performance analysis (CPA), which popularized the arrival-curves concept in the real-time community, the common notation is η+(Δ) (called the “upper event arrival curve”). The basic idea, which is to express arbitrary arrival processes through an upper-bounding function on the interval-length domain, originates in network calculus (NC) and can also be found in real-time calculus (RTC).
To see how this helps, consider a sporadic task that reacts to UDP packets. Suppose there are three uncoordinated periodic senders with periods 3ms, 6ms, and 12ms (to keep things simple) and unknown offsets. The minimum inter-arrival time of packets is thus 0 since, in the worst case, up to three packets can arrive simultaneously in a single burst. Still, the overall rate of packets is obviously bounded, so analysis should be possible. The arrival curve abstraction lets us express this easily, as shown in the following table:
| Δ | α(Δ) |
|---|---|
| 0ms | 0 |
| 0ms < Δ <= 3ms | 3 |
| 3ms < Δ <= 6ms | 4 |
| 6ms < Δ <= 9ms | 6 |
| 9ms < Δ <= 12ms | 7 |
| 12ms < Δ <= 15ms | 10 |
| 15ms < Δ <= 18ms | 11 |
| … | … |
For example, for Δ = 10ms, α(Δ) takes on the value 7, which means that at most 7 packets arrive (i.e., at most 7 jobs are released) in any 10ms-interval (specifically, 1 sent by the 12ms sender, 2 by the 6ms sender, and 4 by the 3ms sender).
More compactly, this particular arrival curve can be expressed as α(Δ) = ⌈Δ/3ms⌉ + ⌈Δ/6ms⌉ + ⌈Δ/12ms⌉ since the senders are themselves periodic (and since we ignore the effects of network-induced jitter for simplicity). In general, the table representation can also express “unwieldy” or empirically measured arrival processes that do not have a convenient closed-form expression.
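As a quick sanity check, the closed form is easy to evaluate programmatically (a small illustrative helper, not part of the task template):

```c
#include <math.h>

/* alpha(delta): an upper bound on the number of packet arrivals in any
 * interval of length delta_ms milliseconds, for the three periodic
 * senders with periods 3ms, 6ms, and 12ms */
unsigned int alpha(double delta_ms)
{
	if (delta_ms <= 0)
		return 0; /* alpha(0) = 0 by definition */
	return (unsigned int) (ceil(delta_ms / 3.0)
	                     + ceil(delta_ms / 6.0)
	                     + ceil(delta_ms / 12.0));
}
```

For instance, `alpha(10.0)` evaluates to 4 + 2 + 1 = 7, matching the table entry for 9ms < Δ <= 12ms.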
Bottom line: the sporadic task model is a highly flexible task model that can capture a wide range of non-periodic real-world processes. In particular, the natural extension to arrival curves can elegantly represent bursty processes with simultaneous arrivals or otherwise unstructured arrival patterns. As it is exceedingly well-studied, it makes for an excellent starting point for any effort to connect theory and practice.
However, one may now worry: “it’s great that there are flexible variants to choose from, but in the implementation, which model variant should we actually choose?” That’s, on the face of it, a very reasonable concern. Fortunately, though, it is not an issue in practice since it is purely a modeling choice. As we shall see next, in fact, none of the model parameters show up in the actual implementation.
A Sporadic Task Template
With the conceptual framework in place, it’s time to finally get to the code. Since the sporadic task model is quite general and flexible, there is no “one true way” of implementing sporadic tasks. Rather, there are many viable ways. Therefore, the following template should be seen as an illustrative example and a recommended starting point for your own implementation, but not as prescriptive.
The first question we have to answer is: what actually is an “event source” in a POSIX environment? How does a process learn of the occurrence of an external “event”? There are many possible answers to this question. In the case of network packets, the canonical interface is a socket. In the case of devices (like a sensor), a device driver will usually expose the data via a character device file somewhere in the `/dev` file system. In the case of inter-process communication, it will be a pipe endpoint or a named pipe special file. Fortunately for us, the UNIX design already comes with the perfect unifying abstraction for all of these possible event channels: the file descriptor API. In fact, Linux goes a step further and conveniently exposes also signal delivery, timer expiration, and a generic event-signaling mechanism as file descriptors (via `signalfd`, `timerfd`, and `eventfd`). Therefore, our programming task is the same no matter what kind of events our program needs to react to — namely, simply open and read from a file descriptor.
To this end, we keep track of the following state:

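A minimal sketch of this state could look as follows (the `struct sporadic_task` name and the `jobs_processed` counter are illustrative choices, not requirements of the model):

```c
struct sporadic_task {
	int event_source_fd;          /* the event channel being monitored */
	unsigned long jobs_processed; /* number of jobs completed so far */
};
```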
As the name suggests, the `event_source_fd` is the file descriptor representing the event channel that the sporadic task monitors.
The Main Loop
Ultimately, a sporadic task is little more than some initialization code followed by a blocking “main loop.” Each main loop iteration represents one job, as shown next. (The called subroutines will be discussed afterward.)

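A minimal sketch of the main loop, with error handling elided for brevity (the `argc`/`argv` pass-through to `open_file_descriptor(…)` is an illustrative choice):

```c
int main(int argc, char *argv[])
{
	struct sporadic_task tsk;

	/* application-specific setup: open and configure the event channel */
	tsk.event_source_fd = open_file_descriptor(argc, argv);
	tsk.jobs_processed = 0;

	/* the main loop: each iteration corresponds to one job */
	while (1) {
		wait_for_next_activation(&tsk); /* blocks until an event arrives */
		process_one_activation(&tsk);   /* application-specific job logic */
		tsk.jobs_processed++;
	}

	return 0;
}
```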
There are a couple of subroutines that require explanation.
First, `open_file_descriptor(…)` is simply a placeholder. It represents application-specific code to open and configure a specific event channel (e.g., a device file, socket, pipe, etc.) and return a non-blocking file descriptor. A complete example is given further below.
Second, `process_one_activation(&tsk)` is the application-specific job logic, i.e., the code that actually handles an event. For instance, this might receive a network packet, parse and validate its contents, and then take some action. Importantly, `process_one_activation(&tsk)` should run to completion: it should not block on I/O or suspend for any other reason. Otherwise, a more advanced task model with self-suspension support must be considered and analyzed.[^suspend] We take a look at a specific implementation of `process_one_activation(&tsk)` further below.
Remark: Discussing self-suspensions would be another post in itself. Please let me know if that would be interesting to you. You can find me on Twitter or send me an email.
Waiting for Events
Last but not least, the routine at the heart of a sporadic task is `wait_for_next_activation(&tsk)`, which looks like this.

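A minimal sketch (with deliberately simple error handling):

```c
#include <poll.h>
#include <stdio.h>

void wait_for_next_activation(struct sporadic_task *tsk)
{
	struct pollfd pfd = {
		.fd     = tsk->event_source_fd,
		.events = POLLIN, /* wake up only when there is data to read */
	};

	/* suspend the process (timeout -1 == indefinitely) until the
	 * event-source file descriptor becomes readable */
	if (poll(&pfd, 1, -1) < 0)
		perror("poll");
}
```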
The routine is a straightforward application of the `poll()` system call. The semantics of `poll()` are as follows: given one or more file descriptors, the calling process is suspended until (one of) the given file descriptor(s) experiences a state change (i.e., becomes readable, becomes writable, a connection is closed, etc.). Here, we have only a single file descriptor of interest. To poll the `event_source_fd` file descriptor, the code sets up a `struct pollfd` record. To ensure that the process will block until there is data to read from the event-source file descriptor, it configures the `struct pollfd` record with the `POLLIN` flag. The flag indicates that the process is interested (only) in incoming data.
That’s it — the essence of a sporadic task is a loop that blocks the process until a new event becomes available. Note that `poll()` does not consume any data. Thus, when `wait_for_next_activation(&tsk)` returns, the file descriptor is ready for reading by the application-specific logic.
Per-Event Job Logic
In fact, the application logic must consume the available data, as `poll()` returns immediately if unread data remains available. Let’s have a look at a minimal example.

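A minimal sketch, assuming a datagram-oriented event source (the 1500-byte buffer is an arbitrary illustrative choice):

```c
#include <stdio.h>
#include <unistd.h>

void process_one_activation(struct sporadic_task *tsk)
{
	char buf[1500];
	ssize_t nread;

	/* consume one message from the event channel */
	nread = read(tsk->event_source_fd, buf, sizeof(buf));

	/* debugging output for illustrative purposes (see remark below) */
	printf("job %lu: received %zd bytes\n", tsk->jobs_processed, nread);
}
```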
Here, the code simply consumes a message from the event-source file descriptor. Finally, some debugging output is generated with `printf` for illustrative purposes.
Remark: The astute reader will notice that, strictly speaking, this is actually a violation of the premise that `process_one_activation(&tsk)` must run to completion. As `printf` can block on I/O, in practice this code should be replaced with a suitable nonblocking logging mechanism.
Complete Example: A Sporadic Task Reacting to UDP Packets
Finally, to complete the example, let us have a concrete look at a possible implementation of the `open_file_descriptor(…)` placeholder. The following code opens a UDP socket, configures it to be nonblocking, and then returns the file descriptor that represents the socket.

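A possible implementation could look like this (a sketch; the hard-coded port and the `exit()`-on-error policy are simplifications):

```c
#include <arpa/inet.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>

#define UDP_PORT 2006

int open_file_descriptor(int argc, char *argv[])
{
	struct sockaddr_in addr;
	int fd, flags;

	(void) argc;
	(void) argv; /* unused in this minimal sketch */

	/* create a UDP/IPv4 socket */
	fd = socket(AF_INET, SOCK_DGRAM, 0);
	if (fd < 0) {
		perror("socket");
		exit(1);
	}

	/* bind it to the UDP port on which the task listens */
	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_addr.s_addr = htonl(INADDR_ANY);
	addr.sin_port = htons(UDP_PORT);
	if (bind(fd, (struct sockaddr *) &addr, sizeof(addr)) < 0) {
		perror("bind");
		exit(1);
	}

	/* switch the socket to nonblocking mode */
	flags = fcntl(fd, F_GETFL, 0);
	if (flags < 0 || fcntl(fd, F_SETFL, flags | O_NONBLOCK) < 0) {
		perror("fcntl");
		exit(1);
	}

	return fd;
}
```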
In a similar fashion, the application could open a device file, a UNIX domain socket, a CAN socket, a serial port, device files representing GPIOs, etc. The presented template works unchanged as long as the event channel can be accessed via a non-blocking file descriptor.
Putting everything together (see the complete example on GitHub), we obtain a sporadic task that will react in a model-compliant way to UDP messages. In the provided example, the task listens on UDP port 2006 (an arbitrary choice).
Let’s see it in action. After compiling and launching the example task, we can send packets to port 2006 on the local host with the nc utility (one packet per line).

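For example, assuming a netcat variant that supports UDP via the `-u` flag, we can type lines interactively:

```sh
nc -u 127.0.0.1 2006
```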
As intended, the sporadic task reacts with one job per UDP message (i.e., one job per line entered).

What Happened to the Minimum Inter-Arrival Separation?
Perhaps surprisingly, the parameters that define the essence of a sporadic task — T or α(Δ) — have not manifested in any way in the given template. Why is that?
There are two reasons for this. First, due to the event-driven nature of a sporadic task, the task neither knows nor controls when the next activation occurs. By definition, that aspect is under the control of the environment; the task hence cannot meaningfully program timers to anticipate future events.
However, that raises the question: what happens if the environment does not comply with the model assumptions? For instance, what if network packets or sensor inputs arrive at a faster rate, or in a more bursty fashion, than expressed in the model? Would it not be desirable to enforce the assumed minimum inter-arrival separation? For example, why not delay the processing of “early” activations until the next earliest permissible activation time? At the very least, surely it would be a good idea to detect violations of the assumed minimum inter-arrival separation (or arrival curve) at runtime?
Naturally, yes, it would be desirable to check or enforce a sporadic task’s timing behavior at runtime. However, that brings us to the second reason: the POSIX API is unfortunately insufficient to precisely identify violations.
The core problem is that, in the general case (i.e., in the presence of potentially interfering higher-priority tasks), the task itself does not know exactly when it was last activated. From within the running process, it is possible to observe only when it is first scheduled (by reading a clock). However, if higher-priority tasks cause delays, a task may be scheduled only some unknown time after activation.

Consider the example in the above figure, which shows four jobs of a task with a presumed minimum inter-arrival time of T = 50ms. Suppose the task takes a clock reading immediately when it is first scheduled after being activated. As illustrated in the figure, this can easily lead to wrong conclusions due to preemptions.
First, consider jobs 47 and 48. Job 47 is activated at time 1000 and scheduled immediately. The task observes time 1000 as its time of activation. According to the modeled minimum inter-arrival time, job 48 should be activated no earlier than at time 1050. However, in the example, it is activated already at time 1040. This is an erroneous deviation from the model, which ideally should be mitigated (by inserting a delay) or at least detected. However, job 48 is not scheduled until time 1060 due to higher-priority interference. Hence, the task observes a minimum inter-arrival separation of 60ms > T and thus does not notice the incorrect behavior.
The converse effect is demonstrated by jobs 49 and 50, released at times 1100 and 1150, respectively. These two jobs are actually activated T = 50ms apart; there is no model deviation to detect. However, again due to preemption-induced delays, job 49 is not scheduled until time 1120, which the task then observes as its activation time. It hence incorrectly flags job 50, which it observes only 30ms later at time 1150, as having been released too early.
These examples show that from a vantage point within the process itself, it is generally not possible to accurately detect violations of the minimum inter-arrival separation. The critical information missing is the precise time of activation, which Linux, unfortunately, does not provide via its standard system call interface.
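For concreteness, the naive in-process check sketched below is exactly the flawed strategy discussed above (an illustration of the problem, not a recommendation): it timestamps each job when it first gets to run, not when it is activated.

```c
#include <stdbool.h>
#include <time.h>

static struct timespec last_seen; /* when the previous job first ran */

/* Naive (and unreliable!) minimum inter-arrival check: because the
 * timestamp is taken when the job is first *scheduled* rather than when
 * it is *activated*, preemption delays can both mask real violations
 * and trigger false alarms, as the figure above illustrates. */
bool naive_arrival_check_ok(long min_separation_ms)
{
	struct timespec now;
	long delta_ms;

	clock_gettime(CLOCK_MONOTONIC, &now);
	delta_ms = (now.tv_sec - last_seen.tv_sec) * 1000
	         + (now.tv_nsec - last_seen.tv_nsec) / 1000000;
	last_seen = now;

	return delta_ms >= min_separation_ms;
}
```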
One possible solution is to rely on process-external timestamps. For example, when using high-end sensors that automatically timestamp each data sample, the inter-arrival separation can be precisely inferred from the embedded timestamps in the data. Similarly, in the case of events delivered over a network, violations of the minimum inter-arrival separation can be accurately inferred if the NIC automatically timestamps each network packet with the time of its arrival. However, not all sensors and NICs have such capabilities (e.g., due to cost reasons or lack of access to a reliable clock), so this is unfortunately not a general solution.
All hope is not lost, though — precise enforcement of sporadic activations is still possible with Linux’s reservation-based scheduler `SCHED_DEADLINE`, and model deviations can be properly detected with process-external tracing facilities (e.g., `ftrace`). Both are interesting topics for another time.
Scheduling, `SCHED_FIFO`, and Shared Priority Levels
Typically, a system consists of more than just one task, which requires the use of an OS-level real-time scheduling policy. Under Linux, this means using `SCHED_FIFO`, `SCHED_RR`, or `SCHED_DEADLINE`. As provided, the above sporadic task template works without surprises if provisioned under `SCHED_DEADLINE` or `SCHED_RR`, meaning that its behavior matches common analysis assumptions. However, when combined with `SCHED_FIFO`, there is one potential “gotcha” that one should be aware of.
The `SCHED_FIFO` and `SCHED_RR` scheduling policies are the two standard POSIX fixed-priority scheduling policies. As the name implies, under `SCHED_FIFO`, processes of equal priority are scheduled in FIFO order (with respect to the last time they entered the scheduler’s ready queue). It is important to realize that this FIFO order applies only to processes, but not to jobs (of different tasks), because the OS-level process scheduler is unaware of the process-internal notion of jobs. Hence, if there are multiple sporadic tasks of equal priority, there is no guarantee that their jobs will be interleaved in FIFO order, which a `SCHED_FIFO`-specific analysis will assume. Therefore, when applying response-time analysis under `SCHED_FIFO`, it’s best to use unique priorities for all real-time processes so that this distinction becomes irrelevant.
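For completeness, assigning such a unique fixed priority is straightforward at initialization time (a sketch; the helper name is illustrative, the choice of priority value is application-specific, and elevated privileges such as `CAP_SYS_NICE` are required):

```c
#include <sched.h>
#include <stdio.h>

/* switch the calling process to SCHED_FIFO at the given priority */
int make_fifo_task(int unique_priority)
{
	struct sched_param sp = { .sched_priority = unique_priority };

	if (sched_setscheduler(0 /* this process */, SCHED_FIFO, &sp) < 0) {
		perror("sched_setscheduler");
		return -1;
	}
	return 0;
}
```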
Bonus: Reacting to Multiple Event Channels
One of the advantages of the provided template is that it is easily extended to multiple event channels. For example, suppose our task needs to react to:
- two UDP ports (say, 2006 and 1619),
- two UNIX signals (`SIGUSR1` and `SIGINT`), and
- input delivered via the `STDIN` file descriptor.
This is trivially accomplished by opening a nonblocking file descriptor for each event channel and setting up a `struct pollfd` record for each, as sketched below. A full discussion of all necessary changes (which are rather mundane anyway) would go beyond the limits of this blog post; the interested reader may find a complete multi-channel example on GitHub.
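That said, to convey the gist, here is a sketch of a generalized wait-and-dispatch loop (assuming the five file descriptors, e.g., obtained via `socket(2)` and `signalfd(2)`, have already been opened in nonblocking mode):

```c
#include <poll.h>
#include <stdio.h>

#define NUM_CHANNELS 5 /* two UDP sockets, two signalfds, and stdin */

void sporadic_multi_loop(const int fds[NUM_CHANNELS])
{
	struct pollfd pfds[NUM_CHANNELS];
	int i;

	/* one pollfd record per event channel */
	for (i = 0; i < NUM_CHANNELS; i++) {
		pfds[i].fd = fds[i];
		pfds[i].events = POLLIN;
	}

	while (1) {
		/* block until at least one channel has pending input */
		if (poll(pfds, NUM_CHANNELS, -1) < 0) {
			perror("poll");
			return;
		}
		for (i = 0; i < NUM_CHANNELS; i++) {
			if (pfds[i].revents & POLLIN) {
				/* one job per readable channel: dispatch to the
				 * channel-specific handler (not shown) */
			}
		}
	}
}
```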
Conclusion
The sporadic task model combines three attractive properties:
- from a practical point of view, it is sufficiently expressive to capture event-driven workload behavior that is irregular and non-periodic, especially when combined with arrival curves,
- from an analytical point of view, it is simple, elegant, and supported by a wealth of publications, and
- from a programmer’s point of view, it is easily implemented in Linux such that the task’s behavior matches the model semantics.
Ultimately, the sporadic task model can be applied to any event-driven thread that waits for input in a blocking manner, so there are countless other ways of implementing sporadic tasks. Nonetheless, I hope that the provided template may prove helpful as an illustration connecting theory and practice.
Thanks for reading this far. If you have any comments or questions, hit me up on Twitter or via email.
Author: Björn Brandenburg is a tenured faculty member at the Max Planck Institute for Software Systems (MPI-SWS), a position equivalent to a US associate professorship, and head of the Real-Time Systems Group. His main research interests are real-time systems, operating systems, and embedded systems.
Disclaimer: Any views or opinions represented in this blog are personal, belong solely to the blog post authors and do not represent those of ACM SIGBED or its parent organization, ACM.