0001 ========================
0002 Deadline Task Scheduling
0003 ========================
0004
0005 .. CONTENTS
0006
 0. WARNING
 1. Overview
 2. Scheduling algorithm
   2.1 Main algorithm
   2.2 Bandwidth reclaiming
   2.3 Energy-aware scheduling
 3. Scheduling Real-Time Tasks
   3.1 Definitions
   3.2 Schedulability Analysis for Uniprocessor Systems
   3.3 Schedulability Analysis for Multiprocessor Systems
   3.4 Relationship with SCHED_DEADLINE Parameters
 4. Bandwidth management
   4.1 System-wide settings
   4.2 Task interface
   4.3 Default behavior
   4.4 Behavior of sched_yield()
 5. Tasks CPU affinity
   5.1 SCHED_DEADLINE and cpusets HOWTO
 6. Future plans
 A. Test suite
 B. Minimal main()
0027
0028
0029 0. WARNING
0030 ==========
0031
Fiddling with these settings can result in unpredictable or even unstable
0033 system behavior. As for -rt (group) scheduling, it is assumed that root users
0034 know what they're doing.
0035
0036
0037 1. Overview
0038 ===========
0039
0040 The SCHED_DEADLINE policy contained inside the sched_dl scheduling class is
0041 basically an implementation of the Earliest Deadline First (EDF) scheduling
0042 algorithm, augmented with a mechanism (called Constant Bandwidth Server, CBS)
that makes it possible to isolate the behavior of tasks from each other.
0044
0045
0046 2. Scheduling algorithm
0047 =======================
0048
0049 2.1 Main algorithm
0050 ------------------
0051
0052 SCHED_DEADLINE [18] uses three parameters, named "runtime", "period", and
0053 "deadline", to schedule tasks. A SCHED_DEADLINE task should receive
0054 "runtime" microseconds of execution time every "period" microseconds, and
0055 these "runtime" microseconds are available within "deadline" microseconds
0056 from the beginning of the period. In order to implement this behavior,
0057 every time the task wakes up, the scheduler computes a "scheduling deadline"
0058 consistent with the guarantee (using the CBS[2,3] algorithm). Tasks are then
0059 scheduled using EDF[1] on these scheduling deadlines (the task with the
0060 earliest scheduling deadline is selected for execution). Notice that the
0061 task actually receives "runtime" time units within "deadline" if a proper
0062 "admission control" strategy (see Section "4. Bandwidth management") is used
0063 (clearly, if the system is overloaded this guarantee cannot be respected).
0064
0065 Summing up, the CBS[2,3] algorithm assigns scheduling deadlines to tasks so
0066 that each task runs for at most its runtime every period, avoiding any
0067 interference between different tasks (bandwidth isolation), while the EDF[1]
0068 algorithm selects the task with the earliest scheduling deadline as the one
0069 to be executed next. Thanks to this feature, tasks that do not strictly comply
0070 with the "traditional" real-time task model (see Section 3) can effectively
0071 use the new policy.
0072
In more detail, the CBS algorithm assigns scheduling deadlines to
tasks in the following way (a C sketch summarizing these rules can be
found at the end of this list):
0075
0076 - Each SCHED_DEADLINE task is characterized by the "runtime",
0077 "deadline", and "period" parameters;
0078
0079 - The state of the task is described by a "scheduling deadline", and
0080 a "remaining runtime". These two parameters are initially set to 0;
0081
0082 - When a SCHED_DEADLINE task wakes up (becomes ready for execution),
0083 the scheduler checks if::
0084
                 remaining runtime                  runtime
        ----------------------------------    >    ---------
        scheduling deadline - current time           period
0088
  then, if the scheduling deadline is smaller than the current time, or
  this condition is verified, the scheduling deadline and the
  remaining runtime are re-initialized as::

        scheduling deadline = current time + deadline
        remaining runtime = runtime
0095
0096 otherwise, the scheduling deadline and the remaining runtime are
0097 left unchanged;
0098
0099 - When a SCHED_DEADLINE task executes for an amount of time t, its
0100 remaining runtime is decreased as::
0101
0102 remaining runtime = remaining runtime - t
0103
0104 (technically, the runtime is decreased at every tick, or when the
0105 task is descheduled / preempted);
0106
- When the remaining runtime becomes less than or equal to 0, the task is
0108 said to be "throttled" (also known as "depleted" in real-time literature)
0109 and cannot be scheduled until its scheduling deadline. The "replenishment
0110 time" for this task (see next item) is set to be equal to the current
0111 value of the scheduling deadline;
0112
0113 - When the current time is equal to the replenishment time of a
0114 throttled task, the scheduling deadline and the remaining runtime are
0115 updated as::
0116
        scheduling deadline = scheduling deadline + period
        remaining runtime = remaining runtime + runtime
0119
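As a summary, here is a minimal, hypothetical C sketch of the wakeup and
replenishment rules above. It is not the kernel's actual code: all names
are made up for illustration, times are plain integers in a single unit,
and the wakeup inequality is rewritten in multiplied form to avoid
divisions::

    #include <stdint.h>

    struct cbs_task {
            uint64_t runtime;         /* reserved runtime per period */
            uint64_t deadline;        /* relative deadline */
            uint64_t period;
            uint64_t sched_deadline;  /* current scheduling deadline */
            int64_t remaining_runtime;
    };

    /* Rule applied when the task wakes up. */
    void cbs_wakeup(struct cbs_task *t, uint64_t now)
    {
            /*
             * Re-initialize if the old scheduling deadline is in the
             * past, or if remaining runtime / (scheduling deadline -
             * current time) > runtime / period.
             */
            if (t->sched_deadline <= now ||
                t->remaining_runtime * (int64_t)t->period >
                (int64_t)(t->sched_deadline - now) * (int64_t)t->runtime) {
                    t->sched_deadline = now + t->deadline;
                    t->remaining_runtime = t->runtime;
            }
    }

    /* Rule applied when the replenishment time of a throttled task arrives. */
    void cbs_replenish(struct cbs_task *t)
    {
            t->sched_deadline += t->period;
            t->remaining_runtime += t->runtime;
    }
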
Setting the SCHED_FLAG_DL_OVERRUN flag in sched_attr's sched_flags field
allows a task to be informed about runtime overruns through the delivery
of SIGXCPU signals.
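
For illustration, here is a hedged sketch of how an application might ask
for these notifications. It assumes the struct sched_attr definition and
the sched_setattr() wrapper shown in Appendix B; SCHED_FLAG_DL_OVERRUN is
defined locally in case older toolchain headers lack it::

    #include <signal.h>

    #ifndef SCHED_FLAG_DL_OVERRUN
    #define SCHED_FLAG_DL_OVERRUN 0x04
    #endif

    static volatile sig_atomic_t overruns;

    static void overrun_handler(int sig)
    {
            (void)sig;
            overruns++;  /* the task consumed more than its runtime */
    }

    int enable_overrun_signal(struct sched_attr *attr)
    {
            signal(SIGXCPU, overrun_handler);
            attr->sched_flags |= SCHED_FLAG_DL_OVERRUN;

            /* re-apply the already filled-in SCHED_DEADLINE parameters */
            return sched_setattr(0, attr, 0);
    }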
0123
0124
0125 2.2 Bandwidth reclaiming
0126 ------------------------
0127
0128 Bandwidth reclaiming for deadline tasks is based on the GRUB (Greedy
0129 Reclamation of Unused Bandwidth) algorithm [15, 16, 17] and it is enabled
0130 when flag SCHED_FLAG_RECLAIM is set.
0131
The following diagram illustrates the state names for tasks handled by GRUB::

                         ------------
             (d)        |   Active   |
          ------------->|            |
          |             | Contending |
          |              ------------
          |                A      |
      ----------           |      |
     |          |          |      |
     | Inactive |          |(b)   | (a)
     |          |          |      |
      ----------           |      |
          A                |      V
          |              ------------
          |             |   Active   |
           --------------|     Non    |
              (c)        | Contending |
                         ------------
0151
0152 A task can be in one of the following states:
0153
0154 - ActiveContending: if it is ready for execution (or executing);
0155
0156 - ActiveNonContending: if it just blocked and has not yet surpassed the 0-lag
0157 time;
0158
0159 - Inactive: if it is blocked and has surpassed the 0-lag time.
0160
0161 State transitions:
0162
0163 (a) When a task blocks, it does not become immediately inactive since its
0164 bandwidth cannot be immediately reclaimed without breaking the
0165 real-time guarantees. It therefore enters a transitional state called
0166 ActiveNonContending. The scheduler arms the "inactive timer" to fire at
0167 the 0-lag time, when the task's bandwidth can be reclaimed without
0168 breaking the real-time guarantees.
0169
0170 The 0-lag time for a task entering the ActiveNonContending state is
0171 computed as::
0172
                  (runtime * dl_period)
       deadline - ---------------------
                       dl_runtime
0176
0177 where runtime is the remaining runtime, while dl_runtime and dl_period
0178 are the reservation parameters.
0179
0180 (b) If the task wakes up before the inactive timer fires, the task re-enters
0181 the ActiveContending state and the "inactive timer" is canceled.
0182 In addition, if the task wakes up on a different runqueue, then
0183 the task's utilization must be removed from the previous runqueue's active
0184 utilization and must be added to the new runqueue's active utilization.
In order to avoid races between a task waking up on a runqueue and the
"inactive timer" running on a different CPU, the "dl_non_contending"
flag is used to indicate that a task is not on a runqueue but is active
(so, the flag is set when the task blocks and is cleared when the
"inactive timer" fires or when the task wakes up).
0190
0191 (c) When the "inactive timer" fires, the task enters the Inactive state and
0192 its utilization is removed from the runqueue's active utilization.
0193
0194 (d) When an inactive task wakes up, it enters the ActiveContending state and
0195 its utilization is added to the active utilization of the runqueue where
0196 it has been enqueued.
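
As an illustrative aside, the 0-lag time used in transition (a) can be
computed as in the following hedged C sketch (the function name is made
up; all parameters are in the same integer time unit)::

    #include <stdint.h>

    /*
     * 0-lag time of a task entering the ActiveNonContending state:
     * deadline - (remaining runtime * dl_period) / dl_runtime
     */
    uint64_t zero_lag_time(uint64_t deadline, uint64_t remaining_runtime,
                           uint64_t dl_runtime, uint64_t dl_period)
    {
            return deadline - (remaining_runtime * dl_period) / dl_runtime;
    }

In the example at the end of this section, T1 blocks at t = 2 with
remaining runtime 2, scheduling deadline 8, dl_runtime 4 and dl_period 8,
giving a 0-lag time of 8 - (2 * 8) / 4 = 4.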
0197
For each runqueue, the GRUB algorithm keeps track of two different bandwidths:
0199
0200 - Active bandwidth (running_bw): this is the sum of the bandwidths of all
0201 tasks in active state (i.e., ActiveContending or ActiveNonContending);
0202
- Total bandwidth (this_bw): this is the sum of the bandwidths of all tasks
  "belonging" to the runqueue, including the tasks in Inactive state.
0205
0206
The algorithm reclaims the bandwidth of the tasks in Inactive state.
It does so by decrementing the runtime of the executing task Ti at a pace
equal to::

           dq = -max{ Ui / Umax, (1 - Uinact - Uextra) } dt
0212
0213 where:
0214
- Ui is the bandwidth of task Ti;
- Umax is the maximum reclaimable utilization (subject to RT throttling
  limits);
- Uinact is the (per runqueue) inactive utilization, computed as
  (this_bw - running_bw);
- Uextra is the (per runqueue) extra reclaimable utilization
  (subject to RT throttling limits).
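
A hedged sketch of this decrement rule follows; utilizations are plain
doubles here (the kernel uses fixed-point arithmetic instead) and the
function name is made up for illustration::

    /*
     * GRUB decrement of the running task's runtime over an interval dt:
     * dq = -max{ Ui / Umax, (1 - Uinact - Uextra) } dt
     */
    double grub_dq(double Ui, double Umax, double Uinact, double Uextra,
                   double dt)
    {
            double bw_ratio = Ui / Umax;
            double reclaimable = 1.0 - Uinact - Uextra;

            return -((bw_ratio > reclaimable) ? bw_ratio : reclaimable) * dt;
    }

With Ui = 0.5, Umax = 1, Uinact = 0.5 and Uextra = 0 this gives
dq = -0.5 dt, matching what happens at time t = 4 in the example below.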
0222
0223
0224 Let's now see a trivial example of two deadline tasks with runtime equal
0225 to 4 and period equal to 8 (i.e., bandwidth equal to 0.5)::
0226
         A            Task T1
         |
         |                               |
         |                               |
         |--------                       |----
         |       |                       V
         |---|---|---|---|---|---|---|---|--------->t
         0   1   2   3   4   5   6   7   8


         A            Task T2
         |
         |                               |
         |                               |
         |       ------------------------|
         |       |                       V
         |---|---|---|---|---|---|---|---|--------->t
         0   1   2   3   4   5   6   7   8


         A            running_bw
         |
       1 -----------------               ------
         |               |               |
      0.5-               -----------------
         |                               |
         |---|---|---|---|---|---|---|---|--------->t
         0   1   2   3   4   5   6   7   8
0255
0256
0257 - Time t = 0:
0258
0259 Both tasks are ready for execution and therefore in ActiveContending state.
0260 Suppose Task T1 is the first task to start execution.
0261 Since there are no inactive tasks, its runtime is decreased as dq = -1 dt.
0262
0263 - Time t = 2:
0264
  Suppose that task T1 blocks.
  Task T1 therefore enters the ActiveNonContending state. Since its remaining
  runtime is equal to 2, its 0-lag time is equal to t = 4.
  Task T2 starts execution, with runtime still decreased as dq = -1 dt since
  there are no inactive tasks.
0270
0271 - Time t = 4:
0272
  This is the 0-lag time for Task T1. Since it has not woken up in the
  meantime, it enters the Inactive state. Its bandwidth is removed from
  running_bw.
0276 Task T2 continues its execution. However, its runtime is now decreased as
0277 dq = - 0.5 dt because Uinact = 0.5.
0278 Task T2 therefore reclaims the bandwidth unused by Task T1.
0279
0280 - Time t = 8:
0281
0282 Task T1 wakes up. It enters the ActiveContending state again, and the
0283 running_bw is incremented.
0284
0285
0286 2.3 Energy-aware scheduling
0287 ---------------------------
0288
When cpufreq's schedutil governor is selected, SCHED_DEADLINE implements the
GRUB-PA [19] algorithm, reducing the CPU operating frequency to the minimum
value that still allows the deadlines to be met. This behavior is currently
implemented only for ARM architectures.

Particular care must be taken when the time needed to change frequency is of
the same order of magnitude as the reservation period. In such cases, setting
a fixed CPU frequency results in fewer deadline misses.
0297
0298
0299 3. Scheduling Real-Time Tasks
0300 =============================
0301
0302
0303
0304 .. BIG FAT WARNING ******************************************************
0305
0306 .. warning::
0307
   This section contains a (not thorough) summary of classical deadline
   scheduling theory and of how it applies to SCHED_DEADLINE.
   The reader can "safely" skip to Section 4 if only interested in seeing
   how the scheduling policy can be used. We nevertheless strongly recommend
   coming back here and continuing reading (once the urge for testing is
   satisfied :P) to make sure all technical details are fully understood.
0314
0315 .. ************************************************************************
0316
There are no limitations on what kind of task can exploit this new
scheduling discipline, although it is particularly suited for periodic
or sporadic real-time tasks that need guarantees on their timing behavior,
e.g., multimedia, streaming, and control applications.
0321
0322 3.1 Definitions
---------------
0324
A typical real-time task is composed of a repetition of computation phases
(task instances, or jobs) which are activated in a periodic or sporadic
fashion.
Each job J_j (where J_j is the j^th job of the task) is characterized by an
arrival time r_j (the time when the job starts), an amount of computation
time c_j needed to finish the job, and a job absolute deadline d_j, which
is the time within which the job should be finished. The maximum execution
time max{c_j} is called "Worst Case Execution Time" (WCET) for the task.
A real-time task can be periodic with period P if r_{j+1} = r_j + P, or
sporadic with minimum inter-arrival time P if r_{j+1} >= r_j + P. Finally,
d_j = r_j + D, where D is the task's relative deadline.
0336 Summing up, a real-time task can be described as
0337
0338 Task = (WCET, D, P)
0339
0340 The utilization of a real-time task is defined as the ratio between its
0341 WCET and its period (or minimum inter-arrival time), and represents
0342 the fraction of CPU time needed to execute the task.
0343
0344 If the total utilization U=sum(WCET_i/P_i) is larger than M (with M equal
0345 to the number of CPUs), then the scheduler is unable to respect all the
0346 deadlines.
0347 Note that total utilization is defined as the sum of the utilizations
0348 WCET_i/P_i over all the real-time tasks in the system. When considering
0349 multiple real-time tasks, the parameters of the i-th task are indicated
0350 with the "_i" suffix.
Moreover, if the total utilization is larger than M, then real-time tasks
risk starving non-real-time tasks.
0353 If, instead, the total utilization is smaller than M, then non real-time
0354 tasks will not be starved and the system might be able to respect all the
0355 deadlines.
0356 As a matter of fact, in this case it is possible to provide an upper bound
0357 for tardiness (defined as the maximum between 0 and the difference
0358 between the finishing time of a job and its absolute deadline).
More precisely, it can be proven that using a global EDF scheduler the
maximum tardiness of each task is smaller than or equal to

	((M − 1) · WCET_max − WCET_min)/(M − (M − 2) · U_max) + WCET_max
0363
0364 where WCET_max = max{WCET_i} is the maximum WCET, WCET_min=min{WCET_i}
0365 is the minimum WCET, and U_max = max{WCET_i/P_i} is the maximum
0366 utilization[12].
0367
0368 3.2 Schedulability Analysis for Uniprocessor Systems
0369 ----------------------------------------------------
0370
0371 If M=1 (uniprocessor system), or in case of partitioned scheduling (each
0372 real-time task is statically assigned to one and only one CPU), it is
0373 possible to formally check if all the deadlines are respected.
If D_i = P_i for all tasks, then EDF is able to respect all the deadlines
of all the tasks executing on a CPU if and only if the total utilization
of the tasks running on such a CPU is smaller than or equal to 1.
If D_i != P_i for some task, then it is possible to define the density of
a task as WCET_i/min{D_i,P_i}, and EDF is able to respect all the deadlines
of all the tasks running on a CPU if the sum of the densities of the tasks
running on such a CPU is smaller than or equal to 1:
0381
0382 sum(WCET_i / min{D_i, P_i}) <= 1
0383
0384 It is important to notice that this condition is only sufficient, and not
0385 necessary: there are task sets that are schedulable, but do not respect the
condition. For example, consider the task set {Task_1,Task_2} composed of
Task_1=(50ms,50ms,100ms) and Task_2=(10ms,100ms,100ms).
0388 EDF is clearly able to schedule the two tasks without missing any deadline
0389 (Task_1 is scheduled as soon as it is released, and finishes just in time
0390 to respect its deadline; Task_2 is scheduled immediately after Task_1, hence
0391 its response time cannot be larger than 50ms + 10ms = 60ms) even if
0392
0393 50 / min{50,100} + 10 / min{100, 100} = 50 / 50 + 10 / 100 = 1.1
0394
0395 Of course it is possible to test the exact schedulability of tasks with
0396 D_i != P_i (checking a condition that is both sufficient and necessary),
0397 but this cannot be done by comparing the total utilization or density with
0398 a constant. Instead, the so called "processor demand" approach can be used,
0399 computing the total amount of CPU time h(t) needed by all the tasks to
0400 respect all of their deadlines in a time interval of size t, and comparing
0401 such a time with the interval size t. If h(t) is smaller than t (that is,
0402 the amount of time needed by the tasks in a time interval of size t is
0403 smaller than the size of the interval) for all the possible values of t, then
0404 EDF is able to schedule the tasks respecting all of their deadlines. Since
0405 performing this check for all possible values of t is impossible, it has been
0406 proven[4,5,6] that it is sufficient to perform the test for values of t
0407 between 0 and a maximum value L. The cited papers contain all of the
0408 mathematical details and explain how to compute h(t) and L.
0409 In any case, this kind of analysis is too complex as well as too
time-consuming to be performed on-line. Hence, as explained in Section 4,
Linux uses an admission test based on the tasks' utilizations.
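
As an illustration only, here is a minimal sketch of the processor demand
function for sporadic tasks, using the standard demand-bound formula from
the cited papers (names and types are ours, not the kernel's)::

    #include <stdint.h>

    struct rt_task {
            uint64_t wcet;      /* WCET_i */
            uint64_t deadline;  /* D_i (relative) */
            uint64_t period;    /* P_i */
    };

    /*
     * Processor demand h(t): total CPU time that jobs having both
     * arrival time and absolute deadline inside an interval of length
     * t may require:
     * h(t) = sum_i max(0, floor((t - D_i) / P_i) + 1) * WCET_i
     */
    uint64_t h(const struct rt_task *tasks, int n, uint64_t t)
    {
            uint64_t demand = 0;
            int i;

            for (i = 0; i < n; i++) {
                    if (t >= tasks[i].deadline)
                            demand += ((t - tasks[i].deadline) /
                                       tasks[i].period + 1) * tasks[i].wcet;
            }
            return demand;
    }

The test then amounts to checking that h(t) <= t for all values of t
between 0 and the maximum value L mentioned above.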
0412
0413 3.3 Schedulability Analysis for Multiprocessor Systems
0414 ------------------------------------------------------
0415
On multiprocessor systems with global EDF scheduling (non partitioned
systems), a sufficient test for schedulability cannot be based on the
utilizations or densities: it can be shown that, even if D_i = P_i, task
sets with utilizations slightly larger than 1 can miss deadlines regardless
of the number of CPUs.
0421
0422 Consider a set {Task_1,...Task_{M+1}} of M+1 tasks on a system with M
0423 CPUs, with the first task Task_1=(P,P,P) having period, relative deadline
0424 and WCET equal to P. The remaining M tasks Task_i=(e,P-1,P-1) have an
0425 arbitrarily small worst case execution time (indicated as "e" here) and a
0426 period smaller than the one of the first task. Hence, if all the tasks
0427 activate at the same time t, global EDF schedules these M tasks first
0428 (because their absolute deadlines are equal to t + P - 1, hence they are
0429 smaller than the absolute deadline of Task_1, which is t + P). As a
0430 result, Task_1 can be scheduled only at time t + e, and will finish at
0431 time t + e + P, after its absolute deadline. The total utilization of the
0432 task set is U = M · e / (P - 1) + P / P = M · e / (P - 1) + 1, and for small
0433 values of e this can become very close to 1. This is known as "Dhall's
0434 effect"[7]. Note: the example in the original paper by Dhall has been
0435 slightly simplified here (for example, Dhall more correctly computed
0436 lim_{e->0}U).
0437
0438 More complex schedulability tests for global EDF have been developed in
0439 real-time literature[8,9], but they are not based on a simple comparison
0440 between total utilization (or density) and a fixed constant. If all tasks
0441 have D_i = P_i, a sufficient schedulability condition can be expressed in
0442 a simple way:
0443
0444 sum(WCET_i / P_i) <= M - (M - 1) · U_max
0445
where U_max = max{WCET_i / P_i}[10]. Notice that for U_max = 1,
M - (M - 1) · U_max becomes M - M + 1 = 1 and this schedulability condition
just confirms Dhall's effect. A more complete survey of the literature
0449 about schedulability tests for multi-processor real-time scheduling can be
0450 found in [11].
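
A hedged sketch of this sufficient test (often called the GFB test, after
[10]) follows; utilizations are plain doubles and the function name is
ours::

    /*
     * Sufficient global-EDF schedulability test for D_i = P_i [10]:
     * sum(WCET_i / P_i) <= M - (M - 1) * U_max
     */
    int gfb_schedulable(const double *util, int n, int m)
    {
            double sum = 0.0, umax = 0.0;
            int i;

            for (i = 0; i < n; i++) {
                    sum += util[i];
                    if (util[i] > umax)
                            umax = util[i];
            }
            return sum <= (double)m - (double)(m - 1) * umax;
    }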
0451
0452 As seen, enforcing that the total utilization is smaller than M does not
0453 guarantee that global EDF schedules the tasks without missing any deadline
0454 (in other words, global EDF is not an optimal scheduling algorithm). However,
0455 a total utilization smaller than M is enough to guarantee that non real-time
0456 tasks are not starved and that the tardiness of real-time tasks has an upper
0457 bound[12] (as previously noted). Different bounds on the maximum tardiness
0458 experienced by real-time tasks have been developed in various papers[13,14],
but the theoretical result that is important for SCHED_DEADLINE is that if
the total utilization is smaller than or equal to M then the response times
of the tasks are limited.
0462
0463 3.4 Relationship with SCHED_DEADLINE Parameters
0464 -----------------------------------------------
0465
0466 Finally, it is important to understand the relationship between the
0467 SCHED_DEADLINE scheduling parameters described in Section 2 (runtime,
0468 deadline and period) and the real-time task parameters (WCET, D, P)
described in this section. Note that a task's temporal constraints are
represented by its absolute deadlines d_j = r_j + D described above, while
SCHED_DEADLINE schedules the tasks according to scheduling deadlines (see
Section 2).
0473 If an admission test is used to guarantee that the scheduling deadlines
0474 are respected, then SCHED_DEADLINE can be used to schedule real-time tasks
0475 guaranteeing that all the jobs' deadlines of a task are respected.
0476 In order to do this, a task must be scheduled by setting:
0477
0478 - runtime >= WCET
0479 - deadline = D
0480 - period <= P
0481
IOW, if runtime >= WCET and if period <= P, then the scheduling deadlines
and the absolute deadlines (d_j) coincide, so a proper admission control
allows the jobs' absolute deadlines of this task to be respected (this is
what is called "hard schedulability property" and is an extension of Lemma 1
of [2]).
0486 Notice that if runtime > deadline the admission control will surely reject
0487 this task, as it is not possible to respect its temporal constraints.
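
For instance, a task with WCET = 10ms, relative deadline D = 30ms and
period P = 100ms could be mapped as in the following hedged sketch, which
assumes the struct sched_attr definition and the sched_setattr() wrapper
shown in Appendix B (all times in nanoseconds)::

    #include <stdint.h>

    /*
     * Map the real-time task model (WCET, D, P) to SCHED_DEADLINE
     * parameters following the rules above. Illustrative only.
     */
    int set_dl_params(struct sched_attr *attr,
                      uint64_t wcet, uint64_t d, uint64_t p)
    {
            attr->sched_policy   = SCHED_DEADLINE;
            attr->sched_runtime  = wcet;  /* runtime >= WCET */
            attr->sched_deadline = d;     /* deadline = D */
            attr->sched_period   = p;     /* period <= P */

            return sched_setattr(0, attr, 0);
    }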
0488
0489 References:
0490
0491 1 - C. L. Liu and J. W. Layland. Scheduling algorithms for multiprogram-
0492 ming in a hard-real-time environment. Journal of the Association for
0493 Computing Machinery, 20(1), 1973.
2 - L. Abeni, G. Buttazzo. Integrating Multimedia Applications in Hard
0495 Real-Time Systems. Proceedings of the 19th IEEE Real-time Systems
0496 Symposium, 1998. http://retis.sssup.it/~giorgio/paps/1998/rtss98-cbs.pdf
0497 3 - L. Abeni. Server Mechanisms for Multimedia Applications. ReTiS Lab
0498 Technical Report. http://disi.unitn.it/~abeni/tr-98-01.pdf
0499 4 - J. Y. Leung and M.L. Merril. A Note on Preemptive Scheduling of
0500 Periodic, Real-Time Tasks. Information Processing Letters, vol. 11,
0501 no. 3, pp. 115-118, 1980.
0502 5 - S. K. Baruah, A. K. Mok and L. E. Rosier. Preemptively Scheduling
0503 Hard-Real-Time Sporadic Tasks on One Processor. Proceedings of the
0504 11th IEEE Real-time Systems Symposium, 1990.
0505 6 - S. K. Baruah, L. E. Rosier and R. R. Howell. Algorithms and Complexity
0506 Concerning the Preemptive Scheduling of Periodic Real-Time tasks on
0507 One Processor. Real-Time Systems Journal, vol. 4, no. 2, pp 301-324,
0508 1990.
0509 7 - S. J. Dhall and C. L. Liu. On a real-time scheduling problem. Operations
0510 research, vol. 26, no. 1, pp 127-140, 1978.
0511 8 - T. Baker. Multiprocessor EDF and Deadline Monotonic Schedulability
0512 Analysis. Proceedings of the 24th IEEE Real-Time Systems Symposium, 2003.
0513 9 - T. Baker. An Analysis of EDF Schedulability on a Multiprocessor.
0514 IEEE Transactions on Parallel and Distributed Systems, vol. 16, no. 8,
0515 pp 760-768, 2005.
0516 10 - J. Goossens, S. Funk and S. Baruah, Priority-Driven Scheduling of
0517 Periodic Task Systems on Multiprocessors. Real-Time Systems Journal,
0518 vol. 25, no. 2–3, pp. 187–205, 2003.
0519 11 - R. Davis and A. Burns. A Survey of Hard Real-Time Scheduling for
0520 Multiprocessor Systems. ACM Computing Surveys, vol. 43, no. 4, 2011.
0521 http://www-users.cs.york.ac.uk/~robdavis/papers/MPSurveyv5.0.pdf
0522 12 - U. C. Devi and J. H. Anderson. Tardiness Bounds under Global EDF
0523 Scheduling on a Multiprocessor. Real-Time Systems Journal, vol. 32,
0524 no. 2, pp 133-189, 2008.
0525 13 - P. Valente and G. Lipari. An Upper Bound to the Lateness of Soft
0526 Real-Time Tasks Scheduled by EDF on Multiprocessors. Proceedings of
0527 the 26th IEEE Real-Time Systems Symposium, 2005.
0528 14 - J. Erickson, U. Devi and S. Baruah. Improved tardiness bounds for
0529 Global EDF. Proceedings of the 22nd Euromicro Conference on
0530 Real-Time Systems, 2010.
0531 15 - G. Lipari, S. Baruah, Greedy reclamation of unused bandwidth in
0532 constant-bandwidth servers, 12th IEEE Euromicro Conference on Real-Time
0533 Systems, 2000.
0534 16 - L. Abeni, J. Lelli, C. Scordino, L. Palopoli, Greedy CPU reclaiming for
0535 SCHED DEADLINE. In Proceedings of the Real-Time Linux Workshop (RTLWS),
0536 Dusseldorf, Germany, 2014.
0537 17 - L. Abeni, G. Lipari, A. Parri, Y. Sun, Multicore CPU reclaiming: parallel
0538 or sequential?. In Proceedings of the 31st Annual ACM Symposium on Applied
0539 Computing, 2016.
0540 18 - J. Lelli, C. Scordino, L. Abeni, D. Faggioli, Deadline scheduling in the
0541 Linux kernel, Software: Practice and Experience, 46(6): 821-839, June
0542 2016.
0543 19 - C. Scordino, L. Abeni, J. Lelli, Energy-Aware Real-Time Scheduling in
0544 the Linux Kernel, 33rd ACM/SIGAPP Symposium On Applied Computing (SAC
0545 2018), Pau, France, April 2018.
0546
0547
0548 4. Bandwidth management
0549 =======================
0550
0551 As previously mentioned, in order for -deadline scheduling to be
0552 effective and useful (that is, to be able to provide "runtime" time units
0553 within "deadline"), it is important to have some method to keep the allocation
0554 of the available fractions of CPU time to the various tasks under control.
0555 This is usually called "admission control" and if it is not performed, then
0556 no guarantee can be given on the actual scheduling of the -deadline tasks.
0557
0558 As already stated in Section 3, a necessary condition to be respected to
0559 correctly schedule a set of real-time tasks is that the total utilization
0560 is smaller than M. When talking about -deadline tasks, this requires that
0561 the sum of the ratio between runtime and period for all tasks is smaller
0562 than M. Notice that the ratio runtime/period is equivalent to the utilization
0563 of a "traditional" real-time task, and is also often referred to as
0564 "bandwidth".
0565 The interface used to control the CPU bandwidth that can be allocated
0566 to -deadline tasks is similar to the one already used for -rt
0567 tasks with real-time group scheduling (a.k.a. RT-throttling - see
0568 Documentation/scheduler/sched-rt-group.rst), and is based on readable/
0569 writable control files located in procfs (for system wide settings).
0570 Notice that per-group settings (controlled through cgroupfs) are still not
0571 defined for -deadline tasks, because more discussion is needed in order to
0572 figure out how we want to manage SCHED_DEADLINE bandwidth at the task group
0573 level.
0574
0575 A main difference between deadline bandwidth management and RT-throttling
0576 is that -deadline tasks have bandwidth on their own (while -rt ones don't!),
0577 and thus we don't need a higher level throttling mechanism to enforce the
0578 desired bandwidth. In other words, this means that interface parameters are
0579 only used at admission control time (i.e., when the user calls
0580 sched_setattr()). Scheduling is then performed considering actual tasks'
0581 parameters, so that CPU bandwidth is allocated to SCHED_DEADLINE tasks
0582 respecting their needs in terms of granularity. Therefore, using this simple
0583 interface we can put a cap on total utilization of -deadline tasks (i.e.,
0584 \Sum (runtime_i / period_i) < global_dl_utilization_cap).
0585
4.1 System-wide settings
------------------------

The system-wide settings are configured under the /proc virtual file system.
0590
0591 For now the -rt knobs are used for -deadline admission control and the
0592 -deadline runtime is accounted against the -rt runtime. We realize that this
0593 isn't entirely desirable; however, it is better to have a small interface for
now, and be able to change it easily later. The ideal situation (see Section
6, Future plans) is to run -rt tasks from a -deadline server; in which case
the -rt bandwidth is a direct subset of dl_bw.
0597
0598 This means that, for a root_domain comprising M CPUs, -deadline tasks
0599 can be created while the sum of their bandwidths stays below:
0600
0601 M * (sched_rt_runtime_us / sched_rt_period_us)
0602
It is also possible to disable this bandwidth management logic, and thus be
free to oversubscribe the system up to any arbitrary level.
This is done by writing -1 in /proc/sys/kernel/sched_rt_runtime_us.
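
As a hedged illustration, the resulting cap can be computed from the
procfs knobs as follows (plain doubles, minimal error handling; the
function name is ours)::

    #include <stdio.h>

    /*
     * Compute the -deadline bandwidth cap for a root_domain of m CPUs:
     * m * (sched_rt_runtime_us / sched_rt_period_us).
     * Returns a negative value if admission control is disabled.
     */
    double dl_bandwidth_cap(int m)
    {
            long runtime_us = 0, period_us = 1;
            FILE *f;

            f = fopen("/proc/sys/kernel/sched_rt_runtime_us", "r");
            if (f) {
                    fscanf(f, "%ld", &runtime_us);
                    fclose(f);
            }
            f = fopen("/proc/sys/kernel/sched_rt_period_us", "r");
            if (f) {
                    fscanf(f, "%ld", &period_us);
                    fclose(f);
            }

            if (runtime_us < 0)  /* bandwidth management disabled */
                    return -1.0;

            return m * ((double)runtime_us / (double)period_us);
    }

With the defaults described in Section 4.3 (runtime 950000, period 1000000)
and 4 CPUs, the cap is 4 * 0.95 = 3.8.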
0606
0607
0608 4.2 Task interface
0609 ------------------
0610
In general, specifying a periodic/sporadic task that executes for a given
amount of runtime at each instance, and that is scheduled according to the
urgency of its own timing constraints, requires a way of declaring:
0614
0615 - a (maximum/typical) instance execution time,
0616 - a minimum interval between consecutive instances,
0617 - a time constraint by which each instance must be completed.
0618
0619 Therefore:
0620
 * a new struct sched_attr, containing all the necessary fields, is
   provided;
 * the new scheduling-related syscalls that manipulate it, i.e.,
   sched_setattr() and sched_getattr(), are implemented.
0625
0626 For debugging purposes, the leftover runtime and absolute deadline of a
0627 SCHED_DEADLINE task can be retrieved through /proc/<pid>/sched (entries
0628 dl.runtime and dl.deadline, both values in ns). A programmatic way to
0629 retrieve these values from production code is under discussion.
0630
0631
0632 4.3 Default behavior
0633 ---------------------
0634
The default value for SCHED_DEADLINE bandwidth is to have rt_runtime equal to
950000. With rt_period equal to 1000000, by default, this means that -deadline
tasks can use at most 95% of the CPU time, multiplied by the number of CPUs
that compose the root_domain, for each root_domain.
This means that non -deadline tasks will receive at least 5% of the CPU time,
and that -deadline tasks will receive their runtime with a guaranteed
worst-case delay with respect to the "deadline" parameter. If "deadline" = "period"
0642 and the cpuset mechanism is used to implement partitioned scheduling (see
0643 Section 5), then this simple setting of the bandwidth management is able to
0644 deterministically guarantee that -deadline tasks will receive their runtime
0645 in a period.
0646
0647 Finally, notice that in order not to jeopardize the admission control a
0648 -deadline task cannot fork.
0649
0650
0651 4.4 Behavior of sched_yield()
0652 -----------------------------
0653
When a SCHED_DEADLINE task calls sched_yield(), it gives up its remaining
runtime and is immediately throttled until the next period, when its
runtime will be replenished (a special flag dl_yielded is set and used to
correctly handle throttling and runtime replenishment after a call to
sched_yield()).
0659
This behavior of sched_yield() allows the task to wake up exactly at
the beginning of the next period. Also, this may be useful in the
future with bandwidth reclaiming mechanisms, where sched_yield() will
make the leftover runtime available for reclamation by other
SCHED_DEADLINE tasks.
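
A hedged sketch of the canonical usage pattern this enables, where
do_work() is a placeholder for one instance of the task's job and done is
a termination flag set elsewhere::

    #include <sched.h>

    extern void do_work(void);  /* one job, shorter than the runtime */
    extern volatile int done;

    void periodic_loop(void)
    {
            while (!done) {
                    do_work();
                    /*
                     * Give up the remaining runtime and throttle until
                     * the next period, when the runtime is replenished.
                     */
                    sched_yield();
            }
    }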
0665
0666
0667 5. Tasks CPU affinity
0668 =====================
0669
-deadline tasks cannot have an affinity mask smaller than the entire
0671 root_domain they are created on. However, affinities can be specified
0672 through the cpuset facility (Documentation/admin-guide/cgroup-v1/cpusets.rst).
0673
0674 5.1 SCHED_DEADLINE and cpusets HOWTO
0675 ------------------------------------
0676
0677 An example of a simple configuration (pin a -deadline task to CPU0)
0678 follows (rt-app is used to create a -deadline task)::
0679
0680 mkdir /dev/cpuset
0681 mount -t cgroup -o cpuset cpuset /dev/cpuset
0682 cd /dev/cpuset
0683 mkdir cpu0
0684 echo 0 > cpu0/cpuset.cpus
0685 echo 0 > cpu0/cpuset.mems
0686 echo 1 > cpuset.cpu_exclusive
0687 echo 0 > cpuset.sched_load_balance
0688 echo 1 > cpu0/cpuset.cpu_exclusive
0689 echo 1 > cpu0/cpuset.mem_exclusive
0690 echo $$ > cpu0/tasks
0691 rt-app -t 100000:10000:d:0 -D5 # it is now actually superfluous to specify
0692 # task affinity
0693
0694 6. Future plans
0695 ===============
0696
0697 Still missing:
0698
0699 - programmatic way to retrieve current runtime and absolute deadline
0700 - refinements to deadline inheritance, especially regarding the possibility
0701 of retaining bandwidth isolation among non-interacting tasks. This is
0702 being studied from both theoretical and practical points of view, and
0703 hopefully we should be able to produce some demonstrative code soon;
0704 - (c)group based bandwidth management, and maybe scheduling;
- access control for non-root users (and related security concerns to
  address): what is the best way to allow unprivileged use of the mechanisms,
  and how can non-root users be prevented from "cheating" the system?
0708
As already discussed, we are also planning to merge this work with the EDF
throttling patches
[https://lore.kernel.org/r/cover.1266931410.git.fabio@helm.retis], but we
are still in the preliminary phases of the merge and we really seek feedback
to help us decide on the direction it should take.
0713
0714 Appendix A. Test suite
0715 ======================
0716
0717 The SCHED_DEADLINE policy can be easily tested using two applications that
0718 are part of a wider Linux Scheduler validation suite. The suite is
0719 available as a GitHub repository: https://github.com/scheduler-tools.
0720
0721 The first testing application is called rt-app and can be used to
0722 start multiple threads with specific parameters. rt-app supports
0723 SCHED_{OTHER,FIFO,RR,DEADLINE} scheduling policies and their related
0724 parameters (e.g., niceness, priority, runtime/deadline/period). rt-app
0725 is a valuable tool, as it can be used to synthetically recreate certain
0726 workloads (maybe mimicking real use-cases) and evaluate how the scheduler
0727 behaves under such workloads. In this way, results are easily reproducible.
0728 rt-app is available at: https://github.com/scheduler-tools/rt-app.
0729
0730 Thread parameters can be specified from the command line, with something like
0731 this::
0732
0733 # rt-app -t 100000:10000:d -t 150000:20000:f:10 -D5
0734
0735 The above creates 2 threads. The first one, scheduled by SCHED_DEADLINE,
0736 executes for 10ms every 100ms. The second one, scheduled at SCHED_FIFO
0737 priority 10, executes for 20ms every 150ms. The test will run for a total
0738 of 5 seconds.
0739
0740 More interestingly, configurations can be described with a json file that
0741 can be passed as input to rt-app with something like this::
0742
0743 # rt-app my_config.json
0744
0745 The parameters that can be specified with the second method are a superset
0746 of the command line options. Please refer to rt-app documentation for more
0747 details (`<rt-app-sources>/doc/*.json`).
0748
The second testing application is a modification of schedtool, called
schedtool-dl, which can be used to set up SCHED_DEADLINE parameters for a
certain pid/application. schedtool-dl is available at:
https://github.com/scheduler-tools/schedtool-dl.git.
0753
0754 The usage is straightforward::
0755
0756 # schedtool -E -t 10000000:100000000 -e ./my_cpuhog_app
0757
0758 With this, my_cpuhog_app is put to run inside a SCHED_DEADLINE reservation
0759 of 10ms every 100ms (note that parameters are expressed in microseconds).
0760 You can also use schedtool to create a reservation for an already running
0761 application, given that you know its pid::
0762
0763 # schedtool -E -t 10000000:100000000 my_app_pid
0764
0765 Appendix B. Minimal main()
0766 ==========================
0767
0768 We provide in what follows a simple (ugly) self-contained code snippet
0769 showing how SCHED_DEADLINE reservations can be created by a real-time
0770 application developer::
0771
  #define _GNU_SOURCE
  #include <unistd.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <time.h>
  #include <linux/unistd.h>
  #include <linux/kernel.h>
  #include <linux/types.h>
  #include <sys/syscall.h>
  #include <pthread.h>

  #define gettid() syscall(__NR_gettid)

  #define SCHED_DEADLINE 6

  /* XXX use the proper syscall numbers */
  #ifdef __x86_64__
  #define __NR_sched_setattr 314
  #define __NR_sched_getattr 315
  #endif

  #ifdef __i386__
  #define __NR_sched_setattr 351
  #define __NR_sched_getattr 352
  #endif

  #ifdef __arm__
  #define __NR_sched_setattr 380
  #define __NR_sched_getattr 381
  #endif

  static volatile int done;

  struct sched_attr {
          __u32 size;

          __u32 sched_policy;
          __u64 sched_flags;

          /* SCHED_NORMAL, SCHED_BATCH */
          __s32 sched_nice;

          /* SCHED_FIFO, SCHED_RR */
          __u32 sched_priority;

          /* SCHED_DEADLINE (nsec) */
          __u64 sched_runtime;
          __u64 sched_deadline;
          __u64 sched_period;
  };

  int sched_setattr(pid_t pid,
                    const struct sched_attr *attr,
                    unsigned int flags)
  {
          return syscall(__NR_sched_setattr, pid, attr, flags);
  }

  int sched_getattr(pid_t pid,
                    struct sched_attr *attr,
                    unsigned int size,
                    unsigned int flags)
  {
          return syscall(__NR_sched_getattr, pid, attr, size, flags);
  }

  void *run_deadline(void *data)
  {
          struct sched_attr attr;
          int x = 0;
          int ret;
          unsigned int flags = 0;

          printf("deadline thread started [%ld]\n", gettid());

          attr.size = sizeof(attr);
          attr.sched_flags = 0;
          attr.sched_nice = 0;
          attr.sched_priority = 0;

          /* This creates a 10ms/30ms reservation */
          attr.sched_policy = SCHED_DEADLINE;
          attr.sched_runtime = 10 * 1000 * 1000;
          attr.sched_period = attr.sched_deadline = 30 * 1000 * 1000;

          ret = sched_setattr(0, &attr, flags);
          if (ret < 0) {
                  done = 0;
                  perror("sched_setattr");
                  exit(-1);
          }

          /* Busy loop: the reservation throttles it to 10ms every 30ms */
          while (!done) {
                  x++;
          }

          printf("deadline thread dies [%ld]\n", gettid());
          return NULL;
  }

  int main(int argc, char **argv)
  {
          pthread_t thread;

          printf("main thread [%ld]\n", gettid());

          pthread_create(&thread, NULL, run_deadline, NULL);

          sleep(10);

          done = 1;
          pthread_join(thread, NULL);

          printf("main dies [%ld]\n", gettid());
          return 0;
  }