=========================================================
Cluster-wide Power-up/power-down race avoidance algorithm
=========================================================

This file documents the algorithm which is used to coordinate CPU and
cluster setup and teardown operations and to manage hardware coherency
controls safely.

The section "Rationale" explains what the algorithm is for and why it is
needed. "Basic model" explains general concepts using a simplified view
of the system. The other sections explain the actual details of the
algorithm in use.


Rationale
---------

In a system containing multiple CPUs, it is desirable to have the
ability to turn off individual CPUs when the system is idle, reducing
power consumption and thermal dissipation.

In a system containing multiple clusters of CPUs, it is also desirable
to have the ability to turn off entire clusters.

Turning entire clusters off and on is a risky business, because it
involves performing potentially destructive operations affecting a group
of independently running CPUs, while the OS continues to run. This
means that we need some coordination in order to ensure that critical
cluster-level operations are only performed when it is truly safe to do
so.

Simple locking may not be sufficient to solve this problem, because
mechanisms like Linux spinlocks may rely on coherency mechanisms which
are not immediately enabled when a cluster powers up. Since enabling or
disabling those mechanisms may itself be a non-atomic operation (such as
writing some hardware registers and invalidating large caches), other
methods of coordination are required in order to guarantee safe
power-down and power-up at the cluster level.

The mechanism presented in this document describes a coherent memory
based protocol for performing the needed coordination. It aims to be as
lightweight as possible, while providing the required safety properties.


Basic model
-----------

Each cluster and CPU is assigned a state, as follows:

    - DOWN
    - COMING_UP
    - UP
    - GOING_DOWN

::

        +---------> UP ----------+
        |                        v

    COMING_UP                GOING_DOWN

        ^                        |
        +--------- DOWN <--------+


DOWN:
    The CPU or cluster is not coherent, and is either powered off or
    suspended, or is ready to be powered off or suspended.

COMING_UP:
    The CPU or cluster has committed to moving to the UP state.
    It may be part way through the process of initialisation and
    enabling coherency.

UP:
    The CPU or cluster is active and coherent at the hardware
    level. A CPU in this state is not necessarily being used
    actively by the kernel.

GOING_DOWN:
    The CPU or cluster has committed to moving to the DOWN
    state. It may be part way through the process of teardown and
    coherency exit.


Each CPU has one of these states assigned to it at any point in time.
The CPU states are described in the "CPU state" section, below.

Each cluster is also assigned a state, but it is necessary to split the
state value into two parts (the "cluster" state and "inbound" state) and
to introduce additional states in order to avoid races between different
CPUs in the cluster simultaneously modifying the state. The cluster-
level states are described in the "Cluster state" section.

To help distinguish the CPU states from cluster states in this
discussion, the state names are given a `CPU_` prefix for the CPU states,
and a `CLUSTER_` or `INBOUND_` prefix for the cluster states.

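For illustration, each of these states can be represented as a small
integer value stored in memory. The following sketch shows one possible
set of definitions using the prefixes described above; the values, and
the header they would live in, are illustrative assumptions rather than
the ones used by the actual ARM implementation::

    /* Per-CPU states (illustrative values only) */
    #define CPU_DOWN                0
    #define CPU_COMING_UP           1
    #define CPU_UP                  2
    #define CPU_GOING_DOWN          3

    /* Cluster states, as seen from the outbound side */
    #define CLUSTER_DOWN            0
    #define CLUSTER_UP              1
    #define CLUSTER_GOING_DOWN      2

    /* Cluster states, as seen from the inbound side */
    #define INBOUND_NOT_COMING_UP   0
    #define INBOUND_COMING_UP       1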

CPU state
---------

In this algorithm, each individual core in a multi-core processor is
referred to as a "CPU". CPUs are assumed to be single-threaded:
therefore, a CPU can only be doing one thing at a single point in time.

This means that CPUs fit the basic model closely.

The algorithm defines the following states for each CPU in the system:

    - CPU_DOWN
    - CPU_COMING_UP
    - CPU_UP
    - CPU_GOING_DOWN

::

         cluster setup and
        CPU setup complete          policy decision
          +-----------> CPU_UP ------------+
          |                                v

    CPU_COMING_UP                   CPU_GOING_DOWN

          ^                                |
          +----------- CPU_DOWN <----------+
        policy decision            CPU teardown complete
      or hardware event


The definitions of the four states correspond closely to the states of
the basic model.

Transitions between states occur as follows.

A trigger event (spontaneous) means that the CPU can transition to the
next state as a result of making local progress only, with no
requirement for any external event to happen.


CPU_DOWN:
    A CPU reaches the CPU_DOWN state when it is ready for
    power-down. On reaching this state, the CPU will typically
    power itself down or suspend itself, via a WFI instruction or a
    firmware call.

    Next state:
        CPU_COMING_UP
    Conditions:
        none

    Trigger events:
        a) an explicit hardware power-up operation, resulting
           from a policy decision on another CPU;

        b) a hardware event, such as an interrupt.


CPU_COMING_UP:
    A CPU cannot start participating in hardware coherency until the
    cluster is set up and coherent. If the cluster is not ready,
    then the CPU will wait in the CPU_COMING_UP state until the
    cluster has been set up.

    Next state:
        CPU_UP
    Conditions:
        The CPU's parent cluster must be in CLUSTER_UP.
    Trigger events:
        Transition of the parent cluster to CLUSTER_UP.

    Refer to the "Cluster state" section for a description of the
    CLUSTER_UP state.


CPU_UP:
    When a CPU reaches the CPU_UP state, it is safe for the CPU to
    start participating in local coherency.

    This is done by jumping to the kernel's CPU resume code.

    Note that the definition of this state is slightly different
    from the basic model definition: CPU_UP does not mean that the
    CPU is coherent yet, but it does mean that it is safe to resume
    the kernel. The kernel handles the rest of the resume
    procedure, so the remaining steps are not visible as part of the
    race avoidance algorithm.

    The CPU remains in this state until an explicit policy decision
    is made to shut down or suspend the CPU.

    Next state:
        CPU_GOING_DOWN
    Conditions:
        none
    Trigger events:
        explicit policy decision


CPU_GOING_DOWN:
    While in this state, the CPU exits coherency, including any
    operations required to achieve this (such as cleaning data
    caches).

    Next state:
        CPU_DOWN
    Conditions:
        local CPU teardown complete
    Trigger events:
        (spontaneous)

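To make the power-up half of this state machine concrete, the following
is a minimal sketch of the sequence a waking CPU (other than the first
man, which also has cluster-level work to do) might follow. The accessor
and resume helper names are placeholders invented for this illustration;
in the ARM implementation the equivalent logic lives in the low-level
power-up code in mcpm_head.S::

    /* Placeholder accessors for the state words; real code must also
     * perform the cache maintenance needed while not yet coherent. */
    extern unsigned int read_cluster_state(unsigned int cluster);
    extern void set_cpu_state(unsigned int cpu, unsigned int cluster,
                              unsigned int state);
    extern void jump_to_kernel_resume(void);       /* does not return */

    void cpu_powerup_path(unsigned int cpu, unsigned int cluster)
    {
            /* Woken by a power-up operation or hardware event:
             * commit to coming up (CPU_DOWN -> CPU_COMING_UP). */
            set_cpu_state(cpu, cluster, CPU_COMING_UP);

            /* Wait for cluster-level setup to finish (see the
             * "Cluster state" section). */
            while (read_cluster_state(cluster) != CLUSTER_UP)
                    ;       /* could use WFE here to save power */

            /* Safe to resume the kernel: CPU_COMING_UP -> CPU_UP. */
            set_cpu_state(cpu, cluster, CPU_UP);
            jump_to_kernel_resume();
    }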

Cluster state
-------------

A cluster is a group of connected CPUs with some common resources.
Because a cluster contains multiple CPUs, it can be doing multiple
things at the same time. This has some implications. In particular, a
CPU can start up while another CPU is tearing the cluster down.

In this discussion, the "outbound side" is the view of the cluster state
as seen by a CPU tearing the cluster down. The "inbound side" is the
view of the cluster state as seen by a CPU setting the cluster up.

In order to enable safe coordination in such situations, it is important
that a CPU which is setting up the cluster can advertise its state
independently of the CPU which is tearing down the cluster. For this
reason, the cluster state is split into two parts:

    "cluster" state: The global state of the cluster; or the state
        on the outbound side:

        - CLUSTER_DOWN
        - CLUSTER_UP
        - CLUSTER_GOING_DOWN

    "inbound" state: The state of the cluster on the inbound side.

        - INBOUND_NOT_COMING_UP
        - INBOUND_COMING_UP

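Viewed as data, this split simply means that each cluster needs one
state value for the "cluster" state, one for the "inbound" state, and
one for each CPU. A minimal sketch of such a layout is shown below; the
structure used by the ARM implementation (set up via mcpm_sync_init,
see "Features and Limitations" below) is similar in spirit, but
additionally arranges the fields so that they can be read and written
safely by CPUs whose caches are not yet enabled::

    /* Illustrative layout only: one possible arrangement of the race
     * avoidance state for a single cluster. */
    #define MAX_CPUS_PER_CLUSTER    4       /* assumption for this sketch */

    struct cluster_sync {
            unsigned char cpu[MAX_CPUS_PER_CLUSTER];  /* CPU_* states    */
            unsigned char cluster;                    /* CLUSTER_* state */
            unsigned char inbound;                    /* INBOUND_* state */
    };

The point of the split is that the inbound and outbound sides each get a
field they can write without trampling on the other's.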

The different pairings of these states result in six possible
states for the cluster as a whole::

                            CLUSTER_UP
          +==========> INBOUND_NOT_COMING_UP --------------+
          #                                                |
                                                           |
     CLUSTER_UP     <----+                                 |
     INBOUND_COMING_UP   |                                 v

          ^             CLUSTER_GOING_DOWN       CLUSTER_GOING_DOWN
          #              INBOUND_COMING_UP <=== INBOUND_NOT_COMING_UP

     CLUSTER_DOWN           |                              |
     INBOUND_COMING_UP <----+                              |
                                                           |
          ^                                                |
          +===========  CLUSTER_DOWN  <-------------------+
                   INBOUND_NOT_COMING_UP

    Transitions -----> can only be made by the outbound CPU, and
    only involve changes to the "cluster" state.

    Transitions ===##> can only be made by the inbound CPU, and only
    involve changes to the "inbound" state, except where there is no
    further transition possible on the outbound side (i.e., the
    outbound CPU has put the cluster into the CLUSTER_DOWN state).

    The race avoidance algorithm does not provide a way to determine
    which exact CPUs within the cluster play these roles. This must
    be decided in advance by some other means. Refer to the section
    "Last man and First man selection" for more explanation.


    CLUSTER_DOWN/INBOUND_NOT_COMING_UP is the only state where the
    cluster can actually be powered down.

    The parallelism of the inbound and outbound CPUs is observed by
    the existence of two different paths from CLUSTER_GOING_DOWN/
    INBOUND_NOT_COMING_UP (corresponding to GOING_DOWN in the basic
    model) to CLUSTER_DOWN/INBOUND_COMING_UP (corresponding to
    COMING_UP in the basic model). The second path avoids cluster
    teardown completely.

    CLUSTER_UP/INBOUND_COMING_UP is equivalent to UP in the basic
    model. The final transition to CLUSTER_UP/INBOUND_NOT_COMING_UP
    is trivial and merely resets the state machine ready for the
    next cycle.

    Details of the allowable transitions follow.

    The next state in each case is notated

        <cluster state>/<inbound state> (<transitioner>)

    where the <transitioner> is the side on which the transition
    can occur; either the inbound or the outbound side.


CLUSTER_DOWN/INBOUND_NOT_COMING_UP:
    Next state:
        CLUSTER_DOWN/INBOUND_COMING_UP (inbound)
    Conditions:
        none

    Trigger events:
        a) an explicit hardware power-up operation, resulting
           from a policy decision on another CPU;

        b) a hardware event, such as an interrupt.


CLUSTER_DOWN/INBOUND_COMING_UP:

    In this state, an inbound CPU sets up the cluster, including
    enabling of hardware coherency at the cluster level and any
    other operations (such as cache invalidation) which are required
    in order to achieve this.

    The purpose of this state is to do sufficient cluster-level
    setup to enable other CPUs in the cluster to enter coherency
    safely.

    Next state:
        CLUSTER_UP/INBOUND_COMING_UP (inbound)
    Conditions:
        cluster-level setup and hardware coherency complete
    Trigger events:
        (spontaneous)


CLUSTER_UP/INBOUND_COMING_UP:

    Cluster-level setup is complete and hardware coherency is
    enabled for the cluster. Other CPUs in the cluster can safely
    enter coherency.

    This is a transient state, leading immediately to
    CLUSTER_UP/INBOUND_NOT_COMING_UP. All other CPUs in the cluster
    should treat these two states as equivalent.

    Next state:
        CLUSTER_UP/INBOUND_NOT_COMING_UP (inbound)
    Conditions:
        none
    Trigger events:
        (spontaneous)


CLUSTER_UP/INBOUND_NOT_COMING_UP:

    Cluster-level setup is complete and hardware coherency is
    enabled for the cluster. Other CPUs in the cluster can safely
    enter coherency.

    The cluster will remain in this state until a policy decision is
    made to power the cluster down.

    Next state:
        CLUSTER_GOING_DOWN/INBOUND_NOT_COMING_UP (outbound)
    Conditions:
        none
    Trigger events:
        policy decision to power down the cluster


CLUSTER_GOING_DOWN/INBOUND_NOT_COMING_UP:

    An outbound CPU is tearing the cluster down. The selected CPU
    must wait in this state until all CPUs in the cluster are in the
    CPU_DOWN state.

    When all CPUs are in the CPU_DOWN state, the cluster can be torn
    down, for example by cleaning data caches and exiting
    cluster-level coherency.

    To avoid wasteful, unnecessary teardown operations, the outbound
    CPU should check the inbound cluster state for asynchronous
    transitions to INBOUND_COMING_UP. Alternatively, individual
    CPUs can be checked for entry into CPU_COMING_UP or CPU_UP.


    Next states:

        CLUSTER_DOWN/INBOUND_NOT_COMING_UP (outbound)
            Conditions:
                cluster torn down and ready to power off
            Trigger events:
                (spontaneous)

        CLUSTER_GOING_DOWN/INBOUND_COMING_UP (inbound)
            Conditions:
                none

            Trigger events:
                a) an explicit hardware power-up operation,
                   resulting from a policy decision on another
                   CPU;

                b) a hardware event, such as an interrupt.


CLUSTER_GOING_DOWN/INBOUND_COMING_UP:

    The cluster is (or was) being torn down, but another CPU has
    come online in the meantime and is trying to set up the cluster
    again.

    If the outbound CPU observes this state, it has two choices:

        a) back out of teardown, restoring the cluster to the
           CLUSTER_UP state;

        b) finish tearing the cluster down and put the cluster
           in the CLUSTER_DOWN state; the inbound CPU will
           set up the cluster again from there.

    Choice (a) permits the removal of some latency by avoiding
    unnecessary teardown and setup operations in situations where
    the cluster is not really going to be powered down.


    Next states:

        CLUSTER_UP/INBOUND_COMING_UP (outbound)
            Conditions:
                cluster-level setup and hardware
                coherency complete

            Trigger events:
                (spontaneous)

        CLUSTER_DOWN/INBOUND_COMING_UP (outbound)
            Conditions:
                cluster torn down and ready to power off

            Trigger events:
                (spontaneous)

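The outbound CPU's path through the CLUSTER_GOING_DOWN states above can
be summarised in a single routine: advertise CLUSTER_GOING_DOWN, wait
for every other CPU to reach CPU_DOWN while watching for an inbound CPU,
and then either complete the teardown or back out. The sketch below is a
simplified illustration of that flow, using the placeholder accessors
from the earlier sketches; the real __mcpm_outbound_enter_critical() and
__mcpm_outbound_leave_critical() functions (see "Features and
Limitations" below) also perform the cache maintenance and memory
ordering needed on the state words::

    #include <stdbool.h>

    /* Placeholder accessors for the state words. */
    extern unsigned int read_cpu_state(unsigned int cpu, unsigned int cluster);
    extern unsigned int read_inbound_state(unsigned int cluster);
    extern void set_cluster_state(unsigned int cluster, unsigned int state);

    /*
     * Returns true if the calling (outbound) CPU may go ahead and tear
     * the cluster down; false if teardown was aborted because an
     * inbound CPU appeared (choice (a) above).
     */
    bool outbound_try_enter_teardown(unsigned int self, unsigned int cluster,
                                     unsigned int ncpus)
    {
            unsigned int cpu;

            /* Policy decision taken: warn inbound CPUs. */
            set_cluster_state(cluster, CLUSTER_GOING_DOWN);

            for (cpu = 0; cpu < ncpus; cpu++) {
                    if (cpu == self)
                            continue;       /* don't wait for ourselves */

                    /* Wait for this CPU to reach CPU_DOWN, giving up if
                     * an inbound CPU starts coming up in the meantime. */
                    while (read_cpu_state(cpu, cluster) != CPU_DOWN) {
                            if (read_inbound_state(cluster) == INBOUND_COMING_UP) {
                                    /* Nothing has been torn down yet, so
                                     * choice (a) is cheap: restore
                                     * CLUSTER_UP and bail out. */
                                    set_cluster_state(cluster, CLUSTER_UP);
                                    return false;
                            }
                    }
            }

            /*
             * All other CPUs are down: the caller may now clean data
             * caches, exit cluster-level coherency and finally advertise
             * CLUSTER_DOWN before powering off.
             */
            return true;
    }

An implementation may equally take choice (b) at the abort point and
complete the teardown, leaving the inbound CPU to set the cluster up
again from CLUSTER_DOWN.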

Last man and First man selection
--------------------------------

The CPU which performs cluster tear-down operations on the outbound side
is commonly referred to as the "last man".

The CPU which performs cluster setup on the inbound side is commonly
referred to as the "first man".

The race avoidance algorithm documented above does not provide a
mechanism to choose which CPUs should play these roles.


Last man:

When shutting down the cluster, all the CPUs involved are initially
executing Linux and hence coherent. Therefore, ordinary spinlocks can
be used to select a last man safely, before the CPUs become
non-coherent.
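
As a minimal sketch of such a selection (the lock, the counter and the
constants here are hypothetical, not part of the mcpm interface), each
CPU on its way down can decrement a per-cluster count of running CPUs
under an ordinary spinlock; the CPU that takes the count to zero becomes
the last man::

    #include <linux/spinlock.h>

    #define MAX_NR_CLUSTERS        2       /* assumption for this sketch */

    static DEFINE_SPINLOCK(last_man_lock);
    static unsigned int cpus_up_in_cluster[MAX_NR_CLUSTERS];

    /* Called by each CPU, while still coherent, on its way down.
     * Returns true on exactly one CPU per cluster shutdown. */
    static bool select_last_man(unsigned int cluster)
    {
            bool last_man;

            spin_lock(&last_man_lock);
            last_man = (--cpus_up_in_cluster[cluster] == 0);
            spin_unlock(&last_man_lock);

            return last_man;
    }

A matching increment is needed on the power-up path so that the count
remains accurate across cluster power cycles.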


First man:

Because CPUs may power up asynchronously in response to external wake-up
events, a dynamic mechanism is needed to make sure that only one CPU
attempts to play the first man role and do the cluster-level
initialisation: any other CPUs must wait for this to complete before
proceeding.

Cluster-level initialisation may involve actions such as configuring
coherency controls in the bus fabric.

The current implementation in mcpm_head.S uses a separate mutual exclusion
mechanism to do this arbitration. This mechanism is documented in
detail in vlocks.txt.


Features and Limitations
------------------------

Implementation:

    The current ARM-based implementation is split between
    arch/arm/common/mcpm_head.S (low-level inbound CPU operations) and
    arch/arm/common/mcpm_entry.c (everything else):

    __mcpm_cpu_going_down() signals the transition of a CPU to the
        CPU_GOING_DOWN state.

    __mcpm_cpu_down() signals the transition of a CPU to the CPU_DOWN
        state.

    A CPU transitions to CPU_COMING_UP and then to CPU_UP via the
        low-level power-up code in mcpm_head.S. This could
        involve CPU-specific setup code, but in the current
        implementation it does not.

    __mcpm_outbound_enter_critical() and __mcpm_outbound_leave_critical()
        handle transitions from CLUSTER_UP to CLUSTER_GOING_DOWN
        and from there to CLUSTER_DOWN or back to CLUSTER_UP (in
        the case of an aborted cluster power-down).

        These functions are more complex than the __mcpm_cpu_*()
        functions due to the extra inter-CPU coordination which
        is needed for safe transitions at the cluster level.

    A cluster transitions from CLUSTER_DOWN back to CLUSTER_UP via
        the low-level power-up code in mcpm_head.S. This
        typically involves platform-specific setup code,
        provided by the platform-specific power_up_setup
        function registered via mcpm_sync_init.
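
    Putting these calls together, a platform backend's power-down path
    might be sequenced roughly as follows. This is a simplified sketch:
    the last-man decision, the coherency teardown helpers and the final
    wait for power-off are hypothetical placeholders, and the exact
    __mcpm_*() prototypes should be taken from the ARM mcpm headers
    rather than from this illustration::

        /* Hypothetical platform helpers. */
        extern void platform_exit_cpu_coherency(void);     /* flush L1, leave SMP      */
        extern void platform_exit_cluster_coherency(void); /* flush L2, exit coherency */
        extern void platform_wait_for_power_off(void);     /* e.g. a WFI loop          */

        static void example_power_down(unsigned int cpu, unsigned int cluster,
                                       bool last_man)
        {
                /* CPU_UP -> CPU_GOING_DOWN */
                __mcpm_cpu_going_down(cpu, cluster);

                if (last_man && __mcpm_outbound_enter_critical(cpu, cluster)) {
                        /* Outbound CPU, all others in CPU_DOWN:
                         * tear the whole cluster down. */
                        platform_exit_cluster_coherency();
                        __mcpm_outbound_leave_critical(cluster, CLUSTER_DOWN);
                } else {
                        /* CPU-only teardown (the cluster stays up). */
                        platform_exit_cpu_coherency();
                }

                /* CPU_GOING_DOWN -> CPU_DOWN: we may be powered off
                 * at any point from here on. */
                __mcpm_cpu_down(cpu, cluster);

                platform_wait_for_power_off();
        }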

Deep topologies:

    As currently described and implemented, the algorithm does not
    support CPU topologies involving more than two levels (i.e.,
    clusters of clusters are not supported). The algorithm could be
    extended by replicating the cluster-level states for the
    additional topological levels, and modifying the transition
    rules for the intermediate (non-outermost) cluster levels.


Colophon
--------

Originally created and documented by Dave Martin for Linaro Limited, in
collaboration with Nicolas Pitre and Achin Gupta.

Copyright (C) 2012-2013 Linaro Limited
Distributed under the terms of Version 2 of the GNU General Public
License, as defined in linux/COPYING.