0001 Using TopDown metrics in user space
0002 -----------------------------------
0003
0004 Intel CPUs (since Sandy Bridge and Silvermont) support a TopDown
0005 methodology to break down CPU pipeline execution into 4 bottlenecks:
0006 frontend bound, backend bound, bad speculation, retiring.
0007
For more details on TopDown see [1][5].
0009
0010 Traditionally this was implemented by events in generic counters
0011 and specific formulas to compute the bottlenecks.
0012
0013 perf stat --topdown implements this.
0014
0015 Full Top Down includes more levels that can break down the
0016 bottlenecks further. This is not directly implemented in perf,
0017 but available in other tools that can run on top of perf,
such as toplev [2] or VTune [3].
0019
0020 New Topdown features in Ice Lake
0021 ===============================
0022
With Ice Lake CPUs the TopDown metrics are directly available as
fixed counters and do not require generic counters. This allows
TopDown to always be collected in addition to other events.
0026
0027 % perf stat -a --topdown -I1000
0028 # time retiring bad speculation frontend bound backend bound
0029 1.001281330 23.0% 15.3% 29.6% 32.1%
0030 2.003009005 5.0% 6.8% 46.6% 41.6%
0031 3.004646182 6.7% 6.7% 46.0% 40.6%
0032 4.006326375 5.0% 6.4% 47.6% 41.0%
0033 5.007991804 5.1% 6.3% 46.3% 42.3%
0034 6.009626773 6.2% 7.1% 47.3% 39.3%
0035 7.011296356 4.7% 6.7% 46.2% 42.4%
0036 8.012951831 4.7% 6.7% 47.5% 41.1%
0037 ...
0038
0039 This also enables measuring TopDown per thread/process instead
0040 of only per core.
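For example (a sketch; ./myworkload is a hypothetical program name,
and an Ice Lake or newer CPU is assumed), a single workload can be
measured on its own:

```shell
# Collect TopDown metrics for just this process and its threads
perf stat --topdown -- ./myworkload
```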
0041
0042 Using TopDown through RDPMC in applications on Ice Lake
0043 ======================================================
0044
For more fine grained measurements it can be useful to
access the new counters directly from user space. This is more
complicated, but drastically lowers overhead.
0048
0049 On Ice Lake, there is a new fixed counter 3: SLOTS, which reports
0050 "pipeline SLOTS" (cycles multiplied by core issue width) and a
0051 metric register that reports slots ratios for the different bottleneck
0052 categories.
0053
0054 The metrics counter is CPU model specific and is not available on older
0055 CPUs.
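One way to check for the feature at runtime is to look for the SLOTS
event in sysfs (this path is an assumption about the sysfs layout of
recent kernels; the file exists only when the kernel supports the
fixed SLOTS counter):

```shell
# Present on Ice Lake and later with a supporting kernel, absent otherwise
ls /sys/bus/event_source/devices/cpu/events/slots
```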
0056
0057 Example code
0058 ============
0059
Library functions providing the functionality described below
are also available in libjevents [4]
0062
The application opens a group with fixed counter 3 (SLOTS) and any
metric event, which allows user programs to read the performance counters.
0065
Fixed counter 3 is mapped to a pseudo event event=0x00, umask=0x04,
0067 so the perf_event_attr structure should be initialized with
0068 { .config = 0x0400, .type = PERF_TYPE_RAW }
0069 The metric events are mapped to the pseudo event event=0x00, umask=0x8X.
For example, the perf_event_attr structure can be initialized with
{ .config = 0x8000, .type = PERF_TYPE_RAW } for the Retiring metric event.
Fixed counter 3 must be the leader of the group.
0073
0074 #include <linux/perf_event.h>
0075 #include <sys/mman.h>
0076 #include <sys/syscall.h>
0077 #include <unistd.h>
0078
/* Provide our own perf_event_open stub because glibc doesn't provide one */
0080 __attribute__((weak))
0081 int perf_event_open(struct perf_event_attr *attr, pid_t pid,
0082 int cpu, int group_fd, unsigned long flags)
0083 {
0084 return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
0085 }
0086
0087 /* Open slots counter file descriptor for current task. */
0088 struct perf_event_attr slots = {
0089 .type = PERF_TYPE_RAW,
0090 .size = sizeof(struct perf_event_attr),
0091 .config = 0x400,
0092 .exclude_kernel = 1,
0093 };
0094
0095 int slots_fd = perf_event_open(&slots, 0, -1, -1, 0);
0096 if (slots_fd < 0)
0097 ... error ...
0098
0099 /* Memory mapping the fd permits _rdpmc calls from userspace */
0100 void *slots_p = mmap(0, getpagesize(), PROT_READ, MAP_SHARED, slots_fd, 0);
if (slots_p == MAP_FAILED)
... error ...
0103
0104 /*
0105 * Open metrics event file descriptor for current task.
0106 * Set slots event as the leader of the group.
0107 */
0108 struct perf_event_attr metrics = {
0109 .type = PERF_TYPE_RAW,
0110 .size = sizeof(struct perf_event_attr),
0111 .config = 0x8000,
0112 .exclude_kernel = 1,
0113 };
0114
0115 int metrics_fd = perf_event_open(&metrics, 0, -1, slots_fd, 0);
0116 if (metrics_fd < 0)
0117 ... error ...
0118
0119 /* Memory mapping the fd permits _rdpmc calls from userspace */
0120 void *metrics_p = mmap(0, getpagesize(), PROT_READ, MAP_SHARED, metrics_fd, 0);
if (metrics_p == MAP_FAILED)
0122 ... error ...
0123
Note: the file descriptors returned by the perf_event_open calls must be memory
mapped to permit calls to the RDPMC instruction. Permission may also be granted
by writing to the /sys/devices/cpu/rdpmc sysfs node.
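As an alternative to the mmap-based permission, RDPMC can be enabled
system-wide through that sysfs node (a sketch; requires root):

```shell
# 0 = RDPMC disabled, 1 = allowed with an mmap'ed perf event (default),
# 2 = allowed at any time
echo 2 > /sys/devices/cpu/rdpmc
```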
0127
0128 The RDPMC instruction (or _rdpmc compiler intrinsic) can now be used
0129 to read slots and the topdown metrics at different points of the program:
0130
0131 #include <stdint.h>
0132 #include <x86intrin.h>
0133
0134 #define RDPMC_FIXED (1 << 30) /* return fixed counters */
0135 #define RDPMC_METRIC (1 << 29) /* return metric counters */
0136
0137 #define FIXED_COUNTER_SLOTS 3
0138 #define METRIC_COUNTER_TOPDOWN_L1_L2 0
0139
0140 static inline uint64_t read_slots(void)
0141 {
0142 return _rdpmc(RDPMC_FIXED | FIXED_COUNTER_SLOTS);
0143 }
0144
0145 static inline uint64_t read_metrics(void)
0146 {
0147 return _rdpmc(RDPMC_METRIC | METRIC_COUNTER_TOPDOWN_L1_L2);
0148 }
0149
0150 Then the program can be instrumented to read these metrics at different
0151 points.
0152
0153 It's not a good idea to do this with too short code regions,
0154 as the parallelism and overlap in the CPU program execution will
0155 cause too much measurement inaccuracy. For example instrumenting
0156 individual basic blocks is definitely too fine grained.
0157
0158 _rdpmc calls should not be mixed with reading the metrics and slots counters
0159 through system calls, as the kernel will reset these counters after each system
0160 call.
0161
0162 Decoding metrics values
0163 =======================
0164
The value reported by read_metrics() contains four 8-bit fields,
each representing a scaled ratio for one Level 1 bottleneck category.
All four fields add up to 0xff (= 100%).
0168
0169 The binary ratios in the metric value can be converted to float ratios:
0170
#define GET_METRIC(m, i) (((m) >> ((i)*8)) & 0xff)
0172
0173 /* L1 Topdown metric events */
0174 #define TOPDOWN_RETIRING(val) ((float)GET_METRIC(val, 0) / 0xff)
0175 #define TOPDOWN_BAD_SPEC(val) ((float)GET_METRIC(val, 1) / 0xff)
0176 #define TOPDOWN_FE_BOUND(val) ((float)GET_METRIC(val, 2) / 0xff)
0177 #define TOPDOWN_BE_BOUND(val) ((float)GET_METRIC(val, 3) / 0xff)
0178
0179 /*
0180 * L2 Topdown metric events.
0181 * Available on Sapphire Rapids and later platforms.
0182 */
0183 #define TOPDOWN_HEAVY_OPS(val) ((float)GET_METRIC(val, 4) / 0xff)
0184 #define TOPDOWN_BR_MISPREDICT(val) ((float)GET_METRIC(val, 5) / 0xff)
0185 #define TOPDOWN_FETCH_LAT(val) ((float)GET_METRIC(val, 6) / 0xff)
0186 #define TOPDOWN_MEM_BOUND(val) ((float)GET_METRIC(val, 7) / 0xff)
0187
0188 and then converted to percent for printing.
0189
The ratios in the metric accumulate for the time when the counter
is enabled. For measuring programs it is often useful to measure
specific sections. For this it is necessary to take deltas of the metrics.
0193
0194 This can be done by scaling the metrics with the slots counter
0195 read at the same time.
0196
0197 Then it's possible to take deltas of these slots counts
0198 measured at different points, and determine the metrics
0199 for that time period.
0200
0201 slots_a = read_slots();
0202 metric_a = read_metrics();
0203
0204 ... larger code region ...
0205
slots_b = read_slots();
metric_b = read_metrics();
0208
# compute scaled metrics for measurement a
retiring_slots_a = TOPDOWN_RETIRING(metric_a) * slots_a
bad_spec_slots_a = TOPDOWN_BAD_SPEC(metric_a) * slots_a
fe_bound_slots_a = TOPDOWN_FE_BOUND(metric_a) * slots_a
be_bound_slots_a = TOPDOWN_BE_BOUND(metric_a) * slots_a

# compute delta scaled metrics between b and a
retiring_slots = TOPDOWN_RETIRING(metric_b) * slots_b - retiring_slots_a
bad_spec_slots = TOPDOWN_BAD_SPEC(metric_b) * slots_b - bad_spec_slots_a
fe_bound_slots = TOPDOWN_FE_BOUND(metric_b) * slots_b - fe_bound_slots_a
be_bound_slots = TOPDOWN_BE_BOUND(metric_b) * slots_b - be_bound_slots_a
0220
0221 Later the individual ratios of L1 metric events for the measurement period can
0222 be recreated from these counts.
0223
0224 slots_delta = slots_b - slots_a
0225 retiring_ratio = (float)retiring_slots / slots_delta
0226 bad_spec_ratio = (float)bad_spec_slots / slots_delta
0227 fe_bound_ratio = (float)fe_bound_slots / slots_delta
be_bound_ratio = (float)be_bound_slots / slots_delta
0229
0230 printf("Retiring %.2f%% Bad Speculation %.2f%% FE Bound %.2f%% BE Bound %.2f%%\n",
0231 retiring_ratio * 100.,
0232 bad_spec_ratio * 100.,
0233 fe_bound_ratio * 100.,
0234 be_bound_ratio * 100.);
0235
0236 The individual ratios of L2 metric events for the measurement period can be
0237 recreated from L1 and L2 metric counters. (Available on Sapphire Rapids and
0238 later platforms)
0239
# compute scaled metrics for measurement a
heavy_ops_slots_a = TOPDOWN_HEAVY_OPS(metric_a) * slots_a
br_mispredict_slots_a = TOPDOWN_BR_MISPREDICT(metric_a) * slots_a
fetch_lat_slots_a = TOPDOWN_FETCH_LAT(metric_a) * slots_a
mem_bound_slots_a = TOPDOWN_MEM_BOUND(metric_a) * slots_a

# compute delta scaled metrics between b and a
heavy_ops_slots = TOPDOWN_HEAVY_OPS(metric_b) * slots_b - heavy_ops_slots_a
br_mispredict_slots = TOPDOWN_BR_MISPREDICT(metric_b) * slots_b - br_mispredict_slots_a
fetch_lat_slots = TOPDOWN_FETCH_LAT(metric_b) * slots_b - fetch_lat_slots_a
mem_bound_slots = TOPDOWN_MEM_BOUND(metric_b) * slots_b - mem_bound_slots_a
0251
0252 slots_delta = slots_b - slots_a
0253 heavy_ops_ratio = (float)heavy_ops_slots / slots_delta
0254 light_ops_ratio = retiring_ratio - heavy_ops_ratio;
0255
0256 br_mispredict_ratio = (float)br_mispredict_slots / slots_delta
0257 machine_clears_ratio = bad_spec_ratio - br_mispredict_ratio;
0258
0259 fetch_lat_ratio = (float)fetch_lat_slots / slots_delta
0260 fetch_bw_ratio = fe_bound_ratio - fetch_lat_ratio;
0261
mem_bound_ratio = (float)mem_bound_slots / slots_delta
0263 core_bound_ratio = be_bound_ratio - mem_bound_ratio;
0264
0265 printf("Heavy Operations %.2f%% Light Operations %.2f%% "
0266 "Branch Mispredict %.2f%% Machine Clears %.2f%% "
0267 "Fetch Latency %.2f%% Fetch Bandwidth %.2f%% "
0268 "Mem Bound %.2f%% Core Bound %.2f%%\n",
0269 heavy_ops_ratio * 100.,
0270 light_ops_ratio * 100.,
0271 br_mispredict_ratio * 100.,
0272 machine_clears_ratio * 100.,
0273 fetch_lat_ratio * 100.,
0274 fetch_bw_ratio * 100.,
0275 mem_bound_ratio * 100.,
0276 core_bound_ratio * 100.);
0277
0278 Resetting metrics counters
0279 ==========================
0280
Since the individual metrics are only 8 bits they lose precision for
short regions over time, because the number of slots covered by each
fraction bit grows: each bit always represents 1/0xff (about 0.4%) of
all slots accumulated since the last reset. So the counters need to be
reset regularly.
0284
0285 When using the kernel perf API the kernel resets on every read.
0286 So as long as the reading is at reasonable intervals (every few
0287 seconds) the precision is good.
0288
When using perf stat it is recommended to always use the -I option,
with an interval no longer than a few seconds:
0291
0292 perf stat -I 1000 --topdown ...
0293
0294 For user programs using RDPMC directly the counter can
0295 be reset explicitly using ioctl:
0296
0297 ioctl(perf_fd, PERF_EVENT_IOC_RESET, 0);
0298
0299 This "opens" a new measurement period.
0300
A program using RDPMC for TopDown should schedule such a reset
regularly, e.g. every few seconds.
0303
0304 Limits on Ice Lake
0305 ==================
0306
Four pseudo TopDown metric events are exposed to end users:
topdown-retiring, topdown-bad-spec, topdown-fe-bound and topdown-be-bound.
They can be used to collect the TopDown value under the following
rules:
0311 - All the TopDown metric events must be in a group with the SLOTS event.
0312 - The SLOTS event must be the leader of the group.
- The PERF_FORMAT_GROUP flag must be applied to each TopDown metric
  event.
0315
0316 The SLOTS event and the TopDown metric events can be counting members of
0317 a sampling read group. Since the SLOTS event must be the leader of a TopDown
0318 group, the second event of the group is the sampling event.
0319 For example, perf record -e '{slots, $sampling_event, topdown-retiring}:S'
0320
0321 Extension on Sapphire Rapids Server
0322 ===================================
0323 The metrics counter is extended to support TMA method level 2 metrics.
0324 The lower half of the register is the TMA level 1 metrics (legacy).
0325 The upper half is also divided into four 8-bit fields for the new level 2
metrics. Four more TopDown metric events are exposed to end users:
topdown-heavy-ops, topdown-br-mispredict, topdown-fetch-lat and
topdown-mem-bound.
0329
0330 Each of the new level 2 metrics in the upper half is a subset of the
0331 corresponding level 1 metric in the lower half. Software can deduce the
0332 other four level 2 metrics by subtracting corresponding metrics as below.
0333
0334 Light_Operations = Retiring - Heavy_Operations
0335 Machine_Clears = Bad_Speculation - Branch_Mispredicts
0336 Fetch_Bandwidth = Frontend_Bound - Fetch_Latency
0337 Core_Bound = Backend_Bound - Memory_Bound
0338
0339
0340 [1] https://software.intel.com/en-us/top-down-microarchitecture-analysis-method-win
0341 [2] https://github.com/andikleen/pmu-tools/wiki/toplev-manual
0342 [3] https://software.intel.com/en-us/intel-vtune-amplifier-xe
0343 [4] https://github.com/andikleen/pmu-tools/tree/master/jevents
0344 [5] https://sites.google.com/site/analysismethods/yasin-pubs