User Interface for Resource Allocation in Intel Resource Director Technology

Copyright (C) 2016 Intel Corporation

Fenghua Yu <fenghua.yu@intel.com>
Tony Luck <tony.luck@intel.com>
Vikas Shivappa <vikas.shivappa@intel.com>

This feature is enabled by the CONFIG_INTEL_RDT Kconfig and the
X86 /proc/cpuinfo flag bits:
RDT (Resource Director Technology) Allocation - "rdt_a"
CAT (Cache Allocation Technology) - "cat_l3", "cat_l2"
CDP (Code and Data Prioritization) - "cdp_l3", "cdp_l2"
CQM (Cache QoS Monitoring) - "cqm_llc", "cqm_occup_llc"
MBM (Memory Bandwidth Monitoring) - "cqm_mbm_total", "cqm_mbm_local"
MBA (Memory Bandwidth Allocation) - "mba"

To use the feature mount the file system:

# mount -t resctrl resctrl [-o cdp[,cdpl2][,mba_MBps]] /sys/fs/resctrl

mount options are:

"cdp":      Enable code/data prioritization in L3 cache allocations.
"cdpl2":    Enable code/data prioritization in L2 cache allocations.
"mba_MBps": Enable the MBA Software Controller (mba_sc) to specify MBA
            bandwidth in MBps.

L2 and L3 CDP are controlled separately.

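Mounting with the "cdp" option, for example, splits the L3 resource into
separate "L3CODE" and "L3DATA" resources which then appear under "info".
A minimal sketch, assuming a CDP-capable system (the listing is
illustrative and depends on the platform's features):

# mount -t resctrl resctrl -o cdp /sys/fs/resctrl
# ls /sys/fs/resctrl/info
L3CODE  L3DATA  last_cmd_status
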
RDT features are orthogonal. A particular system may support only
monitoring, only control, or both monitoring and control. Cache
pseudo-locking is a unique way of using cache control to "pin" or
"lock" data in the cache. Details can be found in
"Cache Pseudo-Locking".


The mount succeeds if either allocation or monitoring is present, but
only those files and directories supported by the system will be created.
For more details on the behavior of the interface during monitoring
and allocation, see the "Resource alloc and monitor groups" section.

Info directory
--------------

The 'info' directory contains information about the enabled
resources. Each resource has its own subdirectory. The subdirectory
names reflect the resource names.

Each subdirectory contains the following files with respect to
allocation:

Cache resource (L3/L2) subdirectories contain the following files
related to allocation:

"num_closids":      The number of CLOSIDs which are valid for this
                    resource. The kernel uses the smallest number of
                    CLOSIDs of all enabled resources as limit.

"cbm_mask":         The bitmask which is valid for this resource.
                    This mask is equivalent to 100%.

"min_cbm_bits":     The minimum number of consecutive bits which
                    must be set when writing a mask.

"shareable_bits":   Bitmask of the portions of this resource that are
                    shared with other executing entities (e.g. I/O).
                    Users can take this into account when setting up
                    exclusive cache partitions. Note that some platforms
                    support devices that have their own settings for
                    cache use which can over-ride these bits.

"bit_usage":        Annotated capacity bitmasks showing how all
                    instances of the resource are used. The legend is:
                    "0" - Corresponding region is unused. When the system's
                          resources have been allocated and a "0" is found
                          in "bit_usage" it is a sign that resources are
                          wasted.
                    "H" - Corresponding region is used by hardware only
                          but available for software use. If a resource
                          has bits set in "shareable_bits" but not all
                          of these bits appear in the resource groups'
                          schemata then the bits appearing in
                          "shareable_bits" but in no resource group will
                          be marked as "H".
                    "X" - Corresponding region is available for sharing and
                          used by hardware and software. These are the
                          bits that appear in "shareable_bits" as
                          well as a resource group's allocation.
                    "S" - Corresponding region is used by software
                          and available for sharing.
                    "E" - Corresponding region is used exclusively by
                          one resource group. No sharing allowed.
                    "P" - Corresponding region is pseudo-locked. No
                          sharing allowed.

Memory bandwidth (MB) subdirectories contain the following files
with respect to allocation:

"min_bandwidth":    The minimum memory bandwidth percentage which
                    the user can request.

"bandwidth_gran":   The granularity in which the memory bandwidth
                    percentage is allocated. The allocated
                    b/w percentage is rounded off to the next
                    control step available on the hardware. The
                    available bandwidth control steps are:
                    min_bandwidth + N * bandwidth_gran.

"delay_linear":     Indicates if the delay scale is linear or
                    non-linear. This field is purely informational.

If RDT monitoring is available there will be an "L3_MON" directory
with the following files:

"num_rmids":        The number of RMIDs available. This is the
                    upper bound for how many "CTRL_MON" + "MON"
                    groups can be created.

"mon_features":     Lists the monitoring events if
                    monitoring is enabled for the resource.

"max_threshold_occupancy":
                    Read/write file provides the largest value (in
                    bytes) at which a previously used LLC_occupancy
                    counter can be considered for re-use.

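For example, the monitoring limits and supported events can be read
directly from these files. A sketch, assuming a monitoring-capable
system (the values shown are illustrative):

# cat /sys/fs/resctrl/info/L3_MON/num_rmids
144
# cat /sys/fs/resctrl/info/L3_MON/mon_features
llc_occupancy
mbm_total_bytes
mbm_local_bytes
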
Finally, in the top level of the "info" directory there is a file
named "last_cmd_status". This is reset with every "command" issued
via the file system (making new directories or writing to any of the
control files). If the command was successful, it will read as "ok".
If the command failed, it will provide more information than can be
conveyed in the error returns from file operations. E.g.

# echo L3:0=f7 > schemata
bash: echo: write error: Invalid argument
# cat info/last_cmd_status
mask f7 has non-consecutive 1-bits

Resource alloc and monitor groups
---------------------------------

Resource groups are represented as directories in the resctrl file
system. The default group is the root directory which, immediately
after mounting, owns all the tasks and cpus in the system and can make
full use of all resources.

On a system with RDT control features additional directories can be
created in the root directory that specify different amounts of each
resource (see "schemata" below). The root and these additional top level
directories are referred to as "CTRL_MON" groups below.

On a system with RDT monitoring the root directory and other top level
directories contain a directory named "mon_groups" in which additional
directories can be created to monitor subsets of tasks in the CTRL_MON
group that is their ancestor. These are called "MON" groups in the rest
of this document.

Removing a directory will move all tasks and cpus owned by the group it
represents to the parent. Removing one of the created CTRL_MON groups
will automatically remove all MON groups below it.

All groups contain the following files:

"tasks":
        Reading this file shows the list of all tasks that belong to
        this group. Writing a task id to the file will add a task to the
        group. If the group is a CTRL_MON group the task is removed from
        whichever previous CTRL_MON group owned the task and also from
        any MON group that owned the task. If the group is a MON group,
        then the task must already belong to the CTRL_MON parent of this
        group. The task is removed from any previous MON group.


"cpus":
        Reading this file shows a bitmask of the logical CPUs owned by
        this group. Writing a mask to this file will add and remove
        CPUs to/from this group. As with the tasks file a hierarchy is
        maintained where MON groups may only include CPUs owned by the
        parent CTRL_MON group.


"cpus_list":
        Just like "cpus", only using ranges of CPUs instead of bitmasks
        (see the short sketch below).

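For example, assigning CPUs 4-7 to a hypothetical group "p0" can be
expressed either way, as a bitmask or as a range:

# echo f0 > p0/cpus
# echo 4-7 > p0/cpus_list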

When control is enabled all CTRL_MON groups will also contain:

"schemata":
        A list of all the resources available to this group.
        Each resource has its own line and format - see below for details.

"size":
        Mirrors the display of the "schemata" file to display the size in
        bytes of each allocation instead of the bits representing the
        allocation.

"mode":
        The "mode" of the resource group dictates the sharing of its
        allocations. A "shareable" resource group allows sharing of its
        allocations while an "exclusive" resource group does not. A
        cache pseudo-locked region is created by first writing
        "pseudo-locksetup" to the "mode" file before writing the cache
        pseudo-locked region's schemata to the resource group's "schemata"
        file. On successful pseudo-locked region creation the mode will
        automatically change to "pseudo-locked".

When monitoring is enabled all MON groups will also contain:

"mon_data":
        This contains a set of files organized by L3 domain and by
        RDT event. E.g. on a system with two L3 domains there will
        be subdirectories "mon_L3_00" and "mon_L3_01". Each of these
        directories has one file per event (e.g. "llc_occupancy",
        "mbm_total_bytes", and "mbm_local_bytes"). In a MON group these
        files provide a read out of the current value of the event for
        all tasks in the group. In CTRL_MON groups these files provide
        the sum for all tasks in the CTRL_MON group and all tasks in
        MON groups. Please see the example section for more details on
        usage.

Resource allocation rules
-------------------------
When a task is running the following rules define which resources are
available to it (a short illustration follows the list):

1) If the task is a member of a non-default group, then the schemata
   for that group is used.

2) Else if the task belongs to the default group, but is running on a
   CPU that is assigned to some specific group, then the schemata for the
   CPU's group is used.

3) Otherwise the schemata for the default group is used.

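As an illustration of rule 2, consider a hypothetical group "p0" that
is given ownership of CPU 2 while a task stays in the default group:

# mkdir p0
# echo 2 > p0/cpus_list

A default-group task scheduled on CPU 2 now allocates according to
p0's schemata; as soon as it migrates to a CPU not owned by any
specific group, rule 3 applies again.
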
Resource monitoring rules
-------------------------
1) If a task is a member of a MON group, or non-default CTRL_MON group
   then RDT events for the task will be reported in that group.

2) If a task is a member of the default CTRL_MON group, but is running
   on a CPU that is assigned to some specific group, then the RDT events
   for the task will be reported in that group.

3) Otherwise RDT events for the task will be reported in the root level
   "mon_data" group.


Notes on cache occupancy monitoring and control
-----------------------------------------------
When moving a task from one group to another you should remember that
this only affects *new* cache allocations by the task. E.g. you may have
a task in a monitor group showing 3 MB of cache occupancy. If you move
it to a new group and immediately check the occupancy of the old and new
groups you will likely see that the old group is still showing 3 MB and
the new group zero. When the task accesses locations still in cache from
before the move, the h/w does not update any counters. On a busy system
you will likely see the occupancy in the old group go down as cache lines
are evicted and re-used while the occupancy in the new group rises as
the task accesses memory and loads into the cache are counted based on
membership in the new group.

The same applies to cache allocation control. Moving a task to a group
with a smaller cache partition will not evict any cache lines. The
process may continue to use them from the old partition.

Hardware uses a CLOSid (Class of service ID) and an RMID (Resource
monitoring ID) to identify a control group and a monitoring group
respectively. Each of the resource groups is mapped to these IDs based
on the kind of group. The number of CLOSids and RMIDs is limited by the
hardware, hence the creation of a "CTRL_MON" directory may fail if we
run out of either CLOSIDs or RMIDs, and creation of a "MON" group may
fail if we run out of RMIDs.

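When this happens the mkdir fails and "last_cmd_status" explains why.
A sketch of what running out of CLOSIDs might look like (the message
text and errno depend on the kernel version):

# mkdir newgroup
mkdir: cannot create directory 'newgroup': No space left on device
# cat info/last_cmd_status
out of CLOSIDs
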
max_threshold_occupancy - generic concepts
------------------------------------------

Note that an RMID once freed may not be immediately available for use
as the RMID is still tagged to the cache lines of its previous user.
Hence such RMIDs are placed on a limbo list and checked periodically to
see if the cache occupancy has gone down. If at some point the system
has many limbo RMIDs that are not yet ready to be re-used, the user may
see an -EBUSY during mkdir.

max_threshold_occupancy is a user configurable value to determine the
occupancy at which an RMID can be freed.

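For example, to consider an RMID free for re-use only once its stale
occupancy has dropped below 64KB, write the threshold in bytes (a
sketch; the default value is platform dependent):

# echo 65536 > /sys/fs/resctrl/info/L3_MON/max_threshold_occupancy
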
Schemata files - general concepts
---------------------------------
Each line in the file describes one resource. The line starts with
the name of the resource, followed by specific values to be applied
in each of the instances of that resource on the system.

Cache IDs
---------
On current generation systems there is one L3 cache per socket and L2
caches are generally just shared by the hyperthreads on a core, but this
isn't an architectural requirement. We could have multiple separate L3
caches on a socket, and multiple cores could share an L2 cache. So instead
of using "socket" or "core" to define the set of logical cpus sharing
a resource we use a "Cache ID". At a given cache level this will be a
unique number across the whole system (but it isn't guaranteed to be a
contiguous sequence, there may be gaps). To find the ID for each logical
CPU look in /sys/devices/system/cpu/cpu*/cache/index*/id

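For example, the L3 cache ID for CPU 0 can be read as follows (index3
is typically the L3 cache; the value shown is illustrative):

# cat /sys/devices/system/cpu/cpu0/cache/index3/id
0
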
Cache Bit Masks (CBM)
---------------------
For cache resources we describe the portion of the cache that is available
for allocation using a bitmask. The maximum value of the mask is defined
by each cpu model (and may be different for different cache levels). It
is found using CPUID, but is also provided in the "info" directory of
the resctrl file system in "info/{resource}/cbm_mask". X86 hardware
requires that these masks have all the '1' bits in a contiguous block. So
0x3, 0x6 and 0xC are legal 4-bit masks with two bits set, but 0x5, 0x9
and 0xA are not. On a system with a 20-bit mask each bit represents 5%
of the capacity of the cache. You could partition the cache into four
equal parts with masks: 0x1f, 0x3e0, 0x7c00, 0xf8000.

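Both the full mask and the minimum number of bits can be read before
choosing a partitioning. A sketch for a hypothetical 20-bit L3 mask:

# cat /sys/fs/resctrl/info/L3/cbm_mask
fffff
# cat /sys/fs/resctrl/info/L3/min_cbm_bits
1
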
Memory bandwidth Allocation and monitoring
------------------------------------------

For the Memory bandwidth resource, by default the user controls the
resource by indicating the percentage of total memory bandwidth.

The minimum bandwidth percentage value for each cpu model is predefined
and can be looked up through "info/MB/min_bandwidth". The bandwidth
granularity that is allocated is also dependent on the cpu model and can
be looked up at "info/MB/bandwidth_gran". The available bandwidth
control steps are: min_bw + N * bw_gran. Intermediate values are rounded
to the next control step available on the hardware. For example, with a
minimum of 10% and a granularity of 10% the valid steps are 10%, 20%,
..., 100%.

The bandwidth throttling is a core specific mechanism on some of Intel
SKUs. Using a high bandwidth and a low bandwidth setting on two threads
sharing a core will result in both threads being throttled to use the
low bandwidth. The fact that Memory bandwidth allocation (MBA) is a core
specific mechanism whereas memory bandwidth monitoring (MBM) is done at
the package level may lead to confusion when users try to apply control
via the MBA and then monitor the bandwidth to see if the controls are
effective. Below are such scenarios:

1. User may *not* see an increase in actual bandwidth when percentage
   values are increased:

This can occur when aggregate L2 external bandwidth is more than L3
external bandwidth. Consider an SKL SKU with 24 cores on a package and
where L2 external is 10GBps (hence aggregate L2 external bandwidth is
240GBps) and L3 external bandwidth is 100GBps. Now a workload with '20
threads, having 50% bandwidth, each consuming 5GBps' consumes the max L3
bandwidth of 100GBps although the percentage value specified is only 50%
<< 100%. Hence increasing the bandwidth percentage will not yield any
more bandwidth. This is because although the L2 external bandwidth still
has capacity, the L3 external bandwidth is fully used. Also note that
this would be dependent on the number of cores the benchmark is run on.

2. The same bandwidth percentage may mean different actual bandwidth
   depending on the number of threads:

For the same SKU as in #1, a 'single thread, with 10% bandwidth' and '4
threads, with 10% bandwidth' can consume up to 10GBps and 40GBps although
they have the same percentage bandwidth of 10%. This is simply because as
threads start using more cores in an rdtgroup, the actual bandwidth may
increase or vary although the user specified bandwidth percentage is the
same.

In order to mitigate this and make the interface more user friendly,
resctrl added support for specifying the bandwidth in MBps as well. The
kernel underneath would use a software feedback mechanism or a "Software
Controller (mba_sc)" which reads the actual bandwidth using MBM counters
and adjusts the memory bandwidth percentages to ensure

	"actual bandwidth < user specified bandwidth".

By default, the schemata would take the bandwidth percentage values
whereas the user can switch to the "MBA software controller" mode using
the mount option 'mba_MBps'. The schemata format is specified in the
sections below.

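A sketch of enabling the software controller at mount time:

# mount -t resctrl resctrl -o mba_MBps /sys/fs/resctrl

With this option the "MB" values in all schemata files are interpreted
as MBps instead of percentages.
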
L3 schemata file details (code and data prioritization disabled)
----------------------------------------------------------------
With CDP disabled the L3 schemata format is:

	L3:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...

L3 schemata file details (CDP enabled via mount option to resctrl)
------------------------------------------------------------------
When CDP is enabled L3 control is split into two separate resources
so you can specify independent masks for code and data like this:

	L3data:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...
	L3code:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...

L2 schemata file details
------------------------
L2 cache does not support code and data prioritization, so the
schemata format is always:

	L2:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...

Memory bandwidth Allocation (default mode)
------------------------------------------

Memory b/w domain is L3 cache.

	MB:<cache_id0>=bandwidth0;<cache_id1>=bandwidth1;...

Memory bandwidth Allocation specified in MBps
---------------------------------------------

Memory bandwidth domain is L3 cache.

	MB:<cache_id0>=bw_MBps0;<cache_id1>=bw_MBps1;...

Reading/writing the schemata file
---------------------------------
Reading the schemata file will show the state of all resources
on all domains. When writing you only need to specify those values
which you wish to change. E.g.

# cat schemata
L3DATA:0=fffff;1=fffff;2=fffff;3=fffff
L3CODE:0=fffff;1=fffff;2=fffff;3=fffff
# echo "L3DATA:2=3c0;" > schemata
# cat schemata
L3DATA:0=fffff;1=fffff;2=3c0;3=fffff
L3CODE:0=fffff;1=fffff;2=fffff;3=fffff

Cache Pseudo-Locking
--------------------
CAT enables a user to specify the amount of cache space that an
application can fill. Cache pseudo-locking builds on the fact that a
CPU can still read and write data pre-allocated outside its current
allocated area on a cache hit. With cache pseudo-locking, data can be
preloaded into a reserved portion of cache that no application can
fill, and from that point on will only serve cache hits. The cache
pseudo-locked memory is made accessible to user space where an
application can map it into its virtual address space and thus have
a region of memory with reduced average read latency.

The creation of a cache pseudo-locked region is triggered by a request
from the user to do so that is accompanied by a schemata of the region
to be pseudo-locked. The cache pseudo-locked region is created as follows:
- Create a CAT allocation CLOSNEW with a CBM matching the schemata
  from the user of the cache region that will contain the pseudo-locked
  memory. This region must not overlap with any current CAT allocation/CLOS
  on the system and no future overlap with this cache region is allowed
  while the pseudo-locked region exists.
- Create a contiguous region of memory of the same size as the cache
  region.
- Flush the cache, disable hardware prefetchers, disable preemption.
- Make CLOSNEW the active CLOS and touch the allocated memory to load
  it into the cache.
- Set the previous CLOS as active.
- At this point the closid CLOSNEW can be released - the cache
  pseudo-locked region is protected as long as its CBM does not appear in
  any CAT allocation. Even though the cache pseudo-locked region will from
  this point on not appear in any CBM of any CLOS an application running with
  any CLOS will be able to access the memory in the pseudo-locked region since
  the region continues to serve cache hits.
- The contiguous region of memory loaded into the cache is exposed to
  user-space as a character device.

Cache pseudo-locking increases the probability that data will remain
in the cache via carefully configuring the CAT feature and controlling
application behavior. There is no guarantee that data is placed in
cache. Instructions like INVD, WBINVD, CLFLUSH, etc. can still evict
"locked" data from cache. Power management C-states may shrink or
power off cache. Deeper C-states will automatically be restricted on
pseudo-locked region creation.

It is required that an application using a pseudo-locked region runs
with affinity to the cores (or a subset of the cores) associated
with the cache on which the pseudo-locked region resides. A sanity check
within the code will not allow an application to map pseudo-locked memory
unless it runs with affinity to cores associated with the cache on which
the pseudo-locked region resides. The sanity check is only done during
the initial mmap() handling; there is no enforcement afterwards and the
application itself needs to ensure it remains affine to the correct cores.

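For example, if the pseudo-locked region resides on the cache serving
cores 4-7, the application could be pinned with taskset(1) (a sketch;
the core numbers and application name are hypothetical):

# taskset -c 4-7 ./app_using_locked_region
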
Pseudo-locking is accomplished in two stages:
1) During the first stage the system administrator allocates a portion
   of cache that should be dedicated to pseudo-locking. At this time an
   equivalent portion of memory is allocated, loaded into the allocated
   cache portion, and exposed as a character device.
2) During the second stage a user-space application maps (mmap()) the
   pseudo-locked memory into its address space.

Cache Pseudo-Locking Interface
------------------------------
A pseudo-locked region is created using the resctrl interface as follows:

1) Create a new resource group by creating a new directory in /sys/fs/resctrl.
2) Change the new resource group's mode to "pseudo-locksetup" by writing
   "pseudo-locksetup" to the "mode" file.
3) Write the schemata of the pseudo-locked region to the "schemata" file. All
   bits within the schemata should be "unused" according to the "bit_usage"
   file.

On successful pseudo-locked region creation the "mode" file will contain
"pseudo-locked" and a new character device with the same name as the resource
group will exist in /dev/pseudo_lock. This character device can be mmap()'ed
by user space in order to obtain access to the pseudo-locked memory region.

An example of cache pseudo-locked region creation and usage can be found below.

Cache Pseudo-Locking Debugging Interface
----------------------------------------
The pseudo-locking debugging interface is enabled by default (if
CONFIG_DEBUG_FS is enabled) and can be found in /sys/kernel/debug/resctrl.

There is no explicit way for the kernel to test if a provided memory
location is present in the cache. The pseudo-locking debugging interface uses
the tracing infrastructure to provide two ways to measure cache residency of
the pseudo-locked region:
1) Memory access latency using the pseudo_lock_mem_latency tracepoint. Data
   from these measurements are best visualized using a hist trigger (see
   example below). In this test the pseudo-locked region is traversed at
   a stride of 32 bytes while hardware prefetchers and preemption
   are disabled. This also provides a substitute visualization of cache
   hits and misses.
2) Cache hit and miss measurements using model specific precision counters if
   available. Depending on the levels of cache on the system the pseudo_lock_l2
   and pseudo_lock_l3 tracepoints are available.
   WARNING: triggering this measurement uses from two (for just L2
   measurements) to four (for L2 and L3 measurements) precision counters on
   the system; if any other measurements are in progress the counters and
   their corresponding event registers will be clobbered.

When a pseudo-locked region is created a new debugfs directory is created for
it in debugfs as /sys/kernel/debug/resctrl/<newdir>. A single
write-only file, pseudo_lock_measure, is present in this directory. The
measurement on the pseudo-locked region depends on the number, 1 or 2,
written to this debugfs file. Since the measurements are recorded with the
tracing infrastructure the relevant tracepoints need to be enabled before the
measurement is triggered.

Example of latency debugging interface:
In this example a pseudo-locked region named "newlock" was created. Here is
how we can measure the latency in cycles of reading from this region and
visualize this data with a histogram that is available if CONFIG_HIST_TRIGGERS
is set:
# :> /sys/kernel/debug/tracing/trace
# echo 'hist:keys=latency' > /sys/kernel/debug/tracing/events/resctrl/pseudo_lock_mem_latency/trigger
# echo 1 > /sys/kernel/debug/tracing/events/resctrl/pseudo_lock_mem_latency/enable
# echo 1 > /sys/kernel/debug/resctrl/newlock/pseudo_lock_measure
# echo 0 > /sys/kernel/debug/tracing/events/resctrl/pseudo_lock_mem_latency/enable
# cat /sys/kernel/debug/tracing/events/resctrl/pseudo_lock_mem_latency/hist

# event histogram
#
# trigger info: hist:keys=latency:vals=hitcount:sort=hitcount:size=2048 [active]
#

{ latency:        456 } hitcount:          1
{ latency:         50 } hitcount:         83
{ latency:         36 } hitcount:         96
{ latency:         44 } hitcount:        174
{ latency:         48 } hitcount:        195
{ latency:         46 } hitcount:        262
{ latency:         42 } hitcount:        693
{ latency:         40 } hitcount:       3204
{ latency:         38 } hitcount:       3484

Totals:
    Hits: 8192
    Entries: 9
    Dropped: 0

Example of cache hits/misses debugging:
In this example a pseudo-locked region named "newlock" was created on the L2
cache of a platform. Here is how we can obtain details of the cache hits
and misses using the platform's precision counters.

# :> /sys/kernel/debug/tracing/trace
# echo 1 > /sys/kernel/debug/tracing/events/resctrl/pseudo_lock_l2/enable
# echo 2 > /sys/kernel/debug/resctrl/newlock/pseudo_lock_measure
# echo 0 > /sys/kernel/debug/tracing/events/resctrl/pseudo_lock_l2/enable
# cat /sys/kernel/debug/tracing/trace

# tracer: nop
#
#                              _-----=> irqs-off
#                             / _----=> need-resched
#                            | / _---=> hardirq/softirq
#                            || / _--=> preempt-depth
#                            ||| /     delay
#           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
#              | |       |   ||||       |         |
 pseudo_lock_mea-1672  [002] ....  3132.860500: pseudo_lock_l2: hits=4097 miss=0


Examples for RDT allocation usage:

Example 1
---------
On a two socket machine (one L3 cache per socket) with just four bits
for cache bit masks, a minimum b/w of 10% and a memory bandwidth
granularity of 10%:

# mount -t resctrl resctrl /sys/fs/resctrl
# cd /sys/fs/resctrl
# mkdir p0 p1
# echo -e "L3:0=3;1=c\nMB:0=50;1=50" > /sys/fs/resctrl/p0/schemata
# echo -e "L3:0=3;1=3\nMB:0=50;1=50" > /sys/fs/resctrl/p1/schemata

The default resource group is unmodified, so we have access to all parts
of all caches (its schemata file reads "L3:0=f;1=f").

Tasks that are under the control of group "p0" may only allocate from the
"lower" 50% on cache ID 0, and the "upper" 50% of cache ID 1.
Tasks in group "p1" use the "lower" 50% of cache on both sockets.

Similarly, tasks that are under the control of group "p0" may use a
maximum memory b/w of 50% on socket 0 and 50% on socket 1.
Tasks in group "p1" may also use 50% memory b/w on both sockets.
Note that unlike cache masks, memory b/w cannot specify whether these
allocations can overlap or not. The allocations specify the maximum
b/w that the group may be able to use and the system admin can configure
the b/w accordingly.

If MBA is specified in MBps (the "mba_MBps" mount option) then the user
can enter the max b/w in MBps rather than the percentage values:

# echo -e "L3:0=3;1=c\nMB:0=1024;1=500" > /sys/fs/resctrl/p0/schemata
# echo -e "L3:0=3;1=3\nMB:0=1024;1=500" > /sys/fs/resctrl/p1/schemata

In the above example the tasks in "p1" and "p0" on socket 0 would use a
max b/w of 1024MBps whereas on socket 1 they would use 500MBps.

Example 2
---------
Again two sockets, but this time with a more realistic 20-bit mask.

Two real time tasks pid=1234 running on processor 0 and pid=5678 running on
processor 1 on socket 0 on a 2-socket and dual core machine. To avoid noisy
neighbors, each of the two real-time tasks exclusively occupies one quarter
of L3 cache on socket 0.

# mount -t resctrl resctrl /sys/fs/resctrl
# cd /sys/fs/resctrl

First we reset the schemata for the default group so that the "upper"
50% of the L3 cache on socket 0 and 50% of memory b/w cannot be used by
ordinary tasks:

# echo -e "L3:0=3ff;1=fffff\nMB:0=50;1=100" > schemata

Next we make a resource group for our first real time task and give
it access to the "top" 25% of the cache on socket 0.

# mkdir p0
# echo "L3:0=f8000;1=fffff" > p0/schemata

Finally we move our first real time task into this resource group. We
also use taskset(1) to ensure the task always runs on a dedicated CPU
on socket 0. Most uses of resource groups will also constrain which
processors tasks run on.

# echo 1234 > p0/tasks
# taskset -cp 1 1234

Ditto for the second real time task (with the remaining 25% of cache):

# mkdir p1
# echo "L3:0=7c00;1=fffff" > p1/schemata
# echo 5678 > p1/tasks
# taskset -cp 2 5678

For the same 2 socket system with the memory b/w resource and CAT L3 the
schemata would look like this (assuming min_bandwidth is 10 and
bandwidth_gran is 10):

For our first real time task this would request 20% memory b/w on socket
0.

# echo -e "L3:0=f8000;1=fffff\nMB:0=20;1=100" > p0/schemata

For our second real time task this would request another 20% memory b/w
on socket 0.

# echo -e "L3:0=7c00;1=fffff\nMB:0=20;1=100" > p1/schemata

Example 3
---------

A single socket system which has real-time tasks running on cores 4-7 and
a non real-time workload assigned to cores 0-3. The real-time tasks share
text and data, so a per task association is not required and due to
interaction with the kernel it's desired that the kernel on these cores
shares L3 with the tasks.

# mount -t resctrl resctrl /sys/fs/resctrl
# cd /sys/fs/resctrl

First we reset the schemata for the default group so that the "upper"
50% of the L3 cache on socket 0, and 50% of memory bandwidth on socket 0
cannot be used by ordinary tasks:

# echo -e "L3:0=3ff\nMB:0=50" > schemata

Next we make a resource group for our real time cores and give it access
to the "top" 50% of the cache on socket 0 and 50% of memory bandwidth on
socket 0.

# mkdir p0
# echo -e "L3:0=ffc00\nMB:0=50" > p0/schemata

Finally we move cores 4-7 over to the new group and make sure that the
kernel and the tasks running there get 50% of the cache. They should
also get 50% of memory bandwidth assuming that the cores 4-7 are SMT
siblings and only the real time threads are scheduled on the cores 4-7.

# echo F0 > p0/cpus

Example 4
---------

The resource groups in previous examples were all in the default "shareable"
mode allowing sharing of their cache allocations. If one resource group
configures a cache allocation then nothing prevents another resource group
from overlapping with that allocation.

In this example a new exclusive resource group will be created on a L2 CAT
system with two L2 cache instances that can be configured with an 8-bit
capacity bitmask. The new exclusive resource group will be configured to use
25% of each cache instance.

# mount -t resctrl resctrl /sys/fs/resctrl/
# cd /sys/fs/resctrl

First, we observe that the default group is configured to allocate to all L2
cache:

# cat schemata
L2:0=ff;1=ff

We could attempt to create the new resource group at this point, but it will
fail because of the overlap with the schemata of the default group:

# mkdir p0
# echo 'L2:0=0x3;1=0x3' > p0/schemata
# cat p0/mode
shareable
# echo exclusive > p0/mode
-sh: echo: write error: Invalid argument
# cat info/last_cmd_status
schemata overlaps

To ensure that there is no overlap with another resource group the default
resource group's schemata has to change, making it possible for the new
resource group to become exclusive:

# echo 'L2:0=0xfc;1=0xfc' > schemata
# echo exclusive > p0/mode
# grep . p0/*
p0/cpus:0
p0/mode:exclusive
p0/schemata:L2:0=03;1=03
p0/size:L2:0=262144;1=262144

A new resource group will on creation not overlap with an exclusive resource
group:

# mkdir p1
# grep . p1/*
p1/cpus:0
p1/mode:shareable
p1/schemata:L2:0=fc;1=fc
p1/size:L2:0=786432;1=786432

The bit_usage will reflect how the cache is used:

# cat info/L2/bit_usage
0=SSSSSSEE;1=SSSSSSEE

A resource group cannot be forced to overlap with an exclusive resource group:

# echo 'L2:0=0x1;1=0x1' > p1/schemata
-sh: echo: write error: Invalid argument
# cat info/last_cmd_status
overlaps with exclusive group

Example of Cache Pseudo-Locking
-------------------------------
Lock a portion of L2 cache from cache id 1 using CBM 0x3. The pseudo-locked
region is exposed at /dev/pseudo_lock/newlock and can be provided to an
application as an argument to mmap().

# mount -t resctrl resctrl /sys/fs/resctrl/
# cd /sys/fs/resctrl

Ensure that there are bits available that can be pseudo-locked. Since only
unused bits can be pseudo-locked, the bits to be pseudo-locked need to be
removed from the default resource group's schemata:

# cat info/L2/bit_usage
0=SSSSSSSS;1=SSSSSSSS
# echo 'L2:1=0xfc' > schemata
# cat info/L2/bit_usage
0=SSSSSSSS;1=SSSSSS00

Create a new resource group that will be associated with the pseudo-locked
region, indicate that it will be used for a pseudo-locked region, and
configure the requested pseudo-locked region capacity bitmask:

# mkdir newlock
# echo pseudo-locksetup > newlock/mode
# echo 'L2:1=0x3' > newlock/schemata

On success the resource group's mode will change to pseudo-locked, the
bit_usage will reflect the pseudo-locked region, and the character device
exposing the pseudo-locked region will exist:

# cat newlock/mode
pseudo-locked
# cat info/L2/bit_usage
0=SSSSSSSS;1=SSSSSSPP
# ls -l /dev/pseudo_lock/newlock
crw------- 1 root root 243, 0 Apr  3 05:01 /dev/pseudo_lock/newlock

/*
 * Example code to access one page of pseudo-locked cache region
 * from user space.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>

/*
 * It is required that the application runs with affinity to only
 * cores associated with the pseudo-locked region. Here the cpu
 * is hardcoded for convenience of example.
 */
static int cpuid = 2;

int main(int argc, char *argv[])
{
	cpu_set_t cpuset;
	long page_size;
	void *mapping;
	int dev_fd;
	int ret;

	page_size = sysconf(_SC_PAGESIZE);

	/* Pin to a core associated with the pseudo-locked region. */
	CPU_ZERO(&cpuset);
	CPU_SET(cpuid, &cpuset);
	ret = sched_setaffinity(0, sizeof(cpuset), &cpuset);
	if (ret < 0) {
		perror("sched_setaffinity");
		exit(EXIT_FAILURE);
	}

	dev_fd = open("/dev/pseudo_lock/newlock", O_RDWR);
	if (dev_fd < 0) {
		perror("open");
		exit(EXIT_FAILURE);
	}

	/* Map one page of the pseudo-locked region. */
	mapping = mmap(0, page_size, PROT_READ | PROT_WRITE, MAP_SHARED,
		       dev_fd, 0);
	if (mapping == MAP_FAILED) {
		perror("mmap");
		close(dev_fd);
		exit(EXIT_FAILURE);
	}

	/* Application interacts with pseudo-locked memory @mapping */

	ret = munmap(mapping, page_size);
	if (ret < 0) {
		perror("munmap");
		close(dev_fd);
		exit(EXIT_FAILURE);
	}

	close(dev_fd);
	exit(EXIT_SUCCESS);
}

Locking between applications
----------------------------

Certain operations on the resctrl filesystem, composed of read/writes
to/from multiple files, must be atomic.

As an example, the allocation of an exclusive reservation of L3 cache
involves:

1. Read the cbmmasks from each directory or the per-resource "bit_usage"
2. Find a contiguous set of bits in the global CBM bitmask that is clear
   in any of the directory cbmmasks
3. Create a new directory
4. Set the bits found in step 2 to the new directory "schemata" file

If two applications attempt to allocate space concurrently then they can
end up allocating the same bits so the reservations are shared instead of
exclusive.

To coordinate atomic operations on the resctrlfs and to avoid the problem
above, the following locking procedure is recommended:

Locking is based on flock, which is available in libc and also as a shell
script command.

Write lock:

 A) Take flock(LOCK_EX) on /sys/fs/resctrl
 B) Read/write the directory structure.
 C) funlock

Read lock:

 A) Take flock(LOCK_SH) on /sys/fs/resctrl
 B) If success read the directory structure.
 C) funlock

Example with bash:

# Atomically read directory structure
$ flock -s /sys/fs/resctrl/ find /sys/fs/resctrl

# Read directory contents and create new subdirectory

$ cat create-dir.sh
find /sys/fs/resctrl/ > output.txt
mask=$(function-of output.txt)
mkdir /sys/fs/resctrl/newres/
echo "$mask" > /sys/fs/resctrl/newres/schemata

$ flock /sys/fs/resctrl/ ./create-dir.sh

Example with C:

/*
 * Example code to take advisory locks
 * before accessing the resctrl filesystem.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/file.h>

void resctrl_take_shared_lock(int fd)
{
	int ret;

	/* take shared lock on resctrl filesystem */
	ret = flock(fd, LOCK_SH);
	if (ret) {
		perror("flock");
		exit(-1);
	}
}

void resctrl_take_exclusive_lock(int fd)
{
	int ret;

	/* take exclusive lock on resctrl filesystem */
	ret = flock(fd, LOCK_EX);
	if (ret) {
		perror("flock");
		exit(-1);
	}
}

void resctrl_release_lock(int fd)
{
	int ret;

	/* release lock on resctrl filesystem */
	ret = flock(fd, LOCK_UN);
	if (ret) {
		perror("flock");
		exit(-1);
	}
}

int main(void)
{
	int fd;

	fd = open("/sys/fs/resctrl", O_DIRECTORY);
	if (fd == -1) {
		perror("open");
		exit(-1);
	}
	resctrl_take_shared_lock(fd);
	/* code to read directory contents */
	resctrl_release_lock(fd);

	resctrl_take_exclusive_lock(fd);
	/* code to read and write directory contents */
	resctrl_release_lock(fd);

	close(fd);
	return 0;
}

Examples for RDT Monitoring along with allocation usage:

Reading monitored data
----------------------
Reading an event file (for ex: mon_data/mon_L3_00/llc_occupancy) would
show the current snapshot of LLC occupancy of the corresponding MON
group or CTRL_MON group.


Example 1 (Monitor CTRL_MON group and subset of tasks in CTRL_MON group)
---------
On a two socket machine (one L3 cache per socket) with just four bits
for cache bit masks:

# mount -t resctrl resctrl /sys/fs/resctrl
# cd /sys/fs/resctrl
# mkdir p0 p1
# echo "L3:0=3;1=c" > /sys/fs/resctrl/p0/schemata
# echo "L3:0=3;1=3" > /sys/fs/resctrl/p1/schemata
# echo 5678 > p1/tasks
# echo 5679 > p1/tasks

The default resource group is unmodified, so we have access to all parts
of all caches (its schemata file reads "L3:0=f;1=f").

Tasks that are under the control of group "p0" may only allocate from the
"lower" 50% on cache ID 0, and the "upper" 50% of cache ID 1.
Tasks in group "p1" use the "lower" 50% of cache on both sockets.

Create monitor groups and assign a subset of tasks to each monitor group.

# cd /sys/fs/resctrl/p1/mon_groups
# mkdir m11 m12
# echo 5678 > m11/tasks
# echo 5679 > m12/tasks

Fetch data (data shown in bytes):

# cat m11/mon_data/mon_L3_00/llc_occupancy
16234000
# cat m11/mon_data/mon_L3_01/llc_occupancy
14789000
# cat m12/mon_data/mon_L3_00/llc_occupancy
16789000

The parent CTRL_MON group shows the aggregated data.

# cat /sys/fs/resctrl/p1/mon_data/mon_L3_00/llc_occupancy
31234000

Example 2 (Monitor a task from its creation)
---------
On a two socket machine (one L3 cache per socket):

# mount -t resctrl resctrl /sys/fs/resctrl
# cd /sys/fs/resctrl
# mkdir p0 p1

An RMID is allocated to the group once it is created and hence the <cmd>
below is monitored from its creation.

# echo $$ > /sys/fs/resctrl/p1/tasks
# <cmd>

Fetch the data:

# cat /sys/fs/resctrl/p1/mon_data/mon_L3_00/llc_occupancy
31789000

Example 3 (Monitor without CAT support or before creating CAT groups)
---------

Assume a system like HSW that has only CQM and no CAT support. In this
case resctrl will still mount but cannot create CTRL_MON directories.
But the user can create different MON groups within the root group and
thereby monitor all tasks including kernel threads.

This can also be used to profile a job's cache size footprint before it
is allocated to different allocation groups.

# mount -t resctrl resctrl /sys/fs/resctrl
# cd /sys/fs/resctrl
# mkdir mon_groups/m01
# mkdir mon_groups/m02

# echo 3478 > /sys/fs/resctrl/mon_groups/m01/tasks
# echo 2467 > /sys/fs/resctrl/mon_groups/m02/tasks

Monitor the groups separately and also get per domain data. From the
data below it is apparent that the tasks are mostly doing work on
domain (socket) 0.

# cat /sys/fs/resctrl/mon_groups/m01/mon_data/mon_L3_00/llc_occupancy
31234000
# cat /sys/fs/resctrl/mon_groups/m01/mon_data/mon_L3_01/llc_occupancy
34555
# cat /sys/fs/resctrl/mon_groups/m02/mon_data/mon_L3_00/llc_occupancy
31234000
# cat /sys/fs/resctrl/mon_groups/m02/mon_data/mon_L3_01/llc_occupancy
32789


Example 4 (Monitor real time tasks)
-----------------------------------

A single socket system which has real time tasks running on cores 4-7
and non real time tasks on other cpus. We want to monitor the cache
occupancy of the real time threads on these cores.

# mount -t resctrl resctrl /sys/fs/resctrl
# cd /sys/fs/resctrl
# mkdir p1

Move the cpus 4-7 over to p1:

# echo f0 > p1/cpus

View the llc occupancy snapshot:

# cat /sys/fs/resctrl/p1/mon_data/mon_L3_00/llc_occupancy
11234000