User Interface for Resource Allocation in Intel Resource Director Technology

Copyright (C) 2016 Intel Corporation

Fenghua Yu <fenghua.yu@intel.com>
Tony Luck <tony.luck@intel.com>
Vikas Shivappa <vikas.shivappa@intel.com>

This feature is enabled by the CONFIG_INTEL_RDT Kconfig and the
X86 /proc/cpuinfo flag bits:
RDT (Resource Director Technology) Allocation - "rdt_a"
CAT (Cache Allocation Technology) - "cat_l3", "cat_l2"
CDP (Code and Data Prioritization) - "cdp_l3", "cdp_l2"
CQM (Cache QoS Monitoring) - "cqm_llc", "cqm_occup_llc"
MBM (Memory Bandwidth Monitoring) - "cqm_mbm_total", "cqm_mbm_local"
MBA (Memory Bandwidth Allocation) - "mba"

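A quick way to see which of these flags a system advertises is to grep
/proc/cpuinfo for them. The output below is only an illustration; the
set of flags printed depends on the CPU:

 # grep -o 'rdt_a\|cat_l3\|cat_l2\|cdp_l3\|cdp_l2\|cqm_llc\|cqm_occup_llc\|cqm_mbm_total\|cqm_mbm_local\|mba' /proc/cpuinfo | sort -u
 cat_l3
 cqm_llc
 cqm_mbm_local
 cqm_mbm_total
 cqm_occup_llc
 mba
 rdt_a
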
To use the feature mount the file system:

 # mount -t resctrl resctrl [-o cdp[,cdpl2][,mba_MBps]] /sys/fs/resctrl

mount options are:

"cdp": Enable code/data prioritization in L3 cache allocations.
"cdpl2": Enable code/data prioritization in L2 cache allocations.
"mba_MBps": Enable the MBA Software Controller (mba_sc) to specify MBA
 bandwidth in MBps.

L2 and L3 CDP are controlled separately.

RDT features are orthogonal. A particular system may support only
monitoring, only control, or both monitoring and control. Cache
pseudo-locking is a unique way of using cache control to "pin" or
"lock" data in the cache. Details can be found in
"Cache Pseudo-Locking".


The mount succeeds if either allocation or monitoring is present, but
only those files and directories supported by the system will be created.
For more details on the behavior of the interface during monitoring
and allocation, see the "Resource alloc and monitor groups" section.

Info directory
--------------

The 'info' directory contains information about the enabled
resources. Each resource has its own subdirectory. The subdirectory
names reflect the resource names.

Each subdirectory contains the following files with respect to
allocation:

Cache resource (L3/L2) subdirectories contain the following files
related to allocation:

"num_closids":   The number of CLOSIDs which are valid for this
                 resource. The kernel uses the smallest number of
                 CLOSIDs of all enabled resources as limit.

"cbm_mask":      The bitmask which is valid for this resource.
                 This mask is equivalent to 100%.

"min_cbm_bits":  The minimum number of consecutive bits which
                 must be set when writing a mask.

"shareable_bits": Bitmask of shareable resource with other executing
                 entities (e.g. I/O). User can use this when
                 setting up exclusive cache partitions. Note that
                 some platforms support devices that have their
                 own settings for cache use which can override
                 these bits.
"bit_usage":     Annotated capacity bitmasks showing how all
                 instances of the resource are used. The legend is:
        "0" - Corresponding region is unused. When the system's
              resources have been allocated and a "0" is found
              in "bit_usage" it is a sign that resources are
              wasted.
        "H" - Corresponding region is used by hardware only
              but available for software use. If a resource
              has bits set in "shareable_bits" but not all
              of these bits appear in the resource groups'
              schemata then the bits appearing in
              "shareable_bits" but in no resource group will
              be marked as "H".
        "X" - Corresponding region is available for sharing and
              used by hardware and software. These are the
              bits that appear in "shareable_bits" as
              well as a resource group's allocation.
        "S" - Corresponding region is used by software
              and available for sharing.
        "E" - Corresponding region is used exclusively by
              one resource group. No sharing allowed.
        "P" - Corresponding region is pseudo-locked. No
              sharing allowed.

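For example, on a hypothetical system with a 20-bit L3 capacity mask the
cache allocation files might read as follows (the values are illustrative
and vary by CPU model):

 # grep . info/L3/num_closids info/L3/cbm_mask info/L3/min_cbm_bits
 info/L3/num_closids:16
 info/L3/cbm_mask:fffff
 info/L3/min_cbm_bits:1
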
The memory bandwidth (MB) subdirectory contains the following files
with respect to allocation:

"min_bandwidth":  The minimum memory bandwidth percentage which
                  user can request.

"bandwidth_gran": The granularity in which the memory bandwidth
                  percentage is allocated. The allocated
                  b/w percentage is rounded off to the next
                  control step available on the hardware. The
                  available bandwidth control steps are:
                  min_bandwidth + N * bandwidth_gran.

"delay_linear":   Indicates if the delay scale is linear or
                  non-linear. This field is purely informational.

If RDT monitoring is available there will be an "L3_MON" directory
with the following files:

"num_rmids":      The number of RMIDs available. This is the
                  upper bound for how many "CTRL_MON" + "MON"
                  groups can be created.

"mon_features":   Lists the monitoring events if
                  monitoring is enabled for the resource.

"max_threshold_occupancy":
                  Read/write file provides the largest value (in
                  bytes) at which a previously used LLC_occupancy
                  counter can be considered for re-use.

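For example, on a system where all three L3 monitoring events are
available "mon_features" might read (illustrative output):

 # cat info/L3_MON/mon_features
 llc_occupancy
 mbm_total_bytes
 mbm_local_bytes
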
Finally, in the top level of the "info" directory there is a file
named "last_cmd_status". This is reset with every "command" issued
via the file system (making new directories or writing to any of the
control files). If the command was successful, it will read as "ok".
If the command failed, it will provide more information than can be
conveyed in the error returns from file operations. E.g.

 # echo L3:0=f7 > schemata
 bash: echo: write error: Invalid argument
 # cat info/last_cmd_status
 mask f7 has non-consecutive 1-bits

Resource alloc and monitor groups
---------------------------------

Resource groups are represented as directories in the resctrl file
system. The default group is the root directory which, immediately
after mounting, owns all the tasks and cpus in the system and can make
full use of all resources.

On a system with RDT control features additional directories can be
created in the root directory that specify different amounts of each
resource (see "schemata" below). The root and these additional top level
directories are referred to as "CTRL_MON" groups below.

On a system with RDT monitoring the root directory and other top level
directories contain a directory named "mon_groups" in which additional
directories can be created to monitor subsets of tasks in the CTRL_MON
group that is their ancestor. These are called "MON" groups in the rest
of this document.

Removing a directory will move all tasks and cpus owned by the group it
represents to the parent. Removing one of the created CTRL_MON groups
will automatically remove all MON groups below it.

All groups contain the following files:

"tasks":
        Reading this file shows the list of all tasks that belong to
        this group. Writing a task id to the file will add a task to the
        group. If the group is a CTRL_MON group the task is removed from
        whichever previous CTRL_MON group owned the task and also from
        any MON group that owned the task. If the group is a MON group,
        then the task must already belong to the CTRL_MON parent of this
        group. The task is removed from any previous MON group.


"cpus":
        Reading this file shows a bitmask of the logical CPUs owned by
        this group. Writing a mask to this file will add and remove
        CPUs to/from this group. As with the tasks file a hierarchy is
        maintained where MON groups may only include CPUs owned by the
        parent CTRL_MON group.


"cpus_list":
        Just like "cpus", only using ranges of CPUs instead of bitmasks.

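As a minimal illustration (the group name "grp0", task id 1234 and CPU
mask 0x3 are hypothetical), a task and the first two logical CPUs can be
moved into a new CTRL_MON group like this:

 # mkdir /sys/fs/resctrl/grp0
 # echo 1234 > /sys/fs/resctrl/grp0/tasks
 # echo 3 > /sys/fs/resctrl/grp0/cpus
 # cat /sys/fs/resctrl/grp0/cpus_list
 0-1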

When control is enabled all CTRL_MON groups will also contain:

"schemata":
        A list of all the resources available to this group.
        Each resource has its own line and format - see below for details.

"size":
        Mirrors the display of the "schemata" file to display the size in
        bytes of each allocation instead of the bits representing the
        allocation.

"mode":
        The "mode" of the resource group dictates the sharing of its
        allocations. A "shareable" resource group allows sharing of its
        allocations while an "exclusive" resource group does not. A
        cache pseudo-locked region is created by first writing
        "pseudo-locksetup" to the "mode" file before writing the cache
        pseudo-locked region's schemata to the resource group's "schemata"
        file. On successful pseudo-locked region creation the mode will
        automatically change to "pseudo-locked".

When monitoring is enabled all MON groups will also contain:

"mon_data":
        This contains a set of files organized by L3 domain and by
        RDT event. E.g. on a system with two L3 domains there will
        be subdirectories "mon_L3_00" and "mon_L3_01". Each of these
        directories has one file per event (e.g. "llc_occupancy",
        "mbm_total_bytes", and "mbm_local_bytes"). In a MON group these
        files provide a read out of the current value of the event for
        all tasks in the group. In CTRL_MON groups these files provide
        the sum for all tasks in the CTRL_MON group and all tasks in
        MON groups. Please see example section for more details on usage.

Resource allocation rules
-------------------------
When a task is running the following rules define which resources are
available to it:

1) If the task is a member of a non-default group, then the schemata
   for that group is used.

2) Else if the task belongs to the default group, but is running on a
   CPU that is assigned to some specific group, then the schemata for the
   CPU's group is used.

3) Otherwise the schemata for the default group is used.

Resource monitoring rules
-------------------------
1) If a task is a member of a MON group, or non-default CTRL_MON group
   then RDT events for the task will be reported in that group.

2) If a task is a member of the default CTRL_MON group, but is running
   on a CPU that is assigned to some specific group, then the RDT events
   for the task will be reported in that group.

3) Otherwise RDT events for the task will be reported in the root level
   "mon_data" group.


Notes on cache occupancy monitoring and control
-----------------------------------------------
When moving a task from one group to another you should remember that
this only affects *new* cache allocations by the task. E.g. you may have
a task in a monitor group showing 3 MB of cache occupancy. If you move
it to a new group and immediately check the occupancy of the old and new
groups you will likely see that the old group is still showing 3 MB and
the new group zero. When the task accesses locations still in cache from
before the move, the h/w does not update any counters. On a busy system
you will likely see the occupancy in the old group go down as cache lines
are evicted and re-used while the occupancy in the new group rises as
the task accesses memory and loads into the cache are counted based on
membership in the new group.

The same applies to cache allocation control. Moving a task to a group
with a smaller cache partition will not evict any cache lines. The
process may continue to use them from the old partition.

Hardware uses a CLOSid (Class of service ID) and an RMID (Resource
monitoring ID) to identify a control group and a monitoring group
respectively. Each of the resource groups is mapped to these IDs based
on the kind of group. The number of CLOSids and RMIDs is limited by the
hardware and hence the creation of a "CTRL_MON" directory may fail if we
run out of either CLOSID or RMID, and creation of a "MON" group may fail
if we run out of RMIDs.

max_threshold_occupancy - generic concepts
------------------------------------------

Note that an RMID once freed may not be immediately available for use as
the RMID is still tagged to the cache lines of the previous user of the
RMID. Hence such RMIDs are placed on a limbo list and checked back in
once their cache occupancy has gone down. If at some point the system
has a lot of limbo RMIDs which are not yet ready to be used, the user
may see an -EBUSY during mkdir.

max_threshold_occupancy is a user configurable value to determine the
occupancy at which an RMID can be freed.
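
For example, to inspect the current threshold and raise it (the values
shown are purely illustrative; the threshold is specified in bytes):

 # cat info/L3_MON/max_threshold_occupancy
 540672
 # echo 1048576 > info/L3_MON/max_threshold_occupancy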
Fenghua Yuf20e5782016-10-28 15:04:40 -0700285
286Schemata files - general concepts
287---------------------------------
288Each line in the file describes one resource. The line starts with
289the name of the resource, followed by specific values to be applied
290in each of the instances of that resource on the system.
291
292Cache IDs
293---------
294On current generation systems there is one L3 cache per socket and L2
295caches are generally just shared by the hyperthreads on a core, but this
296isn't an architectural requirement. We could have multiple separate L3
297caches on a socket, multiple cores could share an L2 cache. So instead
298of using "socket" or "core" to define the set of logical cpus sharing
299a resource we use a "Cache ID". At a given cache level this will be a
300unique number across the whole system (but it isn't guaranteed to be a
301contiguous sequence, there may be gaps). To find the ID for each logical
302CPU look in /sys/devices/system/cpu/cpu*/cache/index*/id
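
For example, to find the L3 cache ID (typically cache index3 in sysfs)
that logical CPU 0 belongs to (output is illustrative):

 # cat /sys/devices/system/cpu/cpu0/cache/index3/id
 0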

Cache Bit Masks (CBM)
---------------------
For cache resources we describe the portion of the cache that is available
for allocation using a bitmask. The maximum value of the mask is defined
by each cpu model (and may be different for different cache levels). It
is found using CPUID, but is also provided in the "info" directory of
the resctrl file system in "info/{resource}/cbm_mask". X86 hardware
requires that these masks have all the '1' bits in a contiguous block. So
0x3, 0x6 and 0xC are legal 4-bit masks with two bits set, but 0x5, 0x9
and 0xA are not. On a system with a 20-bit mask each bit represents 5%
of the capacity of the cache. You could partition the cache into four
equal parts with masks: 0x1f, 0x3e0, 0x7c00, 0xf8000.
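
The contiguous-bits rule can be captured with a small helper. The
function below is only an illustration and not part of the resctrl
interface: it builds a mask of 'bits' consecutive ones starting at bit
'shift', so make_cbm(5, 0), make_cbm(5, 5), make_cbm(5, 10) and
make_cbm(5, 15) yield the four masks shown above.

/*
 * Illustrative helper, not part of any kernel or resctrl API: build a
 * contiguous capacity bitmask of 'bits' ones starting at 'shift'.
 * make_cbm(5, 5) == 0x3e0, i.e. the second 25% slice of a 20-bit mask.
 */
unsigned int make_cbm(unsigned int bits, unsigned int shift)
{
        return ((1u << bits) - 1) << shift;
}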

Memory bandwidth Allocation and monitoring
------------------------------------------

For the memory bandwidth resource, by default the user controls the
resource by indicating the percentage of total memory bandwidth.

The minimum bandwidth percentage value for each cpu model is predefined
and can be looked up through "info/MB/min_bandwidth". The bandwidth
granularity that is allocated is also dependent on the cpu model and can
be looked up at "info/MB/bandwidth_gran". The available bandwidth
control steps are: min_bw + N * bw_gran. Intermediate values are rounded
to the next control step available on the hardware.

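As an illustration of that rounding rule, the helper below (not part of
any kernel API, and assuming requests are rounded up to the next
available step) computes the value the hardware would actually apply:

/*
 * Sketch only: map a requested bandwidth percentage to the control
 * step actually applied, i.e. min_bw + N * gran, rounding up.
 * With min_bw = 10 and gran = 10 a request of 35 becomes 40.
 */
unsigned int mba_step(unsigned int req, unsigned int min_bw, unsigned int gran)
{
        if (req <= min_bw)
                return min_bw;
        return min_bw + ((req - min_bw + gran - 1) / gran) * gran;
}
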
The bandwidth throttling is a core specific mechanism on some Intel
SKUs. Using a high bandwidth and a low bandwidth setting on two threads
sharing a core will result in both threads being throttled to use the
low bandwidth. The fact that memory bandwidth allocation (MBA) is a core
specific mechanism whereas memory bandwidth monitoring (MBM) is done at
the package level may lead to confusion when users try to apply control
via MBA and then monitor the bandwidth to see if the controls are
effective. Below are such scenarios:

1. User may *not* see an increase in actual bandwidth when percentage
   values are increased:

This can occur when aggregate L2 external bandwidth is more than L3
external bandwidth. Consider an SKL SKU with 24 cores on a package and
where L2 external bandwidth is 10GBps (hence aggregate L2 external
bandwidth is 240GBps) and L3 external bandwidth is 100GBps. Now a
workload with '20 threads, having 50% bandwidth, each consuming 5GBps'
consumes the max L3 bandwidth of 100GBps although the percentage value
specified is only 50% << 100%. Hence increasing the bandwidth percentage
will not yield any more bandwidth. This is because although the L2
external bandwidth still has capacity, the L3 external bandwidth is
fully used. Also note that this would be dependent on the number of
cores the benchmark is run on.

2. Same bandwidth percentage may mean different actual bandwidth
   depending on # of threads:

For the same SKU as in #1, a 'single thread, with 10% bandwidth' and '4
threads, with 10% bandwidth' can consume up to 10GBps and 40GBps
respectively although they have the same percentage bandwidth of 10%.
This is simply because as threads start using more cores in an rdtgroup,
the actual bandwidth may increase or vary although the user specified
bandwidth percentage is the same.

In order to mitigate this and make the interface more user friendly,
resctrl added support for specifying the bandwidth in MBps as well. The
kernel underneath would use a software feedback mechanism or "Software
Controller (mba_sc)" which reads the actual bandwidth using MBM counters
and adjusts the memory bandwidth percentages to ensure

        "actual bandwidth < user specified bandwidth".

By default, the schemata would take the bandwidth percentage values
whereas the user can switch to the "MBA software controller" mode using
a mount option 'mba_MBps'. The schemata format is specified in the
sections below.
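
For example, to mount resctrl with the software controller enabled so
that the MB schemata lines take MBps values:

 # mount -t resctrl resctrl -o mba_MBps /sys/fs/resctrl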

L3 schemata file details (code and data prioritization disabled)
----------------------------------------------------------------
With CDP disabled the L3 schemata format is:

        L3:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...

L3 schemata file details (CDP enabled via mount option to resctrl)
------------------------------------------------------------------
When CDP is enabled L3 control is split into two separate resources
so you can specify independent masks for code and data like this:

        L3data:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...
        L3code:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...

L2 schemata file details
------------------------
L2 cache does not support code and data prioritization, so the
schemata format is always:

        L2:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...

Memory bandwidth Allocation (default mode)
------------------------------------------

Memory b/w domain is L3 cache.

        MB:<cache_id0>=bandwidth0;<cache_id1>=bandwidth1;...

Memory bandwidth Allocation specified in MBps
---------------------------------------------

Memory bandwidth domain is L3 cache.

        MB:<cache_id0>=bw_MBps0;<cache_id1>=bw_MBps1;...

Reading/writing the schemata file
---------------------------------
Reading the schemata file will show the state of all resources
on all domains. When writing you only need to specify those values
which you wish to change. E.g.

# cat schemata
L3DATA:0=fffff;1=fffff;2=fffff;3=fffff
L3CODE:0=fffff;1=fffff;2=fffff;3=fffff
# echo "L3DATA:2=3c0;" > schemata
# cat schemata
L3DATA:0=fffff;1=fffff;2=3c0;3=fffff
L3CODE:0=fffff;1=fffff;2=fffff;3=fffff

Cache Pseudo-Locking
--------------------
CAT enables a user to specify the amount of cache space that an
application can fill. Cache pseudo-locking builds on the fact that a
CPU can still read and write data pre-allocated outside its current
allocated area on a cache hit. With cache pseudo-locking, data can be
preloaded into a reserved portion of cache that no application can
fill, and from that point on will only serve cache hits. The cache
pseudo-locked memory is made accessible to user space where an
application can map it into its virtual address space and thus have
a region of memory with reduced average read latency.

The creation of a cache pseudo-locked region is triggered by a request
from the user to do so that is accompanied by a schemata of the region
to be pseudo-locked. The cache pseudo-locked region is created as follows:
- Create a CAT allocation CLOSNEW with a CBM matching the schemata
  from the user of the cache region that will contain the pseudo-locked
  memory. This region must not overlap with any current CAT allocation/CLOS
  on the system and no future overlap with this cache region is allowed
  while the pseudo-locked region exists.
- Create a contiguous region of memory of the same size as the cache
  region.
- Flush the cache, disable hardware prefetchers, disable preemption.
- Make CLOSNEW the active CLOS and touch the allocated memory to load
  it into the cache.
- Set the previous CLOS as active.
- At this point the closid CLOSNEW can be released - the cache
  pseudo-locked region is protected as long as its CBM does not appear in
  any CAT allocation. Even though the cache pseudo-locked region will from
  this point on not appear in any CBM of any CLOS an application running with
  any CLOS will be able to access the memory in the pseudo-locked region since
  the region continues to serve cache hits.
- The contiguous region of memory loaded into the cache is exposed to
  user-space as a character device.

Cache pseudo-locking increases the probability that data will remain
in the cache via carefully configuring the CAT feature and controlling
application behavior. There is no guarantee that data is placed in
cache. Instructions like INVD, WBINVD, CLFLUSH, etc. can still evict
"locked" data from cache. Power management C-states may shrink or
power off cache. Deeper C-states will automatically be restricted on
pseudo-locked region creation.

It is required that an application using a pseudo-locked region runs
with affinity to the cores (or a subset of the cores) associated
with the cache on which the pseudo-locked region resides. A sanity check
within the code will not allow an application to map pseudo-locked memory
unless it runs with affinity to cores associated with the cache on which the
pseudo-locked region resides. The sanity check is only done during the
initial mmap() handling, there is no enforcement afterwards and the
application itself needs to ensure it remains affine to the correct cores.

Pseudo-locking is accomplished in two stages:
1) During the first stage the system administrator allocates a portion
   of cache that should be dedicated to pseudo-locking. At this time an
   equivalent portion of memory is allocated, loaded into the allocated
   cache portion, and exposed as a character device.
2) During the second stage a user-space application maps (mmap()) the
   pseudo-locked memory into its address space.

Cache Pseudo-Locking Interface
------------------------------
A pseudo-locked region is created using the resctrl interface as follows:

1) Create a new resource group by creating a new directory in /sys/fs/resctrl.
2) Change the new resource group's mode to "pseudo-locksetup" by writing
   "pseudo-locksetup" to the "mode" file.
3) Write the schemata of the pseudo-locked region to the "schemata" file. All
   bits within the schemata should be "unused" according to the "bit_usage"
   file.

On successful pseudo-locked region creation the "mode" file will contain
"pseudo-locked" and a new character device with the same name as the resource
group will exist in /dev/pseudo_lock. This character device can be mmap()'ed
by user space in order to obtain access to the pseudo-locked memory region.

An example of cache pseudo-locked region creation and usage can be found below.

Cache Pseudo-Locking Debugging Interface
----------------------------------------
The pseudo-locking debugging interface is enabled by default (if
CONFIG_DEBUG_FS is enabled) and can be found in /sys/kernel/debug/resctrl.

There is no explicit way for the kernel to test if a provided memory
location is present in the cache. The pseudo-locking debugging interface uses
the tracing infrastructure to provide two ways to measure cache residency of
the pseudo-locked region:
1) Memory access latency using the pseudo_lock_mem_latency tracepoint. Data
   from these measurements are best visualized using a hist trigger (see
   example below). In this test the pseudo-locked region is traversed at
   a stride of 32 bytes while hardware prefetchers and preemption
   are disabled. This also provides a substitute visualization of cache
   hits and misses.
2) Cache hit and miss measurements using model specific precision counters if
   available. Depending on the levels of cache on the system the pseudo_lock_l2
   and pseudo_lock_l3 tracepoints are available.
   WARNING: triggering this measurement uses from two (for just L2
   measurements) to four (for L2 and L3 measurements) precision counters on
   the system, if any other measurements are in progress the counters and
   their corresponding event registers will be clobbered.

When a pseudo-locked region is created a new debugfs directory is created for
it in debugfs as /sys/kernel/debug/resctrl/<newdir>. A single
write-only file, pseudo_lock_measure, is present in this directory. The
measurement on the pseudo-locked region depends on the number, 1 or 2,
written to this debugfs file. Since the measurements are recorded with the
tracing infrastructure the relevant tracepoints need to be enabled before the
measurement is triggered.

Example of latency debugging interface:
In this example a pseudo-locked region named "newlock" was created. Here is
how we can measure the latency in cycles of reading from this region and
visualize this data with a histogram that is available if CONFIG_HIST_TRIGGERS
is set:
# :> /sys/kernel/debug/tracing/trace
# echo 'hist:keys=latency' > /sys/kernel/debug/tracing/events/resctrl/pseudo_lock_mem_latency/trigger
# echo 1 > /sys/kernel/debug/tracing/events/resctrl/pseudo_lock_mem_latency/enable
# echo 1 > /sys/kernel/debug/resctrl/newlock/pseudo_lock_measure
# echo 0 > /sys/kernel/debug/tracing/events/resctrl/pseudo_lock_mem_latency/enable
# cat /sys/kernel/debug/tracing/events/resctrl/pseudo_lock_mem_latency/hist

# event histogram
#
# trigger info: hist:keys=latency:vals=hitcount:sort=hitcount:size=2048 [active]
#

{ latency:        456 } hitcount:          1
{ latency:         50 } hitcount:         83
{ latency:         36 } hitcount:         96
{ latency:         44 } hitcount:        174
{ latency:         48 } hitcount:        195
{ latency:         46 } hitcount:        262
{ latency:         42 } hitcount:        693
{ latency:         40 } hitcount:       3204
{ latency:         38 } hitcount:       3484

Totals:
    Hits: 8192
    Entries: 9
    Dropped: 0

Example of cache hits/misses debugging:
In this example a pseudo-locked region named "newlock" was created on the L2
cache of a platform. Here is how we can obtain details of the cache hits
and misses using the platform's precision counters.

# :> /sys/kernel/debug/tracing/trace
# echo 1 > /sys/kernel/debug/tracing/events/resctrl/pseudo_lock_l2/enable
# echo 2 > /sys/kernel/debug/resctrl/newlock/pseudo_lock_measure
# echo 0 > /sys/kernel/debug/tracing/events/resctrl/pseudo_lock_l2/enable
# cat /sys/kernel/debug/tracing/trace

# tracer: nop
#
#                              _-----=> irqs-off
#                             / _----=> need-resched
#                            | / _---=> hardirq/softirq
#                            || / _--=> preempt-depth
#                            ||| /     delay
#           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
#              | |       |   ||||       |         |
 pseudo_lock_mea-1672  [002] ....  3132.860500: pseudo_lock_l2: hits=4097 miss=0


Examples for RDT allocation usage:

Example 1
---------
On a two socket machine (one L3 cache per socket) with just four bits
for cache bit masks, a minimum b/w of 10% and a memory bandwidth
granularity of 10%.

# mount -t resctrl resctrl /sys/fs/resctrl
# cd /sys/fs/resctrl
# mkdir p0 p1
# echo -e "L3:0=3;1=c\nMB:0=50;1=50" > /sys/fs/resctrl/p0/schemata
# echo -e "L3:0=3;1=3\nMB:0=50;1=50" > /sys/fs/resctrl/p1/schemata

The default resource group is unmodified, so we have access to all parts
of all caches (its schemata file reads "L3:0=f;1=f").

Tasks that are under the control of group "p0" may only allocate from the
"lower" 50% on cache ID 0, and the "upper" 50% of cache ID 1.
Tasks in group "p1" use the "lower" 50% of cache on both sockets.

Similarly, tasks that are under the control of group "p0" may use a
maximum memory b/w of 50% on socket 0 and 50% on socket 1.
Tasks in group "p1" may also use 50% memory b/w on both sockets.
Note that unlike cache masks, memory b/w cannot specify whether these
allocations can overlap or not. The allocation specifies the maximum
b/w that the group may be able to use and the system admin can configure
the b/w accordingly.

If MBA bandwidth is specified in MBps (via the "mba_MBps" mount option)
then the user can enter the maximum b/w in MBps rather than the
percentage values.

# echo -e "L3:0=3;1=c\nMB:0=1024;1=500" > /sys/fs/resctrl/p0/schemata
# echo -e "L3:0=3;1=3\nMB:0=1024;1=500" > /sys/fs/resctrl/p1/schemata

In the above example the tasks in "p1" and "p0" on socket 0 would use a
maximum b/w of 1024MBps whereas on socket 1 they would use 500MBps.

Example 2
---------
Again two sockets, but this time with a more realistic 20-bit mask.

Two real time tasks pid=1234 running on processor 0 and pid=5678 running on
processor 1 on socket 0 on a 2-socket and dual core machine. To avoid noisy
neighbors, each of the two real-time tasks exclusively occupies one quarter
of L3 cache on socket 0.

# mount -t resctrl resctrl /sys/fs/resctrl
# cd /sys/fs/resctrl

First we reset the schemata for the default group so that the "upper"
50% of the L3 cache on socket 0 and 50% of memory b/w cannot be used by
ordinary tasks:

# echo -e "L3:0=3ff;1=fffff\nMB:0=50;1=100" > schemata

Next we make a resource group for our first real time task and give
it access to the "top" 25% of the cache on socket 0.

# mkdir p0
# echo "L3:0=f8000;1=fffff" > p0/schemata

Finally we move our first real time task into this resource group. We
also use taskset(1) to ensure the task always runs on a dedicated CPU
on socket 0. Most uses of resource groups will also constrain which
processors tasks run on.

# echo 1234 > p0/tasks
# taskset -cp 1 1234

Ditto for the second real time task (with the remaining 25% of cache):

# mkdir p1
# echo "L3:0=7c00;1=fffff" > p1/schemata
# echo 5678 > p1/tasks
# taskset -cp 2 5678

For the same 2 socket system with the memory b/w resource and CAT L3 the
schemata would look like this (assuming min_bandwidth is 10 and
bandwidth_gran is 10):

For our first real time task this would request 20% memory b/w on socket
0.

# echo -e "L3:0=f8000;1=fffff\nMB:0=20;1=100" > p0/schemata

For our second real time task this would request another 20% memory b/w
on socket 0.

# echo -e "L3:0=7c00;1=fffff\nMB:0=20;1=100" > p1/schemata

Example 3
---------

A single socket system which has real-time tasks running on cores 4-7 and
a non real-time workload assigned to cores 0-3. The real-time tasks share
text and data, so a per task association is not required and due to
interaction with the kernel it's desired that the kernel on these cores
shares L3 with the tasks.

# mount -t resctrl resctrl /sys/fs/resctrl
# cd /sys/fs/resctrl

First we reset the schemata for the default group so that the "upper"
50% of the L3 cache on socket 0, and 50% of memory bandwidth on socket 0
cannot be used by ordinary tasks:

# echo -e "L3:0=3ff\nMB:0=50" > schemata

Next we make a resource group for our real time cores and give it access
to the "top" 50% of the cache on socket 0 and 50% of memory bandwidth on
socket 0.

# mkdir p0
# echo -e "L3:0=ffc00\nMB:0=50" > p0/schemata

Finally we move cores 4-7 over to the new group and make sure that the
kernel and the tasks running there get 50% of the cache. They should
also get 50% of memory bandwidth assuming that the cores 4-7 are SMT
siblings and only the real time threads are scheduled on the cores 4-7.

# echo F0 > p0/cpus

Example 4
---------

The resource groups in previous examples were all in the default "shareable"
mode allowing sharing of their cache allocations. If one resource group
configures a cache allocation then nothing prevents another resource group
from overlapping with that allocation.

In this example a new exclusive resource group will be created on a L2 CAT
system with two L2 cache instances that can be configured with an 8-bit
capacity bitmask. The new exclusive resource group will be configured to use
25% of each cache instance.

# mount -t resctrl resctrl /sys/fs/resctrl/
# cd /sys/fs/resctrl

First, we observe that the default group is configured to allocate to all L2
cache:

# cat schemata
L2:0=ff;1=ff

We could attempt to create the new resource group at this point, but it will
fail because of the overlap with the schemata of the default group:
# mkdir p0
# echo 'L2:0=0x3;1=0x3' > p0/schemata
# cat p0/mode
shareable
# echo exclusive > p0/mode
-sh: echo: write error: Invalid argument
# cat info/last_cmd_status
schemata overlaps

To ensure that there is no overlap with another resource group the default
resource group's schemata has to change, making it possible for the new
resource group to become exclusive.
# echo 'L2:0=0xfc;1=0xfc' > schemata
# echo exclusive > p0/mode
# grep . p0/*
p0/cpus:0
p0/mode:exclusive
p0/schemata:L2:0=03;1=03
p0/size:L2:0=262144;1=262144

A new resource group will on creation not overlap with an exclusive resource
group:
# mkdir p1
# grep . p1/*
p1/cpus:0
p1/mode:shareable
p1/schemata:L2:0=fc;1=fc
p1/size:L2:0=786432;1=786432

The bit_usage will reflect how the cache is used:
# cat info/L2/bit_usage
0=SSSSSSEE;1=SSSSSSEE

A resource group cannot be forced to overlap with an exclusive resource group:
# echo 'L2:0=0x1;1=0x1' > p1/schemata
-sh: echo: write error: Invalid argument
# cat info/last_cmd_status
overlaps with exclusive group

Example of Cache Pseudo-Locking
-------------------------------
Lock a portion of the L2 cache from cache id 1 using CBM 0x3. The
pseudo-locked region is exposed at /dev/pseudo_lock/newlock and can be
provided to an application as the argument to mmap().

# mount -t resctrl resctrl /sys/fs/resctrl/
# cd /sys/fs/resctrl

Ensure that there are bits available that can be pseudo-locked; since only
unused bits can be pseudo-locked, the bits to be pseudo-locked need to be
removed from the default resource group's schemata:
# cat info/L2/bit_usage
0=SSSSSSSS;1=SSSSSSSS
# echo 'L2:1=0xfc' > schemata
# cat info/L2/bit_usage
0=SSSSSSSS;1=SSSSSS00

Create a new resource group that will be associated with the pseudo-locked
region, indicate that it will be used for a pseudo-locked region, and
configure the requested pseudo-locked region capacity bitmask:

# mkdir newlock
# echo pseudo-locksetup > newlock/mode
# echo 'L2:1=0x3' > newlock/schemata

On success the resource group's mode will change to pseudo-locked, the
bit_usage will reflect the pseudo-locked region, and the character device
exposing the pseudo-locked region will exist:

# cat newlock/mode
pseudo-locked
# cat info/L2/bit_usage
0=SSSSSSSS;1=SSSSSSPP
# ls -l /dev/pseudo_lock/newlock
crw------- 1 root root 243, 0 Apr  3 05:01 /dev/pseudo_lock/newlock

/*
 * Example code to access one page of pseudo-locked cache region
 * from user space.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>

/*
 * It is required that the application runs with affinity to only
 * cores associated with the pseudo-locked region. Here the cpu
 * is hardcoded for convenience of example.
 */
static int cpuid = 2;

int main(int argc, char *argv[])
{
        cpu_set_t cpuset;
        long page_size;
        void *mapping;
        int dev_fd;
        int ret;

        page_size = sysconf(_SC_PAGESIZE);

        CPU_ZERO(&cpuset);
        CPU_SET(cpuid, &cpuset);
        ret = sched_setaffinity(0, sizeof(cpuset), &cpuset);
        if (ret < 0) {
                perror("sched_setaffinity");
                exit(EXIT_FAILURE);
        }

        dev_fd = open("/dev/pseudo_lock/newlock", O_RDWR);
        if (dev_fd < 0) {
                perror("open");
                exit(EXIT_FAILURE);
        }

        mapping = mmap(0, page_size, PROT_READ | PROT_WRITE, MAP_SHARED,
                       dev_fd, 0);
        if (mapping == MAP_FAILED) {
                perror("mmap");
                close(dev_fd);
                exit(EXIT_FAILURE);
        }

        /* Application interacts with pseudo-locked memory @mapping */

        ret = munmap(mapping, page_size);
        if (ret < 0) {
                perror("munmap");
                close(dev_fd);
                exit(EXIT_FAILURE);
        }

        close(dev_fd);
        exit(EXIT_SUCCESS);
}

Locking between applications
----------------------------

Certain operations on the resctrl filesystem, composed of read/writes
to/from multiple files, must be atomic.

As an example, the allocation of an exclusive reservation of L3 cache
involves:

  1. Read the cbmmasks from each directory or the per-resource "bit_usage"
  2. Find a contiguous set of bits in the global CBM bitmask that is clear
     in all of the directory cbmmasks
  3. Create a new directory
  4. Set the bits found in step 2 in the new directory's "schemata" file

If two applications attempt to allocate space concurrently then they can
end up allocating the same bits so the reservations are shared instead of
exclusive.

To coordinate atomic operations on the resctrlfs and to avoid the problem
above, the following locking procedure is recommended:

Locking is based on flock, which is available in libc and also as a shell
script command.

Write lock:

 A) Take flock(LOCK_EX) on /sys/fs/resctrl
 B) Read/write the directory structure.
 C) funlock

Read lock:

 A) Take flock(LOCK_SH) on /sys/fs/resctrl
 B) If success read the directory structure.
 C) funlock

Example with bash:

# Atomically read directory structure
$ flock -s /sys/fs/resctrl/ find /sys/fs/resctrl

# Read directory contents and create new subdirectory

$ cat create-dir.sh
find /sys/fs/resctrl/ > output.txt
mask=$(function-of output.txt)
mkdir /sys/fs/resctrl/newres/
echo "$mask" > /sys/fs/resctrl/newres/schemata

$ flock /sys/fs/resctrl/ ./create-dir.sh

Example with C:

/*
 * Example code to take advisory locks
 * before accessing resctrl filesystem
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/file.h>

void resctrl_take_shared_lock(int fd)
{
        int ret;

        /* take shared lock on resctrl filesystem */
        ret = flock(fd, LOCK_SH);
        if (ret) {
                perror("flock");
                exit(-1);
        }
}

void resctrl_take_exclusive_lock(int fd)
{
        int ret;

        /* take exclusive lock on resctrl filesystem */
        ret = flock(fd, LOCK_EX);
        if (ret) {
                perror("flock");
                exit(-1);
        }
}

void resctrl_release_lock(int fd)
{
        int ret;

        /* release lock on resctrl filesystem */
        ret = flock(fd, LOCK_UN);
        if (ret) {
                perror("flock");
                exit(-1);
        }
}

int main(void)
{
        int fd;

        fd = open("/sys/fs/resctrl", O_DIRECTORY);
        if (fd == -1) {
                perror("open");
                exit(-1);
        }
        resctrl_take_shared_lock(fd);
        /* code to read directory contents */
        resctrl_release_lock(fd);

        resctrl_take_exclusive_lock(fd);
        /* code to read and write directory contents */
        resctrl_release_lock(fd);

        return 0;
}

Examples for RDT Monitoring along with allocation usage:

Reading monitored data
----------------------
Reading an event file (for ex: mon_data/mon_L3_00/llc_occupancy) would
show the current snapshot of LLC occupancy of the corresponding MON
group or CTRL_MON group.


Example 1 (Monitor CTRL_MON group and subset of tasks in CTRL_MON group)
---------
On a two socket machine (one L3 cache per socket) with just four bits
for cache bit masks:

# mount -t resctrl resctrl /sys/fs/resctrl
# cd /sys/fs/resctrl
# mkdir p0 p1
# echo "L3:0=3;1=c" > /sys/fs/resctrl/p0/schemata
# echo "L3:0=3;1=3" > /sys/fs/resctrl/p1/schemata
# echo 5678 > p1/tasks
# echo 5679 > p1/tasks

The default resource group is unmodified, so we have access to all parts
of all caches (its schemata file reads "L3:0=f;1=f").

Tasks that are under the control of group "p0" may only allocate from the
"lower" 50% on cache ID 0, and the "upper" 50% of cache ID 1.
Tasks in group "p1" use the "lower" 50% of cache on both sockets.

Create monitor groups and assign a subset of tasks to each monitor group.

# cd /sys/fs/resctrl/p1/mon_groups
# mkdir m11 m12
# echo 5678 > m11/tasks
# echo 5679 > m12/tasks

Fetch the data (data shown in bytes):

# cat m11/mon_data/mon_L3_00/llc_occupancy
16234000
# cat m11/mon_data/mon_L3_01/llc_occupancy
14789000
# cat m12/mon_data/mon_L3_00/llc_occupancy
16789000

The parent CTRL_MON group shows the aggregated data.

# cat /sys/fs/resctrl/p1/mon_data/mon_L3_00/llc_occupancy
31234000

Example 2 (Monitor a task from its creation)
---------
On a two socket machine (one L3 cache per socket):

# mount -t resctrl resctrl /sys/fs/resctrl
# cd /sys/fs/resctrl
# mkdir p0 p1

An RMID is allocated to the group once it is created and hence the <cmd>
below is monitored from its creation.

# echo $$ > /sys/fs/resctrl/p1/tasks
# <cmd>

Fetch the data:

# cat /sys/fs/resctrl/p1/mon_data/mon_L3_00/llc_occupancy
31789000

Example 3 (Monitor without CAT support or before creating CAT groups)
---------

Assume a system like HSW has only CQM and no CAT support. In this case
resctrl will still mount but cannot create CTRL_MON directories.
However, the user can create different MON groups within the root group
and thereby monitor all tasks including kernel threads.

This can also be used to profile jobs' cache size footprint before being
able to allocate them to different allocation groups.

# mount -t resctrl resctrl /sys/fs/resctrl
# cd /sys/fs/resctrl
# mkdir mon_groups/m01
# mkdir mon_groups/m02

# echo 3478 > /sys/fs/resctrl/mon_groups/m01/tasks
# echo 2467 > /sys/fs/resctrl/mon_groups/m02/tasks

Monitor the groups separately and also get per domain data. From the
output below it is apparent that the tasks are mostly doing work on
domain (socket) 0.

# cat /sys/fs/resctrl/mon_groups/m01/mon_data/mon_L3_00/llc_occupancy
31234000
# cat /sys/fs/resctrl/mon_groups/m01/mon_data/mon_L3_01/llc_occupancy
34555
# cat /sys/fs/resctrl/mon_groups/m02/mon_data/mon_L3_00/llc_occupancy
31234000
# cat /sys/fs/resctrl/mon_groups/m02/mon_data/mon_L3_01/llc_occupancy
32789


Example 4 (Monitor real time tasks)
-----------------------------------

A single socket system which has real time tasks running on cores 4-7
and non real time tasks on other cpus. We want to monitor the cache
occupancy of the real time threads on these cores.

# mount -t resctrl resctrl /sys/fs/resctrl
# cd /sys/fs/resctrl
# mkdir p1

Move the cpus 4-7 over to p1
# echo f0 > p1/cpus

View the llc occupancy snapshot

# cat /sys/fs/resctrl/p1/mon_data/mon_L3_00/llc_occupancy
11234000