/*
 * Copyright © 2015-2016 Intel Corporation
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice (including the next
 * paragraph) shall be included in all copies or substantial portions of the
 * Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 * IN THE SOFTWARE.
 *
 * Authors:
 *   Robert Bragg <robert@sixbynine.org>
 */

/**
 * DOC: i915 Perf Overview
 *
 * Gen graphics supports a large number of performance counters that can help
 * driver and application developers understand and optimize their use of the
 * GPU.
 *
 * This i915 perf interface enables userspace to configure and open a file
 * descriptor representing a stream of GPU metrics which can then be read() as
 * a stream of sample records.
 *
 * The interface is particularly suited to exposing buffered metrics that are
 * captured by DMA from the GPU, unsynchronized with and unrelated to the CPU.
 *
 * Streams representing a single context are accessible to applications with a
 * corresponding drm file descriptor, such that OpenGL can use the interface
 * without special privileges. Access to system-wide metrics requires root
 * privileges by default, unless changed via the dev.i915.perf_stream_paranoid
 * sysctl option.
 *
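 * For example, a minimal sketch of how userspace might open a periodic,
 * system-wide OA metrics stream (hypothetical metrics_set_id and exponent
 * values; error handling omitted), using the uapi declared in
 * include/uapi/drm/i915_drm.h:
 *
 * ::
 *
 *	__u64 properties[] = {
 *		DRM_I915_PERF_PROP_SAMPLE_OA, 1,
 *		DRM_I915_PERF_PROP_OA_METRICS_SET, metrics_set_id,
 *		DRM_I915_PERF_PROP_OA_FORMAT, I915_OA_FORMAT_A32u40_A4u32_B8_C8,
 *		DRM_I915_PERF_PROP_OA_EXPONENT, 16,
 *	};
 *	struct drm_i915_perf_open_param param = {
 *		.flags = I915_PERF_FLAG_FD_CLOEXEC,
 *		.num_properties = sizeof(properties) / (2 * sizeof(__u64)),
 *		.properties_ptr = (__u64)(uintptr_t)properties,
 *	};
 *	int stream_fd = ioctl(drm_fd, DRM_IOCTL_I915_PERF_OPEN, &param);
 *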
 */

/**
 * DOC: i915 Perf History and Comparison with Core Perf
 *
 * The interface was initially inspired by the core Perf infrastructure, but
 * there are some notable differences:
 *
 * i915 perf file descriptors represent a "stream" instead of an "event"; a
 * perf event primarily corresponds to a single 64bit value, while a stream
 * might sample sets of tightly-coupled counters, depending on the
 * configuration. For example the Gen OA unit isn't designed to support
 * orthogonal configurations of individual counters; it's configured for a set
 * of related counters. Samples for an i915 perf stream capturing OA metrics
 * will include a set of counter values packed in a compact HW specific format.
 * The OA unit supports a number of different packing formats which can be
 * selected by the user opening the stream. Perf has support for grouping
 * events, but each event in the group is configured, validated and
 * authenticated individually with separate system calls.
 *
 * i915 perf stream configurations are provided as an array of u64 (key,value)
 * pairs, instead of a fixed struct with multiple miscellaneous config members,
 * interleaved with event-type specific members.
 *
 * i915 perf doesn't support exposing metrics via an mmap'd circular buffer.
 * The supported metrics are being written to memory by the GPU unsynchronized
 * with the CPU, using HW specific packing formats for counter sets. Sometimes
 * the constraints on HW configuration require reports to be filtered before it
 * would be acceptable to expose them to unprivileged applications - to hide
 * the metrics of other processes/contexts. For these use cases a read() based
 * interface is a good fit, and provides an opportunity to filter data as it
 * gets copied from the GPU mapped buffers to userspace buffers.
 *
 *
 * Issues hit with first prototype based on Core Perf
 * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 *
 * The first prototype of this driver was based on the core perf
 * infrastructure, and while we did make that mostly work, with some changes to
 * perf, we found we were breaking or working around too many assumptions baked
 * into perf's current cpu-centric design.
 *
 * In the end we didn't see a clear benefit to making perf's implementation and
 * interface more complex by changing design assumptions while we knew we still
 * wouldn't be able to use any existing perf based userspace tools.
 *
 * Also considering the Gen specific nature of the Observability hardware and
 * how userspace will sometimes need to combine i915 perf OA metrics with
 * side-band OA data captured via MI_REPORT_PERF_COUNT commands, we're
 * expecting the interface to be used by a platform specific userspace such as
 * OpenGL or tools. That is to say, we aren't inherently missing out on having
 * a standard vendor/architecture agnostic interface by not using perf.
 *
 *
 * For posterity, in case we ever revisit trying to adapt core perf to be
 * better suited to exposing i915 metrics, these were the main pain points we
 * hit:
 *
 * - The perf based OA PMU driver broke some significant design assumptions:
 *
 *   Existing perf pmus are used for profiling work on a cpu and we were
 *   introducing the idea of _IS_DEVICE pmus with different security
 *   implications, the need to fake cpu-related data (such as user/kernel
 *   registers) to fit with perf's current design, and adding _DEVICE records
 *   as a way to forward device-specific status records.
 *
 *   The OA unit writes reports of counters into a circular buffer, without
 *   involvement from the CPU, making our PMU driver the first of a kind.
 *
 *   Given the way we were periodically forwarding data from the GPU-mapped, OA
 *   buffer to perf's buffer, those bursts of sample writes looked to perf like
 *   we were sampling too fast and so we had to subvert its throttling checks.
 *
 *   Perf supports groups of counters and allows those to be read via
 *   transactions internally but transactions currently seem designed to be
 *   explicitly initiated from the cpu (say in response to a userspace read())
 *   and while we could pull a report out of the OA buffer we can't
 *   trigger a report from the cpu on demand.
 *
 *   Related to being report based, the OA counters are configured in HW as a
 *   set while perf generally expects counter configurations to be orthogonal.
 *   Although counters can be associated with a group leader as they are
 *   opened, there's no clear precedent for being able to provide group-wide
 *   configuration attributes (for example we want to let userspace choose the
 *   OA unit report format used to capture all counters in a set, or specify a
 *   GPU context to filter metrics on). We avoided using perf's grouping
 *   feature and forwarded OA reports to userspace via perf's 'raw' sample
 *   field. This suited our userspace well considering how coupled the counters
 *   are when normalizing. It would be inconvenient to split
 *   counters up into separate events, only to require userspace to recombine
 *   them. For Mesa it's also convenient to be forwarded raw, periodic reports
 *   for combining with the side-band raw reports it captures using
 *   MI_REPORT_PERF_COUNT commands.
 *
 * - As a side note on perf's grouping feature; there was also some concern
 *   that using PERF_FORMAT_GROUP as a way to pack together counter values
 *   would quite drastically inflate our sample sizes, which would likely
 *   lower the effective sampling resolutions we could use when the available
 *   memory bandwidth is limited.
 *
 *   With the OA unit's report formats, counters are packed together as 32
 *   or 40bit values, with the largest report size being 256 bytes.
 *
 *   PERF_FORMAT_GROUP values are 64bit, but there doesn't appear to be a
 *   documented ordering to the values, implying PERF_FORMAT_ID must also be
 *   used to add a 64bit ID before each value; giving 16 bytes per counter.
 *
 *   Related to counter orthogonality; we can't time share the OA unit, while
 *   event scheduling is a central design idea within perf for allowing
 *   userspace to open + enable more events than can be configured in HW at any
 *   one time. The OA unit is not designed to allow re-configuration while in
 *   use. We can't reconfigure the OA unit without losing internal OA unit
 *   state which we can't access explicitly to save and restore. Reconfiguring
 *   the OA unit is also relatively slow, involving ~100 register writes. From
 *   userspace Mesa also depends on a stable OA configuration when emitting
 *   MI_REPORT_PERF_COUNT commands and importantly the OA unit can't be
 *   disabled while there are outstanding MI_RPC commands lest we hang the
 *   command streamer.
 *
 *   The contents of sample records aren't extensible by device drivers (i.e.
 *   the sample_type bits). As an example; Sourab Gupta had been looking to
 *   attach GPU timestamps to our OA samples. We were shoehorning OA reports
 *   into sample records by using the 'raw' field, but it's tricky to pack more
 *   than one thing into this field because events/core.c currently only lets a
 *   pmu give a single raw data pointer plus len which will be copied into the
 *   ring buffer. To include more than the OA report we'd have to copy the
 *   report into an intermediate larger buffer. I'd been considering allowing a
 *   vector of data+len values to be specified for copying the raw data, but
 *   it felt like a kludge to be using the raw field for this purpose.
 *
 * - It felt like our perf based PMU was making some technical compromises
 *   just for the sake of using perf:
 *
 *   perf_event_open() requires events to either relate to a pid or a specific
 *   cpu core, while our device pmu related to neither. Events opened with a
 *   pid will be automatically enabled/disabled according to the scheduling of
 *   that process - so not appropriate for us. When an event is related to a
 *   cpu id, perf ensures pmu methods will be invoked via an inter-processor
 *   interrupt on that core. To avoid invasive changes our userspace opened OA
 *   perf events for a specific cpu. This was workable but it meant the
 *   majority of the OA driver ran in atomic context, including all OA report
 *   forwarding, which wasn't really necessary in our case and seemed to make
 *   our locking requirements somewhat complex as we handled the interaction
 *   with the rest of the i915 driver.
 */

#include <linux/anon_inodes.h>
#include <linux/sizes.h>
#include <linux/uuid.h>

#include "gem/i915_gem_context.h"
#include "gt/intel_engine_pm.h"
#include "gt/intel_engine_user.h"
#include "gt/intel_gt.h"
#include "gt/intel_lrc_reg.h"
#include "gt/intel_ring.h"

#include "i915_drv.h"
#include "i915_perf.h"

/* HW requires this to be a power of two, between 128k and 16M, though the
 * driver is currently generally designed assuming the largest 16M size is
 * used such that the overflow cases are unlikely in normal operation.
 */
#define OA_BUFFER_SIZE		SZ_16M

#define OA_TAKEN(tail, head)	((tail - head) & (OA_BUFFER_SIZE - 1))
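
/*
 * A quick worked example of the wrap-around arithmetic above (illustrative
 * offsets only, not from the source): with OA_BUFFER_SIZE = 16M, a head of
 * 0xffffc0 (64 bytes before the end) and a tail of 0x80 after the write
 * pointer has wrapped, OA_TAKEN(0x80, 0xffffc0) = (0x80 - 0xffffc0) &
 * 0xffffff = 0xc0, i.e. 192 bytes available, split across the end and the
 * start of the buffer.
 */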

/**
 * DOC: OA Tail Pointer Race
 *
 * There's a HW race condition between OA unit tail pointer register updates and
 * writes to memory whereby the tail pointer can sometimes get ahead of what's
 * been written out to the OA buffer so far (in terms of what's visible to the
 * CPU).
 *
 * Although this can be observed explicitly while copying reports to userspace
 * by checking for a zeroed report-id field in tail reports, we want to account
 * for this earlier, as part of oa_buffer_check_unlocked(), to avoid lots of
 * redundant read() attempts.
 *
 * In effect we define a tail pointer for reading that lags the real tail
 * pointer by at least %OA_TAIL_MARGIN_NSEC nanoseconds, which gives enough
 * time for the corresponding reports to become visible to the CPU.
 *
 * To manage this we actually track two tail pointers:
 *  1) An 'aging' tail with an associated timestamp that is tracked until we
 *     can trust the corresponding data is visible to the CPU; at which point
 *     it is considered 'aged'.
 *  2) An 'aged' tail that can be used for read()ing.
 *
 * The two separate pointers let us decouple read()s from tail pointer aging.
 *
 * The tail pointers are checked and updated at a limited rate within a hrtimer
 * callback (the same callback that is used for delivering EPOLLIN events).
 *
 * Initially the tails are marked invalid with %INVALID_TAIL_PTR which
 * indicates that an updated tail pointer is needed.
 *
 * Most of the implementation details for this workaround are in
 * oa_buffer_check_unlocked() and _append_oa_reports().
 *
 * Note for posterity: previously the driver used to define an effective tail
 * pointer that lagged the real pointer by a 'tail margin' measured in bytes
 * derived from %OA_TAIL_MARGIN_NSEC and the configured sampling frequency.
 * This was flawed considering that the OA unit may also automatically generate
 * non-periodic reports (such as on context switch) or the OA unit may be
 * enabled without any periodic sampling.
 */
#define OA_TAIL_MARGIN_NSEC	100000ULL
#define INVALID_TAIL_PTR	0xffffffff
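
/*
 * To make the two-pointer handoff above concrete, one illustrative aging
 * cycle (hypothetical offsets, not from the source): a hrtimer callback
 * reads hw_tail = 0x1000 and records it as the 'aging' tail along with a
 * timestamp; once a later callback sees that at least OA_TAIL_MARGIN_NSEC
 * (100us) has elapsed, it promotes 0x1000 to the 'aged' tail - making the
 * reports up to that offset available to read() - and resets the other slot
 * to INVALID_TAIL_PTR, ready to start aging the next hw_tail it observes.
 */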

/* frequency for checking whether the OA unit has written new reports to the
 * circular OA buffer...
 */
#define POLL_FREQUENCY 200
#define POLL_PERIOD (NSEC_PER_SEC / POLL_FREQUENCY)

/* for sysctl proc_dointvec_minmax of dev.i915.perf_stream_paranoid */
static u32 i915_perf_stream_paranoid = true;

/* The maximum exponent the hardware accepts is 63 (essentially it selects one
 * of the 64bit timestamp bits to trigger reports from) but there's currently
 * no known use case for sampling as infrequently as once per 47 thousand years.
 *
 * Since the timestamps included in OA reports are only 32bits it seems
 * reasonable to limit the OA exponent where it's still possible to account for
 * overflow in OA report timestamps.
 */
#define OA_EXPONENT_MAX 31
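
/*
 * A worked example of the exponent/period relationship, assuming Haswell's
 * 12.5MHz timestamp frequency (i.e. twice the 6.25MHz maximum sampling rate
 * noted below): the sampling period is (2^(exponent + 1)) timestamp ticks,
 * so exponent 0 gives 2 / 12.5MHz = 160ns, while OA_EXPONENT_MAX (31) gives
 * 2^32 / 12.5MHz = ~343.6 seconds - the same interval over which a 32bit
 * report timestamp wraps.
 */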

#define INVALID_CTX_ID 0xffffffff

/* On Gen8+ automatically triggered OA reports include a 'reason' field... */
#define OAREPORT_REASON_MASK           0x3f
#define OAREPORT_REASON_MASK_EXTENDED  0x7f
#define OAREPORT_REASON_SHIFT          19
#define OAREPORT_REASON_TIMER          (1<<0)
#define OAREPORT_REASON_CTX_SWITCH     (1<<3)
#define OAREPORT_REASON_CLK_RATIO      (1<<5)

/* For sysctl proc_dointvec_minmax of i915_oa_max_sample_rate
 *
 * The highest sampling frequency we can theoretically program the OA unit
 * with is always half the timestamp frequency: e.g. 6.25MHz for Haswell.
 *
 * Initialized just before we register the sysctl parameter.
 */
static int oa_sample_rate_hard_limit;

/* Theoretically we can program the OA unit to sample every 160ns but don't
 * allow that by default unless root...
 *
 * The default threshold of 100000Hz is based on perf's similar
 * kernel.perf_event_max_sample_rate sysctl parameter.
 */
static u32 i915_oa_max_sample_rate = 100000;
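
/*
 * A worked check against that default (again assuming a 12.5MHz timestamp
 * clock): an OA exponent of 6 gives a sampling period of 2^7 * 80ns =
 * 10.24us, i.e. ~97.7kHz - the fastest periodic sampling an unprivileged
 * user can request under the 100000Hz limit above.
 */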

/* XXX: beware if future OA HW adds new report formats: the current
 * code assumes all reports have a power-of-two size and that
 * ~(size - 1) can be used as a mask to align the OA tail pointer.
 */
static const struct i915_oa_format hsw_oa_formats[I915_OA_FORMAT_MAX] = {
	[I915_OA_FORMAT_A13]	    = { 0, 64 },
	[I915_OA_FORMAT_A29]	    = { 1, 128 },
	[I915_OA_FORMAT_A13_B8_C8]  = { 2, 128 },
	/* A29_B8_C8 Disallowed as 192 bytes doesn't factor into buffer size */
	[I915_OA_FORMAT_B4_C8]	    = { 4, 64 },
	[I915_OA_FORMAT_A45_B8_C8]  = { 5, 256 },
	[I915_OA_FORMAT_B4_C8_A16]  = { 6, 128 },
	[I915_OA_FORMAT_C4_B8]	    = { 7, 64 },
};

static const struct i915_oa_format gen8_plus_oa_formats[I915_OA_FORMAT_MAX] = {
	[I915_OA_FORMAT_A12]		    = { 0, 64 },
	[I915_OA_FORMAT_A12_B8_C8]	    = { 2, 128 },
	[I915_OA_FORMAT_A32u40_A4u32_B8_C8] = { 5, 256 },
	[I915_OA_FORMAT_C4_B8]		    = { 7, 64 },
};

static const struct i915_oa_format gen12_oa_formats[I915_OA_FORMAT_MAX] = {
	[I915_OA_FORMAT_A32u40_A4u32_B8_C8] = { 5, 256 },
};

#define SAMPLE_OA_REPORT	(1<<0)

/**
 * struct perf_open_properties - for validated properties given to open a stream
 * @sample_flags: `DRM_I915_PERF_PROP_SAMPLE_*` properties are tracked as flags
 * @single_context: Whether a single or all gpu contexts should be monitored
 * @hold_preemption: Whether preemption is disabled for the filtered context
 * @ctx_handle: A gem ctx handle for use with @single_context
 * @metrics_set: An ID for an OA unit metric set advertised via sysfs
 * @oa_format: An OA unit HW report format
 * @oa_periodic: Whether to enable periodic OA unit sampling
 * @oa_period_exponent: The OA unit sampling period is derived from this
 * @engine: The engine (typically rcs0) being monitored by the OA unit
 * @has_sseu: Whether @sseu was specified by userspace
 * @sseu: internal SSEU configuration computed either from the userspace
 *	  specified configuration in the opening parameters or a default value
 *	  (see get_default_sseu_config())
 *
 * As read_properties_unlocked() enumerates and validates the properties given
 * to open a stream of metrics the configuration is built up in the structure
 * which starts out zero initialized.
 */
struct perf_open_properties {
	u32 sample_flags;

	u64 single_context:1;
	u64 hold_preemption:1;
	u64 ctx_handle;

	/* OA sampling state */
	int metrics_set;
	int oa_format;
	bool oa_periodic;
	int oa_period_exponent;

	struct intel_engine_cs *engine;

	bool has_sseu;
	struct intel_sseu sseu;
};

struct i915_oa_config_bo {
	struct llist_node node;

	struct i915_oa_config *oa_config;
	struct i915_vma *vma;
};

static struct ctl_table_header *sysctl_header;

static enum hrtimer_restart oa_poll_check_timer_cb(struct hrtimer *hrtimer);

void i915_oa_config_release(struct kref *ref)
{
	struct i915_oa_config *oa_config =
		container_of(ref, typeof(*oa_config), ref);

	kfree(oa_config->flex_regs);
	kfree(oa_config->b_counter_regs);
	kfree(oa_config->mux_regs);

	kfree_rcu(oa_config, rcu);
}

struct i915_oa_config *
i915_perf_get_oa_config(struct i915_perf *perf, int metrics_set)
{
	struct i915_oa_config *oa_config;

	rcu_read_lock();
	oa_config = idr_find(&perf->metrics_idr, metrics_set);
	if (oa_config)
		oa_config = i915_oa_config_get(oa_config);
	rcu_read_unlock();

	return oa_config;
}
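
/*
 * A minimal usage sketch for the lookup above (hypothetical caller): the
 * reference taken under rcu_read_lock() must be dropped with
 * i915_oa_config_put() once the caller is done with the config:
 *
 *	struct i915_oa_config *oa_config =
 *		i915_perf_get_oa_config(perf, metrics_set);
 *
 *	if (oa_config) {
 *		/ * ... read or program the config ... * /
 *		i915_oa_config_put(oa_config);
 *	}
 */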

static void free_oa_config_bo(struct i915_oa_config_bo *oa_bo)
{
	i915_oa_config_put(oa_bo->oa_config);
	i915_vma_put(oa_bo->vma);
	kfree(oa_bo);
}

static u32 gen12_oa_hw_tail_read(struct i915_perf_stream *stream)
{
	struct intel_uncore *uncore = stream->uncore;

	return intel_uncore_read(uncore, GEN12_OAG_OATAILPTR) &
	       GEN12_OAG_OATAILPTR_MASK;
}

static u32 gen8_oa_hw_tail_read(struct i915_perf_stream *stream)
{
	struct intel_uncore *uncore = stream->uncore;

	return intel_uncore_read(uncore, GEN8_OATAILPTR) & GEN8_OATAILPTR_MASK;
}

static u32 gen7_oa_hw_tail_read(struct i915_perf_stream *stream)
{
	struct intel_uncore *uncore = stream->uncore;
	u32 oastatus1 = intel_uncore_read(uncore, GEN7_OASTATUS1);

	return oastatus1 & GEN7_OASTATUS1_TAIL_MASK;
}

/**
 * oa_buffer_check_unlocked - check for data and update tail ptr state
 * @stream: i915 stream instance
 *
 * This is either called via fops (for blocking reads in user ctx) or the poll
 * check hrtimer (atomic ctx) to check the OA buffer tail pointer and see
 * if there is data available for userspace to read.
 *
 * This function is central to providing a workaround for the OA unit tail
 * pointer having a race with respect to what data is visible to the CPU.
 * It is responsible for reading tail pointers from the hardware and giving
 * the pointers time to 'age' before they are made available for reading.
 * (See description of OA_TAIL_MARGIN_NSEC above for further details.)
 *
 * Besides returning true when there is data available to read() this function
 * also has the side effect of updating the oa_buffer.tails[], .aging_timestamp
 * and .aged_tail_idx state used for reading.
 *
 * Note: It's safe to read OA config state here unlocked, assuming that this is
 * only called while the stream is enabled, while the global OA configuration
 * can't be modified.
 *
 * Returns: %true if the OA buffer contains data, else %false
 */
static bool oa_buffer_check_unlocked(struct i915_perf_stream *stream)
{
	int report_size = stream->oa_buffer.format_size;
	unsigned long flags;
	unsigned int aged_idx;
	u32 head, hw_tail, aged_tail, aging_tail;
	u64 now;

	/* We have to consider the (unlikely) possibility that read() errors
	 * could result in an OA buffer reset which might reset the head,
	 * tails[] and aged_tail state.
	 */
	spin_lock_irqsave(&stream->oa_buffer.ptr_lock, flags);

	/* NB: The head we observe here might effectively be a little out of
	 * date (between head and tails[aged_idx].offset) if there is currently
	 * a read() in progress.
	 */
	head = stream->oa_buffer.head;

	aged_idx = stream->oa_buffer.aged_tail_idx;
	aged_tail = stream->oa_buffer.tails[aged_idx].offset;
	aging_tail = stream->oa_buffer.tails[!aged_idx].offset;

	hw_tail = stream->perf->ops.oa_hw_tail_read(stream);

	/* The tail pointer increases in 64 byte increments,
	 * not in report_size steps...
	 */
	hw_tail &= ~(report_size - 1);

	now = ktime_get_mono_fast_ns();

	/* Update the aged tail
	 *
	 * Flip the tail pointer available for read()s once the aging tail is
	 * old enough to trust that the corresponding data will be visible to
	 * the CPU...
	 *
	 * Do this before updating the aging pointer in case we may be able to
	 * immediately start aging a new pointer too (if new data has become
	 * available) without needing to wait for a later hrtimer callback.
	 */
	if (aging_tail != INVALID_TAIL_PTR &&
	    ((now - stream->oa_buffer.aging_timestamp) >
	     OA_TAIL_MARGIN_NSEC)) {

		aged_idx ^= 1;
		stream->oa_buffer.aged_tail_idx = aged_idx;

		aged_tail = aging_tail;

		/* Mark that we need a new pointer to start aging... */
		stream->oa_buffer.tails[!aged_idx].offset = INVALID_TAIL_PTR;
		aging_tail = INVALID_TAIL_PTR;
	}

	/* Update the aging tail
	 *
	 * We throttle aging tail updates until we have a new tail that
	 * represents >= one report more data than is already available for
	 * reading. This ensures there will be enough data for a successful
	 * read once this new pointer has aged and ensures we will give the new
	 * pointer time to age.
	 */
	if (aging_tail == INVALID_TAIL_PTR &&
	    (aged_tail == INVALID_TAIL_PTR ||
	     OA_TAKEN(hw_tail, aged_tail) >= report_size)) {
		struct i915_vma *vma = stream->oa_buffer.vma;
		u32 gtt_offset = i915_ggtt_offset(vma);

		/* Be paranoid and do a bounds check on the pointer read back
		 * from hardware, just in case some spurious hardware condition
		 * could put the tail out of bounds...
		 */
		if (hw_tail >= gtt_offset &&
		    hw_tail < (gtt_offset + OA_BUFFER_SIZE)) {
			stream->oa_buffer.tails[!aged_idx].offset =
				aging_tail = hw_tail;
			stream->oa_buffer.aging_timestamp = now;
		} else {
			drm_err(&stream->perf->i915->drm,
				"Ignoring spurious out of range OA buffer tail pointer = %x\n",
				hw_tail);
		}
	}

	spin_unlock_irqrestore(&stream->oa_buffer.ptr_lock, flags);

	return aged_tail == INVALID_TAIL_PTR ?
	       false : OA_TAKEN(aged_tail, head) >= report_size;
}


/**
 * append_oa_status - Appends a status record to a userspace read() buffer.
 * @stream: An i915-perf stream opened for OA metrics
 * @buf: destination buffer given by userspace
 * @count: the number of bytes userspace wants to read
 * @offset: (inout): the current position for writing into @buf
 * @type: The kind of status to report to userspace
 *
 * Writes a status record (such as `DRM_I915_PERF_RECORD_OA_REPORT_LOST`)
 * into the userspace read() buffer.
 *
 * The @buf @offset will only be updated on success.
 *
 * Returns: 0 on success, negative error code on failure.
 */
static int append_oa_status(struct i915_perf_stream *stream,
			    char __user *buf,
			    size_t count,
			    size_t *offset,
			    enum drm_i915_perf_record_type type)
{
	struct drm_i915_perf_record_header header = { type, 0, sizeof(header) };

	if ((count - *offset) < header.size)
		return -ENOSPC;

	if (copy_to_user(buf + *offset, &header, sizeof(header)))
		return -EFAULT;

	(*offset) += header.size;

	return 0;
}

/**
 * append_oa_sample - Copies single OA report into userspace read() buffer.
 * @stream: An i915-perf stream opened for OA metrics
 * @buf: destination buffer given by userspace
 * @count: the number of bytes userspace wants to read
 * @offset: (inout): the current position for writing into @buf
 * @report: A single OA report to (optionally) include as part of the sample
 *
 * The contents of a sample are configured through `DRM_I915_PERF_PROP_SAMPLE_*`
 * properties when opening a stream, tracked as `stream->sample_flags`. This
 * function copies the requested components of a single sample to the given
 * read() @buf.
 *
 * The @buf @offset will only be updated on success.
 *
 * Returns: 0 on success, negative error code on failure.
 */
static int append_oa_sample(struct i915_perf_stream *stream,
			    char __user *buf,
			    size_t count,
			    size_t *offset,
			    const u8 *report)
{
	int report_size = stream->oa_buffer.format_size;
	struct drm_i915_perf_record_header header;
	u32 sample_flags = stream->sample_flags;

	header.type = DRM_I915_PERF_RECORD_SAMPLE;
	header.pad = 0;
	header.size = stream->sample_size;

	if ((count - *offset) < header.size)
		return -ENOSPC;

	buf += *offset;
	if (copy_to_user(buf, &header, sizeof(header)))
		return -EFAULT;
	buf += sizeof(header);

	if (sample_flags & SAMPLE_OA_REPORT) {
		if (copy_to_user(buf, report, report_size))
			return -EFAULT;
	}

	(*offset) += header.size;

	return 0;
}
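
/*
 * From the userspace side, a minimal sketch of consuming the records
 * appended above out of a read() buffer (error handling omitted;
 * process_oa_report() is a hypothetical callback), with record layouts as
 * defined in include/uapi/drm/i915_drm.h:
 *
 *	const u8 *p = buf, *end = buf + bytes_read;
 *
 *	while (p < end) {
 *		const struct drm_i915_perf_record_header *header =
 *			(const void *)p;
 *
 *		if (header->type == DRM_I915_PERF_RECORD_SAMPLE)
 *			process_oa_report(p + sizeof(*header));
 *
 *		p += header->size;
 *	}
 */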

/**
 * gen8_append_oa_reports - Copies all buffered OA reports into
 *			    userspace read() buffer.
 * @stream: An i915-perf stream opened for OA metrics
 * @buf: destination buffer given by userspace
 * @count: the number of bytes userspace wants to read
 * @offset: (inout): the current position for writing into @buf
 *
 * Notably any error condition resulting in a short read (-%ENOSPC or
 * -%EFAULT) will be returned even though one or more records may
 * have been successfully copied. In this case it's up to the caller
 * to decide if the error should be squashed before returning to
 * userspace.
 *
 * Note: reports are consumed from the head, and appended to the
 * tail, so the tail chases the head?... If you think that's mad
 * and back-to-front you're not alone, but this follows the
 * Gen PRM naming convention.
 *
 * Returns: 0 on success, negative error code on failure.
 */
static int gen8_append_oa_reports(struct i915_perf_stream *stream,
				  char __user *buf,
				  size_t count,
				  size_t *offset)
{
	struct intel_uncore *uncore = stream->uncore;
	int report_size = stream->oa_buffer.format_size;
	u8 *oa_buf_base = stream->oa_buffer.vaddr;
	u32 gtt_offset = i915_ggtt_offset(stream->oa_buffer.vma);
	u32 mask = (OA_BUFFER_SIZE - 1);
	size_t start_offset = *offset;
	unsigned long flags;
	unsigned int aged_tail_idx;
	u32 head, tail;
	u32 taken;
	int ret = 0;

	if (drm_WARN_ON(&uncore->i915->drm, !stream->enabled))
		return -EIO;

	spin_lock_irqsave(&stream->oa_buffer.ptr_lock, flags);

	head = stream->oa_buffer.head;
	aged_tail_idx = stream->oa_buffer.aged_tail_idx;
	tail = stream->oa_buffer.tails[aged_tail_idx].offset;

	spin_unlock_irqrestore(&stream->oa_buffer.ptr_lock, flags);

	/*
	 * An invalid tail pointer here means we're still waiting for the poll
	 * hrtimer callback to give us a pointer
	 */
	if (tail == INVALID_TAIL_PTR)
		return -EAGAIN;

	/*
	 * NB: oa_buffer.head/tail include the gtt_offset which we don't want
	 * while indexing relative to oa_buf_base.
	 */
	head -= gtt_offset;
	tail -= gtt_offset;

	/*
	 * An out of bounds or misaligned head or tail pointer implies a driver
	 * bug since we validate + align the tail pointers we read from the
	 * hardware and we are in full control of the head pointer which should
	 * only be incremented by multiples of the report size (notably also
	 * all a power of two).
	 */
	if (drm_WARN_ONCE(&uncore->i915->drm,
			  head > OA_BUFFER_SIZE || head % report_size ||
			  tail > OA_BUFFER_SIZE || tail % report_size,
			  "Inconsistent OA buffer pointers: head = %u, tail = %u\n",
			  head, tail))
		return -EIO;


	for (/* none */;
	     (taken = OA_TAKEN(tail, head));
	     head = (head + report_size) & mask) {
		u8 *report = oa_buf_base + head;
		u32 *report32 = (void *)report;
		u32 ctx_id;
		u32 reason;

		/*
		 * All the report sizes factor neatly into the buffer
		 * size so we never expect to see a report split
		 * between the beginning and end of the buffer.
		 *
		 * Given the initial alignment check a misalignment
		 * here would imply a driver bug that would result
		 * in an overrun.
		 */
		if (drm_WARN_ON(&uncore->i915->drm,
				(OA_BUFFER_SIZE - head) < report_size)) {
			drm_err(&uncore->i915->drm,
				"Spurious OA head ptr: non-integral report offset\n");
			break;
		}

		/*
		 * The reason field includes flags identifying what
		 * triggered this specific report (mostly timer
		 * triggered or e.g. due to a context switch).
		 *
		 * This field is never expected to be zero so we can
		 * check that the report isn't invalid before copying
		 * it to userspace...
		 */
		reason = ((report32[0] >> OAREPORT_REASON_SHIFT) &
			  (IS_GEN(stream->perf->i915, 12) ?
			   OAREPORT_REASON_MASK_EXTENDED :
			   OAREPORT_REASON_MASK));
		if (reason == 0) {
			if (__ratelimit(&stream->perf->spurious_report_rs))
				DRM_NOTE("Skipping spurious, invalid OA report\n");
			continue;
		}

		ctx_id = report32[2] & stream->specific_ctx_id_mask;

		/*
		 * Squash whatever is in the CTX_ID field if it's marked as
		 * invalid to be sure we avoid false-positive, single-context
		 * filtering below...
		 *
		 * Note: that we don't clear the valid_ctx_bit so userspace can
		 * understand that the ID has been squashed by the kernel.
		 */
		if (!(report32[0] & stream->perf->gen8_valid_ctx_bit) &&
		    INTEL_GEN(stream->perf->i915) <= 11)
			ctx_id = report32[2] = INVALID_CTX_ID;

		/*
		 * NB: For Gen 8 the OA unit no longer supports clock gating
		 * off for a specific context and the kernel can't securely
		 * stop the counters from updating as system-wide / global
		 * values.
		 *
		 * Automatic reports now include a context ID so reports can be
		 * filtered on the cpu but it's not worth trying to
		 * automatically subtract/hide counter progress for other
		 * contexts while filtering since we can't stop userspace
		 * issuing MI_REPORT_PERF_COUNT commands which would still
		 * provide a side-band view of the real values.
		 *
		 * To allow userspace (such as Mesa/GL_INTEL_performance_query)
		 * to normalize counters for a single filtered context, it
		 * needs to be forwarded bookend context-switch reports so that
		 * it can track switches in between MI_REPORT_PERF_COUNT
		 * commands and can itself subtract/ignore the progress of
		 * counters associated with other contexts. Note that the
		 * hardware automatically triggers reports when switching to a
		 * new context which are tagged with the ID of the newly active
		 * context. To avoid the complexity (and likely fragility) of
		 * reading ahead while parsing reports to try and minimize
		 * forwarding redundant context switch reports (i.e. between
		 * other, unrelated contexts) we simply elect to forward them
		 * all.
		 *
		 * We don't rely solely on the reason field to identify context
		 * switches since it's not uncommon for periodic samples to
		 * identify a switch before any 'context switch' report.
		 */
		if (!stream->perf->exclusive_stream->ctx ||
		    stream->specific_ctx_id == ctx_id ||
		    stream->oa_buffer.last_ctx_id == stream->specific_ctx_id ||
		    reason & OAREPORT_REASON_CTX_SWITCH) {

			/*
			 * While filtering for a single context we avoid
			 * leaking the IDs of other contexts.
			 */
			if (stream->perf->exclusive_stream->ctx &&
			    stream->specific_ctx_id != ctx_id) {
				report32[2] = INVALID_CTX_ID;
			}

			ret = append_oa_sample(stream, buf, count, offset,
					       report);
			if (ret)
				break;

			stream->oa_buffer.last_ctx_id = ctx_id;
		}

		/*
		 * The above reason field sanity check is based on
		 * the assumption that the OA buffer is initially
		 * zeroed and we reset the field after copying so the
		 * check is still meaningful once old reports start
		 * being overwritten.
		 */
		report32[0] = 0;
	}

	if (start_offset != *offset) {
		i915_reg_t oaheadptr;

		oaheadptr = IS_GEN(stream->perf->i915, 12) ?
			    GEN12_OAG_OAHEADPTR : GEN8_OAHEADPTR;

		spin_lock_irqsave(&stream->oa_buffer.ptr_lock, flags);

		/*
		 * We removed the gtt_offset for the copy loop above, indexing
		 * relative to oa_buf_base so put back here...
		 */
		head += gtt_offset;
		intel_uncore_write(uncore, oaheadptr,
				   head & GEN12_OAG_OAHEADPTR_MASK);
		stream->oa_buffer.head = head;

		spin_unlock_irqrestore(&stream->oa_buffer.ptr_lock, flags);
	}

	return ret;
}

/**
 * gen8_oa_read - copy status records then buffered OA reports
 * @stream: An i915-perf stream opened for OA metrics
 * @buf: destination buffer given by userspace
 * @count: the number of bytes userspace wants to read
 * @offset: (inout): the current position for writing into @buf
 *
 * Checks OA unit status registers and if necessary appends corresponding
 * status records for userspace (such as for a buffer full condition) and then
 * initiates appending any buffered OA reports.
 *
 * Updates @offset according to the number of bytes successfully copied into
 * the userspace buffer.
 *
 * NB: some data may be successfully copied to the userspace buffer
 * even if an error is returned, and this is reflected in the
 * updated @offset.
 *
 * Returns: zero on success or a negative error code
 */
| 882 | static int gen8_oa_read(struct i915_perf_stream *stream, |
| 883 | char __user *buf, |
| 884 | size_t count, |
| 885 | size_t *offset) |
| 886 | { |
Chris Wilson | 52111c4 | 2019-10-10 16:05:20 +0100 | [diff] [blame] | 887 | struct intel_uncore *uncore = stream->uncore; |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 888 | u32 oastatus; |
Lionel Landwerlin | 00a7f0d | 2019-10-25 12:37:46 -0700 | [diff] [blame] | 889 | i915_reg_t oastatus_reg; |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 890 | int ret; |
| 891 | |
Pankaj Bharadiya | a9f236d | 2020-01-15 09:14:54 +0530 | [diff] [blame] | 892 | if (drm_WARN_ON(&uncore->i915->drm, !stream->oa_buffer.vaddr)) |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 893 | return -EIO; |
| 894 | |
Lionel Landwerlin | 00a7f0d | 2019-10-25 12:37:46 -0700 | [diff] [blame] | 895 | oastatus_reg = IS_GEN(stream->perf->i915, 12) ? |
| 896 | GEN12_OAG_OASTATUS : GEN8_OASTATUS; |
| 897 | |
| 898 | oastatus = intel_uncore_read(uncore, oastatus_reg); |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 899 | |
| 900 | /* |
| 901 | * We treat OABUFFER_OVERFLOW as a significant error: |
| 902 | * |
| 903 | * Although theoretically we could handle this more gracefully |
| 904 | * sometimes, some Gens don't correctly suppress certain |
| 905 | * automatically triggered reports in this condition and so we |
| 906 | * have to assume that old reports are now being trampled |
| 907 | * over. |
Joonas Lahtinen | fe84168 | 2018-11-16 15:55:09 +0200 | [diff] [blame] | 908 | * |
| 909 | * Considering that we don't currently give userspace control |
| 910 | * over the OA buffer size and always configure a large 16MB |
| 911 | * buffer, a buffer overflow likely indicates that something |
| 912 | * has gone quite badly wrong. |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 913 | */ |
| 914 | if (oastatus & GEN8_OASTATUS_OABUFFER_OVERFLOW) { |
| 915 | ret = append_oa_status(stream, buf, count, offset, |
| 916 | DRM_I915_PERF_RECORD_OA_BUFFER_LOST); |
| 917 | if (ret) |
| 918 | return ret; |
| 919 | |
| 920 | DRM_DEBUG("OA buffer overflow (exponent = %d): force restart\n", |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 921 | stream->period_exponent); |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 922 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 923 | stream->perf->ops.oa_disable(stream); |
| 924 | stream->perf->ops.oa_enable(stream); |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 925 | |
| 926 | /* |
| 927 | * Note: .oa_enable() is expected to re-init the oabuffer and |
| 928 | * reset GEN8_OASTATUS for us |
| 929 | */ |
Lionel Landwerlin | 00a7f0d | 2019-10-25 12:37:46 -0700 | [diff] [blame] | 930 | oastatus = intel_uncore_read(uncore, oastatus_reg); |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 931 | } |
| 932 | |
| 933 | if (oastatus & GEN8_OASTATUS_REPORT_LOST) { |
| 934 | ret = append_oa_status(stream, buf, count, offset, |
| 935 | DRM_I915_PERF_RECORD_OA_REPORT_LOST); |
| 936 | if (ret) |
| 937 | return ret; |
Lionel Landwerlin | 00a7f0d | 2019-10-25 12:37:46 -0700 | [diff] [blame] | 938 | intel_uncore_write(uncore, oastatus_reg, |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 939 | oastatus & ~GEN8_OASTATUS_REPORT_LOST); |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 940 | } |
| 941 | |
| 942 | return gen8_append_oa_reports(stream, buf, count, offset); |
| 943 | } |
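/*
 * Illustrative userspace sketch (an assumption, not part of the driver):
 * read() on the stream FD returns the records appended above framed by
 * struct drm_i915_perf_record_header from uapi/drm/i915_drm.h. The chunk
 * size and include paths here are hypothetical.
 */
#include <stddef.h>
#include <stdint.h>
#include <unistd.h>
#include <drm/i915_drm.h>

static void consume_oa_records(int stream_fd)
{
	uint8_t data[64 * 1024]; /* arbitrary read chunk */
	ssize_t len = read(stream_fd, data, sizeof(data));
	size_t offset = 0;

	if (len < 0)
		return; /* e.g. EAGAIN on an empty non-blocking stream */

	while (offset + sizeof(struct drm_i915_perf_record_header) <= (size_t)len) {
		const struct drm_i915_perf_record_header *header =
			(const void *)(data + offset);

		switch (header->type) {
		case DRM_I915_PERF_RECORD_SAMPLE:
			/* a raw OA report follows the header */
			break;
		case DRM_I915_PERF_RECORD_OA_BUFFER_LOST:
			/* HW overflowed: old reports were trampled */
			break;
		case DRM_I915_PERF_RECORD_OA_REPORT_LOST:
			/* the OA unit failed to write some reports */
			break;
		}

		offset += header->size;
	}
}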
| 944 | |
| 945 | /** |
| 946 | * gen7_append_oa_reports - Copies all buffered OA reports into userspace read() buffer. |
| 947 | * @stream: An i915-perf stream opened for OA metrics |
| 948 | * @buf: destination buffer given by userspace |
| 949 | * @count: the number of bytes userspace wants to read |
| 950 | * @offset: (inout): the current position for writing into @buf |
| 951 | * |
| 952 | * Notably any error condition resulting in a short read (-%ENOSPC or |
| 953 | * -%EFAULT) will be returned even though one or more records may |
| 954 | * have been successfully copied. In this case it's up to the caller |
| 955 | * to decide if the error should be squashed before returning to |
| 956 | * userspace. |
| 957 | * |
| 958 | * Note: reports are consumed from the head, and appended to the |
| 959 | * tail, so the tail chases the head?... If you think that's mad |
| 960 | * and back-to-front you're not alone, but this follows the |
| 961 | * Gen PRM naming convention. |
| 962 | * |
| 963 | * Returns: 0 on success, negative error code on failure. |
| 964 | */ |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 965 | static int gen7_append_oa_reports(struct i915_perf_stream *stream, |
| 966 | char __user *buf, |
| 967 | size_t count, |
Robert Bragg | 3bb335c | 2017-05-11 16:43:27 +0100 | [diff] [blame] | 968 | size_t *offset) |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 969 | { |
Chris Wilson | 52111c4 | 2019-10-10 16:05:20 +0100 | [diff] [blame] | 970 | struct intel_uncore *uncore = stream->uncore; |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 971 | int report_size = stream->oa_buffer.format_size; |
| 972 | u8 *oa_buf_base = stream->oa_buffer.vaddr; |
| 973 | u32 gtt_offset = i915_ggtt_offset(stream->oa_buffer.vma); |
Joonas Lahtinen | fe84168 | 2018-11-16 15:55:09 +0200 | [diff] [blame] | 974 | u32 mask = (OA_BUFFER_SIZE - 1); |
Robert Bragg | 3bb335c | 2017-05-11 16:43:27 +0100 | [diff] [blame] | 975 | size_t start_offset = *offset; |
Robert Bragg | 0dd860c | 2017-05-11 16:43:28 +0100 | [diff] [blame] | 976 | unsigned long flags; |
| 977 | unsigned int aged_tail_idx; |
| 978 | u32 head, tail; |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 979 | u32 taken; |
| 980 | int ret = 0; |
| 981 | |
Pankaj Bharadiya | a9f236d | 2020-01-15 09:14:54 +0530 | [diff] [blame] | 982 | if (drm_WARN_ON(&uncore->i915->drm, !stream->enabled)) |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 983 | return -EIO; |
| 984 | |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 985 | spin_lock_irqsave(&stream->oa_buffer.ptr_lock, flags); |
Robert Bragg | f279020 | 2017-05-11 16:43:26 +0100 | [diff] [blame] | 986 | |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 987 | head = stream->oa_buffer.head; |
| 988 | aged_tail_idx = stream->oa_buffer.aged_tail_idx; |
| 989 | tail = stream->oa_buffer.tails[aged_tail_idx].offset; |
Robert Bragg | 0dd860c | 2017-05-11 16:43:28 +0100 | [diff] [blame] | 990 | |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 991 | spin_unlock_irqrestore(&stream->oa_buffer.ptr_lock, flags); |
Robert Bragg | 0dd860c | 2017-05-11 16:43:28 +0100 | [diff] [blame] | 992 | |
| 993 | /* An invalid tail pointer here means we're still waiting for the poll |
| 994 | * hrtimer callback to give us a pointer |
Robert Bragg | f279020 | 2017-05-11 16:43:26 +0100 | [diff] [blame] | 995 | */ |
Robert Bragg | 0dd860c | 2017-05-11 16:43:28 +0100 | [diff] [blame] | 996 | if (tail == INVALID_TAIL_PTR) |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 997 | return -EAGAIN; |
| 998 | |
Robert Bragg | 0dd860c | 2017-05-11 16:43:28 +0100 | [diff] [blame] | 999 | /* NB: oa_buffer.head/tail include the gtt_offset which we don't want |
| 1000 | * while indexing relative to oa_buf_base. |
| 1001 | */ |
| 1002 | head -= gtt_offset; |
| 1003 | tail -= gtt_offset; |
| 1004 | |
| 1005 | /* An out of bounds or misaligned head or tail pointer implies a driver |
| 1006 | * bug since we validate + align the tail pointers we read from the |
| 1007 | * hardware and we are in full control of the head pointer which should |
| 1008 | * only be incremented by multiples of the report size (report |
| 1009 | * sizes are, notably, all powers of two). |
| 1010 | */ |
Pankaj Bharadiya | a9f236d | 2020-01-15 09:14:54 +0530 | [diff] [blame] | 1011 | if (drm_WARN_ONCE(&uncore->i915->drm, |
| 1012 | head > OA_BUFFER_SIZE || head % report_size || |
| 1013 | tail > OA_BUFFER_SIZE || tail % report_size, |
| 1014 | "Inconsistent OA buffer pointers: head = %u, tail = %u\n", |
| 1015 | head, tail)) |
Robert Bragg | 0dd860c | 2017-05-11 16:43:28 +0100 | [diff] [blame] | 1016 | return -EIO; |
| 1017 | |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1018 | |
| 1019 | for (/* none */; |
| 1020 | (taken = OA_TAKEN(tail, head)); |
| 1021 | head = (head + report_size) & mask) { |
| 1022 | u8 *report = oa_buf_base + head; |
| 1023 | u32 *report32 = (void *)report; |
| 1024 | |
| 1025 | /* All the report sizes factor neatly into the buffer |
| 1026 | * size so we never expect to see a report split |
| 1027 | * between the beginning and end of the buffer. |
| 1028 | * |
| 1029 | * Given the initial alignment check a misalignment |
| 1030 | * here would imply a driver bug that would result |
| 1031 | * in an overrun. |
| 1032 | */ |
Pankaj Bharadiya | a9f236d | 2020-01-15 09:14:54 +0530 | [diff] [blame] | 1033 | if (drm_WARN_ON(&uncore->i915->drm, |
| 1034 | (OA_BUFFER_SIZE - head) < report_size)) { |
Wambui Karuga | 0bf8573 | 2020-02-18 20:39:36 +0300 | [diff] [blame] | 1035 | drm_err(&uncore->i915->drm, |
| 1036 | "Spurious OA head ptr: non-integral report offset\n"); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1037 | break; |
| 1038 | } |
| 1039 | |
| 1040 | /* The report-ID field for periodic samples includes |
| 1041 | * some undocumented flags related to what triggered |
| 1042 | * the report and is never expected to be zero, so we |
| 1043 | * can use it to detect invalid reports before |
| 1044 | * copying them to userspace... |
| 1045 | */ |
| 1046 | if (report32[0] == 0) { |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 1047 | if (__ratelimit(&stream->perf->spurious_report_rs)) |
Robert Bragg | 712122e | 2017-05-11 16:43:31 +0100 | [diff] [blame] | 1048 | DRM_NOTE("Skipping spurious, invalid OA report\n"); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1049 | continue; |
| 1050 | } |
| 1051 | |
| 1052 | ret = append_oa_sample(stream, buf, count, offset, report); |
| 1053 | if (ret) |
| 1054 | break; |
| 1055 | |
| 1056 | /* The above report-id field sanity check is based on |
| 1057 | * the assumption that the OA buffer is initially |
| 1058 | * zeroed and we reset the field after copying so the |
| 1059 | * check is still meaningful once old reports start |
| 1060 | * being overwritten. |
| 1061 | */ |
| 1062 | report32[0] = 0; |
| 1063 | } |
| 1064 | |
Robert Bragg | 3bb335c | 2017-05-11 16:43:27 +0100 | [diff] [blame] | 1065 | if (start_offset != *offset) { |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 1066 | spin_lock_irqsave(&stream->oa_buffer.ptr_lock, flags); |
Robert Bragg | 0dd860c | 2017-05-11 16:43:28 +0100 | [diff] [blame] | 1067 | |
Robert Bragg | 3bb335c | 2017-05-11 16:43:27 +0100 | [diff] [blame] | 1068 | /* We removed the gtt_offset for the copy loop above, indexing |
| 1069 | * relative to oa_buf_base, so put it back here... |
| 1070 | */ |
| 1071 | head += gtt_offset; |
| 1072 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 1073 | intel_uncore_write(uncore, GEN7_OASTATUS2, |
| 1074 | (head & GEN7_OASTATUS2_HEAD_MASK) | |
| 1075 | GEN7_OASTATUS2_MEM_SELECT_GGTT); |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 1076 | stream->oa_buffer.head = head; |
Robert Bragg | 0dd860c | 2017-05-11 16:43:28 +0100 | [diff] [blame] | 1077 | |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 1078 | spin_unlock_irqrestore(&stream->oa_buffer.ptr_lock, flags); |
Robert Bragg | 3bb335c | 2017-05-11 16:43:27 +0100 | [diff] [blame] | 1079 | } |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1080 | |
| 1081 | return ret; |
| 1082 | } |
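/*
 * Worked example of the modular head/tail arithmetic above (values
 * hypothetical; this assumes OA_TAKEN() is the usual power-of-two wrap,
 * (tail - head) & (OA_BUFFER_SIZE - 1)). With a 16M buffer:
 *
 *	head = 0xffff00, tail = 0x000100 (tail has wrapped)
 *	OA_TAKEN(tail, head) = (0x000100 - 0xffff00) & 0xffffff = 0x200
 *
 * i.e. two 256-byte reports are available even though tail < head, which
 * is why the copy loop only masks head with (OA_BUFFER_SIZE - 1) and
 * never compares head and tail directly.
 */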
| 1083 | |
Robert Bragg | 16d98b3 | 2016-12-07 21:40:33 +0000 | [diff] [blame] | 1084 | /** |
| 1085 | * gen7_oa_read - copy status records then buffered OA reports |
| 1086 | * @stream: An i915-perf stream opened for OA metrics |
| 1087 | * @buf: destination buffer given by userspace |
| 1088 | * @count: the number of bytes userspace wants to read |
| 1089 | * @offset: (inout): the current position for writing into @buf |
| 1090 | * |
| 1091 | * Checks Gen 7 specific OA unit status registers and if necessary appends |
| 1092 | * corresponding status records for userspace (such as for a buffer full |
| 1093 | * condition) and then initiates appending any buffered OA reports. |
| 1094 | * |
| 1095 | * Updates @offset according to the number of bytes successfully copied into |
| 1096 | * the userspace buffer. |
| 1097 | * |
| 1098 | * Returns: zero on success or a negative error code |
| 1099 | */ |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1100 | static int gen7_oa_read(struct i915_perf_stream *stream, |
| 1101 | char __user *buf, |
| 1102 | size_t count, |
| 1103 | size_t *offset) |
| 1104 | { |
Chris Wilson | 52111c4 | 2019-10-10 16:05:20 +0100 | [diff] [blame] | 1105 | struct intel_uncore *uncore = stream->uncore; |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1106 | u32 oastatus1; |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1107 | int ret; |
| 1108 | |
Pankaj Bharadiya | a9f236d | 2020-01-15 09:14:54 +0530 | [diff] [blame] | 1109 | if (drm_WARN_ON(&uncore->i915->drm, !stream->oa_buffer.vaddr)) |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1110 | return -EIO; |
| 1111 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 1112 | oastatus1 = intel_uncore_read(uncore, GEN7_OASTATUS1); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1113 | |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1114 | /* XXX: On Haswell we don't have a safe way to clear oastatus1 |
| 1115 | * bits while the OA unit is enabled (while the tail pointer |
| 1116 | * may be updated asynchronously) so we ignore status bits |
| 1117 | * that have already been reported to userspace. |
| 1118 | */ |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 1119 | oastatus1 &= ~stream->perf->gen7_latched_oastatus1; |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1120 | |
| 1121 | /* We treat OABUFFER_OVERFLOW as a significant error: |
| 1122 | * |
| 1123 | * - The status can be interpreted to mean that the buffer is |
| 1124 | * currently full (with a higher precedence than OA_TAKEN() |
| 1125 | * which will start to report a near-empty buffer after an |
| 1126 | * overflow) but it's awkward that we can't clear the status |
| 1127 | * on Haswell, so without a reset we won't be able to catch |
| 1128 | * the state again. |
| 1129 | * |
| 1130 | * - Since it also implies the HW has started overwriting old |
| 1131 | * reports it may also affect our sanity checks for invalid |
| 1132 | * reports when copying to userspace that assume new reports |
| 1133 | * are being written to cleared memory. |
| 1134 | * |
| 1135 | * - In the future we may want to introduce a flight recorder |
| 1136 | * mode where the driver will automatically maintain a safe |
| 1137 | * guard band between head/tail, avoiding this overflow |
| 1138 | * condition, but we avoid the added driver complexity for |
| 1139 | * now. |
| 1140 | */ |
| 1141 | if (unlikely(oastatus1 & GEN7_OASTATUS1_OABUFFER_OVERFLOW)) { |
| 1142 | ret = append_oa_status(stream, buf, count, offset, |
| 1143 | DRM_I915_PERF_RECORD_OA_BUFFER_LOST); |
| 1144 | if (ret) |
| 1145 | return ret; |
| 1146 | |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 1147 | DRM_DEBUG("OA buffer overflow (exponent = %d): force restart\n", |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 1148 | stream->period_exponent); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1149 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 1150 | stream->perf->ops.oa_disable(stream); |
| 1151 | stream->perf->ops.oa_enable(stream); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1152 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 1153 | oastatus1 = intel_uncore_read(uncore, GEN7_OASTATUS1); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1154 | } |
| 1155 | |
| 1156 | if (unlikely(oastatus1 & GEN7_OASTATUS1_REPORT_LOST)) { |
| 1157 | ret = append_oa_status(stream, buf, count, offset, |
| 1158 | DRM_I915_PERF_RECORD_OA_REPORT_LOST); |
| 1159 | if (ret) |
| 1160 | return ret; |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 1161 | stream->perf->gen7_latched_oastatus1 |= |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1162 | GEN7_OASTATUS1_REPORT_LOST; |
| 1163 | } |
| 1164 | |
Robert Bragg | 3bb335c | 2017-05-11 16:43:27 +0100 | [diff] [blame] | 1165 | return gen7_append_oa_reports(stream, buf, count, offset); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1166 | } |
| 1167 | |
Robert Bragg | 16d98b3 | 2016-12-07 21:40:33 +0000 | [diff] [blame] | 1168 | /** |
| 1169 | * i915_oa_wait_unlocked - handles blocking IO until OA data available |
| 1170 | * @stream: An i915-perf stream opened for OA metrics |
| 1171 | * |
| 1172 | * Called when userspace tries to read() from a blocking stream FD opened |
| 1173 | * for OA metrics. It waits until the hrtimer callback finds a non-empty |
| 1174 | * OA buffer and wakes us. |
| 1175 | * |
| 1176 | * Note: it's acceptable to have this return with some false positives |
| 1177 | * since any subsequent read handling will return -EAGAIN if there isn't |
| 1178 | * really data ready for userspace yet. |
| 1179 | * |
| 1180 | * Returns: zero on success or a negative error code |
| 1181 | */ |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1182 | static int i915_oa_wait_unlocked(struct i915_perf_stream *stream) |
| 1183 | { |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1184 | /* We would wait indefinitely if periodic sampling is not enabled */ |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 1185 | if (!stream->periodic) |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1186 | return -EIO; |
| 1187 | |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 1188 | return wait_event_interruptible(stream->poll_wq, |
| 1189 | oa_buffer_check_unlocked(stream)); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1190 | } |
| 1191 | |
Robert Bragg | 16d98b3 | 2016-12-07 21:40:33 +0000 | [diff] [blame] | 1192 | /** |
| 1193 | * i915_oa_poll_wait - call poll_wait() for an OA stream poll() |
| 1194 | * @stream: An i915-perf stream opened for OA metrics |
| 1195 | * @file: An i915 perf stream file |
| 1196 | * @wait: poll() state table |
| 1197 | * |
| 1198 | * For handling userspace polling on an i915 perf stream opened for OA metrics, |
| 1199 | * this starts a poll_wait with the wait queue that our hrtimer callback wakes |
| 1200 | * when it sees data ready to read in the circular OA buffer. |
| 1201 | */ |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1202 | static void i915_oa_poll_wait(struct i915_perf_stream *stream, |
| 1203 | struct file *file, |
| 1204 | poll_table *wait) |
| 1205 | { |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 1206 | poll_wait(file, &stream->poll_wq, wait); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1207 | } |
| 1208 | |
Robert Bragg | 16d98b3 | 2016-12-07 21:40:33 +0000 | [diff] [blame] | 1209 | /** |
| 1210 | * i915_oa_read - just calls through to &i915_oa_ops->read |
| 1211 | * @stream: An i915-perf stream opened for OA metrics |
| 1212 | * @buf: destination buffer given by userspace |
| 1213 | * @count: the number of bytes userspace wants to read |
| 1214 | * @offset: (inout): the current position for writing into @buf |
| 1215 | * |
| 1216 | * Updates @offset according to the number of bytes successfully copied into |
| 1217 | * the userspace buffer. |
| 1218 | * |
| 1219 | * Returns: zero on success or a negative error code |
| 1220 | */ |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1221 | static int i915_oa_read(struct i915_perf_stream *stream, |
| 1222 | char __user *buf, |
| 1223 | size_t count, |
| 1224 | size_t *offset) |
| 1225 | { |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 1226 | return stream->perf->ops.read(stream, buf, count, offset); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1227 | } |
| 1228 | |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 1229 | static struct intel_context *oa_pin_context(struct i915_perf_stream *stream) |
Lionel Landwerlin | 61d5676 | 2018-06-02 12:29:46 +0100 | [diff] [blame] | 1230 | { |
Chris Wilson | 5e2a041 | 2019-04-26 17:33:34 +0100 | [diff] [blame] | 1231 | struct i915_gem_engines_iter it; |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 1232 | struct i915_gem_context *ctx = stream->ctx; |
Lionel Landwerlin | 61d5676 | 2018-06-02 12:29:46 +0100 | [diff] [blame] | 1233 | struct intel_context *ce; |
Chris Wilson | fa9f668 | 2019-04-26 17:33:29 +0100 | [diff] [blame] | 1234 | int err; |
Lionel Landwerlin | 61d5676 | 2018-06-02 12:29:46 +0100 | [diff] [blame] | 1235 | |
Chris Wilson | 5e2a041 | 2019-04-26 17:33:34 +0100 | [diff] [blame] | 1236 | for_each_gem_engine(ce, i915_gem_context_lock_engines(ctx), it) { |
Lionel Landwerlin | 9a61363 | 2019-10-10 16:05:19 +0100 | [diff] [blame] | 1237 | if (ce->engine != stream->engine) /* first match! */ |
Chris Wilson | 5e2a041 | 2019-04-26 17:33:34 +0100 | [diff] [blame] | 1238 | continue; |
Lionel Landwerlin | 61d5676 | 2018-06-02 12:29:46 +0100 | [diff] [blame] | 1239 | |
Chris Wilson | 5e2a041 | 2019-04-26 17:33:34 +0100 | [diff] [blame] | 1240 | /* |
| 1241 | * As the ID is the gtt offset of the context's vma we |
| 1242 | * pin the vma to ensure the ID remains fixed. |
| 1243 | */ |
| 1244 | err = intel_context_pin(ce); |
| 1245 | if (err == 0) { |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 1246 | stream->pinned_ctx = ce; |
Chris Wilson | 5e2a041 | 2019-04-26 17:33:34 +0100 | [diff] [blame] | 1247 | break; |
| 1248 | } |
| 1249 | } |
| 1250 | i915_gem_context_unlock_engines(ctx); |
| 1251 | |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 1252 | return stream->pinned_ctx; |
Lionel Landwerlin | 61d5676 | 2018-06-02 12:29:46 +0100 | [diff] [blame] | 1253 | } |
| 1254 | |
Robert Bragg | 16d98b3 | 2016-12-07 21:40:33 +0000 | [diff] [blame] | 1255 | /** |
| 1256 | * oa_get_render_ctx_id - determine and hold ctx hw id |
| 1257 | * @stream: An i915-perf stream opened for OA metrics |
| 1258 | * |
| 1259 | * Determine the render context hw id, and ensure it remains fixed for the |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1260 | * lifetime of the stream. This ensures that we don't have to worry about |
| 1261 | * updating the context ID in OACONTROL on the fly. |
Robert Bragg | 16d98b3 | 2016-12-07 21:40:33 +0000 | [diff] [blame] | 1262 | * |
| 1263 | * Returns: zero on success or a negative error code |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1264 | */ |
| 1265 | static int oa_get_render_ctx_id(struct i915_perf_stream *stream) |
| 1266 | { |
Lionel Landwerlin | 61d5676 | 2018-06-02 12:29:46 +0100 | [diff] [blame] | 1267 | struct intel_context *ce; |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1268 | |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 1269 | ce = oa_pin_context(stream); |
Lionel Landwerlin | 61d5676 | 2018-06-02 12:29:46 +0100 | [diff] [blame] | 1270 | if (IS_ERR(ce)) |
| 1271 | return PTR_ERR(ce); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1272 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 1273 | switch (INTEL_GEN(ce->engine->i915)) { |
Lionel Landwerlin | 61d5676 | 2018-06-02 12:29:46 +0100 | [diff] [blame] | 1274 | case 7: { |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 1275 | /* |
Lionel Landwerlin | 61d5676 | 2018-06-02 12:29:46 +0100 | [diff] [blame] | 1276 | * On Haswell we don't do any post processing of the reports |
| 1277 | * and don't need to use the mask. |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 1278 | */ |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 1279 | stream->specific_ctx_id = i915_ggtt_offset(ce->state); |
| 1280 | stream->specific_ctx_id_mask = 0; |
Lionel Landwerlin | 61d5676 | 2018-06-02 12:29:46 +0100 | [diff] [blame] | 1281 | break; |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 1282 | } |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1283 | |
Lionel Landwerlin | 61d5676 | 2018-06-02 12:29:46 +0100 | [diff] [blame] | 1284 | case 8: |
| 1285 | case 9: |
| 1286 | case 10: |
Michal Wajdeczko | 19c17b7 | 2019-10-28 16:45:20 +0000 | [diff] [blame] | 1287 | if (intel_engine_in_execlists_submission_mode(ce->engine)) { |
| 1288 | stream->specific_ctx_id_mask = |
| 1289 | (1U << GEN8_CTX_ID_WIDTH) - 1; |
| 1290 | stream->specific_ctx_id = stream->specific_ctx_id_mask; |
| 1291 | } else { |
Lionel Landwerlin | 61d5676 | 2018-06-02 12:29:46 +0100 | [diff] [blame] | 1292 | /* |
| 1293 | * When using GuC, the context descriptor we write in |
| 1294 | * i915 is read by GuC and rewritten before it's |
| 1295 | * actually written into the hardware. The LRCA is |
| 1296 | * what is put into the context id field of the |
| 1297 | * context descriptor by GuC. Because it's aligned to |
| 1298 | * a page, the lower 12 bits are always zero and |
| 1299 | * dropped by GuC. They won't be part of the context |
| 1300 | * ID in the OA reports, so squash those lower bits. |
| 1301 | */ |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 1302 | stream->specific_ctx_id = |
Lionel Landwerlin | 61d5676 | 2018-06-02 12:29:46 +0100 | [diff] [blame] | 1303 | lower_32_bits(ce->lrc_desc) >> 12; |
| 1304 | |
| 1305 | /* |
| 1306 | * GuC uses the top bit to signal proxy submission, so |
| 1307 | * ignore that bit. |
| 1308 | */ |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 1309 | stream->specific_ctx_id_mask = |
Lionel Landwerlin | 61d5676 | 2018-06-02 12:29:46 +0100 | [diff] [blame] | 1310 | (1U << (GEN8_CTX_ID_WIDTH - 1)) - 1; |
Lionel Landwerlin | 61d5676 | 2018-06-02 12:29:46 +0100 | [diff] [blame] | 1311 | } |
| 1312 | break; |
| 1313 | |
Michel Thierry | 45e9c82 | 2019-08-23 01:20:50 -0700 | [diff] [blame] | 1314 | case 11: |
| 1315 | case 12: { |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 1316 | stream->specific_ctx_id_mask = |
Chris Wilson | 2935ed5 | 2019-10-04 14:40:08 +0100 | [diff] [blame] | 1317 | ((1U << GEN11_SW_CTX_ID_WIDTH) - 1) << (GEN11_SW_CTX_ID_SHIFT - 32); |
Umesh Nerlige Ramappa | 6f280b1 | 2020-01-23 17:37:01 -0800 | [diff] [blame] | 1318 | /* |
| 1319 | * Pick an unused context id: |
| 1320 | * 0..(NUM_CONTEXT_TAG - 1) are used by other contexts, |
| 1321 | * GEN12_MAX_CONTEXT_HW_ID (0x7ff) is used by the idle context. |
| 1322 | */ |
| 1323 | stream->specific_ctx_id = (GEN12_MAX_CONTEXT_HW_ID - 1) << (GEN11_SW_CTX_ID_SHIFT - 32); |
| 1324 | BUILD_BUG_ON((GEN12_MAX_CONTEXT_HW_ID - 1) < NUM_CONTEXT_TAG); |
Lionel Landwerlin | 61d5676 | 2018-06-02 12:29:46 +0100 | [diff] [blame] | 1325 | break; |
| 1326 | } |
| 1327 | |
| 1328 | default: |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 1329 | MISSING_CASE(INTEL_GEN(ce->engine->i915)); |
Lionel Landwerlin | 61d5676 | 2018-06-02 12:29:46 +0100 | [diff] [blame] | 1330 | } |
| 1331 | |
Umesh Nerlige Ramappa | 6f280b1 | 2020-01-23 17:37:01 -0800 | [diff] [blame] | 1332 | ce->tag = stream->specific_ctx_id; |
Chris Wilson | 2935ed5 | 2019-10-04 14:40:08 +0100 | [diff] [blame] | 1333 | |
Wambui Karuga | 0bf8573 | 2020-02-18 20:39:36 +0300 | [diff] [blame] | 1334 | drm_dbg(&stream->perf->i915->drm, |
| 1335 | "filtering on ctx_id=0x%x ctx_id_mask=0x%x\n", |
| 1336 | stream->specific_ctx_id, |
| 1337 | stream->specific_ctx_id_mask); |
Lionel Landwerlin | 61d5676 | 2018-06-02 12:29:46 +0100 | [diff] [blame] | 1338 | |
Chris Wilson | 266a240 | 2017-05-04 10:33:08 +0100 | [diff] [blame] | 1339 | return 0; |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1340 | } |
| 1341 | |
Robert Bragg | 16d98b3 | 2016-12-07 21:40:33 +0000 | [diff] [blame] | 1342 | /** |
| 1343 | * oa_put_render_ctx_id - counterpart to oa_get_render_ctx_id releases hold |
| 1344 | * @stream: An i915-perf stream opened for OA metrics |
| 1345 | * |
| 1346 | * Anything that was done to ensure the context HW ID would remain valid for |
| 1347 | * the lifetime of the stream can be undone here. |
| 1348 | */ |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1349 | static void oa_put_render_ctx_id(struct i915_perf_stream *stream) |
| 1350 | { |
Chris Wilson | 1fc44d9 | 2018-05-17 22:26:32 +0100 | [diff] [blame] | 1351 | struct intel_context *ce; |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1352 | |
Chris Wilson | 2935ed5 | 2019-10-04 14:40:08 +0100 | [diff] [blame] | 1353 | ce = fetch_and_zero(&stream->pinned_ctx); |
| 1354 | if (ce) { |
| 1355 | ce->tag = 0; /* recomputed on next submission after parking */ |
| 1356 | intel_context_unpin(ce); |
| 1357 | } |
| 1358 | |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 1359 | stream->specific_ctx_id = INVALID_CTX_ID; |
| 1360 | stream->specific_ctx_id_mask = 0; |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1361 | } |
| 1362 | |
| 1363 | static void |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 1364 | free_oa_buffer(struct i915_perf_stream *stream) |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1365 | { |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 1366 | i915_vma_unpin_and_release(&stream->oa_buffer.vma, |
Chris Wilson | 6a2f59e | 2018-07-21 13:50:37 +0100 | [diff] [blame] | 1367 | I915_VMA_RELEASE_MAP); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1368 | |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 1369 | stream->oa_buffer.vaddr = NULL; |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1370 | } |
| 1371 | |
Lionel Landwerlin | 6a45008 | 2019-10-12 08:23:06 +0100 | [diff] [blame] | 1372 | static void |
| 1373 | free_oa_configs(struct i915_perf_stream *stream) |
| 1374 | { |
| 1375 | struct i915_oa_config_bo *oa_bo, *tmp; |
| 1376 | |
| 1377 | i915_oa_config_put(stream->oa_config); |
| 1378 | llist_for_each_entry_safe(oa_bo, tmp, stream->oa_config_bos.first, node) |
| 1379 | free_oa_config_bo(oa_bo); |
| 1380 | } |
| 1381 | |
Lionel Landwerlin | daed3e4 | 2019-10-12 08:23:07 +0100 | [diff] [blame] | 1382 | static void |
| 1383 | free_noa_wait(struct i915_perf_stream *stream) |
| 1384 | { |
| 1385 | i915_vma_unpin_and_release(&stream->noa_wait, 0); |
| 1386 | } |
| 1387 | |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1388 | static void i915_oa_stream_destroy(struct i915_perf_stream *stream) |
| 1389 | { |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 1390 | struct i915_perf *perf = stream->perf; |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1391 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 1392 | BUG_ON(stream != perf->exclusive_stream); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1393 | |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 1394 | /* |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 1395 | * Unset exclusive_stream first, as it will be checked while disabling |
| 1396 | * the metric set on gen8+. |
Chris Wilson | a5af081 | 2020-02-27 08:57:05 +0000 | [diff] [blame] | 1397 | * |
| 1398 | * See i915_oa_init_reg_state() and lrc_configure_all_contexts() |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 1399 | */ |
Chris Wilson | a5af081 | 2020-02-27 08:57:05 +0000 | [diff] [blame] | 1400 | WRITE_ONCE(perf->exclusive_stream, NULL); |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 1401 | perf->ops.disable_metric_set(stream); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1402 | |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 1403 | free_oa_buffer(stream); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1404 | |
Chris Wilson | 52111c4 | 2019-10-10 16:05:20 +0100 | [diff] [blame] | 1405 | intel_uncore_forcewake_put(stream->uncore, FORCEWAKE_ALL); |
Chris Wilson | a5efcde | 2019-10-11 20:03:17 +0100 | [diff] [blame] | 1406 | intel_engine_pm_put(stream->engine); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1407 | |
| 1408 | if (stream->ctx) |
| 1409 | oa_put_render_ctx_id(stream); |
| 1410 | |
Lionel Landwerlin | 6a45008 | 2019-10-12 08:23:06 +0100 | [diff] [blame] | 1411 | free_oa_configs(stream); |
Lionel Landwerlin | daed3e4 | 2019-10-12 08:23:07 +0100 | [diff] [blame] | 1412 | free_noa_wait(stream); |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 1413 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 1414 | if (perf->spurious_report_rs.missed) { |
Robert Bragg | 712122e | 2017-05-11 16:43:31 +0100 | [diff] [blame] | 1415 | DRM_NOTE("%d spurious OA report notices suppressed due to ratelimiting\n", |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 1416 | perf->spurious_report_rs.missed); |
Robert Bragg | 712122e | 2017-05-11 16:43:31 +0100 | [diff] [blame] | 1417 | } |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1418 | } |
| 1419 | |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 1420 | static void gen7_init_oa_buffer(struct i915_perf_stream *stream) |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1421 | { |
Chris Wilson | 52111c4 | 2019-10-10 16:05:20 +0100 | [diff] [blame] | 1422 | struct intel_uncore *uncore = stream->uncore; |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 1423 | u32 gtt_offset = i915_ggtt_offset(stream->oa_buffer.vma); |
Robert Bragg | 0dd860c | 2017-05-11 16:43:28 +0100 | [diff] [blame] | 1424 | unsigned long flags; |
| 1425 | |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 1426 | spin_lock_irqsave(&stream->oa_buffer.ptr_lock, flags); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1427 | |
| 1428 | /* Pre-DevBDW: OABUFFER must be set with counters off, |
| 1429 | * before OASTATUS1, but after OASTATUS2 |
| 1430 | */ |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 1431 | intel_uncore_write(uncore, GEN7_OASTATUS2, /* head */ |
| 1432 | gtt_offset | GEN7_OASTATUS2_MEM_SELECT_GGTT); |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 1433 | stream->oa_buffer.head = gtt_offset; |
Robert Bragg | f279020 | 2017-05-11 16:43:26 +0100 | [diff] [blame] | 1434 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 1435 | intel_uncore_write(uncore, GEN7_OABUFFER, gtt_offset); |
Robert Bragg | f279020 | 2017-05-11 16:43:26 +0100 | [diff] [blame] | 1436 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 1437 | intel_uncore_write(uncore, GEN7_OASTATUS1, /* tail */ |
| 1438 | gtt_offset | OABUFFER_SIZE_16M); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1439 | |
Robert Bragg | 0dd860c | 2017-05-11 16:43:28 +0100 | [diff] [blame] | 1440 | /* Mark that we need updated tail pointers to read from... */ |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 1441 | stream->oa_buffer.tails[0].offset = INVALID_TAIL_PTR; |
| 1442 | stream->oa_buffer.tails[1].offset = INVALID_TAIL_PTR; |
Robert Bragg | 0dd860c | 2017-05-11 16:43:28 +0100 | [diff] [blame] | 1443 | |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 1444 | spin_unlock_irqrestore(&stream->oa_buffer.ptr_lock, flags); |
Robert Bragg | 0dd860c | 2017-05-11 16:43:28 +0100 | [diff] [blame] | 1445 | |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1446 | /* On Haswell we have to track which OASTATUS1 flags we've |
| 1447 | * already seen since they can't be cleared while periodic |
| 1448 | * sampling is enabled. |
| 1449 | */ |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 1450 | stream->perf->gen7_latched_oastatus1 = 0; |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1451 | |
| 1452 | /* NB: although the OA buffer will initially be allocated |
| 1453 | * zeroed via shmfs (and so this memset is redundant when |
| 1454 | * first allocating), we may re-init the OA buffer, either |
| 1455 | * when re-enabling a stream or in error/reset paths. |
| 1456 | * |
| 1457 | * The reason we clear the buffer for each re-init is for the |
| 1458 | * sanity check in gen7_append_oa_reports() that looks at the |
| 1459 | * report-id field to make sure it's non-zero which relies on |
| 1460 | * the assumption that new reports are being written to zeroed |
| 1461 | * memory... |
| 1462 | */ |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 1463 | memset(stream->oa_buffer.vaddr, 0, OA_BUFFER_SIZE); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1464 | |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 1465 | stream->pollin = false; |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1466 | } |
| 1467 | |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 1468 | static void gen8_init_oa_buffer(struct i915_perf_stream *stream) |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 1469 | { |
Chris Wilson | 52111c4 | 2019-10-10 16:05:20 +0100 | [diff] [blame] | 1470 | struct intel_uncore *uncore = stream->uncore; |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 1471 | u32 gtt_offset = i915_ggtt_offset(stream->oa_buffer.vma); |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 1472 | unsigned long flags; |
| 1473 | |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 1474 | spin_lock_irqsave(&stream->oa_buffer.ptr_lock, flags); |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 1475 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 1476 | intel_uncore_write(uncore, GEN8_OASTATUS, 0); |
| 1477 | intel_uncore_write(uncore, GEN8_OAHEADPTR, gtt_offset); |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 1478 | stream->oa_buffer.head = gtt_offset; |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 1479 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 1480 | intel_uncore_write(uncore, GEN8_OABUFFER_UDW, 0); |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 1481 | |
| 1482 | /* |
| 1483 | * PRM says: |
| 1484 | * |
| 1485 | * "This MMIO must be set before the OATAILPTR |
| 1486 | * register and after the OAHEADPTR register. This is |
| 1487 | * to enable proper functionality of the overflow |
| 1488 | * bit." |
| 1489 | */ |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 1490 | intel_uncore_write(uncore, GEN8_OABUFFER, gtt_offset | |
Joonas Lahtinen | fe84168 | 2018-11-16 15:55:09 +0200 | [diff] [blame] | 1491 | OABUFFER_SIZE_16M | GEN8_OABUFFER_MEM_SELECT_GGTT); |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 1492 | intel_uncore_write(uncore, GEN8_OATAILPTR, gtt_offset & GEN8_OATAILPTR_MASK); |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 1493 | |
| 1494 | /* Mark that we need updated tail pointers to read from... */ |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 1495 | stream->oa_buffer.tails[0].offset = INVALID_TAIL_PTR; |
| 1496 | stream->oa_buffer.tails[1].offset = INVALID_TAIL_PTR; |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 1497 | |
| 1498 | /* |
| 1499 | * Reset state used to recognise context switches, affecting which |
| 1500 | * reports we will forward to userspace while filtering for a single |
| 1501 | * context. |
| 1502 | */ |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 1503 | stream->oa_buffer.last_ctx_id = INVALID_CTX_ID; |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 1504 | |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 1505 | spin_unlock_irqrestore(&stream->oa_buffer.ptr_lock, flags); |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 1506 | |
| 1507 | /* |
| 1508 | * NB: although the OA buffer will initially be allocated |
| 1509 | * zeroed via shmfs (and so this memset is redundant when |
| 1510 | * first allocating), we may re-init the OA buffer, either |
| 1511 | * when re-enabling a stream or in error/reset paths. |
| 1512 | * |
| 1513 | * The reason we clear the buffer for each re-init is for the |
| 1514 | * sanity check in gen8_append_oa_reports() that looks at the |
| 1515 | * reason field to make sure it's non-zero which relies on |
| 1516 | * the assumption that new reports are being written to zeroed |
| 1517 | * memory... |
| 1518 | */ |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 1519 | memset(stream->oa_buffer.vaddr, 0, OA_BUFFER_SIZE); |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 1520 | |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 1521 | stream->pollin = false; |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 1522 | } |
| 1523 | |
Lionel Landwerlin | 00a7f0d | 2019-10-25 12:37:46 -0700 | [diff] [blame] | 1524 | static void gen12_init_oa_buffer(struct i915_perf_stream *stream) |
| 1525 | { |
| 1526 | struct intel_uncore *uncore = stream->uncore; |
| 1527 | u32 gtt_offset = i915_ggtt_offset(stream->oa_buffer.vma); |
| 1528 | unsigned long flags; |
| 1529 | |
| 1530 | spin_lock_irqsave(&stream->oa_buffer.ptr_lock, flags); |
| 1531 | |
| 1532 | intel_uncore_write(uncore, GEN12_OAG_OASTATUS, 0); |
| 1533 | intel_uncore_write(uncore, GEN12_OAG_OAHEADPTR, |
| 1534 | gtt_offset & GEN12_OAG_OAHEADPTR_MASK); |
| 1535 | stream->oa_buffer.head = gtt_offset; |
| 1536 | |
| 1537 | /* |
| 1538 | * PRM says: |
| 1539 | * |
| 1540 | * "This MMIO must be set before the OATAILPTR |
| 1541 | * register and after the OAHEADPTR register. This is |
| 1542 | * to enable proper functionality of the overflow |
| 1543 | * bit." |
| 1544 | */ |
| 1545 | intel_uncore_write(uncore, GEN12_OAG_OABUFFER, gtt_offset | |
| 1546 | OABUFFER_SIZE_16M | GEN8_OABUFFER_MEM_SELECT_GGTT); |
| 1547 | intel_uncore_write(uncore, GEN12_OAG_OATAILPTR, |
| 1548 | gtt_offset & GEN12_OAG_OATAILPTR_MASK); |
| 1549 | |
| 1550 | /* Mark that we need updated tail pointers to read from... */ |
| 1551 | stream->oa_buffer.tails[0].offset = INVALID_TAIL_PTR; |
| 1552 | stream->oa_buffer.tails[1].offset = INVALID_TAIL_PTR; |
| 1553 | |
| 1554 | /* |
| 1555 | * Reset state used to recognise context switches, affecting which |
| 1556 | * reports we will forward to userspace while filtering for a single |
| 1557 | * context. |
| 1558 | */ |
| 1559 | stream->oa_buffer.last_ctx_id = INVALID_CTX_ID; |
| 1560 | |
| 1561 | spin_unlock_irqrestore(&stream->oa_buffer.ptr_lock, flags); |
| 1562 | |
| 1563 | /* |
| 1564 | * NB: although the OA buffer will initially be allocated |
| 1565 | * zeroed via shmfs (and so this memset is redundant when |
| 1566 | * first allocating), we may re-init the OA buffer, either |
| 1567 | * when re-enabling a stream or in error/reset paths. |
| 1568 | * |
| 1569 | * The reason we clear the buffer for each re-init is for the |
| 1570 | * sanity check in gen8_append_oa_reports() that looks at the |
| 1571 | * reason field to make sure it's non-zero which relies on |
| 1572 | * the assumption that new reports are being written to zeroed |
| 1573 | * memory... |
| 1574 | */ |
| 1575 | memset(stream->oa_buffer.vaddr, 0, |
| 1576 | stream->oa_buffer.vma->size); |
| 1577 | |
| 1578 | stream->pollin = false; |
| 1579 | } |
| 1580 | |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 1581 | static int alloc_oa_buffer(struct i915_perf_stream *stream) |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1582 | { |
Pankaj Bharadiya | a9f236d | 2020-01-15 09:14:54 +0530 | [diff] [blame] | 1583 | struct drm_i915_private *i915 = stream->perf->i915; |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1584 | struct drm_i915_gem_object *bo; |
| 1585 | struct i915_vma *vma; |
| 1586 | int ret; |
| 1587 | |
Pankaj Bharadiya | a9f236d | 2020-01-15 09:14:54 +0530 | [diff] [blame] | 1588 | if (drm_WARN_ON(&i915->drm, stream->oa_buffer.vma)) |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1589 | return -ENODEV; |
| 1590 | |
Joonas Lahtinen | fe84168 | 2018-11-16 15:55:09 +0200 | [diff] [blame] | 1591 | BUILD_BUG_ON_NOT_POWER_OF_2(OA_BUFFER_SIZE); |
| 1592 | BUILD_BUG_ON(OA_BUFFER_SIZE < SZ_128K || OA_BUFFER_SIZE > SZ_16M); |
| 1593 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 1594 | bo = i915_gem_object_create_shmem(stream->perf->i915, OA_BUFFER_SIZE); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1595 | if (IS_ERR(bo)) { |
Wambui Karuga | 00376cc | 2020-01-31 12:34:12 +0300 | [diff] [blame] | 1596 | drm_err(&i915->drm, "Failed to allocate OA buffer\n"); |
Chris Wilson | 2850748 | 2019-10-04 14:39:58 +0100 | [diff] [blame] | 1597 | return PTR_ERR(bo); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1598 | } |
| 1599 | |
Chris Wilson | a679f58 | 2019-03-21 16:19:07 +0000 | [diff] [blame] | 1600 | i915_gem_object_set_cache_coherency(bo, I915_CACHE_LLC); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1601 | |
| 1602 | /* PreHSW required 512K alignment, HSW requires 16M */ |
| 1603 | vma = i915_gem_object_ggtt_pin(bo, NULL, 0, SZ_16M, 0); |
| 1604 | if (IS_ERR(vma)) { |
| 1605 | ret = PTR_ERR(vma); |
| 1606 | goto err_unref; |
| 1607 | } |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 1608 | stream->oa_buffer.vma = vma; |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1609 | |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 1610 | stream->oa_buffer.vaddr = |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1611 | i915_gem_object_pin_map(bo, I915_MAP_WB); |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 1612 | if (IS_ERR(stream->oa_buffer.vaddr)) { |
| 1613 | ret = PTR_ERR(stream->oa_buffer.vaddr); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1614 | goto err_unpin; |
| 1615 | } |
| 1616 | |
Chris Wilson | 2850748 | 2019-10-04 14:39:58 +0100 | [diff] [blame] | 1617 | return 0; |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1618 | |
| 1619 | err_unpin: |
| 1620 | __i915_vma_unpin(vma); |
| 1621 | |
| 1622 | err_unref: |
| 1623 | i915_gem_object_put(bo); |
| 1624 | |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 1625 | stream->oa_buffer.vaddr = NULL; |
| 1626 | stream->oa_buffer.vma = NULL; |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1627 | |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1628 | return ret; |
| 1629 | } |
| 1630 | |
Lionel Landwerlin | daed3e4 | 2019-10-12 08:23:07 +0100 | [diff] [blame] | 1631 | static u32 *save_restore_register(struct i915_perf_stream *stream, u32 *cs, |
| 1632 | bool save, i915_reg_t reg, u32 offset, |
| 1633 | u32 dword_count) |
| 1634 | { |
| 1635 | u32 cmd; |
| 1636 | u32 d; |
| 1637 | |
| 1638 | cmd = save ? MI_STORE_REGISTER_MEM : MI_LOAD_REGISTER_MEM; |
| 1639 | if (INTEL_GEN(stream->perf->i915) >= 8) |
| 1640 | cmd++; |
| 1641 | |
| 1642 | for (d = 0; d < dword_count; d++) { |
| 1643 | *cs++ = cmd; |
| 1644 | *cs++ = i915_mmio_reg_offset(reg) + 4 * d; |
| 1645 | *cs++ = intel_gt_scratch_offset(stream->engine->gt, |
| 1646 | offset) + 4 * d; |
| 1647 | *cs++ = 0; |
| 1648 | } |
| 1649 | |
| 1650 | return cs; |
| 1651 | } |
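/*
 * For reference (read straight off the loop above), saving one 64-bit
 * CS GPR emits two 4-dword packets, one per register dword:
 *
 *	SRM/LRM cmd, reg + 0, scratch + 0, 0
 *	SRM/LRM cmd, reg + 4, scratch + 4, 0
 *
 * so each half of the register pair gets its own slot in the GT scratch
 * area named by @offset.
 */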
| 1652 | |
| 1653 | static int alloc_noa_wait(struct i915_perf_stream *stream) |
| 1654 | { |
| 1655 | struct drm_i915_private *i915 = stream->perf->i915; |
| 1656 | struct drm_i915_gem_object *bo; |
| 1657 | struct i915_vma *vma; |
| 1658 | const u64 delay_ticks = 0xffffffffffffffff - |
| 1659 | DIV64_U64_ROUND_UP( |
| 1660 | atomic64_read(&stream->perf->noa_programming_delay) * |
| 1661 | RUNTIME_INFO(i915)->cs_timestamp_frequency_khz, |
| 1662 | 1000000ull); |
| 1663 | const u32 base = stream->engine->mmio_base; |
| 1664 | #define CS_GPR(x) GEN8_RING_CS_GPR(base, x) |
| 1665 | u32 *batch, *ts0, *cs, *jump; |
| 1666 | int ret, i; |
| 1667 | enum { |
| 1668 | START_TS, |
| 1669 | NOW_TS, |
| 1670 | DELTA_TS, |
| 1671 | JUMP_PREDICATE, |
| 1672 | DELTA_TARGET, |
| 1673 | N_CS_GPR |
| 1674 | }; |
| 1675 | |
| 1676 | bo = i915_gem_object_create_internal(i915, 4096); |
| 1677 | if (IS_ERR(bo)) { |
Wambui Karuga | 00376cc | 2020-01-31 12:34:12 +0300 | [diff] [blame] | 1678 | drm_err(&i915->drm, |
| 1679 | "Failed to allocate NOA wait batchbuffer\n"); |
Lionel Landwerlin | daed3e4 | 2019-10-12 08:23:07 +0100 | [diff] [blame] | 1680 | return PTR_ERR(bo); |
| 1681 | } |
| 1682 | |
| 1683 | /* |
| 1684 | * We pin in GGTT because multiple OA config BOs will jump to this |
| 1685 | * address, so it needs to stay fixed for the lifetime of the |
| 1686 | * i915/perf stream. |
| 1687 | */ |
| 1688 | vma = i915_gem_object_ggtt_pin(bo, NULL, 0, 0, PIN_HIGH); |
| 1689 | if (IS_ERR(vma)) { |
| 1690 | ret = PTR_ERR(vma); |
| 1691 | goto err_unref; |
| 1692 | } |
| 1693 | |
| 1694 | batch = cs = i915_gem_object_pin_map(bo, I915_MAP_WB); |
| 1695 | if (IS_ERR(batch)) { |
| 1696 | ret = PTR_ERR(batch); |
| 1697 | goto err_unpin; |
| 1698 | } |
| 1699 | |
| 1700 | /* Save registers. */ |
| 1701 | for (i = 0; i < N_CS_GPR; i++) |
| 1702 | cs = save_restore_register( |
| 1703 | stream, cs, true /* save */, CS_GPR(i), |
| 1704 | INTEL_GT_SCRATCH_FIELD_PERF_CS_GPR + 8 * i, 2); |
| 1705 | cs = save_restore_register( |
| 1706 | stream, cs, true /* save */, MI_PREDICATE_RESULT_1, |
| 1707 | INTEL_GT_SCRATCH_FIELD_PERF_PREDICATE_RESULT_1, 1); |
| 1708 | |
| 1709 | /* First timestamp snapshot location. */ |
| 1710 | ts0 = cs; |
| 1711 | |
| 1712 | /* |
| 1713 | * Initial snapshot of the timestamp register to implement the wait. |
| 1714 | * We work with 32-bit values, so clear out the top 32 bits of the |
| 1715 | * register because the ALU operates on 64-bit values. |
| 1716 | */ |
| 1717 | *cs++ = MI_LOAD_REGISTER_IMM(1); |
| 1718 | *cs++ = i915_mmio_reg_offset(CS_GPR(START_TS)) + 4; |
| 1719 | *cs++ = 0; |
| 1720 | *cs++ = MI_LOAD_REGISTER_REG | (3 - 2); |
| 1721 | *cs++ = i915_mmio_reg_offset(RING_TIMESTAMP(base)); |
| 1722 | *cs++ = i915_mmio_reg_offset(CS_GPR(START_TS)); |
| 1723 | |
| 1724 | /* |
| 1725 | * This is the location we're going to jump back into until the |
| 1726 | * required amount of time has passed. |
| 1727 | */ |
| 1728 | jump = cs; |
| 1729 | |
| 1730 | /* |
| 1731 | * Take another snapshot of the timestamp register. Take care to clear |
| 1732 | * out the top 32 bits of CS_GPR(NOW_TS) as we're using it for other |
| 1733 | * operations below. |
| 1734 | */ |
| 1735 | *cs++ = MI_LOAD_REGISTER_IMM(1); |
| 1736 | *cs++ = i915_mmio_reg_offset(CS_GPR(NOW_TS)) + 4; |
| 1737 | *cs++ = 0; |
| 1738 | *cs++ = MI_LOAD_REGISTER_REG | (3 - 2); |
| 1739 | *cs++ = i915_mmio_reg_offset(RING_TIMESTAMP(base)); |
| 1740 | *cs++ = i915_mmio_reg_offset(CS_GPR(NOW_TS)); |
| 1741 | |
| 1742 | /* |
| 1743 | * Do a diff between the two timestamps and store the result into |
| 1744 | * CS_GPR(DELTA_TS). |
| 1745 | */ |
| 1746 | *cs++ = MI_MATH(5); |
| 1747 | *cs++ = MI_MATH_LOAD(MI_MATH_REG_SRCA, MI_MATH_REG(NOW_TS)); |
| 1748 | *cs++ = MI_MATH_LOAD(MI_MATH_REG_SRCB, MI_MATH_REG(START_TS)); |
| 1749 | *cs++ = MI_MATH_SUB; |
| 1750 | *cs++ = MI_MATH_STORE(MI_MATH_REG(DELTA_TS), MI_MATH_REG_ACCU); |
| 1751 | *cs++ = MI_MATH_STORE(MI_MATH_REG(JUMP_PREDICATE), MI_MATH_REG_CF); |
| 1752 | |
| 1753 | /* |
| 1754 | * Transfer the carry flag (set to 1 if ts1 < ts0, meaning the |
| 1755 | * timestamp has rolled over the 32 bits) into the predicate register |
| 1756 | * to be used for the predicated jump. |
| 1757 | */ |
| 1758 | *cs++ = MI_LOAD_REGISTER_REG | (3 - 2); |
| 1759 | *cs++ = i915_mmio_reg_offset(CS_GPR(JUMP_PREDICATE)); |
| 1760 | *cs++ = i915_mmio_reg_offset(MI_PREDICATE_RESULT_1); |
| 1761 | |
| 1762 | /* Restart from the beginning if we had timestamps roll over. */ |
| 1763 | *cs++ = (INTEL_GEN(i915) < 8 ? |
| 1764 | MI_BATCH_BUFFER_START : |
| 1765 | MI_BATCH_BUFFER_START_GEN8) | |
| 1766 | MI_BATCH_PREDICATE; |
| 1767 | *cs++ = i915_ggtt_offset(vma) + (ts0 - batch) * 4; |
| 1768 | *cs++ = 0; |
| 1769 | |
| 1770 | /* |
| 1771 | * Now take the diff between the two previous timestamps and add it to: |
| 1772 | * ((1 << 64) - 1) - delay (in CS timestamp ticks) |
| 1773 | * |
| 1774 | * When the Carry Flag contains 1 this means the elapsed time is |
| 1775 | * longer than the expected delay, and we can exit the wait loop. |
| 1776 | */ |
| 1777 | *cs++ = MI_LOAD_REGISTER_IMM(2); |
| 1778 | *cs++ = i915_mmio_reg_offset(CS_GPR(DELTA_TARGET)); |
| 1779 | *cs++ = lower_32_bits(delay_ticks); |
| 1780 | *cs++ = i915_mmio_reg_offset(CS_GPR(DELTA_TARGET)) + 4; |
| 1781 | *cs++ = upper_32_bits(delay_ticks); |
| 1782 | |
| 1783 | *cs++ = MI_MATH(4); |
| 1784 | *cs++ = MI_MATH_LOAD(MI_MATH_REG_SRCA, MI_MATH_REG(DELTA_TS)); |
| 1785 | *cs++ = MI_MATH_LOAD(MI_MATH_REG_SRCB, MI_MATH_REG(DELTA_TARGET)); |
| 1786 | *cs++ = MI_MATH_ADD; |
| 1787 | *cs++ = MI_MATH_STOREINV(MI_MATH_REG(JUMP_PREDICATE), MI_MATH_REG_CF); |
| 1788 | |
Lionel Landwerlin | dd590f6 | 2019-11-14 16:02:24 +0200 | [diff] [blame] | 1789 | *cs++ = MI_ARB_CHECK; |
| 1790 | |
Lionel Landwerlin | daed3e4 | 2019-10-12 08:23:07 +0100 | [diff] [blame] | 1791 | /* |
| 1792 | * Transfer the result into the predicate register to be used for the |
| 1793 | * predicated jump. |
| 1794 | */ |
| 1795 | *cs++ = MI_LOAD_REGISTER_REG | (3 - 2); |
| 1796 | *cs++ = i915_mmio_reg_offset(CS_GPR(JUMP_PREDICATE)); |
| 1797 | *cs++ = i915_mmio_reg_offset(MI_PREDICATE_RESULT_1); |
| 1798 | |
| 1799 | /* Predicate the jump. */ |
| 1800 | *cs++ = (INTEL_GEN(i915) < 8 ? |
| 1801 | MI_BATCH_BUFFER_START : |
| 1802 | MI_BATCH_BUFFER_START_GEN8) | |
| 1803 | MI_BATCH_PREDICATE; |
| 1804 | *cs++ = i915_ggtt_offset(vma) + (jump - batch) * 4; |
| 1805 | *cs++ = 0; |
| 1806 | |
| 1807 | /* Restore registers. */ |
| 1808 | for (i = 0; i < N_CS_GPR; i++) |
| 1809 | cs = save_restore_register( |
| 1810 | stream, cs, false /* restore */, CS_GPR(i), |
| 1811 | INTEL_GT_SCRATCH_FIELD_PERF_CS_GPR + 8 * i, 2); |
| 1812 | cs = save_restore_register( |
| 1813 | stream, cs, false /* restore */, MI_PREDICATE_RESULT_1, |
| 1814 | INTEL_GT_SCRATCH_FIELD_PERF_PREDICATE_RESULT_1, 1); |
| 1815 | |
| 1816 | /* And return to the ring. */ |
| 1817 | *cs++ = MI_BATCH_BUFFER_END; |
| 1818 | |
| 1819 | GEM_BUG_ON(cs - batch > PAGE_SIZE / sizeof(*batch)); |
| 1820 | |
| 1821 | i915_gem_object_flush_map(bo); |
| 1822 | i915_gem_object_unpin_map(bo); |
| 1823 | |
| 1824 | stream->noa_wait = vma; |
| 1825 | return 0; |
| 1826 | |
| 1827 | err_unpin: |
Lionel Landwerlin | 15d0ace | 2019-10-12 08:23:08 +0100 | [diff] [blame] | 1828 | i915_vma_unpin_and_release(&vma, 0); |
Lionel Landwerlin | daed3e4 | 2019-10-12 08:23:07 +0100 | [diff] [blame] | 1829 | err_unref: |
| 1830 | i915_gem_object_put(bo); |
| 1831 | return ret; |
| 1832 | } |
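/*
 * A CPU-side model of the carry-flag trick used by the batch above (an
 * illustrative sketch, not driver code): DELTA_TARGET is preloaded with
 * the complemented delay, so the 64-bit MI_MATH ADD carries out exactly
 * when the measured delta exceeds the programmed delay, and STOREINV of
 * the carry flag yields a predicate that stays set only while waiting.
 */
static bool noa_wait_keep_looping(u64 delta_ts, u64 delay)
{
	u64 target = ~0ull - delay;	/* mirrors the delay_ticks constant */
	bool carry = delta_ts > ~0ull - target;	/* 64b ADD overflowed */

	return !carry;	/* keep looping while delta_ts <= delay */
}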
| 1833 | |
Lionel Landwerlin | 15d0ace | 2019-10-12 08:23:08 +0100 | [diff] [blame] | 1834 | static u32 *write_cs_mi_lri(u32 *cs, |
| 1835 | const struct i915_oa_reg *reg_data, |
| 1836 | u32 n_regs) |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1837 | { |
Lionel Landwerlin | 701f823 | 2017-08-03 17:58:08 +0100 | [diff] [blame] | 1838 | u32 i; |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1839 | |
| 1840 | for (i = 0; i < n_regs; i++) { |
Lionel Landwerlin | 15d0ace | 2019-10-12 08:23:08 +0100 | [diff] [blame] | 1841 | if ((i % MI_LOAD_REGISTER_IMM_MAX_REGS) == 0) { |
| 1842 | u32 n_lri = min_t(u32, |
| 1843 | n_regs - i, |
| 1844 | MI_LOAD_REGISTER_IMM_MAX_REGS); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1845 | |
Lionel Landwerlin | 15d0ace | 2019-10-12 08:23:08 +0100 | [diff] [blame] | 1846 | *cs++ = MI_LOAD_REGISTER_IMM(n_lri); |
| 1847 | } |
| 1848 | *cs++ = i915_mmio_reg_offset(reg_data[i].addr); |
| 1849 | *cs++ = reg_data[i].value; |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1850 | } |
Lionel Landwerlin | 15d0ace | 2019-10-12 08:23:08 +0100 | [diff] [blame] | 1851 | |
| 1852 | return cs; |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1853 | } |
| 1854 | |
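| | /*
| | * Number of dwords needed to emit num_regs writes as LRI packets: one
| | * header dword per packet plus an (offset, value) pair per register.
| | * For example, assuming MI_LOAD_REGISTER_IMM_MAX_REGS is 126, 130
| | * registers take DIV_ROUND_UP(130, 126) = 2 headers plus 130 * 2 = 260
| | * payload dwords, i.e. 262 in total.
| | */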
Lionel Landwerlin | 15d0ace | 2019-10-12 08:23:08 +0100 | [diff] [blame] | 1855 | static int num_lri_dwords(int num_regs) |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1856 | { |
Lionel Landwerlin | 15d0ace | 2019-10-12 08:23:08 +0100 | [diff] [blame] | 1857 | int count = 0; |
| 1858 | |
| 1859 | if (num_regs > 0) { |
| 1860 | count += DIV_ROUND_UP(num_regs, MI_LOAD_REGISTER_IMM_MAX_REGS); |
| 1861 | count += num_regs * 2; |
| 1862 | } |
| 1863 | |
| 1864 | return count; |
| 1865 | } |
| 1866 | |
| 1867 | static struct i915_oa_config_bo * |
| 1868 | alloc_oa_config_buffer(struct i915_perf_stream *stream, |
| 1869 | struct i915_oa_config *oa_config) |
| 1870 | { |
| 1871 | struct drm_i915_gem_object *obj; |
| 1872 | struct i915_oa_config_bo *oa_bo; |
| 1873 | size_t config_length = 0; |
| 1874 | u32 *cs; |
| 1875 | int err; |
| 1876 | |
| 1877 | oa_bo = kzalloc(sizeof(*oa_bo), GFP_KERNEL); |
| 1878 | if (!oa_bo) |
| 1879 | return ERR_PTR(-ENOMEM); |
| 1880 | |
| 1881 | config_length += num_lri_dwords(oa_config->mux_regs_len); |
| 1882 | config_length += num_lri_dwords(oa_config->b_counter_regs_len); |
| 1883 | config_length += num_lri_dwords(oa_config->flex_regs_len); |
Lionel Landwerlin | 9393765 | 2019-11-13 17:46:39 +0200 | [diff] [blame] | 1884 | config_length += 3; /* MI_BATCH_BUFFER_START */ |
Lionel Landwerlin | 15d0ace | 2019-10-12 08:23:08 +0100 | [diff] [blame] | 1885 | config_length = ALIGN(sizeof(u32) * config_length, I915_GTT_PAGE_SIZE); |
| 1886 | |
| 1887 | obj = i915_gem_object_create_shmem(stream->perf->i915, config_length); |
| 1888 | if (IS_ERR(obj)) { |
| 1889 | err = PTR_ERR(obj); |
| 1890 | goto err_free; |
| 1891 | } |
| 1892 | |
| 1893 | cs = i915_gem_object_pin_map(obj, I915_MAP_WB); |
| 1894 | if (IS_ERR(cs)) { |
| 1895 | err = PTR_ERR(cs); |
| 1896 | goto err_oa_bo; |
| 1897 | } |
| 1898 | |
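| | /*
| | * The buffer is written out as:
| | *
| | *   [mux LRIs][b-counter LRIs][flex LRIs][MI_BATCH_BUFFER_START -> noa_wait]
| | */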
| 1899 | cs = write_cs_mi_lri(cs, |
| 1900 | oa_config->mux_regs, |
| 1901 | oa_config->mux_regs_len); |
| 1902 | cs = write_cs_mi_lri(cs, |
| 1903 | oa_config->b_counter_regs, |
| 1904 | oa_config->b_counter_regs_len); |
| 1905 | cs = write_cs_mi_lri(cs, |
| 1906 | oa_config->flex_regs, |
| 1907 | oa_config->flex_regs_len); |
| 1908 | |
Lionel Landwerlin | 9393765 | 2019-11-13 17:46:39 +0200 | [diff] [blame] | 1909 | /* Jump into the active wait. */ |
| 1910 | *cs++ = (INTEL_GEN(stream->perf->i915) < 8 ? |
| 1911 | MI_BATCH_BUFFER_START : |
| 1912 | MI_BATCH_BUFFER_START_GEN8); |
| 1913 | *cs++ = i915_ggtt_offset(stream->noa_wait); |
| 1914 | *cs++ = 0; |
Lionel Landwerlin | 15d0ace | 2019-10-12 08:23:08 +0100 | [diff] [blame] | 1915 | |
| 1916 | i915_gem_object_flush_map(obj); |
| 1917 | i915_gem_object_unpin_map(obj); |
| 1918 | |
| 1919 | oa_bo->vma = i915_vma_instance(obj, |
| 1920 | &stream->engine->gt->ggtt->vm, |
| 1921 | NULL); |
| 1922 | if (IS_ERR(oa_bo->vma)) { |
| 1923 | err = PTR_ERR(oa_bo->vma); |
| 1924 | goto err_oa_bo; |
| 1925 | } |
| 1926 | |
| 1927 | oa_bo->oa_config = i915_oa_config_get(oa_config); |
| 1928 | llist_add(&oa_bo->node, &stream->oa_config_bos); |
| 1929 | |
| 1930 | return oa_bo; |
| 1931 | |
| 1932 | err_oa_bo: |
| 1933 | i915_gem_object_put(obj); |
| 1934 | err_free: |
| 1935 | kfree(oa_bo); |
| 1936 | return ERR_PTR(err); |
| 1937 | } |
| 1938 | |
| 1939 | static struct i915_vma * |
| 1940 | get_oa_vma(struct i915_perf_stream *stream, struct i915_oa_config *oa_config) |
| 1941 | { |
| 1942 | struct i915_oa_config_bo *oa_bo; |
| 1943 | |
Lionel Landwerlin | 14bfcd3 | 2019-07-10 11:55:24 +0100 | [diff] [blame] | 1944 | /* |
Lionel Landwerlin | 15d0ace | 2019-10-12 08:23:08 +0100 | [diff] [blame] | 1945 | * Look for the buffer in the already allocated BOs attached |
| 1946 | * to the stream. |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 1947 | */ |
Lionel Landwerlin | 15d0ace | 2019-10-12 08:23:08 +0100 | [diff] [blame] | 1948 | llist_for_each_entry(oa_bo, stream->oa_config_bos.first, node) { |
| 1949 | if (oa_bo->oa_config == oa_config && |
| 1950 | memcmp(oa_bo->oa_config->uuid, |
| 1951 | oa_config->uuid, |
| 1952 | sizeof(oa_config->uuid)) == 0) |
| 1953 | goto out; |
| 1954 | } |
| 1955 | |
| 1956 | oa_bo = alloc_oa_config_buffer(stream, oa_config); |
| 1957 | if (IS_ERR(oa_bo)) |
| 1958 | return ERR_CAST(oa_bo); |
| 1959 | |
| 1960 | out: |
| 1961 | return i915_vma_get(oa_bo->vma); |
| 1962 | } |
| 1963 | |
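| | /*
| | * Submit the OA configuration buffer as a privileged batch on the given
| | * context and hand back the request, so that callers can wait for the
| | * register programming to complete before enabling the OA unit.
| | */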
Chris Wilson | 4b4e973 | 2020-03-02 08:57:57 +0000 | [diff] [blame] | 1964 | static struct i915_request * |
| 1965 | emit_oa_config(struct i915_perf_stream *stream, |
| 1966 | struct i915_oa_config *oa_config, |
| 1967 | struct intel_context *ce) |
Lionel Landwerlin | 15d0ace | 2019-10-12 08:23:08 +0100 | [diff] [blame] | 1968 | { |
| 1969 | struct i915_request *rq; |
| 1970 | struct i915_vma *vma; |
| 1971 | int err; |
| 1972 | |
Lionel Landwerlin | 8814c6d | 2019-10-20 00:46:47 +0300 | [diff] [blame] | 1973 | vma = get_oa_vma(stream, oa_config); |
Lionel Landwerlin | 15d0ace | 2019-10-12 08:23:08 +0100 | [diff] [blame] | 1974 | if (IS_ERR(vma)) |
Chris Wilson | 4b4e973 | 2020-03-02 08:57:57 +0000 | [diff] [blame] | 1975 | return ERR_CAST(vma); |
Lionel Landwerlin | 15d0ace | 2019-10-12 08:23:08 +0100 | [diff] [blame] | 1976 | |
| 1977 | err = i915_vma_pin(vma, 0, 0, PIN_GLOBAL | PIN_HIGH); |
| 1978 | if (err) |
| 1979 | goto err_vma_put; |
| 1980 | |
Chris Wilson | de5825b | 2019-11-25 10:58:56 +0000 | [diff] [blame] | 1981 | intel_engine_pm_get(ce->engine); |
Lionel Landwerlin | 15d0ace | 2019-10-12 08:23:08 +0100 | [diff] [blame] | 1982 | rq = i915_request_create(ce); |
Chris Wilson | de5825b | 2019-11-25 10:58:56 +0000 | [diff] [blame] | 1983 | intel_engine_pm_put(ce->engine); |
Lionel Landwerlin | 15d0ace | 2019-10-12 08:23:08 +0100 | [diff] [blame] | 1984 | if (IS_ERR(rq)) { |
| 1985 | err = PTR_ERR(rq); |
| 1986 | goto err_vma_unpin; |
| 1987 | } |
| 1988 | |
| 1989 | i915_vma_lock(vma); |
| 1990 | err = i915_request_await_object(rq, vma->obj, 0); |
| 1991 | if (!err) |
| 1992 | err = i915_vma_move_to_active(vma, rq, 0); |
| 1993 | i915_vma_unlock(vma); |
| 1994 | if (err) |
| 1995 | goto err_add_request; |
| 1996 | |
| 1997 | err = rq->engine->emit_bb_start(rq, |
| 1998 | vma->node.start, 0, |
| 1999 | I915_DISPATCH_SECURE); |
Chris Wilson | 4b4e973 | 2020-03-02 08:57:57 +0000 | [diff] [blame] | 2000 | if (err) |
| 2001 | goto err_add_request; |
| 2002 | |
| 2003 | i915_request_get(rq); |
Lionel Landwerlin | 15d0ace | 2019-10-12 08:23:08 +0100 | [diff] [blame] | 2004 | err_add_request: |
| 2005 | i915_request_add(rq); |
| 2006 | err_vma_unpin: |
| 2007 | i915_vma_unpin(vma); |
| 2008 | err_vma_put: |
| 2009 | i915_vma_put(vma); |
Chris Wilson | 4b4e973 | 2020-03-02 08:57:57 +0000 | [diff] [blame] | 2010 | return err ? ERR_PTR(err) : rq; |
Lionel Landwerlin | 14bfcd3 | 2019-07-10 11:55:24 +0100 | [diff] [blame] | 2011 | } |
| 2012 | |
Chris Wilson | 5f5c382 | 2019-10-12 10:10:56 +0100 | [diff] [blame] | 2013 | static struct intel_context *oa_context(struct i915_perf_stream *stream) |
| 2014 | { |
| 2015 | return stream->pinned_ctx ?: stream->engine->kernel_context; |
| 2016 | } |
| 2017 | |
Chris Wilson | 4b4e973 | 2020-03-02 08:57:57 +0000 | [diff] [blame] | 2018 | static struct i915_request * |
| 2019 | hsw_enable_metric_set(struct i915_perf_stream *stream) |
Lionel Landwerlin | 14bfcd3 | 2019-07-10 11:55:24 +0100 | [diff] [blame] | 2020 | { |
Chris Wilson | 52111c4 | 2019-10-10 16:05:20 +0100 | [diff] [blame] | 2021 | struct intel_uncore *uncore = stream->uncore; |
Lionel Landwerlin | 14bfcd3 | 2019-07-10 11:55:24 +0100 | [diff] [blame] | 2022 | |
| 2023 | /* |
| 2024 | * PRM: |
| 2025 | * |
| 2026 | * OA unit is using “crclk” for its functionality. When trunk |
| 2027 | * level clock gating takes place, OA clock would be gated, |
| 2028 | * unable to count the events from non-render clock domain. |
| 2029 | * Render clock gating must be disabled when OA is enabled to |
| 2030 | * count the events from non-render domain. Unit level clock |
| 2031 | * gating for RCS should also be disabled. |
| 2032 | */ |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 2033 | intel_uncore_rmw(uncore, GEN7_MISCCPCTL, |
| 2034 | GEN7_DOP_CLOCK_GATE_ENABLE, 0); |
| 2035 | intel_uncore_rmw(uncore, GEN6_UCGCTL1, |
| 2036 | 0, GEN6_CSUNIT_CLOCK_GATE_DISABLE); |
Lionel Landwerlin | 14bfcd3 | 2019-07-10 11:55:24 +0100 | [diff] [blame] | 2037 | |
Lionel Landwerlin | 8814c6d | 2019-10-20 00:46:47 +0300 | [diff] [blame] | 2038 | return emit_oa_config(stream, stream->oa_config, oa_context(stream)); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 2039 | } |
| 2040 | |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 2041 | static void hsw_disable_metric_set(struct i915_perf_stream *stream) |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 2042 | { |
Chris Wilson | 52111c4 | 2019-10-10 16:05:20 +0100 | [diff] [blame] | 2043 | struct intel_uncore *uncore = stream->uncore; |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 2044 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 2045 | intel_uncore_rmw(uncore, GEN6_UCGCTL1, |
| 2046 | GEN6_CSUNIT_CLOCK_GATE_DISABLE, 0); |
| 2047 | intel_uncore_rmw(uncore, GEN7_MISCCPCTL, |
| 2048 | 0, GEN7_DOP_CLOCK_GATE_ENABLE); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 2049 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 2050 | intel_uncore_rmw(uncore, GDT_CHICKEN_BITS, GT_NOA_ENABLE, 0); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 2051 | } |
| 2052 | |
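| | /*
| | * Look up the value the OA config programs into a given flex EU
| | * register, falling back to 0 when there is no config or the register
| | * isn't part of it.
| | */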
Chris Wilson | a9877da | 2019-07-16 22:34:43 +0100 | [diff] [blame] | 2053 | static u32 oa_config_flex_reg(const struct i915_oa_config *oa_config, |
| 2054 | i915_reg_t reg) |
| 2055 | { |
| 2056 | u32 mmio = i915_mmio_reg_offset(reg); |
| 2057 | int i; |
| 2058 | |
| 2059 | /* |
| 2060 | * This arbitrary default will select the 'EU FPU0 Pipeline |
| 2061 | * Active' event. In the future it's anticipated that there |
| 2062 | * will be an explicit 'No Event' we can select, but not yet... |
| 2063 | */ |
| 2064 | if (!oa_config) |
| 2065 | return 0; |
| 2066 | |
| 2067 | for (i = 0; i < oa_config->flex_regs_len; i++) { |
| 2068 | if (i915_mmio_reg_offset(oa_config->flex_regs[i].addr) == mmio) |
| 2069 | return oa_config->flex_regs[i].value; |
| 2070 | } |
| 2071 | |
| 2072 | return 0; |
| 2073 | } |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 2074 | /* |
| 2075 | * NB: It must always remain pointer safe to run this even if the OA unit |
| 2076 | * has been disabled. |
| 2077 | * |
| 2078 | * It's fine to put out-of-date values into these per-context registers |
| 2079 | * in the case that the OA unit has been disabled. |
| 2080 | */ |
Chris Wilson | b146e5e | 2019-03-06 08:47:04 +0000 | [diff] [blame] | 2081 | static void |
Chris Wilson | 7dc56af | 2019-09-24 15:59:50 +0100 | [diff] [blame] | 2082 | gen8_update_reg_state_unlocked(const struct intel_context *ce, |
| 2083 | const struct i915_perf_stream *stream) |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 2084 | { |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 2085 | u32 ctx_oactxctrl = stream->perf->ctx_oactxctrl_offset; |
| 2086 | u32 ctx_flexeu0 = stream->perf->ctx_flexeu0_offset; |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 2087 | /* The MMIO offsets for Flex EU registers aren't contiguous */ |
Lionel Landwerlin | 35ab4fd | 2018-08-13 09:02:18 +0100 | [diff] [blame] | 2088 | i915_reg_t flex_regs[] = { |
| 2089 | EU_PERF_CNTL0, |
| 2090 | EU_PERF_CNTL1, |
| 2091 | EU_PERF_CNTL2, |
| 2092 | EU_PERF_CNTL3, |
| 2093 | EU_PERF_CNTL4, |
| 2094 | EU_PERF_CNTL5, |
| 2095 | EU_PERF_CNTL6, |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 2096 | }; |
Chris Wilson | 7dc56af | 2019-09-24 15:59:50 +0100 | [diff] [blame] | 2097 | u32 *reg_state = ce->lrc_reg_state; |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 2098 | int i; |
| 2099 | |
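| | /*
| | * The register state is stored as MI_LRI-style (offset, value) pairs;
| | * ctx_oactxctrl and ctx_flexeu0 index the offset dword, so the value
| | * to be loaded lives at +1.
| | */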
Umesh Nerlige Ramappa | ccdeed4 | 2019-12-06 11:43:39 -0800 | [diff] [blame] | 2100 | reg_state[ctx_oactxctrl + 1] = |
| 2101 | (stream->period_exponent << GEN8_OA_TIMER_PERIOD_SHIFT) | |
| 2102 | (stream->periodic ? GEN8_OA_TIMER_ENABLE : 0) | |
| 2103 | GEN8_OA_COUNTER_RESUME; |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 2104 | |
Umesh Nerlige Ramappa | ccdeed4 | 2019-12-06 11:43:39 -0800 | [diff] [blame] | 2105 | for (i = 0; i < ARRAY_SIZE(flex_regs); i++) |
Chris Wilson | 7dc56af | 2019-09-24 15:59:50 +0100 | [diff] [blame] | 2106 | reg_state[ctx_flexeu0 + i * 2 + 1] = |
| 2107 | oa_config_flex_reg(stream->oa_config, flex_regs[i]); |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 2108 | } |
| 2109 | |
Chris Wilson | a9877da | 2019-07-16 22:34:43 +0100 | [diff] [blame] | 2110 | struct flex { |
| 2111 | i915_reg_t reg; |
| 2112 | u32 offset; |
| 2113 | u32 value; |
| 2114 | }; |
| 2115 | |
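| | /*
| | * Patch the saved context image directly: each flex entry becomes a
| | * 4-dword MI_STORE_DWORD_IMM writing the value into the context state
| | * page at the given dword offset.
| | */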
| 2116 | static int |
| 2117 | gen8_store_flex(struct i915_request *rq, |
| 2118 | struct intel_context *ce, |
| 2119 | const struct flex *flex, unsigned int count) |
| 2120 | { |
| 2121 | u32 offset; |
| 2122 | u32 *cs; |
| 2123 | |
| 2124 | cs = intel_ring_begin(rq, 4 * count); |
| 2125 | if (IS_ERR(cs)) |
| 2126 | return PTR_ERR(cs); |
| 2127 | |
| 2128 | offset = i915_ggtt_offset(ce->state) + LRC_STATE_PN * PAGE_SIZE; |
| 2129 | do { |
| 2130 | *cs++ = MI_STORE_DWORD_IMM_GEN4 | MI_USE_GGTT; |
Chris Wilson | 7dc56af | 2019-09-24 15:59:50 +0100 | [diff] [blame] | 2131 | *cs++ = offset + flex->offset * sizeof(u32); |
Chris Wilson | a9877da | 2019-07-16 22:34:43 +0100 | [diff] [blame] | 2132 | *cs++ = 0; |
| 2133 | *cs++ = flex->value; |
| 2134 | } while (flex++, --count); |
| 2135 | |
| 2136 | intel_ring_advance(rq, cs); |
| 2137 | |
| 2138 | return 0; |
| 2139 | } |
| 2140 | |
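| | /*
| | * Program the registers from within the context itself using a single
| | * MI_LOAD_REGISTER_IMM packet, padded with an MI_NOOP to keep the
| | * emission an even number of dwords.
| | */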
| 2141 | static int |
| 2142 | gen8_load_flex(struct i915_request *rq, |
| 2143 | struct intel_context *ce, |
| 2144 | const struct flex *flex, unsigned int count) |
| 2145 | { |
| 2146 | u32 *cs; |
| 2147 | |
| 2148 | GEM_BUG_ON(!count || count > 63); |
| 2149 | |
| 2150 | cs = intel_ring_begin(rq, 2 * count + 2); |
| 2151 | if (IS_ERR(cs)) |
| 2152 | return PTR_ERR(cs); |
| 2153 | |
| 2154 | *cs++ = MI_LOAD_REGISTER_IMM(count); |
| 2155 | do { |
| 2156 | *cs++ = i915_mmio_reg_offset(flex->reg); |
| 2157 | *cs++ = flex->value; |
| 2158 | } while (flex++, --count); |
| 2159 | *cs++ = MI_NOOP; |
| 2160 | |
| 2161 | intel_ring_advance(rq, cs); |
| 2162 | |
| 2163 | return 0; |
| 2164 | } |
| 2165 | |
| 2166 | static int gen8_modify_context(struct intel_context *ce, |
| 2167 | const struct flex *flex, unsigned int count) |
| 2168 | { |
| 2169 | struct i915_request *rq; |
| 2170 | int err; |
| 2171 | |
Chris Wilson | de5825b | 2019-11-25 10:58:56 +0000 | [diff] [blame] | 2172 | rq = intel_engine_create_kernel_request(ce->engine); |
Chris Wilson | a9877da | 2019-07-16 22:34:43 +0100 | [diff] [blame] | 2173 | if (IS_ERR(rq)) |
| 2174 | return PTR_ERR(rq); |
| 2175 | |
| 2176 | /* Serialise with the remote context */ |
| 2177 | err = intel_context_prepare_remote_request(ce, rq); |
| 2178 | if (err == 0) |
| 2179 | err = gen8_store_flex(rq, ce, flex, count); |
| 2180 | |
| 2181 | i915_request_add(rq); |
| 2182 | return err; |
| 2183 | } |
| 2184 | |
| 2185 | static int gen8_modify_self(struct intel_context *ce, |
| 2186 | const struct flex *flex, unsigned int count) |
| 2187 | { |
| 2188 | struct i915_request *rq; |
| 2189 | int err; |
| 2190 | |
Chris Wilson | d236e2a | 2020-02-27 08:57:06 +0000 | [diff] [blame] | 2191 | intel_engine_pm_get(ce->engine); |
Chris Wilson | a9877da | 2019-07-16 22:34:43 +0100 | [diff] [blame] | 2192 | rq = i915_request_create(ce); |
Chris Wilson | d236e2a | 2020-02-27 08:57:06 +0000 | [diff] [blame] | 2193 | intel_engine_pm_put(ce->engine); |
Chris Wilson | a9877da | 2019-07-16 22:34:43 +0100 | [diff] [blame] | 2194 | if (IS_ERR(rq)) |
| 2195 | return PTR_ERR(rq); |
| 2196 | |
| 2197 | err = gen8_load_flex(rq, ce, flex, count); |
| 2198 | |
| 2199 | i915_request_add(rq); |
| 2200 | return err; |
| 2201 | } |
| 2202 | |
Chris Wilson | 5cca503 | 2019-07-26 14:14:58 +0100 | [diff] [blame] | 2203 | static int gen8_configure_context(struct i915_gem_context *ctx, |
| 2204 | struct flex *flex, unsigned int count) |
| 2205 | { |
| 2206 | struct i915_gem_engines_iter it; |
| 2207 | struct intel_context *ce; |
| 2208 | int err = 0; |
| 2209 | |
| 2210 | for_each_gem_engine(ce, i915_gem_context_lock_engines(ctx), it) { |
| 2211 | GEM_BUG_ON(ce == ce->engine->kernel_context); |
| 2212 | |
| 2213 | if (ce->engine->class != RENDER_CLASS) |
| 2214 | continue; |
| 2215 | |
Chris Wilson | feed5c7 | 2020-01-09 08:51:42 +0000 | [diff] [blame] | 2216 | /* Otherwise OA settings will be set upon first use */ |
| 2217 | if (!intel_context_pin_if_active(ce)) |
| 2218 | continue; |
Chris Wilson | 5cca503 | 2019-07-26 14:14:58 +0100 | [diff] [blame] | 2219 | |
| 2220 | flex->value = intel_sseu_make_rpcs(ctx->i915, &ce->sseu); |
Chris Wilson | feed5c7 | 2020-01-09 08:51:42 +0000 | [diff] [blame] | 2221 | err = gen8_modify_context(ce, flex, count); |
Chris Wilson | 5cca503 | 2019-07-26 14:14:58 +0100 | [diff] [blame] | 2222 | |
Chris Wilson | feed5c7 | 2020-01-09 08:51:42 +0000 | [diff] [blame] | 2223 | intel_context_unpin(ce); |
Chris Wilson | 5cca503 | 2019-07-26 14:14:58 +0100 | [diff] [blame] | 2224 | if (err) |
| 2225 | break; |
| 2226 | } |
| 2227 | i915_gem_context_unlock_engines(ctx); |
| 2228 | |
| 2229 | return err; |
| 2230 | } |
| 2231 | |
Umesh Nerlige Ramappa | ccdeed4 | 2019-12-06 11:43:39 -0800 | [diff] [blame] | 2232 | static int gen12_configure_oar_context(struct i915_perf_stream *stream, bool enable) |
Lionel Landwerlin | 00a7f0d | 2019-10-25 12:37:46 -0700 | [diff] [blame] | 2233 | { |
Umesh Nerlige Ramappa | ccdeed4 | 2019-12-06 11:43:39 -0800 | [diff] [blame] | 2234 | int err; |
| 2235 | struct intel_context *ce = stream->pinned_ctx; |
| 2236 | u32 format = stream->oa_buffer.format; |
| 2237 | struct flex regs_context[] = { |
| 2238 | { |
| 2239 | GEN8_OACTXCONTROL, |
| 2240 | stream->perf->ctx_oactxctrl_offset + 1, |
| 2241 | enable ? GEN8_OA_COUNTER_RESUME : 0, |
| 2242 | }, |
| 2243 | }; |
| 2244 | /* Offsets in regs_lri are not used since this configuration is only |
| 2245 | * applied using LRI. Initialize the correct offsets for posterity. |
| 2246 | */ |
| 2247 | #define GEN12_OAR_OACONTROL_OFFSET 0x5B0 |
| 2248 | struct flex regs_lri[] = { |
| 2249 | { |
| 2250 | GEN12_OAR_OACONTROL, |
| 2251 | GEN12_OAR_OACONTROL_OFFSET + 1, |
| 2252 | (format << GEN12_OAR_OACONTROL_COUNTER_FORMAT_SHIFT) | |
| 2253 | (enable ? GEN12_OAR_OACONTROL_COUNTER_ENABLE : 0) |
| 2254 | }, |
| 2255 | { |
| 2256 | RING_CONTEXT_CONTROL(ce->engine->mmio_base), |
| 2257 | CTX_CONTEXT_CONTROL, |
| 2258 | _MASKED_FIELD(GEN12_CTX_CTRL_OAR_CONTEXT_ENABLE, |
| 2259 | enable ? |
| 2260 | GEN12_CTX_CTRL_OAR_CONTEXT_ENABLE : |
| 2261 | 0) |
| 2262 | }, |
| 2263 | }; |
Lionel Landwerlin | 00a7f0d | 2019-10-25 12:37:46 -0700 | [diff] [blame] | 2264 | |
Umesh Nerlige Ramappa | ccdeed4 | 2019-12-06 11:43:39 -0800 | [diff] [blame] | 2265 | /* Modify the context image of the pinned context with regs_context */
| 2266 | err = intel_context_lock_pinned(ce); |
| 2267 | if (err) |
| 2268 | return err; |
Lionel Landwerlin | 00a7f0d | 2019-10-25 12:37:46 -0700 | [diff] [blame] | 2269 | |
Umesh Nerlige Ramappa | ccdeed4 | 2019-12-06 11:43:39 -0800 | [diff] [blame] | 2270 | err = gen8_modify_context(ce, regs_context, ARRAY_SIZE(regs_context)); |
| 2271 | intel_context_unlock_pinned(ce); |
| 2272 | if (err) |
| 2273 | return err; |
Lionel Landwerlin | 00a7f0d | 2019-10-25 12:37:46 -0700 | [diff] [blame] | 2274 | |
Umesh Nerlige Ramappa | ccdeed4 | 2019-12-06 11:43:39 -0800 | [diff] [blame] | 2275 | /* Apply regs_lri using LRI with pinned context */ |
| 2276 | return gen8_modify_self(ce, regs_lri, ARRAY_SIZE(regs_lri)); |
Lionel Landwerlin | 00a7f0d | 2019-10-25 12:37:46 -0700 | [diff] [blame] | 2277 | } |
| 2278 | |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 2279 | /* |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 2280 | * Manages updating the per-context aspects of the OA stream |
| 2281 | * configuration across all contexts. |
| 2282 | * |
| 2283 | * The awkward consideration here is that OACTXCONTROL controls the |
| 2284 | * exponent for periodic sampling which is primarily used for system |
| 2285 | * wide profiling where we'd like a consistent sampling period even in |
| 2286 | * the face of context switches. |
| 2287 | * |
| 2288 | * Our approach of updating the register state context (as opposed to |
| 2289 | * say using a workaround batch buffer) ensures that the hardware |
| 2290 | * won't automatically reload an out-of-date timer exponent even |
| 2291 | * transiently before a WA BB could be parsed. |
| 2292 | * |
| 2293 | * This function needs to: |
| 2294 | * - Ensure the currently running context's per-context OA state is |
| 2295 | * updated |
| 2296 | * - Ensure that all existing contexts will have the correct per-context |
| 2297 | * OA state if they are scheduled for use. |
| 2298 | * - Ensure any new contexts will be initialized with the correct |
| 2299 | * per-context OA state. |
| 2300 | * |
| 2301 | * Note: it's only the RCS/Render context that has any OA state. |
Umesh Nerlige Ramappa | ccdeed4 | 2019-12-06 11:43:39 -0800 | [diff] [blame] | 2302 | * Note: the first flex register passed must always be R_PWR_CLK_STATE |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 2303 | */ |
Umesh Nerlige Ramappa | ccdeed4 | 2019-12-06 11:43:39 -0800 | [diff] [blame] | 2304 | static int oa_configure_all_contexts(struct i915_perf_stream *stream, |
| 2305 | struct flex *regs, |
| 2306 | size_t num_regs) |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 2307 | { |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 2308 | struct drm_i915_private *i915 = stream->perf->i915; |
Chris Wilson | a9877da | 2019-07-16 22:34:43 +0100 | [diff] [blame] | 2309 | struct intel_engine_cs *engine; |
Chris Wilson | a4e7ccd | 2019-10-04 14:40:09 +0100 | [diff] [blame] | 2310 | struct i915_gem_context *ctx, *cn; |
Umesh Nerlige Ramappa | ccdeed4 | 2019-12-06 11:43:39 -0800 | [diff] [blame] | 2311 | int err; |
Chris Wilson | a9877da | 2019-07-16 22:34:43 +0100 | [diff] [blame] | 2312 | |
Chris Wilson | a4c969d | 2019-10-07 22:09:42 +0100 | [diff] [blame] | 2313 | lockdep_assert_held(&stream->perf->lock); |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 2314 | |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 2315 | /* |
| 2316 | * The OA register config is setup through the context image. This image |
| 2317 | * might be written to by the GPU on context switch (in particular on |
| 2318 | * lite-restore). This means we can't safely update a context's image
| 2319 | * if this context is scheduled/submitted to run on the GPU.
| 2320 | *
| 2321 | * We could emit the OA register config through the batch buffer but
| 2322 | * this might leave a small interval of time where the OA unit is
| 2323 | * configured at an invalid sampling period.
| 2324 | * |
Chris Wilson | a9877da | 2019-07-16 22:34:43 +0100 | [diff] [blame] | 2325 | * Note that since we emit all requests from a single ring, there |
| 2326 | * is still an implicit global barrier here that may cause a high |
| 2327 | * priority context to wait for an otherwise independent low priority |
| 2328 | * context. Contexts idle at the time of reconfiguration are not |
| 2329 | * trapped behind the barrier. |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 2330 | */ |
Chris Wilson | a4e7ccd | 2019-10-04 14:40:09 +0100 | [diff] [blame] | 2331 | spin_lock(&i915->gem.contexts.lock); |
| 2332 | list_for_each_entry_safe(ctx, cn, &i915->gem.contexts.list, link) { |
Chris Wilson | a4e7ccd | 2019-10-04 14:40:09 +0100 | [diff] [blame] | 2333 | if (!kref_get_unless_zero(&ctx->ref)) |
| 2334 | continue; |
| 2335 | |
| 2336 | spin_unlock(&i915->gem.contexts.lock); |
| 2337 | |
Umesh Nerlige Ramappa | ccdeed4 | 2019-12-06 11:43:39 -0800 | [diff] [blame] | 2338 | err = gen8_configure_context(ctx, regs, num_regs); |
Chris Wilson | a4e7ccd | 2019-10-04 14:40:09 +0100 | [diff] [blame] | 2339 | if (err) { |
| 2340 | i915_gem_context_put(ctx); |
Chris Wilson | a9877da | 2019-07-16 22:34:43 +0100 | [diff] [blame] | 2341 | return err; |
Chris Wilson | a4e7ccd | 2019-10-04 14:40:09 +0100 | [diff] [blame] | 2342 | } |
| 2343 | |
| 2344 | spin_lock(&i915->gem.contexts.lock); |
| 2345 | list_safe_reset_next(ctx, cn, link); |
| 2346 | i915_gem_context_put(ctx); |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 2347 | } |
Chris Wilson | a4e7ccd | 2019-10-04 14:40:09 +0100 | [diff] [blame] | 2348 | spin_unlock(&i915->gem.contexts.lock); |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 2349 | |
Tvrtko Ursulin | 722f3de | 2018-09-12 16:29:30 +0100 | [diff] [blame] | 2350 | /* |
Chris Wilson | a9877da | 2019-07-16 22:34:43 +0100 | [diff] [blame] | 2351 | * After updating all other contexts, we need to modify ourselves. |
| 2352 | * If we don't modify the kernel_context, we do not get events while |
| 2353 | * idle. |
Tvrtko Ursulin | 722f3de | 2018-09-12 16:29:30 +0100 | [diff] [blame] | 2354 | */ |
Chris Wilson | 750e76b | 2019-08-06 13:43:00 +0100 | [diff] [blame] | 2355 | for_each_uabi_engine(engine, i915) { |
Chris Wilson | a9877da | 2019-07-16 22:34:43 +0100 | [diff] [blame] | 2356 | struct intel_context *ce = engine->kernel_context; |
Tvrtko Ursulin | 722f3de | 2018-09-12 16:29:30 +0100 | [diff] [blame] | 2357 | |
Chris Wilson | a9877da | 2019-07-16 22:34:43 +0100 | [diff] [blame] | 2358 | if (engine->class != RENDER_CLASS) |
| 2359 | continue; |
| 2360 | |
| 2361 | regs[0].value = intel_sseu_make_rpcs(i915, &ce->sseu); |
| 2362 | |
Umesh Nerlige Ramappa | ccdeed4 | 2019-12-06 11:43:39 -0800 | [diff] [blame] | 2363 | err = gen8_modify_self(ce, regs, num_regs); |
Chris Wilson | a9877da | 2019-07-16 22:34:43 +0100 | [diff] [blame] | 2364 | if (err) |
| 2365 | return err; |
| 2366 | } |
Tvrtko Ursulin | 722f3de | 2018-09-12 16:29:30 +0100 | [diff] [blame] | 2367 | |
| 2368 | return 0; |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 2369 | } |
| 2370 | |
Umesh Nerlige Ramappa | ccdeed4 | 2019-12-06 11:43:39 -0800 | [diff] [blame] | 2371 | static int gen12_configure_all_contexts(struct i915_perf_stream *stream, |
| 2372 | const struct i915_oa_config *oa_config) |
| 2373 | { |
| 2374 | struct flex regs[] = { |
| 2375 | { |
| 2376 | GEN8_R_PWR_CLK_STATE, |
| 2377 | CTX_R_PWR_CLK_STATE, |
| 2378 | }, |
| 2379 | }; |
| 2380 | |
| 2381 | return oa_configure_all_contexts(stream, regs, ARRAY_SIZE(regs)); |
| 2382 | } |
| 2383 | |
| 2384 | static int lrc_configure_all_contexts(struct i915_perf_stream *stream, |
| 2385 | const struct i915_oa_config *oa_config) |
| 2386 | { |
| 2387 | /* The MMIO offsets for Flex EU registers aren't contiguous */ |
| 2388 | const u32 ctx_flexeu0 = stream->perf->ctx_flexeu0_offset; |
| 2389 | #define ctx_flexeuN(N) (ctx_flexeu0 + 2 * (N) + 1) |
| 2390 | struct flex regs[] = { |
| 2391 | { |
| 2392 | GEN8_R_PWR_CLK_STATE, |
| 2393 | CTX_R_PWR_CLK_STATE, |
| 2394 | }, |
| 2395 | { |
| 2396 | GEN8_OACTXCONTROL, |
| 2397 | stream->perf->ctx_oactxctrl_offset + 1, |
| 2398 | }, |
| 2399 | { EU_PERF_CNTL0, ctx_flexeuN(0) }, |
| 2400 | { EU_PERF_CNTL1, ctx_flexeuN(1) }, |
| 2401 | { EU_PERF_CNTL2, ctx_flexeuN(2) }, |
| 2402 | { EU_PERF_CNTL3, ctx_flexeuN(3) }, |
| 2403 | { EU_PERF_CNTL4, ctx_flexeuN(4) }, |
| 2404 | { EU_PERF_CNTL5, ctx_flexeuN(5) }, |
| 2405 | { EU_PERF_CNTL6, ctx_flexeuN(6) }, |
| 2406 | }; |
| 2407 | #undef ctx_flexeuN |
| 2408 | int i; |
| 2409 | |
| 2410 | regs[1].value = |
| 2411 | (stream->period_exponent << GEN8_OA_TIMER_PERIOD_SHIFT) | |
| 2412 | (stream->periodic ? GEN8_OA_TIMER_ENABLE : 0) | |
| 2413 | GEN8_OA_COUNTER_RESUME; |
| 2414 | |
| 2415 | for (i = 2; i < ARRAY_SIZE(regs); i++) |
| 2416 | regs[i].value = oa_config_flex_reg(oa_config, regs[i].reg); |
| 2417 | |
| 2418 | return oa_configure_all_contexts(stream, regs, ARRAY_SIZE(regs)); |
| 2419 | } |
| 2420 | |
Chris Wilson | 4b4e973 | 2020-03-02 08:57:57 +0000 | [diff] [blame] | 2421 | static struct i915_request * |
| 2422 | gen8_enable_metric_set(struct i915_perf_stream *stream) |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 2423 | { |
Chris Wilson | 52111c4 | 2019-10-10 16:05:20 +0100 | [diff] [blame] | 2424 | struct intel_uncore *uncore = stream->uncore; |
Lionel Landwerlin | 8814c6d | 2019-10-20 00:46:47 +0300 | [diff] [blame] | 2425 | struct i915_oa_config *oa_config = stream->oa_config; |
Lionel Landwerlin | 701f823 | 2017-08-03 17:58:08 +0100 | [diff] [blame] | 2426 | int ret; |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 2427 | |
| 2428 | /* |
| 2429 | * We disable slice/unslice clock ratio change reports on SKL since |
| 2430 | * they are too noisy. The HW generates a lot of redundant reports |
| 2431 | * where the ratio hasn't really changed, causing a lot of redundant
| 2432 | * work for processes and increasing the chances we'll hit buffer
| 2433 | * overruns. |
| 2434 | * |
| 2435 | * Although we don't currently use the 'disable overrun' OABUFFER |
| 2436 | * feature it's worth noting that clock ratio reports have to be |
| 2437 | * disabled before considering to use that feature since the HW doesn't |
| 2438 | * correctly block these reports. |
| 2439 | * |
| 2440 | * Currently none of the high-level metrics we have depend on knowing |
| 2441 | * this ratio to normalize. |
| 2442 | * |
| 2443 | * Note: This register is not power context saved and restored, but |
| 2444 | * that's OK considering that we disable RC6 while the OA unit is |
| 2445 | * enabled. |
| 2446 | * |
| 2447 | * The _INCLUDE_CLK_RATIO bit allows the slice/unslice frequency to |
| 2448 | * be read back from automatically triggered reports, as part of the |
| 2449 | * RPT_ID field. |
| 2450 | */ |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 2451 | if (IS_GEN_RANGE(stream->perf->i915, 9, 11)) { |
| 2452 | intel_uncore_write(uncore, GEN8_OA_DEBUG, |
| 2453 | _MASKED_BIT_ENABLE(GEN9_OA_DEBUG_DISABLE_CLK_RATIO_REPORTS | |
| 2454 | GEN9_OA_DEBUG_INCLUDE_CLK_RATIO)); |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 2455 | } |
| 2456 | |
| 2457 | /* |
| 2458 | * Update all contexts prior to writing the mux configurations as we
| 2459 | * need to make sure all slices/subslices are ON before writing to NOA
| 2460 | * registers. |
| 2461 | */ |
Lionel Landwerlin | 00a7f0d | 2019-10-25 12:37:46 -0700 | [diff] [blame] | 2462 | ret = lrc_configure_all_contexts(stream, oa_config); |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 2463 | if (ret) |
Chris Wilson | 4b4e973 | 2020-03-02 08:57:57 +0000 | [diff] [blame] | 2464 | return ERR_PTR(ret); |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 2465 | |
Lionel Landwerlin | 8814c6d | 2019-10-20 00:46:47 +0300 | [diff] [blame] | 2466 | return emit_oa_config(stream, oa_config, oa_context(stream)); |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 2467 | } |
| 2468 | |
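| | /*
| | * Build a masked write for GEN12_OAG_OA_DEBUG: the mask in the upper
| | * half of the dword makes the hardware update only the ctx switch
| | * report bit, leaving the other debug bits untouched.
| | */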
Chris Wilson | 9278bbb | 2019-11-01 19:21:16 +0000 | [diff] [blame] | 2469 | static u32 oag_report_ctx_switches(const struct i915_perf_stream *stream) |
| 2470 | { |
| 2471 | return _MASKED_FIELD(GEN12_OAG_OA_DEBUG_DISABLE_CTX_SWITCH_REPORTS, |
| 2472 | (stream->sample_flags & SAMPLE_OA_REPORT) ? |
| 2473 | 0 : GEN12_OAG_OA_DEBUG_DISABLE_CTX_SWITCH_REPORTS); |
| 2474 | } |
| 2475 | |
Chris Wilson | 4b4e973 | 2020-03-02 08:57:57 +0000 | [diff] [blame] | 2476 | static struct i915_request * |
| 2477 | gen12_enable_metric_set(struct i915_perf_stream *stream) |
Lionel Landwerlin | 00a7f0d | 2019-10-25 12:37:46 -0700 | [diff] [blame] | 2478 | { |
| 2479 | struct intel_uncore *uncore = stream->uncore; |
| 2480 | struct i915_oa_config *oa_config = stream->oa_config; |
| 2481 | bool periodic = stream->periodic; |
| 2482 | u32 period_exponent = stream->period_exponent; |
| 2483 | int ret; |
| 2484 | |
| 2485 | intel_uncore_write(uncore, GEN12_OAG_OA_DEBUG, |
| 2486 | /* Disable clk ratio reports, like previous Gens. */ |
| 2487 | _MASKED_BIT_ENABLE(GEN12_OAG_OA_DEBUG_DISABLE_CLK_RATIO_REPORTS | |
| 2488 | GEN12_OAG_OA_DEBUG_INCLUDE_CLK_RATIO) | |
| 2489 | /* |
Chris Wilson | 9278bbb | 2019-11-01 19:21:16 +0000 | [diff] [blame] | 2490 | * If the user didn't require OA reports, instruct |
| 2491 | * the hardware not to emit ctx switch reports. |
Lionel Landwerlin | 00a7f0d | 2019-10-25 12:37:46 -0700 | [diff] [blame] | 2492 | */ |
Chris Wilson | 9278bbb | 2019-11-01 19:21:16 +0000 | [diff] [blame] | 2493 | oag_report_ctx_switches(stream)); |
Lionel Landwerlin | 00a7f0d | 2019-10-25 12:37:46 -0700 | [diff] [blame] | 2494 | |
| 2495 | intel_uncore_write(uncore, GEN12_OAG_OAGLBCTXCTRL, periodic ? |
| 2496 | (GEN12_OAG_OAGLBCTXCTRL_COUNTER_RESUME | |
| 2497 | GEN12_OAG_OAGLBCTXCTRL_TIMER_ENABLE | |
| 2498 | (period_exponent << GEN12_OAG_OAGLBCTXCTRL_TIMER_PERIOD_SHIFT)) |
| 2499 | : 0); |
| 2500 | |
| 2501 | /* |
| 2502 | * Update all contexts prior to writing the mux configurations as we
| 2503 | * need to make sure all slices/subslices are ON before writing to NOA
| 2504 | * registers. |
| 2505 | */ |
Umesh Nerlige Ramappa | ccdeed4 | 2019-12-06 11:43:39 -0800 | [diff] [blame] | 2506 | ret = gen12_configure_all_contexts(stream, oa_config); |
Lionel Landwerlin | 00a7f0d | 2019-10-25 12:37:46 -0700 | [diff] [blame] | 2507 | if (ret) |
Chris Wilson | 4b4e973 | 2020-03-02 08:57:57 +0000 | [diff] [blame] | 2508 | return ERR_PTR(ret); |
Lionel Landwerlin | 00a7f0d | 2019-10-25 12:37:46 -0700 | [diff] [blame] | 2509 | |
| 2510 | /* |
| 2511 | * For Gen12, performance counters are context |
| 2512 | * saved/restored. Only enable them for the context that
| 2513 | * requested this. |
| 2514 | */ |
| 2515 | if (stream->ctx) { |
Umesh Nerlige Ramappa | ccdeed4 | 2019-12-06 11:43:39 -0800 | [diff] [blame] | 2516 | ret = gen12_configure_oar_context(stream, true); |
Lionel Landwerlin | 00a7f0d | 2019-10-25 12:37:46 -0700 | [diff] [blame] | 2517 | if (ret) |
Chris Wilson | 4b4e973 | 2020-03-02 08:57:57 +0000 | [diff] [blame] | 2518 | return ERR_PTR(ret); |
Lionel Landwerlin | 00a7f0d | 2019-10-25 12:37:46 -0700 | [diff] [blame] | 2519 | } |
| 2520 | |
| 2521 | return emit_oa_config(stream, oa_config, oa_context(stream)); |
| 2522 | } |
| 2523 | |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 2524 | static void gen8_disable_metric_set(struct i915_perf_stream *stream) |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 2525 | { |
Chris Wilson | 52111c4 | 2019-10-10 16:05:20 +0100 | [diff] [blame] | 2526 | struct intel_uncore *uncore = stream->uncore; |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 2527 | |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 2528 | /* Reset all contexts' slices/subslices configurations. */ |
Lionel Landwerlin | 00a7f0d | 2019-10-25 12:37:46 -0700 | [diff] [blame] | 2529 | lrc_configure_all_contexts(stream, NULL); |
Lionel Landwerlin | 28964cf | 2017-08-03 17:58:10 +0100 | [diff] [blame] | 2530 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 2531 | intel_uncore_rmw(uncore, GDT_CHICKEN_BITS, GT_NOA_ENABLE, 0); |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 2532 | } |
| 2533 | |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 2534 | static void gen10_disable_metric_set(struct i915_perf_stream *stream) |
Lionel Landwerlin | 95690a0 | 2017-11-10 19:08:43 +0000 | [diff] [blame] | 2535 | { |
Chris Wilson | 52111c4 | 2019-10-10 16:05:20 +0100 | [diff] [blame] | 2536 | struct intel_uncore *uncore = stream->uncore; |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 2537 | |
Lionel Landwerlin | 95690a0 | 2017-11-10 19:08:43 +0000 | [diff] [blame] | 2538 | /* Reset all contexts' slices/subslices configurations. */ |
Lionel Landwerlin | 00a7f0d | 2019-10-25 12:37:46 -0700 | [diff] [blame] | 2539 | lrc_configure_all_contexts(stream, NULL); |
| 2540 | |
| 2541 | /* Make sure we disable noa to save power. */ |
| 2542 | intel_uncore_rmw(uncore, RPM_CONFIG1, GEN10_GT_NOA_ENABLE, 0); |
| 2543 | } |
| 2544 | |
| 2545 | static void gen12_disable_metric_set(struct i915_perf_stream *stream) |
| 2546 | { |
| 2547 | struct intel_uncore *uncore = stream->uncore; |
| 2548 | |
| 2549 | /* Reset all contexts' slices/subslices configurations. */ |
Umesh Nerlige Ramappa | ccdeed4 | 2019-12-06 11:43:39 -0800 | [diff] [blame] | 2550 | gen12_configure_all_contexts(stream, NULL); |
Lionel Landwerlin | 00a7f0d | 2019-10-25 12:37:46 -0700 | [diff] [blame] | 2551 | |
| 2552 | /* disable the context save/restore or OAR counters */ |
| 2553 | if (stream->ctx) |
Umesh Nerlige Ramappa | ccdeed4 | 2019-12-06 11:43:39 -0800 | [diff] [blame] | 2554 | gen12_configure_oar_context(stream, false); |
Lionel Landwerlin | 95690a0 | 2017-11-10 19:08:43 +0000 | [diff] [blame] | 2555 | |
| 2556 | /* Make sure we disable noa to save power. */ |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 2557 | intel_uncore_rmw(uncore, RPM_CONFIG1, GEN10_GT_NOA_ENABLE, 0); |
Lionel Landwerlin | 95690a0 | 2017-11-10 19:08:43 +0000 | [diff] [blame] | 2558 | } |
| 2559 | |
Lionel Landwerlin | 5728de2 | 2018-10-23 11:07:06 +0100 | [diff] [blame] | 2560 | static void gen7_oa_enable(struct i915_perf_stream *stream) |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 2561 | { |
Chris Wilson | 52111c4 | 2019-10-10 16:05:20 +0100 | [diff] [blame] | 2562 | struct intel_uncore *uncore = stream->uncore; |
Lionel Landwerlin | 5728de2 | 2018-10-23 11:07:06 +0100 | [diff] [blame] | 2563 | struct i915_gem_context *ctx = stream->ctx; |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 2564 | u32 ctx_id = stream->specific_ctx_id; |
| 2565 | bool periodic = stream->periodic; |
| 2566 | u32 period_exponent = stream->period_exponent; |
| 2567 | u32 report_format = stream->oa_buffer.format; |
Lionel Landwerlin | 1105130 | 2018-03-26 10:08:23 +0100 | [diff] [blame] | 2568 | |
Robert Bragg | 1bef340 | 2017-06-13 12:23:06 +0100 | [diff] [blame] | 2569 | /* |
| 2570 | * Reset buf pointers so we don't forward reports from before now. |
| 2571 | * |
| 2572 | * Think carefully if considering trying to avoid this, since it |
| 2573 | * also ensures status flags and the buffer itself are cleared |
| 2574 | * in error paths, and we have checks for invalid reports based |
| 2575 | * on the assumption that certain fields are written to zeroed |
| 2576 | * memory, which this helps maintain.
| 2577 | */ |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 2578 | gen7_init_oa_buffer(stream); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 2579 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 2580 | intel_uncore_write(uncore, GEN7_OACONTROL, |
| 2581 | (ctx_id & GEN7_OACONTROL_CTX_MASK) | |
| 2582 | (period_exponent << |
| 2583 | GEN7_OACONTROL_TIMER_PERIOD_SHIFT) | |
| 2584 | (periodic ? GEN7_OACONTROL_TIMER_ENABLE : 0) | |
| 2585 | (report_format << GEN7_OACONTROL_FORMAT_SHIFT) | |
| 2586 | (ctx ? GEN7_OACONTROL_PER_CTX_ENABLE : 0) | |
| 2587 | GEN7_OACONTROL_ENABLE); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 2588 | } |
| 2589 | |
Lionel Landwerlin | 5728de2 | 2018-10-23 11:07:06 +0100 | [diff] [blame] | 2590 | static void gen8_oa_enable(struct i915_perf_stream *stream) |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 2591 | { |
Chris Wilson | 52111c4 | 2019-10-10 16:05:20 +0100 | [diff] [blame] | 2592 | struct intel_uncore *uncore = stream->uncore; |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 2593 | u32 report_format = stream->oa_buffer.format; |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 2594 | |
| 2595 | /* |
| 2596 | * Reset buf pointers so we don't forward reports from before now. |
| 2597 | * |
| 2598 | * Think carefully if considering trying to avoid this, since it |
| 2599 | * also ensures status flags and the buffer itself are cleared |
| 2600 | * in error paths, and we have checks for invalid reports based |
| 2601 | * on the assumption that certain fields are written to zeroed |
| 2602 | * memory, which this helps maintain.
| 2603 | */ |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 2604 | gen8_init_oa_buffer(stream); |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 2605 | |
| 2606 | /* |
| 2607 | * Note: we don't rely on the hardware to perform single context |
| 2608 | * filtering and instead filter on the cpu based on the context-id |
| 2609 | * field of reports |
| 2610 | */ |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 2611 | intel_uncore_write(uncore, GEN8_OACONTROL, |
| 2612 | (report_format << GEN8_OA_REPORT_FORMAT_SHIFT) | |
| 2613 | GEN8_OA_COUNTER_ENABLE); |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 2614 | } |
| 2615 | |
Lionel Landwerlin | 00a7f0d | 2019-10-25 12:37:46 -0700 | [diff] [blame] | 2616 | static void gen12_oa_enable(struct i915_perf_stream *stream) |
| 2617 | { |
| 2618 | struct intel_uncore *uncore = stream->uncore; |
| 2619 | u32 report_format = stream->oa_buffer.format; |
| 2620 | |
| 2621 | /* |
| 2622 | * If we don't want OA reports from the OA buffer, then we don't even |
| 2623 | * need to program the OAG unit. |
| 2624 | */ |
| 2625 | if (!(stream->sample_flags & SAMPLE_OA_REPORT)) |
| 2626 | return; |
| 2627 | |
| 2628 | gen12_init_oa_buffer(stream); |
| 2629 | |
| 2630 | intel_uncore_write(uncore, GEN12_OAG_OACONTROL, |
| 2631 | (report_format << GEN12_OAG_OACONTROL_OA_COUNTER_FORMAT_SHIFT) | |
| 2632 | GEN12_OAG_OACONTROL_OA_COUNTER_ENABLE); |
| 2633 | } |
| 2634 | |
Robert Bragg | 16d98b3 | 2016-12-07 21:40:33 +0000 | [diff] [blame] | 2635 | /** |
| 2636 | * i915_oa_stream_enable - handle `I915_PERF_IOCTL_ENABLE` for OA stream |
| 2637 | * @stream: An i915 perf stream opened for OA metrics |
| 2638 | * |
| 2639 | * [Re]enables hardware periodic sampling according to the period configured |
| 2640 | * when opening the stream. This also starts a hrtimer that will periodically |
| 2641 | * check for data in the circular OA buffer for notifying userspace (e.g. |
| 2642 | * during a read() or poll()). |
| 2643 | */ |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 2644 | static void i915_oa_stream_enable(struct i915_perf_stream *stream) |
| 2645 | { |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 2646 | stream->perf->ops.oa_enable(stream); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 2647 | |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 2648 | if (stream->periodic) |
| 2649 | hrtimer_start(&stream->poll_check_timer, |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 2650 | ns_to_ktime(POLL_PERIOD), |
| 2651 | HRTIMER_MODE_REL_PINNED); |
| 2652 | } |
| 2653 | |
Lionel Landwerlin | 5728de2 | 2018-10-23 11:07:06 +0100 | [diff] [blame] | 2654 | static void gen7_oa_disable(struct i915_perf_stream *stream) |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 2655 | { |
Chris Wilson | 52111c4 | 2019-10-10 16:05:20 +0100 | [diff] [blame] | 2656 | struct intel_uncore *uncore = stream->uncore; |
Lionel Landwerlin | 5728de2 | 2018-10-23 11:07:06 +0100 | [diff] [blame] | 2657 | |
Daniele Ceraolo Spurio | 97a04e0 | 2019-03-25 14:49:39 -0700 | [diff] [blame] | 2658 | intel_uncore_write(uncore, GEN7_OACONTROL, 0); |
| 2659 | if (intel_wait_for_register(uncore, |
Chris Wilson | e896d29 | 2018-05-11 14:52:07 +0100 | [diff] [blame] | 2660 | GEN7_OACONTROL, GEN7_OACONTROL_ENABLE, 0, |
| 2661 | 50)) |
Wambui Karuga | 0bf8573 | 2020-02-18 20:39:36 +0300 | [diff] [blame] | 2662 | drm_err(&stream->perf->i915->drm, |
| 2663 | "wait for OA to be disabled timed out\n"); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 2664 | } |
| 2665 | |
Lionel Landwerlin | 5728de2 | 2018-10-23 11:07:06 +0100 | [diff] [blame] | 2666 | static void gen8_oa_disable(struct i915_perf_stream *stream) |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 2667 | { |
Chris Wilson | 52111c4 | 2019-10-10 16:05:20 +0100 | [diff] [blame] | 2668 | struct intel_uncore *uncore = stream->uncore; |
Lionel Landwerlin | 5728de2 | 2018-10-23 11:07:06 +0100 | [diff] [blame] | 2669 | |
Daniele Ceraolo Spurio | 97a04e0 | 2019-03-25 14:49:39 -0700 | [diff] [blame] | 2670 | intel_uncore_write(uncore, GEN8_OACONTROL, 0); |
| 2671 | if (intel_wait_for_register(uncore, |
Chris Wilson | e896d29 | 2018-05-11 14:52:07 +0100 | [diff] [blame] | 2672 | GEN8_OACONTROL, GEN8_OA_COUNTER_ENABLE, 0, |
| 2673 | 50)) |
Wambui Karuga | 0bf8573 | 2020-02-18 20:39:36 +0300 | [diff] [blame] | 2674 | drm_err(&stream->perf->i915->drm, |
| 2675 | "wait for OA to be disabled timed out\n"); |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 2676 | } |
| 2677 | |
Lionel Landwerlin | 00a7f0d | 2019-10-25 12:37:46 -0700 | [diff] [blame] | 2678 | static void gen12_oa_disable(struct i915_perf_stream *stream) |
| 2679 | { |
| 2680 | struct intel_uncore *uncore = stream->uncore; |
| 2681 | |
| 2682 | intel_uncore_write(uncore, GEN12_OAG_OACONTROL, 0); |
| 2683 | if (intel_wait_for_register(uncore, |
| 2684 | GEN12_OAG_OACONTROL, |
| 2685 | GEN12_OAG_OACONTROL_OA_COUNTER_ENABLE, 0, |
| 2686 | 50)) |
Wambui Karuga | 0bf8573 | 2020-02-18 20:39:36 +0300 | [diff] [blame] | 2687 | drm_err(&stream->perf->i915->drm, |
| 2688 | "wait for OA to be disabled timed out\n"); |
Lionel Landwerlin | 00a7f0d | 2019-10-25 12:37:46 -0700 | [diff] [blame] | 2689 | } |
| 2690 | |
Robert Bragg | 16d98b3 | 2016-12-07 21:40:33 +0000 | [diff] [blame] | 2691 | /** |
| 2692 | * i915_oa_stream_disable - handle `I915_PERF_IOCTL_DISABLE` for OA stream |
| 2693 | * @stream: An i915 perf stream opened for OA metrics |
| 2694 | * |
| 2695 | * Stops the OA unit from periodically writing counter reports into the |
| 2696 | * circular OA buffer. This also stops the hrtimer that periodically checks for |
| 2697 | * data in the circular OA buffer, for notifying userspace. |
| 2698 | */ |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 2699 | static void i915_oa_stream_disable(struct i915_perf_stream *stream) |
| 2700 | { |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 2701 | stream->perf->ops.oa_disable(stream); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 2702 | |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 2703 | if (stream->periodic) |
| 2704 | hrtimer_cancel(&stream->poll_check_timer); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 2705 | } |
| 2706 | |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 2707 | static const struct i915_perf_stream_ops i915_oa_stream_ops = { |
| 2708 | .destroy = i915_oa_stream_destroy, |
| 2709 | .enable = i915_oa_stream_enable, |
| 2710 | .disable = i915_oa_stream_disable, |
| 2711 | .wait_unlocked = i915_oa_wait_unlocked, |
| 2712 | .poll_wait = i915_oa_poll_wait, |
| 2713 | .read = i915_oa_read, |
| 2714 | }; |
| 2715 | |
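| | /*
| | * Emit the OA configuration for the stream and block until the batch
| | * has executed, so the stream is never enabled with the configuration
| | * only partially programmed.
| | */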
Chris Wilson | 4b4e973 | 2020-03-02 08:57:57 +0000 | [diff] [blame] | 2716 | static int i915_perf_stream_enable_sync(struct i915_perf_stream *stream) |
| 2717 | { |
| 2718 | struct i915_request *rq; |
| 2719 | |
| 2720 | rq = stream->perf->ops.enable_metric_set(stream); |
| 2721 | if (IS_ERR(rq)) |
| 2722 | return PTR_ERR(rq); |
| 2723 | |
| 2724 | i915_request_wait(rq, 0, MAX_SCHEDULE_TIMEOUT); |
| 2725 | i915_request_put(rq); |
| 2726 | |
| 2727 | return 0; |
| 2728 | } |
| 2729 | |
Lionel Landwerlin | 11ecbdd | 2020-03-17 15:22:22 +0200 | [diff] [blame^] | 2730 | static void |
| 2731 | get_default_sseu_config(struct intel_sseu *out_sseu, |
| 2732 | struct intel_engine_cs *engine) |
| 2733 | { |
| 2734 | const struct sseu_dev_info *devinfo_sseu = |
| 2735 | &RUNTIME_INFO(engine->i915)->sseu; |
| 2736 | |
| 2737 | *out_sseu = intel_sseu_from_device_info(devinfo_sseu); |
| 2738 | |
| 2739 | if (IS_GEN(engine->i915, 11)) { |
| 2740 | /* |
| 2741 | * We only need subslice count so it doesn't matter which ones |
| 2742 | * we select - just keep the low half of all the available
| 2743 | * subslices per slice.
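| | *
| | * For example, with 8 subslices available this computes
| | * ~(~0 << (8 / 2)) == 0xf, i.e. the low four subslices.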
| 2744 | */ |
| 2745 | out_sseu->subslice_mask = |
| 2746 | ~(~0 << (hweight8(out_sseu->subslice_mask) / 2)); |
| 2747 | out_sseu->slice_mask = 0x1; |
| 2748 | } |
| 2749 | } |
| 2750 | |
| 2751 | static int |
| 2752 | get_sseu_config(struct intel_sseu *out_sseu, |
| 2753 | struct intel_engine_cs *engine, |
| 2754 | const struct drm_i915_gem_context_param_sseu *drm_sseu) |
| 2755 | { |
| 2756 | if (drm_sseu->engine.engine_class != engine->uabi_class || |
| 2757 | drm_sseu->engine.engine_instance != engine->uabi_instance) |
| 2758 | return -EINVAL; |
| 2759 | |
| 2760 | return i915_gem_user_to_context_sseu(engine->i915, drm_sseu, out_sseu); |
| 2761 | } |
| 2762 | |
Robert Bragg | 16d98b3 | 2016-12-07 21:40:33 +0000 | [diff] [blame] | 2763 | /** |
| 2764 | * i915_oa_stream_init - validate combined props for OA stream and init |
| 2765 | * @stream: An i915 perf stream |
| 2766 | * @param: The open parameters passed to `DRM_I915_PERF_OPEN` |
| 2767 | * @props: The property state that configures stream (individually validated) |
| 2768 | * |
| 2769 | * While read_properties_unlocked() validates properties in isolation it |
| 2770 | * doesn't ensure that the combination necessarily makes sense. |
| 2771 | * |
| 2772 | * At this point it has been determined that userspace wants a stream of |
| 2773 | * OA metrics, but still we need to further validate the combined |
| 2774 | * properties are OK. |
| 2775 | * |
| 2776 | * If the configuration makes sense then we can allocate memory for |
| 2777 | * a circular OA buffer and apply the requested metric set configuration. |
| 2778 | * |
| 2779 | * Returns: zero on success or a negative error code. |
| 2780 | */ |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 2781 | static int i915_oa_stream_init(struct i915_perf_stream *stream, |
| 2782 | struct drm_i915_perf_open_param *param, |
| 2783 | struct perf_open_properties *props) |
| 2784 | { |
Pankaj Bharadiya | a9f236d | 2020-01-15 09:14:54 +0530 | [diff] [blame] | 2785 | struct drm_i915_private *i915 = stream->perf->i915; |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 2786 | struct i915_perf *perf = stream->perf; |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 2787 | int format_size; |
| 2788 | int ret; |
| 2789 | |
Lionel Landwerlin | 9a61363 | 2019-10-10 16:05:19 +0100 | [diff] [blame] | 2790 | if (!props->engine) { |
| 2791 | DRM_DEBUG("OA engine not specified\n"); |
| 2792 | return -EINVAL; |
| 2793 | } |
| 2794 | |
| 2795 | /* |
| 2796 | * If the sysfs metrics/ directory wasn't registered for some |
Robert Bragg | 442b8c0 | 2016-11-07 19:49:53 +0000 | [diff] [blame] | 2797 | * reason then don't let userspace try their luck with config |
| 2798 | * IDs.
| 2799 | */ |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 2800 | if (!perf->metrics_kobj) { |
Robert Bragg | 7708550 | 2016-12-01 17:21:52 +0000 | [diff] [blame] | 2801 | DRM_DEBUG("OA metrics weren't advertised via sysfs\n"); |
Robert Bragg | 442b8c0 | 2016-11-07 19:49:53 +0000 | [diff] [blame] | 2802 | return -EINVAL; |
| 2803 | } |
| 2804 | |
Umesh Nerlige Ramappa | 322d56a | 2019-12-06 11:43:38 -0800 | [diff] [blame] | 2805 | if (!(props->sample_flags & SAMPLE_OA_REPORT) && |
| 2806 | (INTEL_GEN(perf->i915) < 12 || !stream->ctx)) { |
Robert Bragg | 7708550 | 2016-12-01 17:21:52 +0000 | [diff] [blame] | 2807 | DRM_DEBUG("Only OA report sampling supported\n"); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 2808 | return -EINVAL; |
| 2809 | } |
| 2810 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 2811 | if (!perf->ops.enable_metric_set) { |
Robert Bragg | 7708550 | 2016-12-01 17:21:52 +0000 | [diff] [blame] | 2812 | DRM_DEBUG("OA unit not supported\n"); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 2813 | return -ENODEV; |
| 2814 | } |
| 2815 | |
Lionel Landwerlin | 9a61363 | 2019-10-10 16:05:19 +0100 | [diff] [blame] | 2816 | /* |
| 2817 | * To avoid the complexity of having to accurately filter |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 2818 | * counter reports and marshal to the appropriate client |
| 2819 | * we currently only allow exclusive access.
| 2820 | */ |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 2821 | if (perf->exclusive_stream) { |
Robert Bragg | 7708550 | 2016-12-01 17:21:52 +0000 | [diff] [blame] | 2822 | DRM_DEBUG("OA unit already in use\n"); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 2823 | return -EBUSY; |
| 2824 | } |
| 2825 | |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 2826 | if (!props->oa_format) { |
Robert Bragg | 7708550 | 2016-12-01 17:21:52 +0000 | [diff] [blame] | 2827 | DRM_DEBUG("OA report format not specified\n"); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 2828 | return -EINVAL; |
| 2829 | } |
| 2830 | |
Lionel Landwerlin | 9a61363 | 2019-10-10 16:05:19 +0100 | [diff] [blame] | 2831 | stream->engine = props->engine; |
Chris Wilson | 52111c4 | 2019-10-10 16:05:20 +0100 | [diff] [blame] | 2832 | stream->uncore = stream->engine->gt->uncore; |
Lionel Landwerlin | 9a61363 | 2019-10-10 16:05:19 +0100 | [diff] [blame] | 2833 | |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 2834 | stream->sample_size = sizeof(struct drm_i915_perf_record_header); |
| 2835 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 2836 | format_size = perf->oa_formats[props->oa_format].size; |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 2837 | |
Umesh Nerlige Ramappa | 322d56a | 2019-12-06 11:43:38 -0800 | [diff] [blame] | 2838 | stream->sample_flags = props->sample_flags; |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 2839 | stream->sample_size += format_size; |
| 2840 | |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 2841 | stream->oa_buffer.format_size = format_size; |
Pankaj Bharadiya | a9f236d | 2020-01-15 09:14:54 +0530 | [diff] [blame] | 2842 | if (drm_WARN_ON(&i915->drm, stream->oa_buffer.format_size == 0)) |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 2843 | return -EINVAL; |
| 2844 | |
Lionel Landwerlin | 9cd20ef | 2019-10-14 21:14:04 +0100 | [diff] [blame] | 2845 | stream->hold_preemption = props->hold_preemption; |
| 2846 | |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 2847 | stream->oa_buffer.format = |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 2848 | perf->oa_formats[props->oa_format].format; |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 2849 | |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 2850 | stream->periodic = props->oa_periodic; |
| 2851 | if (stream->periodic) |
| 2852 | stream->period_exponent = props->oa_period_exponent; |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 2853 | |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 2854 | if (stream->ctx) { |
| 2855 | ret = oa_get_render_ctx_id(stream); |
Lionel Landwerlin | 9bd9be6 | 2018-03-26 10:08:28 +0100 | [diff] [blame] | 2856 | if (ret) { |
| 2857 | DRM_DEBUG("Invalid context id to filter with\n"); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 2858 | return ret; |
Lionel Landwerlin | 9bd9be6 | 2018-03-26 10:08:28 +0100 | [diff] [blame] | 2859 | } |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 2860 | } |
| 2861 | |
Lionel Landwerlin | daed3e4 | 2019-10-12 08:23:07 +0100 | [diff] [blame] | 2862 | ret = alloc_noa_wait(stream); |
| 2863 | if (ret) { |
| 2864 | DRM_DEBUG("Unable to allocate NOA wait batch buffer\n"); |
| 2865 | goto err_noa_wait_alloc; |
| 2866 | } |
| 2867 | |
Lionel Landwerlin | 6a45008 | 2019-10-12 08:23:06 +0100 | [diff] [blame] | 2868 | stream->oa_config = i915_perf_get_oa_config(perf, props->metrics_set); |
| 2869 | if (!stream->oa_config) { |
Lionel Landwerlin | 9bd9be6 | 2018-03-26 10:08:28 +0100 | [diff] [blame] | 2870 | DRM_DEBUG("Invalid OA config id=%i\n", props->metrics_set); |
Lionel Landwerlin | 6a45008 | 2019-10-12 08:23:06 +0100 | [diff] [blame] | 2871 | ret = -EINVAL; |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 2872 | goto err_config; |
Lionel Landwerlin | 9bd9be6 | 2018-03-26 10:08:28 +0100 | [diff] [blame] | 2873 | } |
Lionel Landwerlin | 701f823 | 2017-08-03 17:58:08 +0100 | [diff] [blame] | 2874 | |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 2875 | /* PRM - observability performance counters: |
| 2876 | * |
| 2877 | * OACONTROL, performance counter enable, note: |
| 2878 | * |
| 2879 | * "When this bit is set, in order to have coherent counts, |
| 2880 | * RC6 power state and trunk clock gating must be disabled. |
| 2881 | * This can be achieved by programming MMIO registers as |
| 2882 | * 0xA094=0 and 0xA090[31]=1" |
| 2883 | * |
| 2884 | * In our case we are expecting that taking pm + FORCEWAKE |
| 2885 | * references will effectively disable RC6. |
| 2886 | */ |
Chris Wilson | a5efcde | 2019-10-11 20:03:17 +0100 | [diff] [blame] | 2887 | intel_engine_pm_get(stream->engine); |
Chris Wilson | 52111c4 | 2019-10-10 16:05:20 +0100 | [diff] [blame] | 2888 | intel_uncore_forcewake_get(stream->uncore, FORCEWAKE_ALL); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 2889 | |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 2890 | ret = alloc_oa_buffer(stream); |
sagar.a.kamble@intel.com | 987f8c4 | 2017-06-27 23:09:41 +0530 | [diff] [blame] | 2891 | if (ret) |
| 2892 | goto err_oa_buf_alloc; |
| 2893 | |
Lionel Landwerlin | ec431ea | 2019-02-05 09:50:29 +0000 | [diff] [blame] | 2894 | stream->ops = &i915_oa_stream_ops; |
Lionel Landwerlin | 11ecbdd | 2020-03-17 15:22:22 +0200 | [diff] [blame^] | 2895 | |
| 2896 | perf->sseu = props->sseu; |
Chris Wilson | a5af081 | 2020-02-27 08:57:05 +0000 | [diff] [blame] | 2897 | WRITE_ONCE(perf->exclusive_stream, stream); |
Lionel Landwerlin | ec431ea | 2019-02-05 09:50:29 +0000 | [diff] [blame] | 2898 | |
Chris Wilson | 4b4e973 | 2020-03-02 08:57:57 +0000 | [diff] [blame] | 2899 | ret = i915_perf_stream_enable_sync(stream); |
Lionel Landwerlin | 9bd9be6 | 2018-03-26 10:08:28 +0100 | [diff] [blame] | 2900 | if (ret) { |
| 2901 | DRM_DEBUG("Unable to enable metric set\n"); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 2902 | goto err_enable; |
Lionel Landwerlin | 9bd9be6 | 2018-03-26 10:08:28 +0100 | [diff] [blame] | 2903 | } |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 2904 | |
Lionel Landwerlin | 6a45008 | 2019-10-12 08:23:06 +0100 | [diff] [blame] | 2905 | DRM_DEBUG("opening stream oa config uuid=%s\n", |
| 2906 | stream->oa_config->uuid); |
| 2907 | |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 2908 | hrtimer_init(&stream->poll_check_timer, |
| 2909 | CLOCK_MONOTONIC, HRTIMER_MODE_REL); |
| 2910 | stream->poll_check_timer.function = oa_poll_check_timer_cb; |
| 2911 | init_waitqueue_head(&stream->poll_wq); |
| 2912 | spin_lock_init(&stream->oa_buffer.ptr_lock); |
| 2913 | |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 2914 | return 0; |
| 2915 | |
| 2916 | err_enable: |
Chris Wilson | a5af081 | 2020-02-27 08:57:05 +0000 | [diff] [blame] | 2917 | WRITE_ONCE(perf->exclusive_stream, NULL); |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 2918 | perf->ops.disable_metric_set(stream); |
Lionel Landwerlin | 41d3fdc | 2018-03-01 11:06:13 +0000 | [diff] [blame] | 2919 | |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 2920 | free_oa_buffer(stream); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 2921 | |
| 2922 | err_oa_buf_alloc: |
Lionel Landwerlin | 6a45008 | 2019-10-12 08:23:06 +0100 | [diff] [blame] | 2923 | free_oa_configs(stream); |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 2924 | |
Chris Wilson | 52111c4 | 2019-10-10 16:05:20 +0100 | [diff] [blame] | 2925 | intel_uncore_forcewake_put(stream->uncore, FORCEWAKE_ALL); |
Chris Wilson | a5efcde | 2019-10-11 20:03:17 +0100 | [diff] [blame] | 2926 | intel_engine_pm_put(stream->engine); |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 2927 | |
| 2928 | err_config: |
Lionel Landwerlin | daed3e4 | 2019-10-12 08:23:07 +0100 | [diff] [blame] | 2929 | free_noa_wait(stream); |
| 2930 | |
| 2931 | err_noa_wait_alloc: |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 2932 | if (stream->ctx) |
| 2933 | oa_put_render_ctx_id(stream); |
| 2934 | |
| 2935 | return ret; |
| 2936 | } |
| 2937 | |
Chris Wilson | 7dc56af | 2019-09-24 15:59:50 +0100 | [diff] [blame] | 2938 | void i915_oa_init_reg_state(const struct intel_context *ce, |
| 2939 | const struct intel_engine_cs *engine) |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 2940 | { |
Chris Wilson | 28b6cb0 | 2017-08-10 18:57:43 +0100 | [diff] [blame] | 2941 | struct i915_perf_stream *stream; |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 2942 | |
Chris Wilson | 8a68d46 | 2019-03-05 18:03:30 +0000 | [diff] [blame] | 2943 | if (engine->class != RENDER_CLASS) |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 2944 | return; |
| 2945 | |
Chris Wilson | a5af081 | 2020-02-27 08:57:05 +0000 | [diff] [blame] | 2946 | /* perf.exclusive_stream serialised by lrc_configure_all_contexts() */ |
| 2947 | stream = READ_ONCE(engine->i915->perf.exclusive_stream); |
Umesh Nerlige Ramappa | ccdeed4 | 2019-12-06 11:43:39 -0800 | [diff] [blame] | 2948 | if (stream && INTEL_GEN(stream->perf->i915) < 12) |
Chris Wilson | 7dc56af | 2019-09-24 15:59:50 +0100 | [diff] [blame] | 2949 | gen8_update_reg_state_unlocked(ce, stream); |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 2950 | } |
| 2951 | |
Robert Bragg | 16d98b3 | 2016-12-07 21:40:33 +0000 | [diff] [blame] | 2952 | /** |
| 2953 | * i915_perf_read_locked - &i915_perf_stream_ops->read with error normalisation |
| 2954 | * @stream: An i915 perf stream |
| 2955 | * @file: An i915 perf stream file |
| 2956 | * @buf: destination buffer given by userspace |
| 2957 | * @count: the number of bytes userspace wants to read |
| 2958 | * @ppos: (inout) file seek position (unused) |
| 2959 | * |
| 2960 | * Besides wrapping &i915_perf_stream_ops->read this provides a common place to |
| 2961 | * ensure that if we've successfully copied any data then reporting that takes |
| 2962 | * precedence over any internal error status, so the data isn't lost. |
| 2963 | * |
| 2964 | * For example ret will be -ENOSPC whenever there is more buffered data than |
| 2965 | * can be copied to userspace, but that's only interesting if we weren't able |
| 2966 | * to copy some data because it implies the userspace buffer is too small to |
| 2967 | * receive a single record (and we never split records). |
| 2968 | * |
| 2969 | * Another case with ret == -EFAULT is more of a grey area since it would seem |
| 2970 | * like bad form for userspace to ask us to overrun its buffer, but the user |
| 2971 | * knows best: |
| 2972 | * |
| 2973 | * http://yarchive.net/comp/linux/partial_reads_writes.html |
| 2974 | * |
| 2975 | * Returns: The number of bytes copied or a negative error code on failure. |
| 2976 | */ |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 2977 | static ssize_t i915_perf_read_locked(struct i915_perf_stream *stream, |
| 2978 | struct file *file, |
| 2979 | char __user *buf, |
| 2980 | size_t count, |
| 2981 | loff_t *ppos) |
| 2982 | { |
| 2983 | /* Note we keep the offset (aka bytes read) separate from any |
| 2984 | * error status so that the final check for whether we return |
| 2985 | * the bytes read with a higher precedence than any error (see |
| 2986 | * comment below) doesn't need to be handled/duplicated in |
| 2987 | * stream->ops->read() implementations. |
| 2988 | */ |
| 2989 | size_t offset = 0; |
| 2990 | int ret = stream->ops->read(stream, buf, count, &offset); |
| 2991 | |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 2992 | return offset ?: (ret ?: -EAGAIN); |
| 2993 | } |
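| | |
| | /* As a concrete sketch of the precedence encoded by the return |
| |  * expression above: |
| |  * |
| |  *	offset > 0             -> offset  (bytes copied win over errors) |
| |  *	offset == 0, ret < 0   -> ret     (report the stream's error) |
| |  *	offset == 0, ret == 0  -> -EAGAIN (nothing available yet) |
| |  */ |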
| 2994 | |
Robert Bragg | 16d98b3 | 2016-12-07 21:40:33 +0000 | [diff] [blame] | 2995 | /** |
| 2996 | * i915_perf_read - handles read() FOP for i915 perf stream FDs |
| 2997 | * @file: An i915 perf stream file |
| 2998 | * @buf: destination buffer given by userspace |
| 2999 | * @count: the number of bytes userspace wants to read |
| 3000 | * @ppos: (inout) file seek position (unused) |
| 3001 | * |
| 3002 | * The entry point for handling a read() on a stream file descriptor from |
| 3003 | * userspace. Most of the work is left to i915_perf_read_locked() and |
| 3004 | * &i915_perf_stream_ops->read, but to save stream implementations (of |
| 3005 | * which we might have multiple later) from handling blocking reads, we do so here. |
| 3006 | * |
| 3007 | * We can also consistently treat trying to read from a disabled stream |
| 3008 | * as an IO error so implementations can assume the stream is enabled |
| 3009 | * while reading. |
| 3010 | * |
| 3011 | * Returns: The number of bytes copied or a negative error code on failure. |
| 3012 | */ |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3013 | static ssize_t i915_perf_read(struct file *file, |
| 3014 | char __user *buf, |
| 3015 | size_t count, |
| 3016 | loff_t *ppos) |
| 3017 | { |
| 3018 | struct i915_perf_stream *stream = file->private_data; |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3019 | struct i915_perf *perf = stream->perf; |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3020 | ssize_t ret; |
| 3021 | |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 3022 | /* To ensure it's handled consistently, we simply treat all reads of a |
| 3023 | * disabled stream as an error. In particular it might otherwise lead |
| 3024 | * to a deadlock for blocking file descriptors... |
| 3025 | */ |
| 3026 | if (!stream->enabled) |
| 3027 | return -EIO; |
| 3028 | |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3029 | if (!(file->f_flags & O_NONBLOCK)) { |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 3030 | /* There's the small chance of false positives from |
| 3031 | * stream->ops->wait_unlocked. |
| 3032 | * |
| 3033 | * E.g. with single context filtering, since we only wait until |
| 3034 | * the OA buffer has >= 1 report, we don't immediately know whether |
| 3035 | * any reports really belong to the current context. |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3036 | */ |
| 3037 | do { |
| 3038 | ret = stream->ops->wait_unlocked(stream); |
| 3039 | if (ret) |
| 3040 | return ret; |
| 3041 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3042 | mutex_lock(&perf->lock); |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3043 | ret = i915_perf_read_locked(stream, file, |
| 3044 | buf, count, ppos); |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3045 | mutex_unlock(&perf->lock); |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3046 | } while (ret == -EAGAIN); |
| 3047 | } else { |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3048 | mutex_lock(&perf->lock); |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3049 | ret = i915_perf_read_locked(stream, file, buf, count, ppos); |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3050 | mutex_unlock(&perf->lock); |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3051 | } |
| 3052 | |
Linus Torvalds | a9a0884 | 2018-02-11 14:34:03 -0800 | [diff] [blame] | 3053 | /* We allow the poll checking to sometimes report false positive EPOLLIN |
Robert Bragg | 26ebd9c | 2017-05-11 16:43:25 +0100 | [diff] [blame] | 3054 | * events where we might actually report EAGAIN on read() if there's |
| 3055 | * not really any data available. In this situation though we don't |
Linus Torvalds | a9a0884 | 2018-02-11 14:34:03 -0800 | [diff] [blame] | 3056 | * want to enter a busy loop between poll() reporting a EPOLLIN event |
Robert Bragg | 26ebd9c | 2017-05-11 16:43:25 +0100 | [diff] [blame] | 3057 | * and read() returning -EAGAIN. Clearing the oa.pollin state here |
| 3058 | * effectively ensures we back off until the next hrtimer callback |
Linus Torvalds | a9a0884 | 2018-02-11 14:34:03 -0800 | [diff] [blame] | 3059 | * before reporting another EPOLLIN event. |
Robert Bragg | 26ebd9c | 2017-05-11 16:43:25 +0100 | [diff] [blame] | 3060 | */ |
| 3061 | if (ret >= 0 || ret == -EAGAIN) { |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 3062 | /* Maybe make ->pollin per-stream state if we support multiple |
| 3063 | * concurrent streams in the future. |
| 3064 | */ |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 3065 | stream->pollin = false; |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 3066 | } |
| 3067 | |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3068 | return ret; |
| 3069 | } |
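| | |
| | /* A minimal userspace read loop, as a sketch of how the record stream |
| |  * is consumed (assumes @fd is a stream FD already opened via |
| |  * DRM_IOCTL_I915_PERF_OPEN and enabled, and that handle_report() is a |
| |  * hypothetical consumer; error handling trimmed): |
| |  * |
| |  *	#include <stdint.h> |
| |  *	#include <unistd.h> |
| |  *	#include <drm/i915_drm.h> |
| |  * |
| |  *	static void drain_stream(int fd) |
| |  *	{ |
| |  *		uint8_t buf[64 * 1024]; |
| |  *		ssize_t len = read(fd, buf, sizeof(buf)); |
| |  *		ssize_t pos = 0; |
| |  * |
| |  *		while (pos < len) { |
| |  *			const struct drm_i915_perf_record_header *hdr = |
| |  *				(const void *)(buf + pos); |
| |  * |
| |  *			if (hdr->type == DRM_I915_PERF_RECORD_SAMPLE) |
| |  *				handle_report(hdr + 1); // raw OA report |
| |  * |
| |  *			pos += hdr->size; // records are never split |
| |  *		} |
| |  *	} |
| |  */ |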
| 3070 | |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 3071 | static enum hrtimer_restart oa_poll_check_timer_cb(struct hrtimer *hrtimer) |
| 3072 | { |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 3073 | struct i915_perf_stream *stream = |
| 3074 | container_of(hrtimer, typeof(*stream), poll_check_timer); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 3075 | |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 3076 | if (oa_buffer_check_unlocked(stream)) { |
| 3077 | stream->pollin = true; |
| 3078 | wake_up(&stream->poll_wq); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 3079 | } |
| 3080 | |
| 3081 | hrtimer_forward_now(hrtimer, ns_to_ktime(POLL_PERIOD)); |
| 3082 | |
| 3083 | return HRTIMER_RESTART; |
| 3084 | } |
| 3085 | |
Robert Bragg | 16d98b3 | 2016-12-07 21:40:33 +0000 | [diff] [blame] | 3086 | /** |
| 3087 | * i915_perf_poll_locked - poll_wait() with a suitable wait queue for stream |
Robert Bragg | 16d98b3 | 2016-12-07 21:40:33 +0000 | [diff] [blame] | 3088 | * @stream: An i915 perf stream |
| 3089 | * @file: An i915 perf stream file |
| 3090 | * @wait: poll() state table |
| 3091 | * |
| 3092 | * For handling userspace polling on an i915 perf stream, this calls through to |
| 3093 | * &i915_perf_stream_ops->poll_wait to call poll_wait() with a wait queue that |
| 3094 | * will be woken for new stream data. |
| 3095 | * |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3096 | * Note: The &perf->lock mutex has been taken to serialize |
Robert Bragg | 16d98b3 | 2016-12-07 21:40:33 +0000 | [diff] [blame] | 3097 | * with any non-file-operation driver hooks. |
| 3098 | * |
| 3099 | * Returns: any poll events that are ready without sleeping |
| 3100 | */ |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3101 | static __poll_t i915_perf_poll_locked(struct i915_perf_stream *stream, |
| 3102 | struct file *file, |
| 3103 | poll_table *wait) |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3104 | { |
Al Viro | afc9a42 | 2017-07-03 06:39:46 -0400 | [diff] [blame] | 3105 | __poll_t events = 0; |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3106 | |
| 3107 | stream->ops->poll_wait(stream, file, wait); |
| 3108 | |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 3109 | /* Note: we don't explicitly check whether there's something to read |
| 3110 | * here since this path may be very hot depending on what else |
| 3111 | * userspace is polling, or on the timeout in use. We rely solely on |
| 3112 | * the hrtimer/oa_poll_check_timer_cb to notify us when there are |
| 3113 | * samples to read. |
| 3114 | */ |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 3115 | if (stream->pollin) |
Linus Torvalds | a9a0884 | 2018-02-11 14:34:03 -0800 | [diff] [blame] | 3116 | events |= EPOLLIN; |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3117 | |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 3118 | return events; |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3119 | } |
| 3120 | |
Robert Bragg | 16d98b3 | 2016-12-07 21:40:33 +0000 | [diff] [blame] | 3121 | /** |
| 3122 | * i915_perf_poll - call poll_wait() with a suitable wait queue for stream |
| 3123 | * @file: An i915 perf stream file |
| 3124 | * @wait: poll() state table |
| 3125 | * |
| 3126 | * For handling userspace polling on an i915 perf stream, this ensures |
| 3127 | * poll_wait() gets called with a wait queue that will be woken for new stream |
| 3128 | * data. |
| 3129 | * |
| 3130 | * Note: Implementation deferred to i915_perf_poll_locked() |
| 3131 | * |
| 3132 | * Returns: any poll events that are ready without sleeping |
| 3133 | */ |
Al Viro | afc9a42 | 2017-07-03 06:39:46 -0400 | [diff] [blame] | 3134 | static __poll_t i915_perf_poll(struct file *file, poll_table *wait) |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3135 | { |
| 3136 | struct i915_perf_stream *stream = file->private_data; |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3137 | struct i915_perf *perf = stream->perf; |
Al Viro | afc9a42 | 2017-07-03 06:39:46 -0400 | [diff] [blame] | 3138 | __poll_t ret; |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3139 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3140 | mutex_lock(&perf->lock); |
| 3141 | ret = i915_perf_poll_locked(stream, file, wait); |
| 3142 | mutex_unlock(&perf->lock); |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3143 | |
| 3144 | return ret; |
| 3145 | } |
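| | |
| | /* Userspace typically waits for data with poll()/epoll before reading, |
| |  * e.g. (a sketch; as noted above EPOLLIN may occasionally be a false |
| |  * positive, so a subsequent read() failing with EAGAIN is expected and |
| |  * harmless): |
| |  * |
| |  *	#include <poll.h> |
| |  * |
| |  *	struct pollfd pfd = { .fd = stream_fd, .events = POLLIN }; |
| |  * |
| |  *	if (poll(&pfd, 1, -1) > 0 && (pfd.revents & POLLIN)) |
| |  *		drain_stream(stream_fd); // see the read loop sketch above |
| |  */ |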
| 3146 | |
Robert Bragg | 16d98b3 | 2016-12-07 21:40:33 +0000 | [diff] [blame] | 3147 | /** |
| 3148 | * i915_perf_enable_locked - handle `I915_PERF_IOCTL_ENABLE` ioctl |
| 3149 | * @stream: A disabled i915 perf stream |
| 3150 | * |
| 3151 | * [Re]enables the associated capture of data for this stream. |
| 3152 | * |
| 3153 | * If a stream was previously enabled then there's currently no intention |
| 3154 | * to provide userspace any guarantee about the preservation of previously |
| 3155 | * buffered data. |
| 3156 | */ |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3157 | static void i915_perf_enable_locked(struct i915_perf_stream *stream) |
| 3158 | { |
| 3159 | if (stream->enabled) |
| 3160 | return; |
| 3161 | |
| 3162 | /* Allow stream->ops->enable() to refer to this */ |
| 3163 | stream->enabled = true; |
| 3164 | |
| 3165 | if (stream->ops->enable) |
| 3166 | stream->ops->enable(stream); |
Lionel Landwerlin | 9cd20ef | 2019-10-14 21:14:04 +0100 | [diff] [blame] | 3167 | |
| 3168 | if (stream->hold_preemption) |
Chris Wilson | 9f3ccd4 | 2019-12-20 10:12:29 +0000 | [diff] [blame] | 3169 | intel_context_set_nopreempt(stream->pinned_ctx); |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3170 | } |
| 3171 | |
Robert Bragg | 16d98b3 | 2016-12-07 21:40:33 +0000 | [diff] [blame] | 3172 | /** |
| 3173 | * i915_perf_disable_locked - handle `I915_PERF_IOCTL_DISABLE` ioctl |
| 3174 | * @stream: An enabled i915 perf stream |
| 3175 | * |
| 3176 | * Disables the associated capture of data for this stream. |
| 3177 | * |
| 3178 | * The intention is that disabling and re-enabling a stream will ideally be |
| 3179 | * cheaper than destroying and re-opening a stream with the same configuration, |
| 3180 | * though there are no formal guarantees about what state or buffered data |
| 3181 | * must be retained between disabling and re-enabling a stream. |
| 3182 | * |
| 3183 | * Note: while a stream is disabled it's considered an error for userspace |
| 3184 | * to attempt to read from the stream (-EIO). |
| 3185 | */ |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3186 | static void i915_perf_disable_locked(struct i915_perf_stream *stream) |
| 3187 | { |
| 3188 | if (!stream->enabled) |
| 3189 | return; |
| 3190 | |
| 3191 | /* Allow stream->ops->disable() to refer to this */ |
| 3192 | stream->enabled = false; |
| 3193 | |
Lionel Landwerlin | 9cd20ef | 2019-10-14 21:14:04 +0100 | [diff] [blame] | 3194 | if (stream->hold_preemption) |
Chris Wilson | 9f3ccd4 | 2019-12-20 10:12:29 +0000 | [diff] [blame] | 3195 | intel_context_clear_nopreempt(stream->pinned_ctx); |
Lionel Landwerlin | 9cd20ef | 2019-10-14 21:14:04 +0100 | [diff] [blame] | 3196 | |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3197 | if (stream->ops->disable) |
| 3198 | stream->ops->disable(stream); |
| 3199 | } |
| 3200 | |
Chris Wilson | 7831e9a | 2019-10-14 21:14:03 +0100 | [diff] [blame] | 3201 | static long i915_perf_config_locked(struct i915_perf_stream *stream, |
| 3202 | unsigned long metrics_set) |
| 3203 | { |
| 3204 | struct i915_oa_config *config; |
| 3205 | long ret = stream->oa_config->id; |
| 3206 | |
| 3207 | config = i915_perf_get_oa_config(stream->perf, metrics_set); |
| 3208 | if (!config) |
| 3209 | return -EINVAL; |
| 3210 | |
| 3211 | if (config != stream->oa_config) { |
Chris Wilson | 4b4e973 | 2020-03-02 08:57:57 +0000 | [diff] [blame] | 3212 | struct i915_request *rq; |
Chris Wilson | 7831e9a | 2019-10-14 21:14:03 +0100 | [diff] [blame] | 3213 | |
| 3214 | /* |
| 3215 | * If OA is bound to a specific context, emit the |
| 3216 | * reconfiguration inline from that context. The update |
| 3217 | * will then be ordered with respect to submission on that |
| 3218 | * context. |
| 3219 | * |
| 3220 | * When set globally, we use a low priority kernel context, |
| 3221 | * so it will effectively take effect when idle. |
| 3222 | */ |
Chris Wilson | 4b4e973 | 2020-03-02 08:57:57 +0000 | [diff] [blame] | 3223 | rq = emit_oa_config(stream, config, oa_context(stream)); |
| 3224 | if (!IS_ERR(rq)) { |
Chris Wilson | 7831e9a | 2019-10-14 21:14:03 +0100 | [diff] [blame] | 3225 | config = xchg(&stream->oa_config, config); |
Chris Wilson | 4b4e973 | 2020-03-02 08:57:57 +0000 | [diff] [blame] | 3226 | i915_request_put(rq); |
| 3227 | } else { |
| 3228 | ret = PTR_ERR(rq); |
| 3229 | } |
Chris Wilson | 7831e9a | 2019-10-14 21:14:03 +0100 | [diff] [blame] | 3230 | } |
| 3231 | |
| 3232 | i915_oa_config_put(config); |
| 3233 | |
| 3234 | return ret; |
| 3235 | } |
| 3236 | |
Robert Bragg | 16d98b3 | 2016-12-07 21:40:33 +0000 | [diff] [blame] | 3237 | /** |
| 3238 | * i915_perf_ioctl_locked - support ioctl() usage with i915 perf stream FDs |
| 3239 | * @stream: An i915 perf stream |
| 3240 | * @cmd: the ioctl request |
| 3241 | * @arg: the ioctl data |
| 3242 | * |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3243 | * Note: The &perf->lock mutex has been taken to serialize |
Robert Bragg | 16d98b3 | 2016-12-07 21:40:33 +0000 | [diff] [blame] | 3244 | * with any non-file-operation driver hooks. |
| 3245 | * |
| 3246 | * Returns: zero on success or a negative error code. Returns -EINVAL for |
| 3247 | * an unknown ioctl request. |
| 3248 | */ |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3249 | static long i915_perf_ioctl_locked(struct i915_perf_stream *stream, |
| 3250 | unsigned int cmd, |
| 3251 | unsigned long arg) |
| 3252 | { |
| 3253 | switch (cmd) { |
| 3254 | case I915_PERF_IOCTL_ENABLE: |
| 3255 | i915_perf_enable_locked(stream); |
| 3256 | return 0; |
| 3257 | case I915_PERF_IOCTL_DISABLE: |
| 3258 | i915_perf_disable_locked(stream); |
| 3259 | return 0; |
Chris Wilson | 7831e9a | 2019-10-14 21:14:03 +0100 | [diff] [blame] | 3260 | case I915_PERF_IOCTL_CONFIG: |
| 3261 | return i915_perf_config_locked(stream, arg); |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3262 | } |
| 3263 | |
| 3264 | return -EINVAL; |
| 3265 | } |
| 3266 | |
Robert Bragg | 16d98b3 | 2016-12-07 21:40:33 +0000 | [diff] [blame] | 3267 | /** |
| 3268 | * i915_perf_ioctl - support ioctl() usage with i915 perf stream FDs |
| 3269 | * @file: An i915 perf stream file |
| 3270 | * @cmd: the ioctl request |
| 3271 | * @arg: the ioctl data |
| 3272 | * |
| 3273 | * Implementation deferred to i915_perf_ioctl_locked(). |
| 3274 | * |
| 3275 | * Returns: zero on success or a negative error code. Returns -EINVAL for |
| 3276 | * an unknown ioctl request. |
| 3277 | */ |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3278 | static long i915_perf_ioctl(struct file *file, |
| 3279 | unsigned int cmd, |
| 3280 | unsigned long arg) |
| 3281 | { |
| 3282 | struct i915_perf_stream *stream = file->private_data; |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3283 | struct i915_perf *perf = stream->perf; |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3284 | long ret; |
| 3285 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3286 | mutex_lock(&perf->lock); |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3287 | ret = i915_perf_ioctl_locked(stream, cmd, arg); |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3288 | mutex_unlock(&perf->lock); |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3289 | |
| 3290 | return ret; |
| 3291 | } |
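| | |
| | /* From userspace the three stream ioctls look roughly like this (a |
| |  * sketch; @stream_fd is an open stream FD and @config_id a valid |
| |  * metric set ID): |
| |  * |
| |  *	#include <sys/ioctl.h> |
| |  *	#include <drm/i915_drm.h> |
| |  * |
| |  *	ioctl(stream_fd, I915_PERF_IOCTL_ENABLE);  // start capture |
| |  *	ioctl(stream_fd, I915_PERF_IOCTL_DISABLE); // stop capture |
| |  *	ioctl(stream_fd, I915_PERF_IOCTL_CONFIG, config_id); // switch set |
| |  */ |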
| 3292 | |
Robert Bragg | 16d98b3 | 2016-12-07 21:40:33 +0000 | [diff] [blame] | 3293 | /** |
| 3294 | * i915_perf_destroy_locked - destroy an i915 perf stream |
| 3295 | * @stream: An i915 perf stream |
| 3296 | * |
| 3297 | * Frees all resources associated with the given i915 perf @stream, disabling |
| 3298 | * any associated data capture in the process. |
| 3299 | * |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3300 | * Note: The &perf->lock mutex has been taken to serialize |
Robert Bragg | 16d98b3 | 2016-12-07 21:40:33 +0000 | [diff] [blame] | 3301 | * with any non-file-operation driver hooks. |
| 3302 | */ |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3303 | static void i915_perf_destroy_locked(struct i915_perf_stream *stream) |
| 3304 | { |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3305 | if (stream->enabled) |
| 3306 | i915_perf_disable_locked(stream); |
| 3307 | |
| 3308 | if (stream->ops->destroy) |
| 3309 | stream->ops->destroy(stream); |
| 3310 | |
Chris Wilson | 69df05e | 2016-12-18 15:37:21 +0000 | [diff] [blame] | 3311 | if (stream->ctx) |
Chris Wilson | 5f09a9c | 2017-06-20 12:05:46 +0100 | [diff] [blame] | 3312 | i915_gem_context_put(stream->ctx); |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3313 | |
| 3314 | kfree(stream); |
| 3315 | } |
| 3316 | |
Robert Bragg | 16d98b3 | 2016-12-07 21:40:33 +0000 | [diff] [blame] | 3317 | /** |
| 3318 | * i915_perf_release - handles userspace close() of a stream file |
| 3319 | * @inode: anonymous inode associated with file |
| 3320 | * @file: An i915 perf stream file |
| 3321 | * |
| 3322 | * Cleans up any resources associated with an open i915 perf stream file. |
| 3323 | * |
| 3324 | * NB: close() can't really fail from the userspace point of view. |
| 3325 | * |
| 3326 | * Returns: zero on success or a negative error code. |
| 3327 | */ |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3328 | static int i915_perf_release(struct inode *inode, struct file *file) |
| 3329 | { |
| 3330 | struct i915_perf_stream *stream = file->private_data; |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3331 | struct i915_perf *perf = stream->perf; |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3332 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3333 | mutex_lock(&perf->lock); |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3334 | i915_perf_destroy_locked(stream); |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3335 | mutex_unlock(&perf->lock); |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3336 | |
Lionel Landwerlin | a5af1df | 2019-07-09 15:33:39 +0300 | [diff] [blame] | 3337 | /* Release the reference the perf stream kept on the driver. */ |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3338 | drm_dev_put(&perf->i915->drm); |
Lionel Landwerlin | a5af1df | 2019-07-09 15:33:39 +0300 | [diff] [blame] | 3339 | |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3340 | return 0; |
| 3341 | } |
| 3342 | |
| 3343 | |
| 3344 | static const struct file_operations fops = { |
| 3345 | .owner = THIS_MODULE, |
| 3346 | .llseek = no_llseek, |
| 3347 | .release = i915_perf_release, |
| 3348 | .poll = i915_perf_poll, |
| 3349 | .read = i915_perf_read, |
| 3350 | .unlocked_ioctl = i915_perf_ioctl, |
Lionel Landwerlin | 191f896 | 2017-10-24 16:27:28 +0100 | [diff] [blame] | 3351 | /* Our ioctls have no arguments, so it's safe to use the same function |
| 3352 | * to handle 32-bit compatibility. |
| 3353 | */ |
| 3354 | .compat_ioctl = i915_perf_ioctl, |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3355 | }; |
| 3356 | |
| 3357 | |
Robert Bragg | 16d98b3 | 2016-12-07 21:40:33 +0000 | [diff] [blame] | 3358 | /** |
| 3359 | * i915_perf_open_ioctl_locked - DRM ioctl() for userspace to open a stream FD |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3360 | * @perf: i915 perf instance |
Robert Bragg | 16d98b3 | 2016-12-07 21:40:33 +0000 | [diff] [blame] | 3361 | * @param: The open parameters passed to 'DRM_I915_PERF_OPEN` |
| 3362 | * @props: individually validated u64 property value pairs |
| 3363 | * @file: drm file |
| 3364 | * |
| 3365 | * See i915_perf_ioctl_open() for interface details. |
| 3366 | * |
| 3367 | * Implements further stream config validation and stream initialization on |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3368 | * behalf of i915_perf_open_ioctl() with the &perf->lock mutex |
Robert Bragg | 16d98b3 | 2016-12-07 21:40:33 +0000 | [diff] [blame] | 3369 | * taken to serialize with any non-file-operation driver hooks. |
| 3370 | * |
| 3371 | * Note: at this point the @props have only been validated in isolation and |
| 3372 | * it's still necessary to validate that the combination of properties makes |
| 3373 | * sense. |
| 3374 | * |
| 3375 | * In the case where userspace is interested in OA unit metrics then further |
| 3376 | * config validation and stream initialization details will be handled by |
| 3377 | * i915_oa_stream_init(). The code here should only validate config state that |
| 3378 | * will be relevant to all stream types / backends. |
| 3379 | * |
| 3380 | * Returns: zero on success or a negative error code. |
| 3381 | */ |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3382 | static int |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3383 | i915_perf_open_ioctl_locked(struct i915_perf *perf, |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3384 | struct drm_i915_perf_open_param *param, |
| 3385 | struct perf_open_properties *props, |
| 3386 | struct drm_file *file) |
| 3387 | { |
| 3388 | struct i915_gem_context *specific_ctx = NULL; |
| 3389 | struct i915_perf_stream *stream = NULL; |
| 3390 | unsigned long f_flags = 0; |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 3391 | bool privileged_op = true; |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3392 | int stream_fd; |
| 3393 | int ret; |
| 3394 | |
| 3395 | if (props->single_context) { |
| 3396 | u32 ctx_handle = props->ctx_handle; |
| 3397 | struct drm_i915_file_private *file_priv = file->driver_priv; |
| 3398 | |
Imre Deak | 635f56c | 2017-07-14 18:12:41 +0300 | [diff] [blame] | 3399 | specific_ctx = i915_gem_context_lookup(file_priv, ctx_handle); |
| 3400 | if (!specific_ctx) { |
| 3401 | DRM_DEBUG("Failed to look up context with ID %u for opening perf stream\n", |
| 3402 | ctx_handle); |
| 3403 | ret = -ENOENT; |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3404 | goto err; |
| 3405 | } |
| 3406 | } |
| 3407 | |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 3408 | /* |
| 3409 | * On Haswell the OA unit supports clock gating off for a specific |
| 3410 | * context and in this mode there's no visibility of metrics for the |
| 3411 | * rest of the system, which we consider acceptable for a |
| 3412 | * non-privileged client. |
| 3413 | * |
Lionel Landwerlin | 00a7f0d | 2019-10-25 12:37:46 -0700 | [diff] [blame] | 3414 | * For Gen8->11 the OA unit no longer supports clock gating off for a |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 3415 | * specific context and the kernel can't securely stop the counters |
| 3416 | * from updating as system-wide / global values. Even though we can |
| 3417 | * filter reports based on the included context ID we can't block |
| 3418 | * clients from seeing the raw / global counter values via |
| 3419 | * MI_REPORT_PERF_COUNT commands and so consider it a privileged op to |
| 3420 | * enable the OA unit by default. |
Lionel Landwerlin | 00a7f0d | 2019-10-25 12:37:46 -0700 | [diff] [blame] | 3421 | * |
| 3422 | * For Gen12+ we gain a new OAR unit that only monitors the RCS on a |
| 3423 | * per context basis. So we can relax requirements there if the user |
| 3424 | * doesn't request global stream access (i.e. query based sampling |
| 3425 | * using MI_REPORT_PERF_COUNT). |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 3426 | */ |
Lionel Landwerlin | 0b0120d | 2019-11-11 11:53:08 +0200 | [diff] [blame] | 3427 | if (IS_HASWELL(perf->i915) && specific_ctx) |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 3428 | privileged_op = false; |
Lionel Landwerlin | 00a7f0d | 2019-10-25 12:37:46 -0700 | [diff] [blame] | 3429 | else if (IS_GEN(perf->i915, 12) && specific_ctx && |
| 3430 | (props->sample_flags & SAMPLE_OA_REPORT) == 0) |
| 3431 | privileged_op = false; |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 3432 | |
Lionel Landwerlin | 0b0120d | 2019-11-11 11:53:08 +0200 | [diff] [blame] | 3433 | if (props->hold_preemption) { |
| 3434 | if (!props->single_context) { |
| 3435 | DRM_DEBUG("preemption disable with no context\n"); |
| 3436 | ret = -EINVAL; |
| 3437 | goto err; |
| 3438 | } |
| 3439 | privileged_op = true; |
| 3440 | } |
| 3441 | |
Lionel Landwerlin | 11ecbdd | 2020-03-17 15:22:22 +0200 | [diff] [blame^] | 3442 | /* |
| 3443 | * Asking for SSEU configuration is a privileged operation. |
| 3444 | */ |
| 3445 | if (props->has_sseu) |
| 3446 | privileged_op = true; |
| 3447 | else |
| 3448 | get_default_sseu_config(&props->sseu, props->engine); |
| 3449 | |
Robert Bragg | ccdf634 | 2016-11-07 19:49:54 +0000 | [diff] [blame] | 3450 | /* Similar to perf's kernel.perf_paranoid_cpu sysctl option |
| 3451 | * we check a dev.i915.perf_stream_paranoid sysctl option |
| 3452 | * to determine if it's ok to access system wide OA counters |
| 3453 | * without CAP_SYS_ADMIN privileges. |
| 3454 | */ |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 3455 | if (privileged_op && |
Robert Bragg | ccdf634 | 2016-11-07 19:49:54 +0000 | [diff] [blame] | 3456 | i915_perf_stream_paranoid && !capable(CAP_SYS_ADMIN)) { |
Lionel Landwerlin | 9cd20ef | 2019-10-14 21:14:04 +0100 | [diff] [blame] | 3457 | DRM_DEBUG("Insufficient privileges to open i915 perf stream\n"); |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3458 | ret = -EACCES; |
| 3459 | goto err_ctx; |
| 3460 | } |
| 3461 | |
| 3462 | stream = kzalloc(sizeof(*stream), GFP_KERNEL); |
| 3463 | if (!stream) { |
| 3464 | ret = -ENOMEM; |
| 3465 | goto err_ctx; |
| 3466 | } |
| 3467 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3468 | stream->perf = perf; |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3469 | stream->ctx = specific_ctx; |
| 3470 | |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 3471 | ret = i915_oa_stream_init(stream, param, props); |
| 3472 | if (ret) |
| 3473 | goto err_alloc; |
| 3474 | |
| 3475 | /* we avoid simply assigning stream->sample_flags = props->sample_flags |
| 3476 | * to have _stream_init check the combination of sample flags more |
| 3477 | * thoroughly; still, this is the expected result at this point. |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3478 | */ |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 3479 | if (WARN_ON(stream->sample_flags != props->sample_flags)) { |
| 3480 | ret = -ENODEV; |
Matthew Auld | 22f880c | 2017-03-27 21:34:59 +0100 | [diff] [blame] | 3481 | goto err_flags; |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 3482 | } |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3483 | |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3484 | if (param->flags & I915_PERF_FLAG_FD_CLOEXEC) |
| 3485 | f_flags |= O_CLOEXEC; |
| 3486 | if (param->flags & I915_PERF_FLAG_FD_NONBLOCK) |
| 3487 | f_flags |= O_NONBLOCK; |
| 3488 | |
| 3489 | stream_fd = anon_inode_getfd("[i915_perf]", &fops, stream, f_flags); |
| 3490 | if (stream_fd < 0) { |
| 3491 | ret = stream_fd; |
Lionel Landwerlin | 23b9e41 | 2019-10-08 15:01:11 +0100 | [diff] [blame] | 3492 | goto err_flags; |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3493 | } |
| 3494 | |
| 3495 | if (!(param->flags & I915_PERF_FLAG_DISABLED)) |
| 3496 | i915_perf_enable_locked(stream); |
| 3497 | |
Lionel Landwerlin | a5af1df | 2019-07-09 15:33:39 +0300 | [diff] [blame] | 3498 | /* Take a reference on the driver that will be kept with stream_fd |
| 3499 | * until its release. |
| 3500 | */ |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3501 | drm_dev_get(&perf->i915->drm); |
Lionel Landwerlin | a5af1df | 2019-07-09 15:33:39 +0300 | [diff] [blame] | 3502 | |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3503 | return stream_fd; |
| 3504 | |
Matthew Auld | 22f880c | 2017-03-27 21:34:59 +0100 | [diff] [blame] | 3505 | err_flags: |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3506 | if (stream->ops->destroy) |
| 3507 | stream->ops->destroy(stream); |
| 3508 | err_alloc: |
| 3509 | kfree(stream); |
| 3510 | err_ctx: |
Chris Wilson | 69df05e | 2016-12-18 15:37:21 +0000 | [diff] [blame] | 3511 | if (specific_ctx) |
Chris Wilson | 5f09a9c | 2017-06-20 12:05:46 +0100 | [diff] [blame] | 3512 | i915_gem_context_put(specific_ctx); |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3513 | err: |
| 3514 | return ret; |
| 3515 | } |
| 3516 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3517 | static u64 oa_exponent_to_ns(struct i915_perf *perf, int exponent) |
Robert Bragg | 155e941 | 2017-06-13 12:23:05 +0100 | [diff] [blame] | 3518 | { |
Lionel Landwerlin | 9f9b279 | 2017-10-27 15:59:31 +0100 | [diff] [blame] | 3519 | return div64_u64(1000000000ULL * (2ULL << exponent), |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3520 | 1000ULL * RUNTIME_INFO(perf->i915)->cs_timestamp_frequency_khz); |
Robert Bragg | 155e941 | 2017-06-13 12:23:05 +0100 | [diff] [blame] | 3521 | } |
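| | |
| | /* A worked example of the conversion above (a sketch; the 12.5 MHz |
| |  * command streamer timestamp frequency used here is the Haswell value |
| |  * and is only illustrative): |
| |  * |
| |  *	exponent = 0: 1000000000 * (2 << 0) / (1000 * 12500) = 160 ns |
| |  *	exponent = 1: 1000000000 * (2 << 1) / (1000 * 12500) = 320 ns |
| |  * |
| |  * i.e. each increment of the exponent doubles the sampling period, |
| |  * starting from two timestamp ticks. |
| |  */ |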
| 3522 | |
Robert Bragg | 16d98b3 | 2016-12-07 21:40:33 +0000 | [diff] [blame] | 3523 | /** |
| 3524 | * read_properties_unlocked - validate + copy userspace stream open properties |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3525 | * @perf: i915 perf instance |
Robert Bragg | 16d98b3 | 2016-12-07 21:40:33 +0000 | [diff] [blame] | 3526 | * @uprops: The array of u64 key value pairs given by userspace |
| 3527 | * @n_props: The number of key value pairs expected in @uprops |
| 3528 | * @props: The stream configuration built up while validating properties |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3529 | * |
| 3530 | * Note this function only validates properties in isolation it doesn't |
| 3531 | * validate that the combination of properties makes sense or that all |
| 3532 | * properties necessary for a particular kind of stream have been set. |
Robert Bragg | 16d98b3 | 2016-12-07 21:40:33 +0000 | [diff] [blame] | 3533 | * |
| 3534 | * Note that there currently aren't any ordering requirements for properties so |
| 3535 | * we shouldn't validate or assume anything about ordering here. This doesn't |
| 3536 | * rule out defining new properties with ordering requirements in the future. |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3537 | */ |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3538 | static int read_properties_unlocked(struct i915_perf *perf, |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3539 | u64 __user *uprops, |
| 3540 | u32 n_props, |
| 3541 | struct perf_open_properties *props) |
| 3542 | { |
| 3543 | u64 __user *uprop = uprops; |
Lionel Landwerlin | 701f823 | 2017-08-03 17:58:08 +0100 | [diff] [blame] | 3544 | u32 i; |
Lionel Landwerlin | 11ecbdd | 2020-03-17 15:22:22 +0200 | [diff] [blame^] | 3545 | int ret; |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3546 | |
| 3547 | memset(props, 0, sizeof(struct perf_open_properties)); |
| 3548 | |
| 3549 | if (!n_props) { |
Robert Bragg | 7708550 | 2016-12-01 17:21:52 +0000 | [diff] [blame] | 3550 | DRM_DEBUG("No i915 perf properties given\n"); |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3551 | return -EINVAL; |
| 3552 | } |
| 3553 | |
Lionel Landwerlin | 9a61363 | 2019-10-10 16:05:19 +0100 | [diff] [blame] | 3554 | /* At the moment we only support using i915-perf on the RCS. */ |
| 3555 | props->engine = intel_engine_lookup_user(perf->i915, |
| 3556 | I915_ENGINE_CLASS_RENDER, |
| 3557 | 0); |
| 3558 | if (!props->engine) { |
| 3559 | DRM_DEBUG("No RENDER-capable engines\n"); |
| 3560 | return -EINVAL; |
| 3561 | } |
| 3562 | |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3563 | /* Considering that ID = 0 is reserved and assuming that we don't |
| 3564 | * (currently) expect any configurations to ever specify duplicate |
| 3565 | * values for a particular property ID, the last _PROP_MAX value is |
| 3566 | * one greater than the maximum number of properties we expect to get |
| 3567 | * from userspace. |
| 3568 | */ |
| 3569 | if (n_props >= DRM_I915_PERF_PROP_MAX) { |
Robert Bragg | 7708550 | 2016-12-01 17:21:52 +0000 | [diff] [blame] | 3570 | DRM_DEBUG("More i915 perf properties specified than exist\n"); |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3571 | return -EINVAL; |
| 3572 | } |
| 3573 | |
| 3574 | for (i = 0; i < n_props; i++) { |
Robert Bragg | 00319ba | 2016-11-07 19:49:55 +0000 | [diff] [blame] | 3575 | u64 oa_period, oa_freq_hz; |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3576 | u64 id, value; |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3577 | |
| 3578 | ret = get_user(id, uprop); |
| 3579 | if (ret) |
| 3580 | return ret; |
| 3581 | |
| 3582 | ret = get_user(value, uprop + 1); |
| 3583 | if (ret) |
| 3584 | return ret; |
| 3585 | |
Matthew Auld | 0a309f9 | 2017-03-27 21:32:36 +0100 | [diff] [blame] | 3586 | if (id == 0 || id >= DRM_I915_PERF_PROP_MAX) { |
| 3587 | DRM_DEBUG("Unknown i915 perf property ID\n"); |
| 3588 | return -EINVAL; |
| 3589 | } |
| 3590 | |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3591 | switch ((enum drm_i915_perf_property_id)id) { |
| 3592 | case DRM_I915_PERF_PROP_CTX_HANDLE: |
| 3593 | props->single_context = 1; |
| 3594 | props->ctx_handle = value; |
| 3595 | break; |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 3596 | case DRM_I915_PERF_PROP_SAMPLE_OA: |
Lionel Landwerlin | b6dd47b | 2018-03-26 10:08:22 +0100 | [diff] [blame] | 3597 | if (value) |
| 3598 | props->sample_flags |= SAMPLE_OA_REPORT; |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 3599 | break; |
| 3600 | case DRM_I915_PERF_PROP_OA_METRICS_SET: |
Lionel Landwerlin | 701f823 | 2017-08-03 17:58:08 +0100 | [diff] [blame] | 3601 | if (value == 0) { |
Robert Bragg | 7708550 | 2016-12-01 17:21:52 +0000 | [diff] [blame] | 3602 | DRM_DEBUG("Unknown OA metric set ID\n"); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 3603 | return -EINVAL; |
| 3604 | } |
| 3605 | props->metrics_set = value; |
| 3606 | break; |
| 3607 | case DRM_I915_PERF_PROP_OA_FORMAT: |
| 3608 | if (value == 0 || value >= I915_OA_FORMAT_MAX) { |
Robert Bragg | 52c57c2 | 2017-05-11 16:43:29 +0100 | [diff] [blame] | 3609 | DRM_DEBUG("Out-of-range OA report format %llu\n", |
| 3610 | value); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 3611 | return -EINVAL; |
| 3612 | } |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3613 | if (!perf->oa_formats[value].size) { |
Robert Bragg | 52c57c2 | 2017-05-11 16:43:29 +0100 | [diff] [blame] | 3614 | DRM_DEBUG("Unsupported OA report format %llu\n", |
| 3615 | value); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 3616 | return -EINVAL; |
| 3617 | } |
| 3618 | props->oa_format = value; |
| 3619 | break; |
| 3620 | case DRM_I915_PERF_PROP_OA_EXPONENT: |
| 3621 | if (value > OA_EXPONENT_MAX) { |
Robert Bragg | 7708550 | 2016-12-01 17:21:52 +0000 | [diff] [blame] | 3622 | DRM_DEBUG("OA timer exponent too high (> %u)\n", |
| 3623 | OA_EXPONENT_MAX); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 3624 | return -EINVAL; |
| 3625 | } |
| 3626 | |
Robert Bragg | 00319ba | 2016-11-07 19:49:55 +0000 | [diff] [blame] | 3627 | /* Theoretically we can program the OA unit to sample |
Robert Bragg | 155e941 | 2017-06-13 12:23:05 +0100 | [diff] [blame] | 3628 | * e.g. every 160ns for HSW, 167ns for BDW/SKL or 104ns |
| 3629 | * for BXT. We don't allow such high sampling |
| 3630 | * frequencies by default unless root. |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 3631 | */ |
Robert Bragg | 155e941 | 2017-06-13 12:23:05 +0100 | [diff] [blame] | 3632 | |
Robert Bragg | 00319ba | 2016-11-07 19:49:55 +0000 | [diff] [blame] | 3633 | BUILD_BUG_ON(sizeof(oa_period) != 8); |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3634 | oa_period = oa_exponent_to_ns(perf, value); |
Robert Bragg | 00319ba | 2016-11-07 19:49:55 +0000 | [diff] [blame] | 3635 | |
| 3636 | /* This check is primarily to ensure that oa_period <= |
| 3637 | * UINT32_MAX (before passing to do_div which only |
| 3638 | * accepts a u32 denominator), but we can also skip |
| 3639 | * checking anything < 1Hz which implicitly can't be |
| 3640 | * limited via an integer oa_max_sample_rate. |
| 3641 | */ |
| 3642 | if (oa_period <= NSEC_PER_SEC) { |
| 3643 | u64 tmp = NSEC_PER_SEC; |
| 3644 | do_div(tmp, oa_period); |
| 3645 | oa_freq_hz = tmp; |
| 3646 | } else |
| 3647 | oa_freq_hz = 0; |
| 3648 | |
| 3649 | if (oa_freq_hz > i915_oa_max_sample_rate && |
| 3650 | !capable(CAP_SYS_ADMIN)) { |
Robert Bragg | 7708550 | 2016-12-01 17:21:52 +0000 | [diff] [blame] | 3651 | DRM_DEBUG("OA exponent would exceed the max sampling frequency (sysctl dev.i915.oa_max_sample_rate) %uHz without root privileges\n", |
Robert Bragg | 00319ba | 2016-11-07 19:49:55 +0000 | [diff] [blame] | 3652 | i915_oa_max_sample_rate); |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 3653 | return -EACCES; |
| 3654 | } |
| 3655 | |
| 3656 | props->oa_periodic = true; |
| 3657 | props->oa_period_exponent = value; |
| 3658 | break; |
Lionel Landwerlin | 9cd20ef | 2019-10-14 21:14:04 +0100 | [diff] [blame] | 3659 | case DRM_I915_PERF_PROP_HOLD_PREEMPTION: |
| 3660 | props->hold_preemption = !!value; |
| 3661 | break; |
Lionel Landwerlin | 11ecbdd | 2020-03-17 15:22:22 +0200 | [diff] [blame^] | 3662 | case DRM_I915_PERF_PROP_GLOBAL_SSEU: { |
| 3663 | struct drm_i915_gem_context_param_sseu user_sseu; |
| 3664 | |
| 3665 | if (copy_from_user(&user_sseu, |
| 3666 | u64_to_user_ptr(value), |
| 3667 | sizeof(user_sseu))) { |
| 3668 | DRM_DEBUG("Unable to copy global sseu parameter\n"); |
| 3669 | return -EFAULT; |
| 3670 | } |
| 3671 | |
| 3672 | ret = get_sseu_config(&props->sseu, props->engine, &user_sseu); |
| 3673 | if (ret) { |
| 3674 | DRM_DEBUG("Invalid SSEU configuration\n"); |
| 3675 | return ret; |
| 3676 | } |
| 3677 | props->has_sseu = true; |
| 3678 | break; |
| 3679 | } |
Matthew Auld | 0a309f9 | 2017-03-27 21:32:36 +0100 | [diff] [blame] | 3680 | case DRM_I915_PERF_PROP_MAX: |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3681 | MISSING_CASE(id); |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3682 | return -EINVAL; |
| 3683 | } |
| 3684 | |
| 3685 | uprop += 2; |
| 3686 | } |
| 3687 | |
| 3688 | return 0; |
| 3689 | } |
| 3690 | |
Robert Bragg | 16d98b3 | 2016-12-07 21:40:33 +0000 | [diff] [blame] | 3691 | /** |
| 3692 | * i915_perf_open_ioctl - DRM ioctl() for userspace to open a stream FD |
| 3693 | * @dev: drm device |
| 3694 | * @data: ioctl data copied from userspace (unvalidated) |
| 3695 | * @file: drm file |
| 3696 | * |
| 3697 | * Validates the stream open parameters given by userspace including flags |
| 3698 | * and an array of u64 key, value pair properties. |
| 3699 | * |
| 3700 | * Very little is assumed up front about the nature of the stream being |
| 3701 | * opened (for instance we don't assume it's for periodic OA unit metrics). An |
| 3702 | * i915-perf stream is expected to be a suitable interface for other forms of |
| 3703 | * buffered data written by the GPU besides periodic OA metrics. |
| 3704 | * |
| 3705 | * Note we copy the properties from userspace outside of the i915 perf |
| 3706 | * mutex to avoid an awkward lockdep with mmap_sem. |
| 3707 | * |
| 3708 | * Most of the implementation details are handled by |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3709 | * i915_perf_open_ioctl_locked() after taking the &perf->lock |
Robert Bragg | 16d98b3 | 2016-12-07 21:40:33 +0000 | [diff] [blame] | 3710 | * mutex for serializing with any non-file-operation driver hooks. |
| 3711 | * |
| 3712 | * Return: A newly opened i915 Perf stream file descriptor or negative |
| 3713 | * error code on failure. |
| 3714 | */ |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3715 | int i915_perf_open_ioctl(struct drm_device *dev, void *data, |
| 3716 | struct drm_file *file) |
| 3717 | { |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3718 | struct i915_perf *perf = &to_i915(dev)->perf; |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3719 | struct drm_i915_perf_open_param *param = data; |
| 3720 | struct perf_open_properties props; |
| 3721 | u32 known_open_flags; |
| 3722 | int ret; |
| 3723 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3724 | if (!perf->i915) { |
Robert Bragg | 7708550 | 2016-12-01 17:21:52 +0000 | [diff] [blame] | 3725 | DRM_DEBUG("i915 perf interface not available for this system\n"); |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3726 | return -ENOTSUPP; |
| 3727 | } |
| 3728 | |
| 3729 | known_open_flags = I915_PERF_FLAG_FD_CLOEXEC | |
| 3730 | I915_PERF_FLAG_FD_NONBLOCK | |
| 3731 | I915_PERF_FLAG_DISABLED; |
| 3732 | if (param->flags & ~known_open_flags) { |
Robert Bragg | 7708550 | 2016-12-01 17:21:52 +0000 | [diff] [blame] | 3733 | DRM_DEBUG("Unknown drm_i915_perf_open_param flag\n"); |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3734 | return -EINVAL; |
| 3735 | } |
| 3736 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3737 | ret = read_properties_unlocked(perf, |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3738 | u64_to_user_ptr(param->properties_ptr), |
| 3739 | param->num_properties, |
| 3740 | &props); |
| 3741 | if (ret) |
| 3742 | return ret; |
| 3743 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3744 | mutex_lock(&perf->lock); |
| 3745 | ret = i915_perf_open_ioctl_locked(perf, param, &props, file); |
| 3746 | mutex_unlock(&perf->lock); |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 3747 | |
| 3748 | return ret; |
| 3749 | } |
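| | |
| | /* Putting the pieces together, opening an OA stream from userspace |
| |  * looks roughly like this (a sketch; metrics_set_id would be read from |
| |  * the sysfs metrics/ directory advertised by i915_perf_register() |
| |  * below, and the OA format/exponent values are only examples): |
| |  * |
| |  *	#include <stdint.h> |
| |  *	#include <sys/ioctl.h> |
| |  *	#include <drm/i915_drm.h> |
| |  * |
| |  *	uint64_t props[] = { |
| |  *		DRM_I915_PERF_PROP_SAMPLE_OA, 1, |
| |  *		DRM_I915_PERF_PROP_OA_METRICS_SET, metrics_set_id, |
| |  *		DRM_I915_PERF_PROP_OA_FORMAT, I915_OA_FORMAT_A32u40_A4u32_B8_C8, |
| |  *		DRM_I915_PERF_PROP_OA_EXPONENT, 16, |
| |  *	}; |
| |  *	struct drm_i915_perf_open_param param = { |
| |  *		.flags = I915_PERF_FLAG_FD_CLOEXEC, |
| |  *		.num_properties = sizeof(props) / (2 * sizeof(uint64_t)), |
| |  *		.properties_ptr = (uintptr_t)props, |
| |  *	}; |
| |  *	int stream_fd = ioctl(drm_fd, DRM_IOCTL_I915_PERF_OPEN, &param); |
| |  */ |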
| 3750 | |
Robert Bragg | 16d98b3 | 2016-12-07 21:40:33 +0000 | [diff] [blame] | 3751 | /** |
| 3752 | * i915_perf_register - exposes i915-perf to userspace |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3753 | * @i915: i915 device instance |
Robert Bragg | 16d98b3 | 2016-12-07 21:40:33 +0000 | [diff] [blame] | 3754 | * |
| 3755 | * In particular OA metric sets are advertised under a sysfs metrics/ |
| 3756 | * directory allowing userspace to enumerate valid IDs that can be |
| 3757 | * used to open an i915-perf stream. |
| 3758 | */ |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3759 | void i915_perf_register(struct drm_i915_private *i915) |
Robert Bragg | 442b8c0 | 2016-11-07 19:49:53 +0000 | [diff] [blame] | 3760 | { |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3761 | struct i915_perf *perf = &i915->perf; |
Lionel Landwerlin | 701f823 | 2017-08-03 17:58:08 +0100 | [diff] [blame] | 3762 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3763 | if (!perf->i915) |
Robert Bragg | 442b8c0 | 2016-11-07 19:49:53 +0000 | [diff] [blame] | 3764 | return; |
| 3765 | |
| 3766 | /* To be sure we're synchronized with an attempted |
| 3767 | * i915_perf_open_ioctl(), considering that we register after |
| 3768 | * being exposed to userspace. |
| 3769 | */ |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3770 | mutex_lock(&perf->lock); |
Robert Bragg | 442b8c0 | 2016-11-07 19:49:53 +0000 | [diff] [blame] | 3771 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3772 | perf->metrics_kobj = |
Robert Bragg | 442b8c0 | 2016-11-07 19:49:53 +0000 | [diff] [blame] | 3773 | kobject_create_and_add("metrics", |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3774 | &i915->drm.primary->kdev->kobj); |
Robert Bragg | 442b8c0 | 2016-11-07 19:49:53 +0000 | [diff] [blame] | 3775 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3776 | mutex_unlock(&perf->lock); |
Robert Bragg | 442b8c0 | 2016-11-07 19:49:53 +0000 | [diff] [blame] | 3777 | } |
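| | |
| | /* A sketch of how userspace can resolve a metric set UUID to the ID |
| |  * expected by DRM_I915_PERF_PROP_OA_METRICS_SET (the path layout |
| |  * follows from the registration above; "card0" and "<uuid>" are |
| |  * placeholders): |
| |  * |
| |  *	#include <stdio.h> |
| |  *	#include <inttypes.h> |
| |  * |
| |  *	uint64_t metrics_set_id = 0; |
| |  *	FILE *f = fopen("/sys/class/drm/card0/metrics/<uuid>/id", "r"); |
| |  * |
| |  *	if (f) { |
| |  *		fscanf(f, "%" SCNu64, &metrics_set_id); |
| |  *		fclose(f); |
| |  *	} |
| |  */ |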
| 3778 | |
Robert Bragg | 16d98b3 | 2016-12-07 21:40:33 +0000 | [diff] [blame] | 3779 | /** |
| 3780 | * i915_perf_unregister - hide i915-perf from userspace |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3781 | * @i915: i915 device instance |
Robert Bragg | 16d98b3 | 2016-12-07 21:40:33 +0000 | [diff] [blame] | 3782 | * |
| 3783 | * i915-perf state cleanup is split up into an 'unregister' and |
| 3784 | * 'deinit' phase where the interface is first hidden from |
| 3785 | * userspace by i915_perf_unregister() before cleaning up |
| 3786 | * remaining state in i915_perf_fini(). |
| 3787 | */ |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3788 | void i915_perf_unregister(struct drm_i915_private *i915) |
Robert Bragg | 442b8c0 | 2016-11-07 19:49:53 +0000 | [diff] [blame] | 3789 | { |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3790 | struct i915_perf *perf = &i915->perf; |
| 3791 | |
| 3792 | if (!perf->metrics_kobj) |
Robert Bragg | 442b8c0 | 2016-11-07 19:49:53 +0000 | [diff] [blame] | 3793 | return; |
| 3794 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3795 | kobject_put(perf->metrics_kobj); |
| 3796 | perf->metrics_kobj = NULL; |
Robert Bragg | 442b8c0 | 2016-11-07 19:49:53 +0000 | [diff] [blame] | 3797 | } |
| 3798 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3799 | static bool gen8_is_valid_flex_addr(struct i915_perf *perf, u32 addr) |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 3800 | { |
| 3801 | static const i915_reg_t flex_eu_regs[] = { |
| 3802 | EU_PERF_CNTL0, |
| 3803 | EU_PERF_CNTL1, |
| 3804 | EU_PERF_CNTL2, |
| 3805 | EU_PERF_CNTL3, |
| 3806 | EU_PERF_CNTL4, |
| 3807 | EU_PERF_CNTL5, |
| 3808 | EU_PERF_CNTL6, |
| 3809 | }; |
| 3810 | int i; |
| 3811 | |
| 3812 | for (i = 0; i < ARRAY_SIZE(flex_eu_regs); i++) { |
Lionel Landwerlin | 7c52a22 | 2017-11-13 23:34:52 +0000 | [diff] [blame] | 3813 | if (i915_mmio_reg_offset(flex_eu_regs[i]) == addr) |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 3814 | return true; |
| 3815 | } |
| 3816 | return false; |
| 3817 | } |
| 3818 | |
Umesh Nerlige Ramappa | fc21523 | 2019-10-25 12:37:45 -0700 | [diff] [blame] | 3819 | #define ADDR_IN_RANGE(addr, start, end) \ |
| 3820 | ((addr) >= (start) && \ |
| 3821 | (addr) <= (end)) |
| 3822 | |
| 3823 | #define REG_IN_RANGE(addr, start, end) \ |
| 3824 | ((addr) >= i915_mmio_reg_offset(start) && \ |
| 3825 | (addr) <= i915_mmio_reg_offset(end)) |
| 3826 | |
| 3827 | #define REG_EQUAL(addr, mmio) \ |
| 3828 | ((addr) == i915_mmio_reg_offset(mmio)) |
| 3829 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3830 | static bool gen7_is_valid_b_counter_addr(struct i915_perf *perf, u32 addr) |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 3831 | { |
Umesh Nerlige Ramappa | fc21523 | 2019-10-25 12:37:45 -0700 | [diff] [blame] | 3832 | return REG_IN_RANGE(addr, OASTARTTRIG1, OASTARTTRIG8) || |
| 3833 | REG_IN_RANGE(addr, OAREPORTTRIG1, OAREPORTTRIG8) || |
| 3834 | REG_IN_RANGE(addr, OACEC0_0, OACEC7_1); |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 3835 | } |
| 3836 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3837 | static bool gen7_is_valid_mux_addr(struct i915_perf *perf, u32 addr) |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 3838 | { |
Umesh Nerlige Ramappa | fc21523 | 2019-10-25 12:37:45 -0700 | [diff] [blame] | 3839 | return REG_EQUAL(addr, HALF_SLICE_CHICKEN2) || |
| 3840 | REG_IN_RANGE(addr, MICRO_BP0_0, NOA_WRITE) || |
| 3841 | REG_IN_RANGE(addr, OA_PERFCNT1_LO, OA_PERFCNT2_HI) || |
| 3842 | REG_IN_RANGE(addr, OA_PERFMATRIX_LO, OA_PERFMATRIX_HI); |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 3843 | } |
| 3844 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3845 | static bool gen8_is_valid_mux_addr(struct i915_perf *perf, u32 addr) |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 3846 | { |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3847 | return gen7_is_valid_mux_addr(perf, addr) || |
Umesh Nerlige Ramappa | fc21523 | 2019-10-25 12:37:45 -0700 | [diff] [blame] | 3848 | REG_EQUAL(addr, WAIT_FOR_RC6_EXIT) || |
| 3849 | REG_IN_RANGE(addr, RPM_CONFIG0, NOA_CONFIG(8)); |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 3850 | } |
| 3851 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3852 | static bool gen10_is_valid_mux_addr(struct i915_perf *perf, u32 addr) |
Lionel Landwerlin | 95690a0 | 2017-11-10 19:08:43 +0000 | [diff] [blame] | 3853 | { |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3854 | return gen8_is_valid_mux_addr(perf, addr) || |
Umesh Nerlige Ramappa | fc21523 | 2019-10-25 12:37:45 -0700 | [diff] [blame] | 3855 | REG_EQUAL(addr, GEN10_NOA_WRITE_HIGH) || |
| 3856 | REG_IN_RANGE(addr, OA_PERFCNT3_LO, OA_PERFCNT4_HI); |
Lionel Landwerlin | 95690a0 | 2017-11-10 19:08:43 +0000 | [diff] [blame] | 3857 | } |
| 3858 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3859 | static bool hsw_is_valid_mux_addr(struct i915_perf *perf, u32 addr) |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 3860 | { |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3861 | return gen7_is_valid_mux_addr(perf, addr) || |
Umesh Nerlige Ramappa | fc21523 | 2019-10-25 12:37:45 -0700 | [diff] [blame] | 3862 | ADDR_IN_RANGE(addr, 0x25100, 0x2FF90) || |
| 3863 | REG_IN_RANGE(addr, HSW_MBVID2_NOA0, HSW_MBVID2_NOA9) || |
| 3864 | REG_EQUAL(addr, HSW_MBVID2_MISR0); |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 3865 | } |
| 3866 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3867 | static bool chv_is_valid_mux_addr(struct i915_perf *perf, u32 addr) |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 3868 | { |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3869 | return gen7_is_valid_mux_addr(perf, addr) || |
Umesh Nerlige Ramappa | fc21523 | 2019-10-25 12:37:45 -0700 | [diff] [blame] | 3870 | ADDR_IN_RANGE(addr, 0x182300, 0x1823A4); |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 3871 | } |
| 3872 | |
Lionel Landwerlin | 00a7f0d | 2019-10-25 12:37:46 -0700 | [diff] [blame] | 3873 | static bool gen12_is_valid_b_counter_addr(struct i915_perf *perf, u32 addr) |
| 3874 | { |
| 3875 | return REG_IN_RANGE(addr, GEN12_OAG_OASTARTTRIG1, GEN12_OAG_OASTARTTRIG8) || |
| 3876 | REG_IN_RANGE(addr, GEN12_OAG_OAREPORTTRIG1, GEN12_OAG_OAREPORTTRIG8) || |
| 3877 | REG_IN_RANGE(addr, GEN12_OAG_CEC0_0, GEN12_OAG_CEC7_1) || |
| 3878 | REG_IN_RANGE(addr, GEN12_OAG_SCEC0_0, GEN12_OAG_SCEC7_1) || |
| 3879 | REG_EQUAL(addr, GEN12_OAA_DBG_REG) || |
| 3880 | REG_EQUAL(addr, GEN12_OAG_OA_PESS) || |
| 3881 | REG_EQUAL(addr, GEN12_OAG_SPCTR_CNF); |
| 3882 | } |
| 3883 | |
| 3884 | static bool gen12_is_valid_mux_addr(struct i915_perf *perf, u32 addr) |
| 3885 | { |
| 3886 | return REG_EQUAL(addr, NOA_WRITE) || |
| 3887 | REG_EQUAL(addr, GEN10_NOA_WRITE_HIGH) || |
| 3888 | REG_EQUAL(addr, GDT_CHICKEN_BITS) || |
| 3889 | REG_EQUAL(addr, WAIT_FOR_RC6_EXIT) || |
| 3890 | REG_EQUAL(addr, RPM_CONFIG0) || |
| 3891 | REG_EQUAL(addr, RPM_CONFIG1) || |
| 3892 | REG_IN_RANGE(addr, NOA_CONFIG(0), NOA_CONFIG(8)); |
| 3893 | } |
| 3894 | |
Jani Nikula | 739f3ab | 2019-01-16 11:15:19 +0200 | [diff] [blame] | 3895 | static u32 mask_reg_value(u32 reg, u32 val) |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 3896 | { |
	/* HALF_SLICE_CHICKEN2 is programmed as part of the
	 * WaDisableSTUnitPowerOptimization workaround. Make sure the value
	 * programmed by userspace doesn't change this.
	 */
Umesh Nerlige Ramappa | fc21523 | 2019-10-25 12:37:45 -0700 | [diff] [blame] | 3901 | if (REG_EQUAL(reg, HALF_SLICE_CHICKEN2)) |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 3902 | val = val & ~_MASKED_BIT_ENABLE(GEN8_ST_PO_DISABLE); |
| 3903 | |
	/* WAIT_FOR_RC6_EXIT has only one bit fulfilling the function
	 * indicated by its name, plus a bunch of selection fields used by
	 * OA configs.
	 */
Umesh Nerlige Ramappa | fc21523 | 2019-10-25 12:37:45 -0700 | [diff] [blame] | 3908 | if (REG_EQUAL(reg, WAIT_FOR_RC6_EXIT)) |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 3909 | val = val & ~_MASKED_BIT_ENABLE(HSW_WAIT_FOR_RC6_EXIT_ENABLE); |
| 3910 | |
| 3911 | return val; |
| 3912 | } |
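
/*
 * For reference: these are "masked" registers, where the upper 16 bits
 * of a write select which of the lower 16 bits take effect. Assuming the
 * usual i915_reg.h definition, effectively
 *
 *	_MASKED_BIT_ENABLE(a) -> ((a) << 16 | (a))
 *
 * clearing _MASKED_BIT_ENABLE(bit) in mask_reg_value() removes both the
 * bit and its write-enable mask, so the hardware keeps its current value
 * for that bit regardless of what userspace submitted.
 */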
| 3913 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3914 | static struct i915_oa_reg *alloc_oa_regs(struct i915_perf *perf, |
| 3915 | bool (*is_valid)(struct i915_perf *perf, u32 addr), |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 3916 | u32 __user *regs, |
| 3917 | u32 n_regs) |
| 3918 | { |
| 3919 | struct i915_oa_reg *oa_regs; |
| 3920 | int err; |
| 3921 | u32 i; |
| 3922 | |
| 3923 | if (!n_regs) |
| 3924 | return NULL; |
| 3925 | |
Linus Torvalds | 96d4f26 | 2019-01-03 18:57:57 -0800 | [diff] [blame] | 3926 | if (!access_ok(regs, n_regs * sizeof(u32) * 2)) |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 3927 | return ERR_PTR(-EFAULT); |
| 3928 | |
| 3929 | /* No is_valid function means we're not allowing any register to be programmed. */ |
| 3930 | GEM_BUG_ON(!is_valid); |
| 3931 | if (!is_valid) |
| 3932 | return ERR_PTR(-EINVAL); |
| 3933 | |
| 3934 | oa_regs = kmalloc_array(n_regs, sizeof(*oa_regs), GFP_KERNEL); |
| 3935 | if (!oa_regs) |
| 3936 | return ERR_PTR(-ENOMEM); |
| 3937 | |
| 3938 | for (i = 0; i < n_regs; i++) { |
| 3939 | u32 addr, value; |
| 3940 | |
| 3941 | err = get_user(addr, regs); |
| 3942 | if (err) |
| 3943 | goto addr_err; |
| 3944 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3945 | if (!is_valid(perf, addr)) { |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 3946 | DRM_DEBUG("Invalid oa_reg address: %X\n", addr); |
| 3947 | err = -EINVAL; |
| 3948 | goto addr_err; |
| 3949 | } |
| 3950 | |
| 3951 | err = get_user(value, regs + 1); |
| 3952 | if (err) |
| 3953 | goto addr_err; |
| 3954 | |
| 3955 | oa_regs[i].addr = _MMIO(addr); |
| 3956 | oa_regs[i].value = mask_reg_value(addr, value); |
| 3957 | |
| 3958 | regs += 2; |
| 3959 | } |
| 3960 | |
| 3961 | return oa_regs; |
| 3962 | |
| 3963 | addr_err: |
| 3964 | kfree(oa_regs); |
| 3965 | return ERR_PTR(err); |
| 3966 | } |
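
/*
 * For reference, the layout alloc_oa_regs() expects from userspace: a
 * packed array of u32 (address, value) pairs, with n_regs counting the
 * pairs. A hypothetical caller-side sketch (register values are
 * illustrative only):
 *
 *	uint32_t mux_regs[] = {
 *		0x9888, 0x10800000,	// (addr, value) pair 0
 *		0x9888, 0x00000000,	// (addr, value) pair 1
 *	};
 *	args.mux_regs_ptr = (uintptr_t)mux_regs;
 *	args.n_mux_regs = 2;	// number of pairs, not of u32s
 */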
| 3967 | |
| 3968 | static ssize_t show_dynamic_id(struct device *dev, |
| 3969 | struct device_attribute *attr, |
| 3970 | char *buf) |
| 3971 | { |
| 3972 | struct i915_oa_config *oa_config = |
| 3973 | container_of(attr, typeof(*oa_config), sysfs_metric_id); |
| 3974 | |
| 3975 | return sprintf(buf, "%d\n", oa_config->id); |
| 3976 | } |
| 3977 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3978 | static int create_dynamic_oa_sysfs_entry(struct i915_perf *perf, |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 3979 | struct i915_oa_config *oa_config) |
| 3980 | { |
Chris Wilson | 28152a2 | 2017-08-03 23:37:00 +0100 | [diff] [blame] | 3981 | sysfs_attr_init(&oa_config->sysfs_metric_id.attr); |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 3982 | oa_config->sysfs_metric_id.attr.name = "id"; |
| 3983 | oa_config->sysfs_metric_id.attr.mode = S_IRUGO; |
| 3984 | oa_config->sysfs_metric_id.show = show_dynamic_id; |
| 3985 | oa_config->sysfs_metric_id.store = NULL; |
| 3986 | |
| 3987 | oa_config->attrs[0] = &oa_config->sysfs_metric_id.attr; |
| 3988 | oa_config->attrs[1] = NULL; |
| 3989 | |
| 3990 | oa_config->sysfs_metric.name = oa_config->uuid; |
| 3991 | oa_config->sysfs_metric.attrs = oa_config->attrs; |
| 3992 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 3993 | return sysfs_create_group(perf->metrics_kobj, |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 3994 | &oa_config->sysfs_metric); |
| 3995 | } |
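
/*
 * With the above, each dynamic config surfaces as (path assuming the
 * primary node is card0):
 *
 *	/sys/class/drm/card0/metrics/<uuid>/id
 *
 * where reading "id" returns the integer emitted by show_dynamic_id().
 */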
| 3996 | |
| 3997 | /** |
| 3998 | * i915_perf_add_config_ioctl - DRM ioctl() for userspace to add a new OA config |
| 3999 | * @dev: drm device |
| 4000 | * @data: ioctl data (pointer to struct drm_i915_perf_oa_config) copied from |
| 4001 | * userspace (unvalidated) |
| 4002 | * @file: drm file |
| 4003 | * |
 * Validates the submitted OA registers to be saved into a new OA config
 * that can then be used for programming the OA unit and its NOA network.
 *
 * Returns: A newly allocated config number to be used with the perf open
 * ioctl or a negative error code on failure.
| 4009 | */ |
| 4010 | int i915_perf_add_config_ioctl(struct drm_device *dev, void *data, |
| 4011 | struct drm_file *file) |
| 4012 | { |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4013 | struct i915_perf *perf = &to_i915(dev)->perf; |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 4014 | struct drm_i915_perf_oa_config *args = data; |
| 4015 | struct i915_oa_config *oa_config, *tmp; |
Mao Wenan | c415ef2 | 2019-12-04 09:01:54 +0800 | [diff] [blame] | 4016 | struct i915_oa_reg *regs; |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 4017 | int err, id; |
| 4018 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4019 | if (!perf->i915) { |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 4020 | DRM_DEBUG("i915 perf interface not available for this system\n"); |
| 4021 | return -ENOTSUPP; |
| 4022 | } |
| 4023 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4024 | if (!perf->metrics_kobj) { |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 4025 | DRM_DEBUG("OA metrics weren't advertised via sysfs\n"); |
| 4026 | return -EINVAL; |
| 4027 | } |
| 4028 | |
| 4029 | if (i915_perf_stream_paranoid && !capable(CAP_SYS_ADMIN)) { |
| 4030 | DRM_DEBUG("Insufficient privileges to add i915 OA config\n"); |
| 4031 | return -EACCES; |
| 4032 | } |
| 4033 | |
| 4034 | if ((!args->mux_regs_ptr || !args->n_mux_regs) && |
| 4035 | (!args->boolean_regs_ptr || !args->n_boolean_regs) && |
| 4036 | (!args->flex_regs_ptr || !args->n_flex_regs)) { |
| 4037 | DRM_DEBUG("No OA registers given\n"); |
| 4038 | return -EINVAL; |
| 4039 | } |
| 4040 | |
| 4041 | oa_config = kzalloc(sizeof(*oa_config), GFP_KERNEL); |
| 4042 | if (!oa_config) { |
| 4043 | DRM_DEBUG("Failed to allocate memory for the OA config\n"); |
| 4044 | return -ENOMEM; |
| 4045 | } |
| 4046 | |
Lionel Landwerlin | 6a45008 | 2019-10-12 08:23:06 +0100 | [diff] [blame] | 4047 | oa_config->perf = perf; |
| 4048 | kref_init(&oa_config->ref); |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 4049 | |
| 4050 | if (!uuid_is_valid(args->uuid)) { |
| 4051 | DRM_DEBUG("Invalid uuid format for OA config\n"); |
| 4052 | err = -EINVAL; |
| 4053 | goto reg_err; |
| 4054 | } |
| 4055 | |
	/* Last character in oa_config->uuid will be 0 because oa_config is
	 * kzalloc'd.
	 */
| 4059 | memcpy(oa_config->uuid, args->uuid, sizeof(args->uuid)); |
| 4060 | |
| 4061 | oa_config->mux_regs_len = args->n_mux_regs; |
Chris Wilson | c2fba93 | 2019-10-13 10:52:11 +0100 | [diff] [blame] | 4062 | regs = alloc_oa_regs(perf, |
| 4063 | perf->ops.is_valid_mux_reg, |
| 4064 | u64_to_user_ptr(args->mux_regs_ptr), |
| 4065 | args->n_mux_regs); |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 4066 | |
Chris Wilson | c2fba93 | 2019-10-13 10:52:11 +0100 | [diff] [blame] | 4067 | if (IS_ERR(regs)) { |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 4068 | DRM_DEBUG("Failed to create OA config for mux_regs\n"); |
Chris Wilson | c2fba93 | 2019-10-13 10:52:11 +0100 | [diff] [blame] | 4069 | err = PTR_ERR(regs); |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 4070 | goto reg_err; |
| 4071 | } |
Chris Wilson | c2fba93 | 2019-10-13 10:52:11 +0100 | [diff] [blame] | 4072 | oa_config->mux_regs = regs; |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 4073 | |
| 4074 | oa_config->b_counter_regs_len = args->n_boolean_regs; |
Chris Wilson | c2fba93 | 2019-10-13 10:52:11 +0100 | [diff] [blame] | 4075 | regs = alloc_oa_regs(perf, |
| 4076 | perf->ops.is_valid_b_counter_reg, |
| 4077 | u64_to_user_ptr(args->boolean_regs_ptr), |
| 4078 | args->n_boolean_regs); |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 4079 | |
Chris Wilson | c2fba93 | 2019-10-13 10:52:11 +0100 | [diff] [blame] | 4080 | if (IS_ERR(regs)) { |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 4081 | DRM_DEBUG("Failed to create OA config for b_counter_regs\n"); |
Chris Wilson | c2fba93 | 2019-10-13 10:52:11 +0100 | [diff] [blame] | 4082 | err = PTR_ERR(regs); |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 4083 | goto reg_err; |
| 4084 | } |
Chris Wilson | c2fba93 | 2019-10-13 10:52:11 +0100 | [diff] [blame] | 4085 | oa_config->b_counter_regs = regs; |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 4086 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4087 | if (INTEL_GEN(perf->i915) < 8) { |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 4088 | if (args->n_flex_regs != 0) { |
| 4089 | err = -EINVAL; |
| 4090 | goto reg_err; |
| 4091 | } |
| 4092 | } else { |
| 4093 | oa_config->flex_regs_len = args->n_flex_regs; |
Chris Wilson | c2fba93 | 2019-10-13 10:52:11 +0100 | [diff] [blame] | 4094 | regs = alloc_oa_regs(perf, |
| 4095 | perf->ops.is_valid_flex_reg, |
| 4096 | u64_to_user_ptr(args->flex_regs_ptr), |
| 4097 | args->n_flex_regs); |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 4098 | |
Chris Wilson | c2fba93 | 2019-10-13 10:52:11 +0100 | [diff] [blame] | 4099 | if (IS_ERR(regs)) { |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 4100 | DRM_DEBUG("Failed to create OA config for flex_regs\n"); |
Chris Wilson | c2fba93 | 2019-10-13 10:52:11 +0100 | [diff] [blame] | 4101 | err = PTR_ERR(regs); |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 4102 | goto reg_err; |
| 4103 | } |
Chris Wilson | c2fba93 | 2019-10-13 10:52:11 +0100 | [diff] [blame] | 4104 | oa_config->flex_regs = regs; |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 4105 | } |
| 4106 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4107 | err = mutex_lock_interruptible(&perf->metrics_lock); |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 4108 | if (err) |
| 4109 | goto reg_err; |
| 4110 | |
| 4111 | /* We shouldn't have too many configs, so this iteration shouldn't be |
| 4112 | * too costly. |
| 4113 | */ |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4114 | idr_for_each_entry(&perf->metrics_idr, tmp, id) { |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 4115 | if (!strcmp(tmp->uuid, oa_config->uuid)) { |
| 4116 | DRM_DEBUG("OA config already exists with this uuid\n"); |
| 4117 | err = -EADDRINUSE; |
| 4118 | goto sysfs_err; |
| 4119 | } |
| 4120 | } |
| 4121 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4122 | err = create_dynamic_oa_sysfs_entry(perf, oa_config); |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 4123 | if (err) { |
| 4124 | DRM_DEBUG("Failed to create sysfs entry for OA config\n"); |
| 4125 | goto sysfs_err; |
| 4126 | } |
| 4127 | |
	/* Config id 0 is invalid; id 1 is reserved for the kernel's stored
	 * test config.
	 */
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4129 | oa_config->id = idr_alloc(&perf->metrics_idr, |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 4130 | oa_config, 2, |
| 4131 | 0, GFP_KERNEL); |
| 4132 | if (oa_config->id < 0) { |
		DRM_DEBUG("Failed to allocate an id for the OA config\n");
| 4134 | err = oa_config->id; |
| 4135 | goto sysfs_err; |
| 4136 | } |
| 4137 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4138 | mutex_unlock(&perf->metrics_lock); |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 4139 | |
Lionel Landwerlin | 9bd9be6 | 2018-03-26 10:08:28 +0100 | [diff] [blame] | 4140 | DRM_DEBUG("Added config %s id=%i\n", oa_config->uuid, oa_config->id); |
| 4141 | |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 4142 | return oa_config->id; |
| 4143 | |
| 4144 | sysfs_err: |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4145 | mutex_unlock(&perf->metrics_lock); |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 4146 | reg_err: |
Lionel Landwerlin | 6a45008 | 2019-10-12 08:23:06 +0100 | [diff] [blame] | 4147 | i915_oa_config_put(oa_config); |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 4148 | DRM_DEBUG("Failed to add new OA config\n"); |
| 4149 | return err; |
| 4150 | } |
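
/*
 * A hedged userspace sketch of exercising this ioctl (the uuid and
 * register payloads are placeholders; see the uapi header for the full
 * struct, error handling omitted):
 *
 *	#include <stdint.h>
 *	#include <string.h>
 *	#include <sys/ioctl.h>
 *	#include <drm/i915_drm.h>
 *
 *	struct drm_i915_perf_oa_config config;
 *
 *	memset(&config, 0, sizeof(config));
 *	memcpy(config.uuid, "01234567-0123-0123-0123-0123456789ab",
 *	       sizeof(config.uuid));
 *	config.n_mux_regs = n_pairs;
 *	config.mux_regs_ptr = (uintptr_t)mux_regs;
 *
 *	int id = ioctl(drm_fd, DRM_IOCTL_I915_PERF_ADD_CONFIG, &config);
 */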
| 4151 | |
| 4152 | /** |
| 4153 | * i915_perf_remove_config_ioctl - DRM ioctl() for userspace to remove an OA config |
| 4154 | * @dev: drm device |
| 4155 | * @data: ioctl data (pointer to u64 integer) copied from userspace |
| 4156 | * @file: drm file |
| 4157 | * |
 * Configs can be removed while being used; they will stop appearing in
 * sysfs and their content will be freed when the stream using the config
 * is closed.
| 4160 | * |
| 4161 | * Returns: 0 on success or a negative error code on failure. |
| 4162 | */ |
| 4163 | int i915_perf_remove_config_ioctl(struct drm_device *dev, void *data, |
| 4164 | struct drm_file *file) |
| 4165 | { |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4166 | struct i915_perf *perf = &to_i915(dev)->perf; |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 4167 | u64 *arg = data; |
| 4168 | struct i915_oa_config *oa_config; |
| 4169 | int ret; |
| 4170 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4171 | if (!perf->i915) { |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 4172 | DRM_DEBUG("i915 perf interface not available for this system\n"); |
| 4173 | return -ENOTSUPP; |
| 4174 | } |
| 4175 | |
| 4176 | if (i915_perf_stream_paranoid && !capable(CAP_SYS_ADMIN)) { |
| 4177 | DRM_DEBUG("Insufficient privileges to remove i915 OA config\n"); |
| 4178 | return -EACCES; |
| 4179 | } |
| 4180 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4181 | ret = mutex_lock_interruptible(&perf->metrics_lock); |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 4182 | if (ret) |
Lionel Landwerlin | 6a45008 | 2019-10-12 08:23:06 +0100 | [diff] [blame] | 4183 | return ret; |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 4184 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4185 | oa_config = idr_find(&perf->metrics_idr, *arg); |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 4186 | if (!oa_config) { |
| 4187 | DRM_DEBUG("Failed to remove unknown OA config\n"); |
| 4188 | ret = -ENOENT; |
Lionel Landwerlin | 6a45008 | 2019-10-12 08:23:06 +0100 | [diff] [blame] | 4189 | goto err_unlock; |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 4190 | } |
| 4191 | |
| 4192 | GEM_BUG_ON(*arg != oa_config->id); |
| 4193 | |
Lionel Landwerlin | 4f6ccc7 | 2019-10-14 21:14:02 +0100 | [diff] [blame] | 4194 | sysfs_remove_group(perf->metrics_kobj, &oa_config->sysfs_metric); |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 4195 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4196 | idr_remove(&perf->metrics_idr, *arg); |
Lionel Landwerlin | 9bd9be6 | 2018-03-26 10:08:28 +0100 | [diff] [blame] | 4197 | |
Lionel Landwerlin | 6a45008 | 2019-10-12 08:23:06 +0100 | [diff] [blame] | 4198 | mutex_unlock(&perf->metrics_lock); |
| 4199 | |
Lionel Landwerlin | 9bd9be6 | 2018-03-26 10:08:28 +0100 | [diff] [blame] | 4200 | DRM_DEBUG("Removed config %s id=%i\n", oa_config->uuid, oa_config->id); |
| 4201 | |
Lionel Landwerlin | 6a45008 | 2019-10-12 08:23:06 +0100 | [diff] [blame] | 4202 | i915_oa_config_put(oa_config); |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 4203 | |
Lionel Landwerlin | 6a45008 | 2019-10-12 08:23:06 +0100 | [diff] [blame] | 4204 | return 0; |
| 4205 | |
| 4206 | err_unlock: |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4207 | mutex_unlock(&perf->metrics_lock); |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 4208 | return ret; |
| 4209 | } |
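
/*
 * And the removal side, passing the id returned when the config was
 * added (sketch only; error handling omitted):
 *
 *	uint64_t config_id = id;
 *
 *	ioctl(drm_fd, DRM_IOCTL_I915_PERF_REMOVE_CONFIG, &config_id);
 */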
| 4210 | |
Robert Bragg | ccdf634 | 2016-11-07 19:49:54 +0000 | [diff] [blame] | 4211 | static struct ctl_table oa_table[] = { |
| 4212 | { |
| 4213 | .procname = "perf_stream_paranoid", |
| 4214 | .data = &i915_perf_stream_paranoid, |
| 4215 | .maxlen = sizeof(i915_perf_stream_paranoid), |
| 4216 | .mode = 0644, |
| 4217 | .proc_handler = proc_dointvec_minmax, |
Matteo Croce | eec4844 | 2019-07-18 15:58:50 -0700 | [diff] [blame] | 4218 | .extra1 = SYSCTL_ZERO, |
| 4219 | .extra2 = SYSCTL_ONE, |
Robert Bragg | ccdf634 | 2016-11-07 19:49:54 +0000 | [diff] [blame] | 4220 | }, |
Robert Bragg | 00319ba | 2016-11-07 19:49:55 +0000 | [diff] [blame] | 4221 | { |
| 4222 | .procname = "oa_max_sample_rate", |
| 4223 | .data = &i915_oa_max_sample_rate, |
| 4224 | .maxlen = sizeof(i915_oa_max_sample_rate), |
| 4225 | .mode = 0644, |
| 4226 | .proc_handler = proc_dointvec_minmax, |
Matteo Croce | eec4844 | 2019-07-18 15:58:50 -0700 | [diff] [blame] | 4227 | .extra1 = SYSCTL_ZERO, |
Robert Bragg | 00319ba | 2016-11-07 19:49:55 +0000 | [diff] [blame] | 4228 | .extra2 = &oa_sample_rate_hard_limit, |
| 4229 | }, |
Robert Bragg | ccdf634 | 2016-11-07 19:49:54 +0000 | [diff] [blame] | 4230 | {} |
| 4231 | }; |
| 4232 | |
| 4233 | static struct ctl_table i915_root[] = { |
| 4234 | { |
| 4235 | .procname = "i915", |
| 4236 | .maxlen = 0, |
| 4237 | .mode = 0555, |
| 4238 | .child = oa_table, |
| 4239 | }, |
| 4240 | {} |
| 4241 | }; |
| 4242 | |
| 4243 | static struct ctl_table dev_root[] = { |
| 4244 | { |
| 4245 | .procname = "dev", |
| 4246 | .maxlen = 0, |
| 4247 | .mode = 0555, |
| 4248 | .child = i915_root, |
| 4249 | }, |
| 4250 | {} |
| 4251 | }; |
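
/*
 * These tables surface as dev.i915.* sysctls; a userspace sketch of
 * relaxing the paranoid default (the procfs path follows from the table
 * nesting above; requires root, error handling omitted):
 *
 *	#include <fcntl.h>
 *	#include <unistd.h>
 *
 *	// equivalent to: sysctl dev.i915.perf_stream_paranoid=0
 *	int fd = open("/proc/sys/dev/i915/perf_stream_paranoid", O_WRONLY);
 *	write(fd, "0", 1);
 *	close(fd);
 */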
| 4252 | |
Robert Bragg | 16d98b3 | 2016-12-07 21:40:33 +0000 | [diff] [blame] | 4253 | /** |
Venkata Sandeep Dhanalakota | 3dc716fd | 2019-12-13 07:51:51 -0800 | [diff] [blame] | 4254 | * i915_perf_init - initialize i915-perf state on module bind |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4255 | * @i915: i915 device instance |
Robert Bragg | 16d98b3 | 2016-12-07 21:40:33 +0000 | [diff] [blame] | 4256 | * |
| 4257 | * Initializes i915-perf state without exposing anything to userspace. |
| 4258 | * |
| 4259 | * Note: i915-perf initialization is split into an 'init' and 'register' |
| 4260 | * phase with the i915_perf_register() exposing state to userspace. |
| 4261 | */ |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4262 | void i915_perf_init(struct drm_i915_private *i915) |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 4263 | { |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4264 | struct i915_perf *perf = &i915->perf; |
Robert Bragg | d796515 | 2016-11-07 19:49:52 +0000 | [diff] [blame] | 4265 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4266 | /* XXX const struct i915_perf_ops! */ |
| 4267 | |
| 4268 | if (IS_HASWELL(i915)) { |
| 4269 | perf->ops.is_valid_b_counter_reg = gen7_is_valid_b_counter_addr; |
| 4270 | perf->ops.is_valid_mux_reg = hsw_is_valid_mux_addr; |
| 4271 | perf->ops.is_valid_flex_reg = NULL; |
| 4272 | perf->ops.enable_metric_set = hsw_enable_metric_set; |
| 4273 | perf->ops.disable_metric_set = hsw_disable_metric_set; |
| 4274 | perf->ops.oa_enable = gen7_oa_enable; |
| 4275 | perf->ops.oa_disable = gen7_oa_disable; |
| 4276 | perf->ops.read = gen7_oa_read; |
| 4277 | perf->ops.oa_hw_tail_read = gen7_oa_hw_tail_read; |
| 4278 | |
| 4279 | perf->oa_formats = hsw_oa_formats; |
| 4280 | } else if (HAS_LOGICAL_RING_CONTEXTS(i915)) { |
		/* Note: although we could theoretically also support the
		 * legacy ringbuffer mode on BDW (and earlier iterations of
		 * this driver did, before upstreaming), it didn't seem
		 * worth the complexity to maintain now that BDW+ enables
		 * execlist mode by default.
		 */
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4287 | perf->ops.read = gen8_oa_read; |
Lionel Landwerlin | 701f823 | 2017-08-03 17:58:08 +0100 | [diff] [blame] | 4288 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4289 | if (IS_GEN_RANGE(i915, 8, 9)) { |
Lionel Landwerlin | 00a7f0d | 2019-10-25 12:37:46 -0700 | [diff] [blame] | 4290 | perf->oa_formats = gen8_plus_oa_formats; |
| 4291 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4292 | perf->ops.is_valid_b_counter_reg = |
Lionel Landwerlin | ba6b7c1 | 2017-11-10 19:08:41 +0000 | [diff] [blame] | 4293 | gen7_is_valid_b_counter_addr; |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4294 | perf->ops.is_valid_mux_reg = |
Lionel Landwerlin | ba6b7c1 | 2017-11-10 19:08:41 +0000 | [diff] [blame] | 4295 | gen8_is_valid_mux_addr; |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4296 | perf->ops.is_valid_flex_reg = |
Lionel Landwerlin | ba6b7c1 | 2017-11-10 19:08:41 +0000 | [diff] [blame] | 4297 | gen8_is_valid_flex_addr; |
Lionel Landwerlin | 701f823 | 2017-08-03 17:58:08 +0100 | [diff] [blame] | 4298 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4299 | if (IS_CHERRYVIEW(i915)) { |
| 4300 | perf->ops.is_valid_mux_reg = |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 4301 | chv_is_valid_mux_addr; |
| 4302 | } |
Robert Bragg | 155e941 | 2017-06-13 12:23:05 +0100 | [diff] [blame] | 4303 | |
Lionel Landwerlin | 00a7f0d | 2019-10-25 12:37:46 -0700 | [diff] [blame] | 4304 | perf->ops.oa_enable = gen8_oa_enable; |
| 4305 | perf->ops.oa_disable = gen8_oa_disable; |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4306 | perf->ops.enable_metric_set = gen8_enable_metric_set; |
| 4307 | perf->ops.disable_metric_set = gen8_disable_metric_set; |
Lionel Landwerlin | 00a7f0d | 2019-10-25 12:37:46 -0700 | [diff] [blame] | 4308 | perf->ops.oa_hw_tail_read = gen8_oa_hw_tail_read; |
Lionel Landwerlin | ba6b7c1 | 2017-11-10 19:08:41 +0000 | [diff] [blame] | 4309 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4310 | if (IS_GEN(i915, 8)) { |
| 4311 | perf->ctx_oactxctrl_offset = 0x120; |
| 4312 | perf->ctx_flexeu0_offset = 0x2ce; |
Lionel Landwerlin | ba6b7c1 | 2017-11-10 19:08:41 +0000 | [diff] [blame] | 4313 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4314 | perf->gen8_valid_ctx_bit = BIT(25); |
Lionel Landwerlin | ba6b7c1 | 2017-11-10 19:08:41 +0000 | [diff] [blame] | 4315 | } else { |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4316 | perf->ctx_oactxctrl_offset = 0x128; |
| 4317 | perf->ctx_flexeu0_offset = 0x3de; |
Lionel Landwerlin | ba6b7c1 | 2017-11-10 19:08:41 +0000 | [diff] [blame] | 4318 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4319 | perf->gen8_valid_ctx_bit = BIT(16); |
Lionel Landwerlin | ba6b7c1 | 2017-11-10 19:08:41 +0000 | [diff] [blame] | 4320 | } |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4321 | } else if (IS_GEN_RANGE(i915, 10, 11)) { |
Lionel Landwerlin | 00a7f0d | 2019-10-25 12:37:46 -0700 | [diff] [blame] | 4322 | perf->oa_formats = gen8_plus_oa_formats; |
| 4323 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4324 | perf->ops.is_valid_b_counter_reg = |
Lionel Landwerlin | 95690a0 | 2017-11-10 19:08:43 +0000 | [diff] [blame] | 4325 | gen7_is_valid_b_counter_addr; |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4326 | perf->ops.is_valid_mux_reg = |
Lionel Landwerlin | 95690a0 | 2017-11-10 19:08:43 +0000 | [diff] [blame] | 4327 | gen10_is_valid_mux_addr; |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4328 | perf->ops.is_valid_flex_reg = |
Lionel Landwerlin | 95690a0 | 2017-11-10 19:08:43 +0000 | [diff] [blame] | 4329 | gen8_is_valid_flex_addr; |
| 4330 | |
Lionel Landwerlin | 00a7f0d | 2019-10-25 12:37:46 -0700 | [diff] [blame] | 4331 | perf->ops.oa_enable = gen8_oa_enable; |
| 4332 | perf->ops.oa_disable = gen8_oa_disable; |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4333 | perf->ops.enable_metric_set = gen8_enable_metric_set; |
| 4334 | perf->ops.disable_metric_set = gen10_disable_metric_set; |
Lionel Landwerlin | 00a7f0d | 2019-10-25 12:37:46 -0700 | [diff] [blame] | 4335 | perf->ops.oa_hw_tail_read = gen8_oa_hw_tail_read; |
Lionel Landwerlin | 95690a0 | 2017-11-10 19:08:43 +0000 | [diff] [blame] | 4336 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4337 | if (IS_GEN(i915, 10)) { |
| 4338 | perf->ctx_oactxctrl_offset = 0x128; |
| 4339 | perf->ctx_flexeu0_offset = 0x3de; |
Lionel Landwerlin | 8dcfdfb | 2019-06-10 11:19:14 +0300 | [diff] [blame] | 4340 | } else { |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4341 | perf->ctx_oactxctrl_offset = 0x124; |
| 4342 | perf->ctx_flexeu0_offset = 0x78e; |
Lionel Landwerlin | 8dcfdfb | 2019-06-10 11:19:14 +0300 | [diff] [blame] | 4343 | } |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4344 | perf->gen8_valid_ctx_bit = BIT(16); |
Lionel Landwerlin | 00a7f0d | 2019-10-25 12:37:46 -0700 | [diff] [blame] | 4345 | } else if (IS_GEN(i915, 12)) { |
| 4346 | perf->oa_formats = gen12_oa_formats; |
| 4347 | |
| 4348 | perf->ops.is_valid_b_counter_reg = |
| 4349 | gen12_is_valid_b_counter_addr; |
| 4350 | perf->ops.is_valid_mux_reg = |
| 4351 | gen12_is_valid_mux_addr; |
| 4352 | perf->ops.is_valid_flex_reg = |
| 4353 | gen8_is_valid_flex_addr; |
| 4354 | |
| 4355 | perf->ops.oa_enable = gen12_oa_enable; |
| 4356 | perf->ops.oa_disable = gen12_oa_disable; |
| 4357 | perf->ops.enable_metric_set = gen12_enable_metric_set; |
| 4358 | perf->ops.disable_metric_set = gen12_disable_metric_set; |
| 4359 | perf->ops.oa_hw_tail_read = gen12_oa_hw_tail_read; |
| 4360 | |
| 4361 | perf->ctx_flexeu0_offset = 0; |
| 4362 | perf->ctx_oactxctrl_offset = 0x144; |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 4363 | } |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 4364 | } |
| 4365 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4366 | if (perf->ops.enable_metric_set) { |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4367 | mutex_init(&perf->lock); |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 4368 | |
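		/*
		 * For scale (illustrative figure, not tied to any one
		 * part): the limit below works out to half the CS
		 * timestamp frequency in Hz, e.g. a 12 MHz timestamp
		 * clock (12000 kHz) caps OA sampling at 6 MHz.
		 */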
Lionel Landwerlin | 9f9b279 | 2017-10-27 15:59:31 +0100 | [diff] [blame] | 4369 | oa_sample_rate_hard_limit = 1000 * |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4370 | (RUNTIME_INFO(i915)->cs_timestamp_frequency_khz / 2); |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 4371 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4372 | mutex_init(&perf->metrics_lock); |
| 4373 | idr_init(&perf->metrics_idr); |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 4374 | |
		/* We set up some ratelimit state to potentially throttle any
		 * _NOTEs about spurious, invalid OA reports which we don't
		 * forward to userspace.
		 *
		 * We print a _NOTE about any throttling when closing the
		 * stream instead of waiting until driver _fini, which no one
		 * would ever see.
		 *
		 * Using the same limiting factors as printk_ratelimit().
		 */
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4385 | ratelimit_state_init(&perf->spurious_report_rs, 5 * HZ, 10); |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 4386 | /* Since we use a DRM_NOTE for spurious reports it would be |
| 4387 | * inconsistent to let __ratelimit() automatically print a |
| 4388 | * warning for throttling. |
| 4389 | */ |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4390 | ratelimit_set_flags(&perf->spurious_report_rs, |
Umesh Nerlige Ramappa | a37f08a | 2019-08-06 16:30:02 -0700 | [diff] [blame] | 4391 | RATELIMIT_MSG_ON_RELEASE); |
| 4392 | |
Lionel Landwerlin | daed3e4 | 2019-10-12 08:23:07 +0100 | [diff] [blame] | 4393 | atomic64_set(&perf->noa_programming_delay, |
| 4394 | 500 * 1000 /* 500us */); |
| 4395 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4396 | perf->i915 = i915; |
Robert Bragg | 19f81df | 2017-06-13 12:23:03 +0100 | [diff] [blame] | 4397 | } |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 4398 | } |
| 4399 | |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 4400 | static int destroy_config(int id, void *p, void *data) |
| 4401 | { |
Lionel Landwerlin | 6a45008 | 2019-10-12 08:23:06 +0100 | [diff] [blame] | 4402 | i915_oa_config_put(p); |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 4403 | return 0; |
| 4404 | } |
| 4405 | |
Venkata Sandeep Dhanalakota | 3dc716fd | 2019-12-13 07:51:51 -0800 | [diff] [blame] | 4406 | void i915_perf_sysctl_register(void) |
| 4407 | { |
| 4408 | sysctl_header = register_sysctl_table(dev_root); |
| 4409 | } |
| 4410 | |
| 4411 | void i915_perf_sysctl_unregister(void) |
| 4412 | { |
| 4413 | unregister_sysctl_table(sysctl_header); |
| 4414 | } |
| 4415 | |
Robert Bragg | 16d98b3 | 2016-12-07 21:40:33 +0000 | [diff] [blame] | 4416 | /** |
| 4417 | * i915_perf_fini - Counter part to i915_perf_init() |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4418 | * @i915: i915 device instance |
Robert Bragg | 16d98b3 | 2016-12-07 21:40:33 +0000 | [diff] [blame] | 4419 | */ |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4420 | void i915_perf_fini(struct drm_i915_private *i915) |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 4421 | { |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4422 | struct i915_perf *perf = &i915->perf; |
| 4423 | |
| 4424 | if (!perf->i915) |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 4425 | return; |
| 4426 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4427 | idr_for_each(&perf->metrics_idr, destroy_config, perf); |
| 4428 | idr_destroy(&perf->metrics_idr); |
Lionel Landwerlin | f89823c | 2017-08-03 18:05:50 +0100 | [diff] [blame] | 4429 | |
Chris Wilson | 8f8b117 | 2019-10-07 22:09:41 +0100 | [diff] [blame] | 4430 | memset(&perf->ops, 0, sizeof(perf->ops)); |
| 4431 | perf->i915 = NULL; |
Robert Bragg | eec688e | 2016-11-07 19:49:47 +0000 | [diff] [blame] | 4432 | } |
Lionel Landwerlin | daed3e4 | 2019-10-12 08:23:07 +0100 | [diff] [blame] | 4433 | |
Lionel Landwerlin | b8d49f2 | 2019-10-14 21:14:01 +0100 | [diff] [blame] | 4434 | /** |
| 4435 | * i915_perf_ioctl_version - Version of the i915-perf subsystem |
| 4436 | * |
| 4437 | * This version number is used by userspace to detect available features. |
| 4438 | */ |
| 4439 | int i915_perf_ioctl_version(void) |
| 4440 | { |
Chris Wilson | 7831e9a | 2019-10-14 21:14:03 +0100 | [diff] [blame] | 4441 | /* |
| 4442 | * 1: Initial version |
| 4443 | * I915_PERF_IOCTL_ENABLE |
| 4444 | * I915_PERF_IOCTL_DISABLE |
| 4445 | * |
| 4446 | * 2: Added runtime modification of OA config. |
| 4447 | * I915_PERF_IOCTL_CONFIG |
Lionel Landwerlin | 9cd20ef | 2019-10-14 21:14:04 +0100 | [diff] [blame] | 4448 | * |
| 4449 | * 3: Add DRM_I915_PERF_PROP_HOLD_PREEMPTION parameter to hold |
| 4450 | * preemption on a particular context so that performance data is |
| 4451 | * accessible from a delta of MI_RPC reports without looking at the |
| 4452 | * OA buffer. |
Lionel Landwerlin | 11ecbdd | 2020-03-17 15:22:22 +0200 | [diff] [blame^] | 4453 | * |
| 4454 | * 4: Add DRM_I915_PERF_PROP_ALLOWED_SSEU to limit what contexts can |
| 4455 | * be run for the duration of the performance recording based on |
| 4456 | * their SSEU configuration. |
Chris Wilson | 7831e9a | 2019-10-14 21:14:03 +0100 | [diff] [blame] | 4457 | */ |
Lionel Landwerlin | 11ecbdd | 2020-03-17 15:22:22 +0200 | [diff] [blame^] | 4458 | return 4; |
Lionel Landwerlin | b8d49f2 | 2019-10-14 21:14:01 +0100 | [diff] [blame] | 4459 | } |
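
/*
 * Userspace typically discovers this revision through getparam; a sketch
 * assuming the I915_PARAM_PERF_REVISION parameter from the uapi header
 * (error handling omitted):
 *
 *	#include <sys/ioctl.h>
 *	#include <drm/i915_drm.h>
 *
 *	int value = 0;
 *	struct drm_i915_getparam gp = {
 *		.param = I915_PARAM_PERF_REVISION,
 *		.value = &value,
 *	};
 *
 *	ioctl(drm_fd, DRM_IOCTL_I915_GETPARAM, &gp);
 */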
| 4460 | |
Lionel Landwerlin | daed3e4 | 2019-10-12 08:23:07 +0100 | [diff] [blame] | 4461 | #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST) |
| 4462 | #include "selftests/i915_perf.c" |
| 4463 | #endif |