.. SPDX-License-Identifier: GPL-2.0

=====================================
Generic System Interconnect Subsystem
=====================================
| 6 | |
| 7 | Introduction |
| 8 | ------------ |
| 9 | |
| 10 | This framework is designed to provide a standard kernel interface to control |
| 11 | the settings of the interconnects on an SoC. These settings can be throughput, |
| 12 | latency and priority between multiple interconnected devices or functional |
| 13 | blocks. This can be controlled dynamically in order to save power or provide |
| 14 | maximum performance. |

An interconnect bus is hardware with configurable parameters, which can be
set on a data path according to the requests received from various drivers.
Examples of interconnect buses are the interconnects between various
components or functional blocks in chipsets. There can be multiple
interconnects on an SoC, and they can be multi-tiered.

Below is a simplified diagram of a real-world SoC interconnect bus topology.

::

 +----------------+    +----------------+
 | HW Accelerator |--->|      M NoC     |<---------------+
 +----------------+    +----------------+                |
                         |      |                    +------------+
  +-----+  +-------------+      V       +------+     |            |
  | DDR |  |                +--------+  | PCIe |     |            |
  +-----+  |                | Slaves |  +------+     |            |
    ^ ^    |                +--------+      |        |   C NoC    |
    | |    V                                V        |            |
 +------------------+   +------------------------+   |            |   +-----+
 |                  |-->|                        |-->|            |-->| CPU |
 |                  |-->|                        |<--|            |   +-----+
 |     Mem NoC      |   |         S NoC          |   +------------+
 |                  |<--|                        |---------+    |
 |                  |<--|                        |<------+ |    |   +--------+
 +------------------+   +------------------------+       | |    +-->| Slaves |
   ^  ^    ^    ^          ^                              | |        +--------+
   |  |    |    |          |                              | V
 +------+  |  +-----+   +-----+  +---------+   +----------------+   +--------+
 | CPUs |  |  | GPU |   | DSP |  | Masters |-->|       P NoC    |-->| Slaves |
 +------+  |  +-----+   +-----+  +---------+   +----------------+   +--------+
           |
       +-------+
       | Modem |
       +-------+

Terminology
-----------

An interconnect provider is the software definition of the interconnect
hardware. The interconnect providers in the diagram above are M NoC, S NoC,
C NoC, P NoC and Mem NoC.

An interconnect node is the software definition of an interconnect hardware
port. Each interconnect provider consists of multiple interconnect nodes,
which are connected to other SoC components, including other interconnect
providers. For example, the point in the diagram where the CPUs connect to
the memory is an interconnect node, which belongs to the Mem NoC
interconnect provider.

Interconnect endpoints are the first and last elements of a path. Every
endpoint is a node, but not every node is an endpoint.

An interconnect path is everything between two endpoints, including all the
nodes that have to be traversed to get from the source node to the
destination node. It may include multiple master-slave pairs across several
interconnect providers.
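
For example, the framework can look up such a path on behalf of a consumer
and return a handle to it. Below is a minimal sketch, assuming hypothetical,
platform-specific endpoint IDs ``MASTER_CPU`` and ``SLAVE_DDR``::

  #include <linux/interconnect.h>

  static struct icc_path *get_cpu_to_ddr_path(struct device *dev)
  {
          /*
           * The framework resolves the whole path between the two
           * endpoint IDs, including all intermediate nodes, possibly
           * spanning several interconnect providers.
           */
          return icc_get(dev, MASTER_CPU, SLAVE_DDR);
  }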

Interconnect consumers are the entities which make use of the data paths
exposed by the providers. The consumers send requests to providers,
specifying the throughput, latency and priority they need. Usually the
consumers are device drivers that send requests based on their needs. An
example of a consumer is a video decoder that supports various formats and
image sizes.

Interconnect providers
----------------------

An interconnect provider is an entity that implements methods to initialize
and configure the interconnect bus hardware. Interconnect provider drivers
should be registered with the interconnect provider core.

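Below is a minimal sketch of how a provider driver might register itself and
describe a two-node topology. The ``my_`` names and the node IDs
``MY_MASTER_ID`` and ``MY_SLAVE_ID`` are hypothetical; a real driver would
program its specific hardware in the ``set`` callback and would also unwind
with icc_provider_del() on errors, which is omitted here for brevity::

  #include <linux/interconnect-provider.h>
  #include <linux/platform_device.h>

  /* Hypothetical callback: apply the aggregated constraints to the HW. */
  static int my_set(struct icc_node *src, struct icc_node *dst)
  {
          return 0;
  }

  static int my_probe(struct platform_device *pdev)
  {
          struct icc_provider *provider;
          struct icc_node *master, *slave;
          int ret;

          provider = devm_kzalloc(&pdev->dev, sizeof(*provider), GFP_KERNEL);
          if (!provider)
                  return -ENOMEM;

          provider->dev = &pdev->dev;
          provider->set = my_set;
          provider->aggregate = icc_std_aggregate;
          /*
           * A driver that supports device tree lookup would also set
           * provider->xlate (e.g. of_icc_xlate_onecell) and provider->data.
           */

          ret = icc_provider_add(provider);
          if (ret)
                  return ret;

          /* Create two nodes and a link between them: master -> slave. */
          master = icc_node_create(MY_MASTER_ID);
          if (IS_ERR(master))
                  return PTR_ERR(master);
          icc_node_add(master, provider);

          slave = icc_node_create(MY_SLAVE_ID);
          if (IS_ERR(slave))
                  return PTR_ERR(slave);
          icc_node_add(slave, provider);

          return icc_link_create(master, MY_SLAVE_ID);
  }
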
.. kernel-doc:: include/linux/interconnect-provider.h

Interconnect consumers
----------------------

Interconnect consumers are the clients which use the interconnect APIs to
get paths between endpoints and set their bandwidth/latency/QoS requirements
for these interconnect paths. These interfaces are not currently
documented.
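
As a rough illustration of how the consumer API fits together, here is a
hedged sketch. The path name ``"video-mem"`` and the bandwidth values are
illustrative assumptions, not part of any real binding::

  #include <linux/interconnect.h>

  static int my_consumer_probe(struct device *dev)
  {
          struct icc_path *path;
          int ret;

          /* Look up a path by the name listed in "interconnect-names". */
          path = of_icc_get(dev, "video-mem");
          if (IS_ERR(path))
                  return PTR_ERR(path);

          /* Request 100 MB/s average and 200 MB/s peak bandwidth. */
          ret = icc_set_bw(path, MBps_to_icc(100), MBps_to_icc(200));
          if (ret) {
                  icc_put(path);
                  return ret;
          }

          /* ... perform transfers over the interconnect ... */

          /* Drop the bandwidth request and release the path handle. */
          icc_set_bw(path, 0, 0);
          icc_put(path);

          return 0;
  }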

Interconnect debugfs interfaces
-------------------------------

Like several other subsystems, interconnect will create some files for
debugging and introspection. Files in debugfs are not considered ABI, so
application software shouldn't rely on them; format details can change
between kernel versions.

``/sys/kernel/debug/interconnect/interconnect_summary``:

Show all interconnect nodes in the system with their aggregated bandwidth
requests. Indented under each node are the bandwidth requests from each
consumer device.

``/sys/kernel/debug/interconnect/interconnect_graph``:

Show the interconnect graph in the graphviz dot format. It shows all
interconnect nodes and links in the system and groups together nodes from
the same provider as subgraphs. The format is human-readable and can also
be piped through dot to generate diagrams in many graphical formats::

 $ cat /sys/kernel/debug/interconnect/interconnect_graph | \
        dot -Tsvg > interconnect_graph.svg