Merge "drm/msm/dsi-staging: use usleep for wait during command transfer to panel"
diff --git a/Documentation/arm/msm/rpm.txt b/Documentation/arm/msm/rpm.txt
new file mode 100644
index 0000000..d5be6a7
--- /dev/null
+++ b/Documentation/arm/msm/rpm.txt
@@ -0,0 +1,157 @@
+Introduction
+============
+
+Resource Power Manager (RPM)
+
+RPM is a dedicated hardware engine for managing shared SoC resources,
+which includes buses, clocks, power rails, etc. The goal of RPM is
+to achieve the maximum power savings while satisfying the SoC's
+operational and performance requirements. RPM accepts resource
+requests from multiple RPM masters. It arbitrates and aggregates the
+requests, and configures the shared resources. The RPM masters are
+the application processor, the modem processor, as well as some
+hardware accelerators.
+
+The RPM driver provides an API for interacting with RPM. Kernel code
+calls the RPM driver to request RPM-managed, shared resources.
+Kernel code can also register with the driver for RPM notifications,
+which are sent when the status of shared resources changes.
+
+Hardware description
+====================
+
+RPM exposes a separate region of registers to each of the RPM masters.
+In general, each register represents some shared resource(s). At a
+very basic level, a master requests resources by writing to the
+registers, then generating an interrupt to RPM. RPM processes the
+request, writes acknowledgment to the registers, then generates an
+interrupt to the master.
+
+In addition to the master-specific regions, RPM also exposes a shared
+region that contains the current status of the shared resources. Only
+RPM can write to the status region, but every master can read from it.
+
+RPM contains internal logic that aggregates and arbitrates among
+requests from the various RPM masters. It interfaces with the PMIC,
+the bus arbitration block, and the clock controller block in order to
+configure the shared resources.
+
+Software description
+====================
+
+The RPM driver encapsulates the low level RPM interactions, which
+rely on reading/writing registers and generating/processing
+interrupts, and provides a higher level synchronous set/clear/get
+interface. Most functions take an array of id-value pairs.
+The ids identify the RPM registers that correspond to
+RPM resources; the values specify the new resource values.
+
+The RPM driver synchronizes accesses to RPM. It protects against
+simultaneous accesses from multiple tasks, on SMP cores, in task
+contexts, and in atomic contexts.
+
+Design
+======
+
+Design goals:
+- Encapsulate low level RPM interactions.
+- Provide a synchronous set/clear/get interface.
+- Synchronize simultaneous software accesses to RPM.
+
+Power Management
+================
+
+RPM is part of the power management architecture for MSM 8660. RPM
+manages shared system resources to lower system power.
+
+SMP/multi-core
+==============
+
+The RPM driver uses a mutex to synchronize client accesses among tasks.
+It uses spinlocks to synchronize accesses from atomic contexts and
+SMP cores.
+
+Security
+========
+
+None.
+
+Performance
+===========
+
+None.
+
+Interface
+=========
+
+msm_rpm_get_status():
+The function reads the shared status region and returns the current
+resource values, which are the arbitrated/aggregated results across
+all RPM masters.
+
+msm_rpm_set():
+The function makes a resource request to RPM.
+
+msm_rpm_set_noirq():
+The function is similar to msm_rpm_set() except that it must be
+called with interrupts masked. If possible, use msm_rpm_set()
+instead, to maximize CPU throughput.
+
+msm_rpm_clear():
+The function makes a resource request to RPM to clear resource values.
+Once the values are cleared, the resources revert back to their default
+values for this RPM master. RPM internally uses the default values as
+the requests from this RPM master when arbitrating and aggregating with
+requests from other RPM masters.
+
+msm_rpm_clear_noirq():
+The function is similar to msm_rpm_clear() except that it must be
+called with interrupts masked. If possible, use msm_rpm_clear()
+instead, to maximize CPU throughput.
+
+msm_rpm_register_notification():
+The function registers for RPM notification. When the specified
+resources change their status on RPM, RPM sends out notifications
+and the driver will "up" the semaphore in struct
+msm_rpm_notification.
+
+msm_rpm_unregister_notification():
+The function unregisters a notification.
+
+msm_rpm_init():
+The function initializes the RPM driver with platform specific data.
+
+Driver parameters
+=================
+
+None.
+
+Config options
+==============
+
+MSM_RPM
+
+Dependencies
+============
+
+None.
+
+User space utilities
+====================
+
+None.
+
+Other
+=====
+
+None.
+
+Known issues
+============
+
+None.
+
+To do
+=====
+
+None.
diff --git a/Documentation/devicetree/bindings/arm/arch_timer.txt b/Documentation/devicetree/bindings/arm/arch_timer.txt
index ad440a2..e926aea 100644
--- a/Documentation/devicetree/bindings/arm/arch_timer.txt
+++ b/Documentation/devicetree/bindings/arm/arch_timer.txt
@@ -31,6 +31,12 @@
This also affects writes to the tval register, due to the implicit
counter read.
+- hisilicon,erratum-161010101 : A boolean property. Indicates the
+ presence of Hisilicon erratum 161010101, which says that reading the
+ counters is unreliable in some cases, and reads may return a value 32
+  beyond the correct value. This also affects writes to the tval
+  register, due to the implicit counter read.
+
** Optional properties:
- arm,cpu-registers-not-fw-configured : Firmware does not initialize
diff --git a/Documentation/devicetree/bindings/arm/msm/msm.txt b/Documentation/devicetree/bindings/arm/msm/msm.txt
index 61226c9..b3d4d44 100644
--- a/Documentation/devicetree/bindings/arm/msm/msm.txt
+++ b/Documentation/devicetree/bindings/arm/msm/msm.txt
@@ -172,6 +172,9 @@
- HDK device:
compatible = "qcom,hdk"
+- IPC device:
+ compatible = "qcom,ipc"
+
Boards (SoC type + board variant):
compatible = "qcom,apq8016"
@@ -201,6 +204,7 @@
compatible = "qcom,apq8017-mtp"
compatible = "qcom,apq8053-cdp"
compatible = "qcom,apq8053-mtp"
+compatible = "qcom,apq8053-ipc"
compatible = "qcom,mdm9630-cdp"
compatible = "qcom,mdm9630-mtp"
compatible = "qcom,mdm9630-sim"
@@ -311,6 +315,7 @@
compatible = "qcom,msm8953-sim"
compatible = "qcom,msm8953-cdp"
compatible = "qcom,msm8953-mtp"
+compatible = "qcom,msm8953-ipc"
compatible = "qcom,msm8953-qrd"
compatible = "qcom,msm8953-qrd-sku3"
compatible = "qcom,sdm450-mtp"
diff --git a/Documentation/devicetree/bindings/arm/msm/qcom,osm.txt b/Documentation/devicetree/bindings/arm/msm/qcom,osm.txt
index bce983a..7496f4d 100644
--- a/Documentation/devicetree/bindings/arm/msm/qcom,osm.txt
+++ b/Documentation/devicetree/bindings/arm/msm/qcom,osm.txt
@@ -21,10 +21,27 @@
Usage: required
Value type: <stringlist>
Definition: Address names. Must be "osm_l3_base", "osm_pwrcl_base",
- "osm_perfcl_base".
+ "osm_perfcl_base", and "cpr_rc".
Must be specified in the same order as the corresponding
addresses are specified in the reg property.
+- vdd_l3_mx_ao-supply
+ Usage: required
+ Value type: <phandle>
+ Definition: Phandle to the MX active-only regulator device.
+
+- vdd_pwrcl_mx_ao-supply
+ Usage: required
+ Value type: <phandle>
+ Definition: Phandle to the MX active-only regulator device.
+
+- qcom,mx-turbo-freq
+ Usage: required
+ Value type: <array>
+ Definition: List of frequencies for the 3 clock domains (following the
+ order of L3, power, and performance clusters) that denote
+ the lowest rate that requires a TURBO vote on the MX rail.
+
- l3-devs
Usage: optional
Value type: <phandle>
@@ -46,10 +63,15 @@
compatible = "qcom,clk-cpu-osm";
reg = <0x17d41000 0x1400>,
<0x17d43000 0x1400>,
- <0x17d45800 0x1400>;
- reg-names = "osm_l3_base", "osm_pwrcl_base", "osm_perfcl_base";
+ <0x17d45800 0x1400>,
+ <0x784248 0x4>;
+ reg-names = "osm_l3_base", "osm_pwrcl_base", "osm_perfcl_base",
+ "cpr_rc";
+ vdd_l3_mx_ao-supply = <&pm8998_s6_level_ao>;
+ vdd_pwrcl_mx_ao-supply = <&pm8998_s6_level_ao>;
- l3-devs = <&phandle0 &phandle1 &phandle2>;
+ qcom,mx-turbo-freq = <1478400000 1689600000 3300000001>;
+ l3-devs = <&l3_cpu0 &l3_cpu4 &l3_cdsp>;
clock-names = "xo_ao";
clocks = <&clock_rpmh RPMH_CXO_CLK_A>;
diff --git a/Documentation/devicetree/bindings/arm/msm/qdss_mhi.txt b/Documentation/devicetree/bindings/arm/msm/qdss_mhi.txt
new file mode 100644
index 0000000..928a4f4
--- /dev/null
+++ b/Documentation/devicetree/bindings/arm/msm/qdss_mhi.txt
@@ -0,0 +1,15 @@
+Qualcomm Technologies, Inc. QDSS bridge Driver
+
+This device enables routing of debug data from the modem
+subsystem to the APSS host.
+
+Required properties:
+-compatible : "qcom,qdss-mhi".
+-qcom,mhi : phandle of MHI Device to connect to.
+
+Example:
+ qcom,qdss-mhi {
+ compatible = "qcom,qdss-mhi";
+ qcom,mhi = <&mhi_0>;
+ };
+
diff --git a/Documentation/devicetree/bindings/arm/msm/rpm-smd.txt b/Documentation/devicetree/bindings/arm/msm/rpm-smd.txt
new file mode 100644
index 0000000..4cba3ec
--- /dev/null
+++ b/Documentation/devicetree/bindings/arm/msm/rpm-smd.txt
@@ -0,0 +1,41 @@
+Resource Power Manager(RPM)
+
+RPM is a dedicated hardware engine for managing shared SoC resources,
+which includes buses, clocks, power rails, etc. The goal of RPM is
+to achieve the maximum power savings while satisfying the SoC's
+operational and performance requirements. RPM accepts resource
+requests from multiple RPM masters. It arbitrates and aggregates the
+requests, and configures the shared resources. The RPM masters are
+the application processor, the modem processor, as well as hardware
+accelerators. The RPM driver communicates with the hardware engine using
+SMD.
+
+The devicetree representation of the RPM block should be:
+
+Required properties
+
+- compatible: "qcom,rpm-smd" or "qcom,rpm-glink"
+- rpm-channel-name: The string corresponding to the channel name of the
+ peripheral subsystem. Required for both smd and
+ glink transports.
+- rpm-channel-type: The internal SMD edge for this subsystem found in
+ <soc/qcom/smd.h>
+- qcom,glink-edge: Logical name of the remote subsystem. This is a required
+  property when the rpm-smd driver uses glink as the transport.
+
+Optional properties
+- rpm-standalone: Allow RPM driver to run in standalone mode irrespective of RPM
+ channel presence.
+- reg: Contains the memory address at which rpm messaging format version is
+ stored. If this field is not present, the target only supports v0 format.
+
+Example:
+
+ qcom,rpm-smd@68150 {
+ compatible = "qcom,rpm-smd", "qcom,rpm-glink";
+ reg = <0x68150 0x3200>;
+ qcom,rpm-channel-name = "rpm_requests";
+ qcom,rpm-channel-type = 15; /* SMD_APPS_RPM */
+ qcom,glink-edge = "rpm";
+	};
diff --git a/Documentation/devicetree/bindings/arm/msm/smdpkt.txt b/Documentation/devicetree/bindings/arm/msm/smdpkt.txt
new file mode 100644
index 0000000..be9084b
--- /dev/null
+++ b/Documentation/devicetree/bindings/arm/msm/smdpkt.txt
@@ -0,0 +1,43 @@
+Qualcomm Technologies, Inc Shared Memory Packet Driver (smdpkt)
+
+[Root level node]
+Required properties:
+-compatible : should be "qcom,smdpkt"
+
+[Second level nodes]
+qcom,smdpkt-port-names
+Required properties:
+-qcom,smdpkt-remote : the remote subsystem name
+-qcom,smdpkt-port-name : the smd channel name
+-qcom,smdpkt-dev-name : the smdpkt device name
+
+Example:
+
+ qcom,smdpkt {
+ compatible = "qcom,smdpkt";
+
+ qcom,smdpkt-data5-cntl {
+ qcom,smdpkt-remote = "modem";
+ qcom,smdpkt-port-name = "DATA5_CNTL";
+ qcom,smdpkt-dev-name = "smdcntl0";
+ };
+
+ qcom,smdpkt-data6-cntl {
+ qcom,smdpkt-remote = "modem";
+ qcom,smdpkt-port-name = "DATA6_CNTL";
+ qcom,smdpkt-dev-name = "smdcntl1";
+ };
+
+ qcom,smdpkt-cxm-qmi-port-8064 {
+ qcom,smdpkt-remote = "wcnss";
+ qcom,smdpkt-port-name = "CXM_QMI_PORT_8064";
+ qcom,smdpkt-dev-name = "smd_cxm_qmi";
+ };
+
+ qcom,smdpkt-loopback {
+ qcom,smdpkt-remote = "modem";
+ qcom,smdpkt-port-name = "LOOPBACK";
+ qcom,smdpkt-dev-name = "smd_pkt_loopback";
+ };
+ };
+
diff --git a/Documentation/devicetree/bindings/arm/msm/smdtty.txt b/Documentation/devicetree/bindings/arm/msm/smdtty.txt
new file mode 100644
index 0000000..a445c60
--- /dev/null
+++ b/Documentation/devicetree/bindings/arm/msm/smdtty.txt
@@ -0,0 +1,40 @@
+Qualcomm Technologies, Inc Shared Memory TTY Driver (smdtty)
+
+[Root level node]
+Required properties:
+-compatible : should be "qcom,smdtty"
+
+[Second level nodes]
+qcom,smdtty-port-names
+Required properties:
+-qcom,smdtty-remote: the remote subsystem name
+-qcom,smdtty-port-name : the smd channel name
+
+Optional properties:
+-qcom,smdtty-dev-name : the smdtty device name
+
+Required alias:
+- The index into TTY subsystem is specified via an alias with the following format
+ 'smd{n}' where n is the tty device index.
+
+Example:
+ aliases {
+ smd1 = &smdtty_apps_fm;
+ smd36 = &smdtty_loopback;
+ };
+
+ qcom,smdtty {
+ compatible = "qcom,smdtty";
+
+ smdtty_apps_fm: qcom,smdtty-apps-fm {
+ qcom,smdtty-remote = "wcnss";
+ qcom,smdtty-port-name = "APPS_FM";
+ };
+
+ smdtty_loopback: smdtty-loopback {
+ qcom,smdtty-remote = "modem";
+ qcom,smdtty-port-name = "LOOPBACK";
+ qcom,smdtty-dev-name = "LOOPBACK_TTY";
+ };
+ };
+
diff --git a/Documentation/devicetree/bindings/clock/qcom,a7-cpucc.txt b/Documentation/devicetree/bindings/clock/qcom,a7-cpucc.txt
new file mode 100644
index 0000000..2782b9c
--- /dev/null
+++ b/Documentation/devicetree/bindings/clock/qcom,a7-cpucc.txt
@@ -0,0 +1,48 @@
+Qualcomm Application A7 CPU clock driver
+-------------------------------------
+
+This is the clock controller driver that provides higher frequency
+clocks and allows A7 CPU frequency scaling on sdxpoorwills based platforms.
+
+Required properties:
+- compatible : shall contain only one of the following:
+ "qcom,cpu-sdxpoorwills",
+- clocks : Phandle to the clock device.
+- clock-names: Names of the used clocks.
+- qcom,a7cc-init-rate : Initial rate which needs to be set by the cpu driver.
+- reg : shall contain base register offset and size.
+- reg-names : Names of the bases for the above registers.
+- vdd_dig_ao-supply : The regulator powering the APSS PLL.
+- cpu-vdd-supply : The regulator powering the APSS RCG.
+- qcom,rcg-reg-offset : Register offset for APSS RCG.
+- qcom,speedX-bin-vZ : A table of CPU frequency (Hz) to regulator voltage (uV) mapping.
+ Format: <freq uV>
+ This represents the max frequency possible for each possible
+ power configuration for a CPU that's binned as speed bin X,
+ speed bin revision Z. Speed bin values can be between [0-7]
+ and the version can be between [0-3].
+- #clock-cells : shall contain 1.
+
+Optional properties :
+- reg-names: "efuse",
+
+Example:
+ clock_cpu: qcom,clock-a7@17808100 {
+ compatible = "qcom,cpu-sdxpoorwills";
+ clocks = <&clock_rpmh RPMH_CXO_CLK_A>;
+ clock-names = "xo_ao";
+ qcom,a7cc-init-rate = <1497600000>;
+ reg = <0x17808100 0x7F10>;
+ reg-names = "apcs_pll";
+
+ vdd_dig_ao-supply = <&pmxpoorwills_s5_level_ao>;
+ cpu-vdd-supply = <&pmxpoorwills_s5_level_ao>;
+ qcom,rcg-reg-offset = <0x7F08>;
+ qcom,speed0-bin-v0 =
+ < 0 RPMH_REGULATOR_LEVEL_OFF>,
+ < 345600000 RPMH_REGULATOR_LEVEL_LOW_SVS>,
+ < 576000000 RPMH_REGULATOR_LEVEL_SVS>,
+ < 1094400000 RPMH_REGULATOR_LEVEL_NOM>,
+ < 1497600000 RPMH_REGULATOR_LEVEL_TURBO>;
+ #clock-cells = <1>;
+ };
diff --git a/Documentation/devicetree/bindings/clock/qcom,gcc.txt b/Documentation/devicetree/bindings/clock/qcom,gcc.txt
index 78bb87a..7330db4 100644
--- a/Documentation/devicetree/bindings/clock/qcom,gcc.txt
+++ b/Documentation/devicetree/bindings/clock/qcom,gcc.txt
@@ -21,6 +21,7 @@
"qcom,gcc-sdm845-v2.1"
"qcom,gcc-sdm670"
"qcom,debugcc-sdm845"
+ "qcom,gcc-sdxpoorwills"
- reg : shall contain base register location and length
- #clock-cells : shall contain 1
diff --git a/Documentation/devicetree/bindings/clock/qcom,rpmh.txt b/Documentation/devicetree/bindings/clock/qcom,rpmh.txt
index 9ad7263..d57f61a 100644
--- a/Documentation/devicetree/bindings/clock/qcom,rpmh.txt
+++ b/Documentation/devicetree/bindings/clock/qcom,rpmh.txt
@@ -3,6 +3,7 @@
Required properties :
- compatible : shall contain "qcom,rpmh-clk-sdm845" or "qcom,rpmh-clk-sdm670"
+ or "qcom,rpmh-clk-sdxpoorwills"
- #clock-cells : must contain 1
- mboxes : list of RPMh mailbox phandle and channel identifier tuples.
diff --git a/Documentation/devicetree/bindings/clock/qoriq-clock.txt b/Documentation/devicetree/bindings/clock/qoriq-clock.txt
index 16a3ec4..1bd2c76 100644
--- a/Documentation/devicetree/bindings/clock/qoriq-clock.txt
+++ b/Documentation/devicetree/bindings/clock/qoriq-clock.txt
@@ -31,6 +31,7 @@
* "fsl,t4240-clockgen"
* "fsl,b4420-clockgen"
* "fsl,b4860-clockgen"
+ * "fsl,ls1012a-clockgen"
* "fsl,ls1021a-clockgen"
Chassis-version clock strings include:
* "fsl,qoriq-clockgen-1.0": for chassis 1.0 clocks
diff --git a/Documentation/devicetree/bindings/display/msm/dsi.txt b/Documentation/devicetree/bindings/display/msm/dsi.txt
index 3534f04..fc95288 100644
--- a/Documentation/devicetree/bindings/display/msm/dsi.txt
+++ b/Documentation/devicetree/bindings/display/msm/dsi.txt
@@ -121,6 +121,12 @@
If ping pong split is enabled, this time should not be higher
than two times the dsi link rate time.
If the property is not specified, then the default value is 14000 us.
+- qcom,panel-allow-phy-poweroff: A boolean property indicating that the panel allows the phy power
+                        supply to be turned off during idle screen. The panel should be able to
+                        handle the dsi lanes in a floating state (not LP00 or LP11) to enable
+                        this property. When enabled, software turns off the phy pmic power
+                        supply, phy ldo and DSI lane ldo during idle screen (footswitch
+                        control off).
+- qcom,dsi-phy-regulator-min-datarate-bps: Minimum per lane data rate (bps) to turn on PHY regulator.
[1] Documentation/devicetree/bindings/clocks/clock-bindings.txt
[2] Documentation/devicetree/bindings/graph.txt
@@ -229,4 +235,6 @@
vddio-supply = <&pma8084_l12>;
qcom,dsi-phy-regulator-ldo-mode;
+ qcom,panel-allow-phy-poweroff;
+ qcom,dsi-phy-regulator-min-datarate-bps = <1200000000>;
};
diff --git a/Documentation/devicetree/bindings/leds/leds-qpnp-flash.txt b/Documentation/devicetree/bindings/leds/leds-qpnp-flash.txt
new file mode 100644
index 0000000..a7a2eda
--- /dev/null
+++ b/Documentation/devicetree/bindings/leds/leds-qpnp-flash.txt
@@ -0,0 +1,180 @@
+Qualcomm Technologies Inc. PNP Flash LED
+
+QPNP (Qualcomm Technologies Inc. Plug N Play) Flash LED (Light
+Emitting Diode) driver is used to provide illumination to
+camera sensor when background light is dim to capture good
+picture. It can also be used for flashlight/torch application.
+It is part of PMIC on Qualcomm Technologies Inc. reference platforms.
+The PMIC is connected to the host processor via SPMI bus.
+
+Required properties:
+- compatible : should be "qcom,qpnp-flash-led"
+- reg : base address and size for flash LED modules
+
+Optional properties:
+- qcom,headroom : headroom to use. Values should be 250, 300,
+ 400 and 500 in mV.
+- qcom,startup-dly : delay before flashing after the flash is executed.
+			Values should be 10, 32, 64, and 128 in us.
+- qcom,clamp-curr : current to clamp at when voltage droop happens.
+ Values are in integer from 0 to 1000 inclusive,
+ indicating 0 to 1000 mA.
+- qcom,self-check-enabled : boolean type. self fault check enablement
+- qcom,thermal-derate-enabled : boolean type. derate enablement when module
+ temperature reaches threshold
+- qcom,thermal-derate-threshold : thermal threshold for derate. Values
+ should be 95, 105, 115, 125 in C.
+- qcom,thermal-derate-rate : derate rate when module temperature
+ reaches threshold. Values should be
+ "1_PERCENT", "1P25_PERCENT", "2_PERCENT",
+ "2P5_PERCENT", "5_PERCENT" in string.
+- qcom,current-ramp-enabled : boolean type. stepped current ramp enablement
+- qcom,ramp-up-step : current ramp up rate. Values should be
+ "0P2US", "0P4US", "0P8US", "1P6US", "3P3US",
+ "6P7US", "13P5US", "27US".
+- qcom,ramp-dn-step : current ramp down rate. Values should be
+ "0P2US", "0P4US", "0P8US", "1P6US", "3P3US",
+ "6P7US", "13P5US", "27US".
+- qcom,vph-pwr-droop-enabled : boolean type. VPH power droop enablement. Enablement
+ allows current clamp when phone power drops below
+ pre-determined threshold
+- qcom,vph-pwr-droop-threshold : VPH power threshold for module to clamp current.
+ Values are 2500 - 3200 in mV with 100 mV steps.
+- qcom,vph-pwr-droop-debounce-time : debounce time for module to confirm a voltage
+ droop is happening. Values are 0, 10, 32, 64
+ in us.
+- qcom,pmic-charger-support : Boolean type. This tells if flash utilizes charger boost
+ support
+- qcom,headroom-sense-ch0-enabled: Boolean type. This configures headroom sensing enablement
+ for LED channel 0
+- qcom,headroom-sense-ch1-enabled: Boolean type. This configures headroom sensing enablement
+ for LED channel 1
+- qcom,power-detect-enabled : Boolean type. This enables driver to get maximum flash LED
+ current at current battery level to avoid intensity clamp
+ when battery voltage is low
+- qcom,otst2-moduled-enabled : Boolean type. This enables the driver to enable MASK to
+				support the OTST2 connection.
+- qcom,follow-otst2-rb-disabled : Boolean type. This allows the driver to reset/de-reset
+				the module. By default, the driver resets the module. This
+				entry allows the driver to bypass the module reset sequence.
+- qcom,die-current-derate-enabled: Boolean type. This enables driver to get maximum flash LED
+ current, based on PMIC die temperature threshold to
+ avoid significant current derate from hardware. This property
+ is not needed if PMIC is older than PMI8994v2.0.
+- qcom,die-temp-vadc : VADC channel source for flash LED. This property is not
+ needed if PMIC is older than PMI8994v2.0.
+- qcom,die-temp-threshold : Integer type array for PMIC die temperature threshold.
+			Array should have at least one value. Values should be in
+			Celsius. This property is not needed if PMIC is older than
+			PMI8994v2.0.
+- qcom,die-temp-derate-current : Integer type array for PMIC die temperature derate
+ current. Array should have at least one value. Values
+ should be in mA. This property is not needed if PMIC is older
+ than PMI8994v2.0.
+
+Required properties inside child node. Child node contains settings for each individual LED.
+Each LED hardware needs a node for itself and a switch node to control brightness.
+For the purpose of turning on/off LED and better regulator control, "led:switch" node
+is introduced. "led:switch" acquires several existing properties from other nodes for
+operational simplification. For backward compatibility purpose, switch node can be optional:
+- label : type of led that will be used, either "flash" or "torch".
+- qcom,led-name : name of the LED. Accepted values are "led:flash_0",
+ "led:flash_1", "led:torch_0", "led:torch_1"
+- qcom,default-led-trigger : trigger for the camera flash and torch. Accepted values are
+			"flash0_trigger", "flash1_trigger", "torch0_trigger", "torch1_trigger"
+- qcom,id : enumerated ID for each physical LED. Accepted values are "0",
+ "1", etc..
+- qcom,max-current : maximum current allowed on this LED. Valid values should be
+ integer from 0 to 1000 inclusive, indicating 0 to 1000 mA.
+- qcom,pmic-revid : PMIC revision id source. This property is needed for PMI8996
+ revision check.
+
+Optional properties inside child node:
+- qcom,current : default current intensity for LED. Accepted values should be
+			integer from 0 to 1000 inclusive, indicating 0 to 1000 mA.
+- qcom,duration : Duration for flash LED. When duration time expires, hardware will turn off
+ flash LED. Values should be from 10 ms to 1280 ms with 10 ms incremental
+ step. Not applicable to torch. It is required for LED:SWITCH node to handle
+ LED used as flash.
+- reg<n> : reg<n> (<n> represents number. eg 0,1,2,..) property is to add support for
+ multiple power sources. It includes two properties regulator-name and max-voltage.
+ Required property inside regulator node:
+ - regulator-name : This denotes this node is a regulator node and which
+ regulator to use.
+ Optional property inside regulator node:
+ - max-voltage : This specifies max voltage of regulator. Some switch
+ or boost regulator does not need this property.
+
+Example:
+ qcom,leds@d300 {
+ compatible = "qcom,qpnp-flash-led";
+ status = "okay";
+ reg = <0xd300 0x100>;
+ label = "flash";
+ qcom,headroom = <500>;
+ qcom,startup-dly = <128>;
+ qcom,clamp-curr = <200>;
+ qcom,pmic-charger-support;
+ qcom,self-check-enabled;
+ qcom,thermal-derate-enabled;
+		qcom,thermal-derate-threshold = <95>;
+		qcom,thermal-derate-rate = "5_PERCENT";
+ qcom,current-ramp-enabled;
+		qcom,ramp-up-step = "27US";
+		qcom,ramp-dn-step = "27US";
+ qcom,vph-pwr-droop-enabled;
+ qcom,vph-pwr-droop-threshold = <3200>;
+ qcom,vph-pwr-droop-debounce-time = <10>;
+ qcom,headroom-sense-ch0-enabled;
+ qcom,headroom-sense-ch1-enabled;
+ qcom,die-current-derate-enabled;
+ qcom,die-temp-vadc = <&pmi8994_vadc>;
+ qcom,die-temp-threshold = <85 80 75 70 65>;
+ qcom,die-temp-derate-current = <400 800 1200 1600 2000>;
+ qcom,pmic-revid = <&pmi8994_revid>;
+
+ pm8226_flash0: qcom,flash_0 {
+ label = "flash";
+ qcom,led-name = "led:flash_0";
+ qcom,default-led-trigger =
+ "flash0_trigger";
+ qcom,max-current = <1000>;
+ qcom,id = <0>;
+ qcom,duration = <1280>;
+ qcom,current = <625>;
+ };
+
+ pm8226_torch: qcom,torch_0 {
+ label = "torch";
+ qcom,led-name = "led:torch_0";
+ qcom,default-led-trigger =
+ "torch0_trigger";
+ boost-supply = <&pm8226_chg_boost>;
+ qcom,max-current = <200>;
+ qcom,id = <0>;
+ qcom,current = <120>;
+ reg0 {
+ regulator-name =
+ "pm8226_chg_boost";
+ max-voltage = <3600000>;
+ };
+ };
+
+ pm8226_switch: qcom,switch {
+			label = "switch";
+ qcom,led-name = "led:switch";
+ qcom,default-led-trigger =
+ "switch_trigger";
+ qcom,id = <2>;
+ qcom,current = <625>;
+ qcom,duration = <1280>;
+ qcom,max-current = <1000>;
+ reg0 {
+ regulator-name =
+ "pm8226_chg_boost";
+ max-voltage = <3600000>;
+ };
+ };
+ };
+
diff --git a/Documentation/devicetree/bindings/media/video/msm-cam-cci.txt b/Documentation/devicetree/bindings/media/video/msm-cam-cci.txt
index 1127544..cd4d222 100644
--- a/Documentation/devicetree/bindings/media/video/msm-cam-cci.txt
+++ b/Documentation/devicetree/bindings/media/video/msm-cam-cci.txt
@@ -180,6 +180,9 @@
should contain phandle of respective ir-cut node
- qcom,special-support-sensors: if only some special sensors are supported
on this board, add sensor name in this property.
+- use-shared-clk : Boolean property. This property is required if the clock
+  is shared between different sensor and ois devices that need to be opened
+  together.
- clock-rates: clock rate in Hz.
- clock-cntl-level: specifies the different clock levels the node has.
- clock-cntl-support: Says whether clock control support is present or not
@@ -248,6 +251,9 @@
required from the regulators mentioned in the regulator-names property
(in the same order).
- cam_vaf-supply : should contain regulator from which ois voltage is supplied
+- use-shared-clk : Boolean property. This property is required if the clock
+  is shared between different sensor and ois devices that need to be opened
+  together.
Example:
@@ -354,8 +360,8 @@
status = "ok";
shared-gpios = <18 19>;
pinctrl-names = "cam_res_mgr_default", "cam_res_mgr_suspend";
- pinctrl-0 = <&cam_res_mgr_active>;
- pinctrl-1 = <&cam_res_mgr_suspend>;
+ pinctrl-0 = <&cam_shared_clk_active &cam_res_mgr_active>;
+ pinctrl-1 = <&cam_shared_clk_suspend &cam_res_mgr_suspend>;
};
qcom,cam-sensor@0 {
@@ -374,7 +380,7 @@
cam_vio-supply = <&pm845_lvs1>;
cam_vana-supply = <&pmi8998_bob>;
regulator-names = "cam_vdig", "cam_vio", "cam_vana";
- rgltr-cntrl-support;
+ rgltr-cntrl-support;
rgltr-min-voltage = <0 3312000 1352000>;
rgltr-max-voltage = <0 3312000 1352000>;
rgltr-load-current = <0 80000 105000>;
@@ -398,6 +404,7 @@
sensor-mode = <0>;
cci-master = <0>;
status = "ok";
+ use-shared-clk;
clocks = <&clock_mmss clk_mclk0_clk_src>,
<&clock_mmss clk_camss_mclk0_clk>;
clock-names = "cam_src_clk", "cam_clk";
diff --git a/Documentation/devicetree/bindings/media/video/msm-cam-lrme.txt b/Documentation/devicetree/bindings/media/video/msm-cam-lrme.txt
new file mode 100644
index 0000000..9a37922
--- /dev/null
+++ b/Documentation/devicetree/bindings/media/video/msm-cam-lrme.txt
@@ -0,0 +1,149 @@
+* Qualcomm Technologies, Inc. MSM Camera LRME
+
+The MSM camera Low Resolution Motion Estimation device provides dependency
+definitions for enabling Camera LRME HW. MSM camera LRME is implemented in
+multiple device nodes. The root LRME device node has properties defined to
+hint the driver about the LRME HW nodes available during the probe sequence.
+Each node has multiple properties defined for interrupts, clocks and
+regulators.
+
+=======================
+Required Node Structure
+=======================
+The LRME root interface node handles the LRME high level driver
+and controls the underlying LRME hardware present.
+
+- compatible
+ Usage: required
+ Value type: <string>
+ Definition: Should be "qcom,cam-lrme"
+
+- compat-hw-name
+ Usage: required
+ Value type: <string>
+ Definition: Should be "qcom,lrme"
+
+- num-lrme
+ Usage: required
+ Value type: <u32>
+ Definition: Number of supported LRME HW blocks
+
+Example:
+ qcom,cam-lrme {
+ compatible = "qcom,cam-lrme";
+ compat-hw-name = "qcom,lrme";
+ num-lrme = <1>;
+ };
+
+=======================
+Required Node Structure
+=======================
+The LRME node provides the Low Resolution Motion Estimation hardware
+driver with the device register map, interrupt map, clocks and regulators.
+
+- cell-index
+ Usage: required
+ Value type: <u32>
+ Definition: Node instance number
+
+- compatible
+ Usage: required
+ Value type: <string>
+ Definition: Should be "qcom,lrme"
+
+- reg-names
+ Usage: optional
+ Value type: <string>
+ Definition: Name of the register resources
+
+- reg
+ Usage: optional
+ Value type: <u32>
+ Definition: Register values
+
+- reg-cam-base
+ Usage: optional
+ Value type: <u32>
+	Definition: Offset of the register space relative to
+		    the camera base register space
+
+- interrupt-names
+ Usage: optional
+ Value type: <string>
+ Definition: Name of the interrupt
+
+- interrupts
+ Usage: optional
+ Value type: <u32>
+ Definition: Interrupt line associated with LRME HW
+
+- regulator-names
+ Usage: required
+ Value type: <string>
+ Definition: Name of the regulator resources for LRME HW
+
+- camss-supply
+ Usage: required
+ Value type: <phandle>
+ Definition: Regulator reference corresponding to the names listed
+ in "regulator-names"
+
+- clock-names
+ Usage: required
+ Value type: <string>
+ Definition: List of clock names required for LRME HW
+
+- clocks
+ Usage: required
+ Value type: <phandle>
+ Definition: List of clocks required for LRME HW
+
+- clock-rates
+ Usage: required
+ Value type: <u32>
+ Definition: List of clocks rates
+
+- clock-cntl-level
+ Usage: required
+ Value type: <string>
+	Definition: List of strings corresponding to the clock-rates levels
+ Supported strings: minsvs, lowsvs, svs, svs_l1, nominal, turbo
+
+- src-clock-name
+ Usage: required
+ Value type: <string>
+ Definition: Source clock name
+
+Examples:
+ cam_lrme: qcom,lrme@ac6b000 {
+ cell-index = <0>;
+ compatible = "qcom,lrme";
+ reg-names = "lrme";
+ reg = <0xac6b000 0xa00>;
+ reg-cam-base = <0x6b000>;
+ interrupt-names = "lrme";
+ interrupts = <0 476 0>;
+ regulator-names = "camss";
+ camss-supply = <&titan_top_gdsc>;
+ clock-names = "camera_ahb",
+ "camera_axi",
+ "soc_ahb_clk",
+ "cpas_ahb_clk",
+ "camnoc_axi_clk",
+ "lrme_clk_src",
+ "lrme_clk";
+ clocks = <&clock_gcc GCC_CAMERA_AHB_CLK>,
+ <&clock_gcc GCC_CAMERA_AXI_CLK>,
+ <&clock_camcc CAM_CC_SOC_AHB_CLK>,
+ <&clock_camcc CAM_CC_CPAS_AHB_CLK>,
+ <&clock_camcc CAM_CC_CAMNOC_AXI_CLK>,
+ <&clock_camcc CAM_CC_LRME_CLK_SRC>,
+ <&clock_camcc CAM_CC_LRME_CLK>;
+ clock-rates = <0 0 0 0 0 0 0>,
+ <0 0 0 0 0 19200000 19200000>,
+ <0 0 0 0 0 19200000 19200000>,
+ <0 0 0 0 0 19200000 19200000>;
+ clock-cntl-level = "lowsvs", "svs", "svs_l1", "turbo";
+		src-clock-name = "lrme_clk_src";
+ };
+
diff --git a/Documentation/devicetree/bindings/mmc/sdhci-msm.txt b/Documentation/devicetree/bindings/mmc/sdhci-msm.txt
index 6222881..001f74f3 100644
--- a/Documentation/devicetree/bindings/mmc/sdhci-msm.txt
+++ b/Documentation/devicetree/bindings/mmc/sdhci-msm.txt
@@ -41,6 +41,11 @@
"HS200_1p2v" - indicates that host can support HS200 at 1.2v.
"DDR_1p8v" - indicates that host can support DDR mode at 1.8v.
"DDR_1p2v" - indicates that host can support DDR mode at 1.2v.
+	- qcom,bus-aggr-clk-rates: this is an array that specifies the frequency
+			for the bus-aggr-clk, set to correspond with the
+			frequency used from clk-rates. The frequency of this
+			clock should be decided based on the power mode in
+			which the apps clock runs at the clk-rates frequency.
- qcom,devfreq,freq-table - specifies supported frequencies for clock scaling.
Clock scaling logic shall toggle between these frequencies based
on card load. In case the defined frequencies are over or below
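For illustration, a sketch of how the two arrays are expected to line up entry-for-entry (the node label and all rate values below are assumptions, not taken from a real board file):

```dts
&sdhc_2 {
	/* Illustrative only: each bus-aggr-clk rate is applied when the
	 * apps clock runs at the clk-rates entry with the same index. */
	qcom,clk-rates = <400000 25000000 50000000 100000000 200000000>;
	qcom,bus-aggr-clk-rates = <400000 19200000 19200000 75000000 150000000>;
};
```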
diff --git a/Documentation/devicetree/bindings/pci/msm_ep_pcie.txt b/Documentation/devicetree/bindings/pci/msm_ep_pcie.txt
new file mode 100644
index 0000000..faf56c2
--- /dev/null
+++ b/Documentation/devicetree/bindings/pci/msm_ep_pcie.txt
@@ -0,0 +1,141 @@
+MSM PCI express endpoint
+
+Required properties:
+ - compatible: should be "qcom,pcie-ep".
+ - reg: should contain PCIe register maps.
+ - reg-names: indicates various resources passed to driver by name.
+ Should be "msi", "dm_core", "elbi", "parf", "phy", "mmio".
+ These correspond to different modules within the PCIe domain.
+ - #address-cells: Should provide a value of 0.
+ - interrupt-parent: Should be the PCIe device node itself here.
+ - interrupts: Should be in the format <0 1 2> and it is an index to the
+ interrupt-map that contains PCIe related interrupts.
+ - #interrupt-cells: Should provide a value of 1.
+ - interrupt-map-mask: should provide a value of 0xffffffff.
+ - interrupt-map: Must create mapping for the number of interrupts
+ that are defined in above interrupts property.
+ For PCIe device node, it should define 6 mappings for
+ the corresponding PCIe interrupts supporting the
+ specification.
+ - interrupt-names: indicates interrupts passed to driver by name.
+ Should be "int_pm_turnoff", "int_dstate_change",
+ "int_l1sub_timeout", "int_link_up",
+ "int_link_down", "int_bridge_flush_n".
+ - perst-gpio: PERST GPIO specified by PCIe spec.
+ - wake-gpio: WAKE GPIO specified by PCIe spec.
+ - clkreq-gpio: CLKREQ GPIO specified by PCIe spec.
+ - <supply-name>-supply: phandle to the regulator device tree node.
+ Refer to the schematics for the corresponding voltage regulators.
+ vreg-1.8-supply: phandle to the analog supply for the PCIe controller.
+ vreg-0.9-supply: phandle to the analog supply for the PCIe controller.
+
+Optional Properties:
+ - qcom,<supply-name>-voltage-level: specifies voltage levels for supply.
+ Should be specified in pairs (max, min, optimal), units uV.
+ - clock-names: list of names of clock inputs.
+ Should be "pcie_0_pipe_clk",
+ "pcie_0_aux_clk", "pcie_0_cfg_ahb_clk",
+ "pcie_0_mstr_axi_clk", "pcie_0_slv_axi_clk",
+ "pcie_0_ldo";
+ - max-clock-frequency-hz: list of the maximum operating frequencies, stored
+   in the same order as the clock names;
+ - resets: reset specifier pair consisting of a phandle for the reset
+   controller and the reset lines used by this controller.
+ - reset-names: reset signal names sorted in the same order as the resets
+   property.
+ - qcom,pcie-phy-ver: version of PCIe PHY.
+ - qcom,phy-init: The initialization sequence to bring up the PCIe PHY.
+ Should be specified in groups (offset, value, delay, direction).
+ - qcom,phy-status-reg: Register offset for PHY status.
+ - qcom,dbi-base-reg: Register offset for DBI base address.
+ - qcom,slv-space-reg: Register offset for slave address space size.
+ - qcom,pcie-link-speed: generation of PCIe link speed. The value could be
+ 1, 2 or 3.
+ - qcom,pcie-active-config: boolean type; active configuration of PCIe
+ addressing.
+ - qcom,pcie-aggregated-irq: boolean type; interrupts are aggregated.
+ - qcom,pcie-mhi-a7-irq: boolean type; MHI a7 has separate irq.
+ - qcom,pcie-perst-enum: Link enumeration will be triggered by PERST
+ deassertion.
+ - mdm2apstatus-gpio: GPIO used by PCIe endpoint side to notify the host side.
+ - Refer to "Documentation/devicetree/bindings/arm/msm/msm_bus.txt" for
+ below optional properties:
+ - qcom,msm-bus,name
+ - qcom,msm-bus,num-cases
+ - qcom,msm-bus,num-paths
+ - qcom,msm-bus,vectors-KBps
+
+Example:
+
+ pcie_ep: qcom,pcie@bfffd000 {
+ compatible = "qcom,pcie-ep";
+
+ reg = <0xbfffd000 0x1000>,
+ <0xbfffe000 0x1000>,
+ <0xbffff000 0x1000>,
+ <0xfc520000 0x2000>,
+ <0xfc526000 0x1000>,
+ <0xfc527000 0x1000>;
+ reg-names = "msi", "dm_core", "elbi", "parf", "phy", "mmio";
+
+ #address-cells = <0>;
+ interrupt-parent = <&pcie_ep>;
+ interrupts = <0 1 2 3 4 5>;
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0xffffffff>;
+ interrupt-map = <0 &intc 0 44 0
+ 1 &intc 0 46 0
+ 2 &intc 0 47 0
+ 3 &intc 0 50 0
+ 4 &intc 0 51 0
+ 5 &intc 0 52 0>;
+ interrupt-names = "int_pm_turnoff", "int_dstate_change",
+ "int_l1sub_timeout", "int_link_up",
+ "int_link_down", "int_bridge_flush_n";
+
+ perst-gpio = <&msmgpio 65 0>;
+ wake-gpio = <&msmgpio 61 0>;
+ clkreq-gpio = <&msmgpio 64 0>;
+ mdm2apstatus-gpio = <&tlmm_pinmux 16 0>;
+
+ gdsc-vdd-supply = <&gdsc_pcie_0>;
+ vreg-1.8-supply = <&pmd9635_l8>;
+ vreg-0.9-supply = <&pmd9635_l4>;
+
+ qcom,vreg-1.8-voltage-level = <1800000 1800000 1000>;
+ qcom,vreg-0.9-voltage-level = <950000 950000 24000>;
+
+ clock-names = "pcie_0_pipe_clk",
+ "pcie_0_aux_clk", "pcie_0_cfg_ahb_clk",
+ "pcie_0_mstr_axi_clk", "pcie_0_slv_axi_clk",
+ "pcie_0_ldo";
+ max-clock-frequency-hz = <62500000>, <1000000>,
+ <0>, <0>, <0>, <0>;
+
+ resets = <&clock_gcc GCC_PCIE_BCR>,
+ <&clock_gcc GCC_PCIE_PHY_BCR>;
+
+ reset-names = "pcie_0_core_reset", "pcie_0_phy_reset";
+
+ qcom,msm-bus,name = "pcie-ep";
+ qcom,msm-bus,num-cases = <2>;
+ qcom,msm-bus,num-paths = <1>;
+ qcom,msm-bus,vectors-KBps =
+ <45 512 0 0>,
+ <45 512 500 800>;
+
+ qcom,pcie-link-speed = <1>;
+ qcom,pcie-active-config;
+ qcom,pcie-aggregated-irq;
+ qcom,pcie-mhi-a7-irq;
+ qcom,pcie-perst-enum;
+ qcom,phy-status-reg = <0x728>;
+ qcom,dbi-base-reg = <0x168>;
+ qcom,slv-space-reg = <0x16c>;
+
+ qcom,phy-init = <0x604 0x03 0x0 0x1
+ 0x048 0x08 0x0 0x1
+ 0x64c 0x4d 0x0 0x1
+ 0x600 0x00 0x0 0x1
+ 0x608 0x03 0x0 0x1>;
+ };
diff --git a/Documentation/devicetree/bindings/platform/msm/ipa.txt b/Documentation/devicetree/bindings/platform/msm/ipa.txt
index 6b40b30..d272b7f 100644
--- a/Documentation/devicetree/bindings/platform/msm/ipa.txt
+++ b/Documentation/devicetree/bindings/platform/msm/ipa.txt
@@ -83,6 +83,8 @@
for MHI event rings ids.
- qcom,ipa-tz-unlock-reg: Register start addresses and ranges which
need to be unlocked by TZ.
+- qcom,ipa-uc-monitor-holb: Boolean context flag indicating whether
+				monitoring of HOLB via the IPA uC is required.
IPA pipe sub nodes (A2 static pipes configurations):
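Since this is a boolean flag, its mere presence in the IPA node enables the feature (the node label below is hypothetical):

```dts
&ipa_hw {
	/* Presence of the flag requests HOLB monitoring via the IPA uC;
	 * omit the property to leave monitoring disabled. */
	qcom,ipa-uc-monitor-holb;
};
```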
diff --git a/Documentation/devicetree/bindings/power/supply/qcom/qpnp-smb2.txt b/Documentation/devicetree/bindings/power/supply/qcom/qpnp-smb2.txt
index f247a8d..8795aff 100644
--- a/Documentation/devicetree/bindings/power/supply/qcom/qpnp-smb2.txt
+++ b/Documentation/devicetree/bindings/power/supply/qcom/qpnp-smb2.txt
@@ -137,12 +137,6 @@
be based off battery voltage. For both SOC and battery voltage,
charger receives the signal from FG to resume charging.
-- qcom,micro-usb
- Usage: optional
- Value type: <empty>
- Definition: Boolean flag which indicates that the platform only support
- micro usb port.
-
- qcom,suspend-input-on-debug-batt
Usage: optional
Value type: <empty>
diff --git a/Documentation/devicetree/bindings/qdsp/msm-fastrpc.txt b/Documentation/devicetree/bindings/qdsp/msm-fastrpc.txt
index b0db996..0c5f696 100644
--- a/Documentation/devicetree/bindings/qdsp/msm-fastrpc.txt
+++ b/Documentation/devicetree/bindings/qdsp/msm-fastrpc.txt
@@ -13,6 +13,7 @@
Optional properties:
- qcom,fastrpc-glink: Flag to use glink instead of smd for IPC
- qcom,rpc-latency-us: FastRPC QoS latency vote
+- qcom,adsp-remoteheap-vmid: FastRPC remote heap VMID number
Optional subnodes:
- qcom,msm_fastrpc_compute_cb : Child nodes representing the compute context
@@ -28,6 +29,7 @@
compatible = "qcom,msm-fastrpc-adsp";
qcom,fastrpc-glink;
qcom,rpc-latency-us = <2343>;
+ qcom,adsp-remoteheap-vmid = <37>;
qcom,msm_fastrpc_compute_cb_1 {
compatible = "qcom,msm-fastrpc-compute-cb";
diff --git a/Documentation/devicetree/bindings/regulator/qpnp-labibb-regulator.txt b/Documentation/devicetree/bindings/regulator/qpnp-labibb-regulator.txt
index 795ee95..a5e607d 100644
--- a/Documentation/devicetree/bindings/regulator/qpnp-labibb-regulator.txt
+++ b/Documentation/devicetree/bindings/regulator/qpnp-labibb-regulator.txt
@@ -88,11 +88,11 @@
50, 60, 70 and 80.
- interrupts: Specify the interrupts as per the interrupt
encoding.
- Currently "lab-vreg-ok" is required for
- LCD mode in pmi8998. For AMOLED mode,
- "lab-vreg-ok" is required only when SWIRE
- control is enabled and skipping 2nd SWIRE
- pulse is required in pmi8952/8996.
+					Currently "lab-vreg-ok" is required and "lab-sc-err"
+					is optional for LCD mode in pmi8998.
+ For AMOLED mode, "lab-vreg-ok" is required
+ only when SWIRE control is enabled and skipping
+ 2nd SWIRE pulse is required in pmi8952/8996.
- interrupt-names: Interrupt names to match up 1-to-1 with
the interrupts specified in 'interrupts'
property.
@@ -153,6 +153,10 @@
any value in the allowed limit.
- qcom,notify-lab-vreg-ok-sts: A boolean property which upon set will
poll and notify the lab_vreg_ok status.
+- qcom,qpnp-lab-sc-wait-time-ms:	This property specifies the time (in ms)
+					to poll for short-circuit detection.
+					If not specified, the default is
+					5 seconds.
Following properties are available only for PM660A:
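A minimal LAB subnode sketch combining the short-circuit poll time with the interrupt properties described above (the register offset and interrupt cell values are assumptions for illustration):

```dts
lab_regulator: qcom,lab@de00 {
	reg = <0xde00 0x100>;
	interrupts = <0x3 0xde 0x0 IRQ_TYPE_EDGE_RISING>;
	interrupt-names = "lab-vreg-ok";
	/* Poll short-circuit detection for 2 s instead of the 5 s default. */
	qcom,qpnp-lab-sc-wait-time-ms = <2000>;
};
```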
@@ -209,6 +213,14 @@
IBB subnode optional properties:
+- interrupts: Specify the interrupts as per the interrupt
+ encoding.
+ Currently "ibb-sc-err" could be used for LCD mode
+ in pmi8998 to detect the short circuit fault.
+- interrupt-names: Interrupt names to match up 1-to-1 with
+ the interrupts specified in 'interrupts'
+ property.
+
- qcom,qpnp-ibb-discharge-resistor: The discharge resistor in Kilo Ohms which
controls the soft start time. Supported values
are 300, 64, 32 and 16.
diff --git a/Documentation/devicetree/bindings/regulator/qpnp-oledb-regulator.txt b/Documentation/devicetree/bindings/regulator/qpnp-oledb-regulator.txt
index 38f599b..55fde0d 100644
--- a/Documentation/devicetree/bindings/regulator/qpnp-oledb-regulator.txt
+++ b/Documentation/devicetree/bindings/regulator/qpnp-oledb-regulator.txt
@@ -14,6 +14,11 @@
Value type: <string>
Definition: should be "qcom,qpnp-oledb-regulator".
+- qcom,pmic-revid
+ Usage: required
+ Value type: <phandle>
+ Definition: Used to identify the PMIC subtype.
+
- reg
Usage: required
Value type: <prop-encoded-array>
@@ -57,13 +62,6 @@
rail. This property is applicable only if qcom,ext-pin-ctl
property is specified and it is specific to PM660A.
-- qcom,force-pd-control
- Usage: optional
- Value type: <bool>
- Definition: Used to enable the pull down control forcibly via SPMI by
- disabling the pull down configuration done by hardware
- automatically through SWIRE pulses.
-
- qcom,pbs-client
Usage: optional
Value type: <phandle>
@@ -224,6 +222,7 @@
compatible = "qcom,qpnp-oledb-regulator";
#address-cells = <1>;
#size-cells = <1>;
+ qcom,pmic-revid = <&pm660l_revid>;
reg = <0xe000 0x100>;
label = "oledb";
diff --git a/Documentation/devicetree/bindings/sound/qcom-audio-dev.txt b/Documentation/devicetree/bindings/sound/qcom-audio-dev.txt
index d4db970..34c2963 100644
--- a/Documentation/devicetree/bindings/sound/qcom-audio-dev.txt
+++ b/Documentation/devicetree/bindings/sound/qcom-audio-dev.txt
@@ -2016,6 +2016,66 @@
qcom,aux-codec = <&stub_codec>;
};
+* SDX ASoC Machine driver
+
+Required properties:
+- compatible : "qcom,sdx-asoc-snd-tavil"
+- qcom,model : The user-visible name of this sound card.
+- qcom,prim_mi2s_aux_master : Handle to prim_master pinctrl configurations
+- qcom,prim_mi2s_aux_slave : Handle to prim_slave pinctrl configurations
+- qcom,sec_mi2s_aux_master : Handle to sec_master pinctrl configurations
+- qcom,sec_mi2s_aux_slave : Handle to sec_slave pinctrl configurations
+- asoc-platform: This is phandle list containing the references to platform device
+ nodes that are used as part of the sound card dai-links.
+- asoc-platform-names: This property contains the list of platform names. The
+		  order of the platform names should match the phandle order
+		  given in "asoc-platform".
+- asoc-cpu: This is phandle list containing the references to cpu dai device nodes
+ that are used as part of the sound card dai-links.
+- asoc-cpu-names: This property contains the list of cpu dai names. The order
+		  of the cpu dai names should match the phandle order given
+		  in "asoc-cpu". The cpu dai names are of the form "%s.%d",
+		  where the id (%d) field represents the back-end AFE port id
+		  that this CPU dai is associated with.
+
+Example:
+
+ sound-tavil {
+ compatible = "qcom,sdx-asoc-snd-tavil";
+ qcom,model = "sdx-tavil-i2s-snd-card";
+ qcom,prim_mi2s_aux_master = <&prim_master>;
+ qcom,prim_mi2s_aux_slave = <&prim_slave>;
+ qcom,sec_mi2s_aux_master = <&sec_master>;
+ qcom,sec_mi2s_aux_slave = <&sec_slave>;
+
+ asoc-platform = <&pcm0>, <&pcm1>, <&voip>, <&voice>,
+ <&loopback>, <&hostless>, <&afe>, <&routing>,
+ <&pcm_dtmf>, <&host_pcm>, <&compress>;
+ asoc-platform-names = "msm-pcm-dsp.0", "msm-pcm-dsp.1",
+ "msm-voip-dsp", "msm-pcm-voice",
+ "msm-pcm-loopback", "msm-pcm-hostless",
+ "msm-pcm-afe", "msm-pcm-routing",
+ "msm-pcm-dtmf", "msm-voice-host-pcm",
+ "msm-compress-dsp";
+ asoc-cpu = <&dai_pri_auxpcm>, <&mi2s_prim>, <&mi2s_sec>,
+ <&dtmf_tx>,
+ <&rx_capture_tx>, <&rx_playback_rx>,
+ <&tx_capture_tx>, <&tx_playback_rx>,
+ <&afe_pcm_rx>, <&afe_pcm_tx>, <&afe_proxy_rx>,
+ <&afe_proxy_tx>, <&incall_record_rx>,
+ <&incall_record_tx>, <&incall_music_rx>,
+ <&dai_sec_auxpcm>;
+ asoc-cpu-names = "msm-dai-q6-auxpcm.1",
+ "msm-dai-q6-mi2s.0", "msm-dai-q6-mi2s.1",
+ "msm-dai-stub-dev.4", "msm-dai-stub-dev.5",
+ "msm-dai-stub-dev.6", "msm-dai-stub-dev.7",
+ "msm-dai-stub-dev.8", "msm-dai-q6-dev.224",
+ "msm-dai-q6-dev.225", "msm-dai-q6-dev.241",
+ "msm-dai-q6-dev.240", "msm-dai-q6-dev.32771",
+ "msm-dai-q6-dev.32772", "msm-dai-q6-dev.32773",
+ "msm-dai-q6-auxpcm.2";
+ };
+
* APQ8096 Automotive ASoC Machine driver
Required properties:
diff --git a/Documentation/devicetree/bindings/sound/wcd_codec.txt b/Documentation/devicetree/bindings/sound/wcd_codec.txt
index c848ab5..6d2ae5e 100644
--- a/Documentation/devicetree/bindings/sound/wcd_codec.txt
+++ b/Documentation/devicetree/bindings/sound/wcd_codec.txt
@@ -3,7 +3,7 @@
Required properties:
- compatible : "qcom,tasha-slim-pgd" or "qcom,tasha-i2c-pgd" for Tasha Codec
- or "qcom,tavil-slim-pgd" for Tavil Codec
+ "qcom,tavil-slim-pgd" or "qcom,tavil-i2c-pgd" for Tavil Codec
- elemental-addr: codec slimbus slave PGD enumeration address.(48 bits)
- qcom,cdc-reset-gpio: gpio used for codec SOC reset.
diff --git a/Documentation/devicetree/bindings/usb/msm-ssusb.txt b/Documentation/devicetree/bindings/usb/msm-ssusb.txt
index 4bb75aa..881f9ca 100644
--- a/Documentation/devicetree/bindings/usb/msm-ssusb.txt
+++ b/Documentation/devicetree/bindings/usb/msm-ssusb.txt
@@ -65,15 +65,23 @@
- qcom,use-pdc-interrupts: If present, it configures the provided PDC IRQ with the
  required configuration for wakeup functionality.
- extcon: phandles to external connector devices. First phandle should point to
- external connector, which provide "USB" cable events, the second
- should point to external connector device, which provide "USB-HOST"
- cable events. A single phandle may be specified if a single connector
- device provides both "USB" and "USB-HOST" events.
+	external connector, which provides type-C based "USB" cable events; the
+	second should point to the external connector device, which provides
+	type-C "USB-HOST" cable events. A single phandle may be specified if a
+ connector device provides both "USB" and "USB-HOST" events. An optional
+ third phandle may be specified for EUD based attach/detach events. A
+ mandatory fourth phandle has to be specified to provide microUSB based
+ "USB" cable events. An optional fifth phandle may be specified to provide
+ microUSB based "USB-HOST" cable events. Only the fourth phandle may be
+ specified if a single connector device provides both "USB" and "USB-HOST"
+ events.
- qcom,num-gsi-evt-buffs: If present, specifies number of GSI based hardware accelerated
event buffers. 1 event buffer is needed per h/w accelerated endpoint.
- qcom,pm-qos-latency: This represents max tolerable CPU latency in microsecs,
which is used as a vote by driver to get max performance in perf mode.
- qcom,smmu-s1-bypass: If present, configure SMMU to bypass stage 1 translation.
+- qcom,no-vbus-vote-with-type-C: If present, then do not try to get and enable VBUS
+ regulator in type-C host mode from dwc3-msm driver.
Sub nodes:
- Sub node for "DWC3- USB3 controller".
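A sketch of the five-phandle extcon ordering described above (all phandle and node names are hypothetical):

```dts
&usb0 {
	/* Order per the binding text:
	 * 1. type-C "USB" events       2. type-C "USB-HOST" events
	 * 3. EUD attach/detach         4. microUSB "USB" (mandatory)
	 * 5. microUSB "USB-HOST" (optional)
	 */
	extcon = <&pd_phy>, <&pd_phy>, <&eud>, <&qusb_phy>, <&qusb_phy>;
	qcom,no-vbus-vote-with-type-C;
};
```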
diff --git a/Makefile b/Makefile
index 1e85d9b..061197a 100644
--- a/Makefile
+++ b/Makefile
@@ -1,6 +1,6 @@
VERSION = 4
PATCHLEVEL = 9
-SUBLEVEL = 60
+SUBLEVEL = 65
EXTRAVERSION =
NAME = Roaring Lionus
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 393c23f..d8d8b82 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -572,6 +572,7 @@
select USE_OF
select PINCTRL
select ARCH_WANT_KMAP_ATOMIC_FLUSH
+ select SND_SOC_COMPRESS
help
Support for Qualcomm MSM/QSD based systems. This runs on the
apps processor of the MSM/QSD and depends on a shared memory
diff --git a/arch/arm/boot/dts/am33xx.dtsi b/arch/arm/boot/dts/am33xx.dtsi
index 795c146..a3277e6 100644
--- a/arch/arm/boot/dts/am33xx.dtsi
+++ b/arch/arm/boot/dts/am33xx.dtsi
@@ -143,10 +143,11 @@
};
scm_conf: scm_conf@0 {
- compatible = "syscon";
+ compatible = "syscon", "simple-bus";
reg = <0x0 0x800>;
#address-cells = <1>;
#size-cells = <1>;
+ ranges = <0 0 0x800>;
scm_clocks: clocks {
#address-cells = <1>;
diff --git a/arch/arm/boot/dts/armada-375.dtsi b/arch/arm/boot/dts/armada-375.dtsi
index cc952cf..024f1b7 100644
--- a/arch/arm/boot/dts/armada-375.dtsi
+++ b/arch/arm/boot/dts/armada-375.dtsi
@@ -176,9 +176,9 @@
reg = <0x8000 0x1000>;
cache-unified;
cache-level = <2>;
- arm,double-linefill-incr = <1>;
+ arm,double-linefill-incr = <0>;
arm,double-linefill-wrap = <0>;
- arm,double-linefill = <1>;
+ arm,double-linefill = <0>;
prefetch-data = <1>;
};
diff --git a/arch/arm/boot/dts/armada-38x.dtsi b/arch/arm/boot/dts/armada-38x.dtsi
index 2d76688..c60cfe9 100644
--- a/arch/arm/boot/dts/armada-38x.dtsi
+++ b/arch/arm/boot/dts/armada-38x.dtsi
@@ -143,9 +143,9 @@
reg = <0x8000 0x1000>;
cache-unified;
cache-level = <2>;
- arm,double-linefill-incr = <1>;
+ arm,double-linefill-incr = <0>;
arm,double-linefill-wrap = <0>;
- arm,double-linefill = <1>;
+ arm,double-linefill = <0>;
prefetch-data = <1>;
};
diff --git a/arch/arm/boot/dts/armada-39x.dtsi b/arch/arm/boot/dts/armada-39x.dtsi
index 34cba87..aeecfa7 100644
--- a/arch/arm/boot/dts/armada-39x.dtsi
+++ b/arch/arm/boot/dts/armada-39x.dtsi
@@ -111,9 +111,9 @@
reg = <0x8000 0x1000>;
cache-unified;
cache-level = <2>;
- arm,double-linefill-incr = <1>;
+ arm,double-linefill-incr = <0>;
arm,double-linefill-wrap = <0>;
- arm,double-linefill = <1>;
+ arm,double-linefill = <0>;
prefetch-data = <1>;
};
diff --git a/arch/arm/boot/dts/dm814x.dtsi b/arch/arm/boot/dts/dm814x.dtsi
index d87efab..ff57a20 100644
--- a/arch/arm/boot/dts/dm814x.dtsi
+++ b/arch/arm/boot/dts/dm814x.dtsi
@@ -252,7 +252,7 @@
};
uart1: uart@20000 {
- compatible = "ti,omap3-uart";
+ compatible = "ti,am3352-uart", "ti,omap3-uart";
ti,hwmods = "uart1";
reg = <0x20000 0x2000>;
clock-frequency = <48000000>;
@@ -262,7 +262,7 @@
};
uart2: uart@22000 {
- compatible = "ti,omap3-uart";
+ compatible = "ti,am3352-uart", "ti,omap3-uart";
ti,hwmods = "uart2";
reg = <0x22000 0x2000>;
clock-frequency = <48000000>;
@@ -272,7 +272,7 @@
};
uart3: uart@24000 {
- compatible = "ti,omap3-uart";
+ compatible = "ti,am3352-uart", "ti,omap3-uart";
ti,hwmods = "uart3";
reg = <0x24000 0x2000>;
clock-frequency = <48000000>;
@@ -332,10 +332,11 @@
ranges = <0 0x140000 0x20000>;
scm_conf: scm_conf@0 {
- compatible = "syscon";
+ compatible = "syscon", "simple-bus";
reg = <0x0 0x800>;
#address-cells = <1>;
#size-cells = <1>;
+ ranges = <0 0 0x800>;
scm_clocks: clocks {
#address-cells = <1>;
diff --git a/arch/arm/boot/dts/dm816x.dtsi b/arch/arm/boot/dts/dm816x.dtsi
index cbdfbc4..62c0a61 100644
--- a/arch/arm/boot/dts/dm816x.dtsi
+++ b/arch/arm/boot/dts/dm816x.dtsi
@@ -371,7 +371,7 @@
};
uart1: uart@48020000 {
- compatible = "ti,omap3-uart";
+ compatible = "ti,am3352-uart", "ti,omap3-uart";
ti,hwmods = "uart1";
reg = <0x48020000 0x2000>;
clock-frequency = <48000000>;
@@ -381,7 +381,7 @@
};
uart2: uart@48022000 {
- compatible = "ti,omap3-uart";
+ compatible = "ti,am3352-uart", "ti,omap3-uart";
ti,hwmods = "uart2";
reg = <0x48022000 0x2000>;
clock-frequency = <48000000>;
@@ -391,7 +391,7 @@
};
uart3: uart@48024000 {
- compatible = "ti,omap3-uart";
+ compatible = "ti,am3352-uart", "ti,omap3-uart";
ti,hwmods = "uart3";
reg = <0x48024000 0x2000>;
clock-frequency = <48000000>;
diff --git a/arch/arm/boot/dts/omap5-uevm.dts b/arch/arm/boot/dts/omap5-uevm.dts
index 53d31a8..f3a3e6b 100644
--- a/arch/arm/boot/dts/omap5-uevm.dts
+++ b/arch/arm/boot/dts/omap5-uevm.dts
@@ -18,6 +18,10 @@
reg = <0 0x80000000 0 0x7f000000>; /* 2032 MB */
};
+ aliases {
+		ethernet = &ethernet;
+ };
+
leds {
compatible = "gpio-leds";
led1 {
@@ -72,6 +76,23 @@
>;
};
+&usbhsehci {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ hub@2 {
+ compatible = "usb424,3503";
+ reg = <2>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ };
+
+ ethernet: usbether@3 {
+ compatible = "usb424,9730";
+ reg = <3>;
+ };
+};
+
&wlcore {
compatible = "ti,wl1837";
};
diff --git a/arch/arm/boot/dts/qcom/Makefile b/arch/arm/boot/dts/qcom/Makefile
index 3826bad..c51581d 100644
--- a/arch/arm/boot/dts/qcom/Makefile
+++ b/arch/arm/boot/dts/qcom/Makefile
@@ -3,17 +3,15 @@
sdxpoorwills-cdp.dtb \
sdxpoorwills-mtp.dtb
-
-ifeq ($(CONFIG_ARM64),y)
-always := $(dtb-y)
-subdir-y := $(dts-dirs)
-else
targets += dtbs
targets += $(addprefix ../, $(dtb-y))
$(obj)/../%.dtb: $(src)/%.dts FORCE
$(call if_changed_dep,dtc)
+include $(srctree)/arch/arm64/boot/dts/qcom/Makefile
+$(obj)/../%.dtb: $(src)/../../../../arm64/boot/dts/qcom/%.dts FORCE
+ $(call if_changed_dep,dtc)
+
dtbs: $(addprefix $(obj)/../,$(dtb-y))
-endif
clean-files := *.dtb
diff --git a/arch/arm/boot/dts/qcom/msm-smb138x.dtsi b/arch/arm/boot/dts/qcom/msm-smb138x.dtsi
new file mode 100644
index 0000000..fa21dd7
--- /dev/null
+++ b/arch/arm/boot/dts/qcom/msm-smb138x.dtsi
@@ -0,0 +1,137 @@
+/* Copyright (c) 2016-2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <dt-bindings/interrupt-controller/irq.h>
+
+&i2c_7 {
+ status = "okay";
+ smb138x: qcom,smb138x@8 {
+ compatible = "qcom,i2c-pmic";
+ reg = <0x8>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ interrupt-parent = <&spmi_bus>;
+ interrupts = <0x0 0xd1 0x0 IRQ_TYPE_LEVEL_LOW>;
+ interrupt_names = "smb138x";
+ interrupt-controller;
+ #interrupt-cells = <3>;
+ qcom,periph-map = <0x10 0x11 0x12 0x13 0x14 0x16 0x36>;
+
+ smb138x_revid: qcom,revid@100 {
+ compatible = "qcom,qpnp-revid";
+ reg = <0x100 0x100>;
+ };
+
+ smb138x_tadc: qcom,tadc@3600 {
+ compatible = "qcom,tadc";
+ reg = <0x3600 0x100>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ #io-channel-cells = <1>;
+ interrupt-parent = <&smb138x>;
+ interrupts = <0x36 0x0 IRQ_TYPE_EDGE_BOTH>;
+ interrupt-names = "eoc";
+
+ batt_temp@0 {
+ reg = <0>;
+ qcom,rbias = <68100>;
+ qcom,rtherm-at-25degc = <68000>;
+ qcom,beta-coefficient = <3450>;
+ };
+
+ skin_temp@1 {
+ reg = <1>;
+ qcom,rbias = <33000>;
+ qcom,rtherm-at-25degc = <68000>;
+ qcom,beta-coefficient = <3450>;
+ };
+
+ die_temp@2 {
+ reg = <2>;
+ qcom,scale = <(-1306)>;
+ qcom,offset = <397904>;
+ };
+
+ batt_i@3 {
+ reg = <3>;
+ qcom,channel = <3>;
+ qcom,scale = <(-20000000)>;
+ };
+
+ batt_v@4 {
+ reg = <4>;
+ qcom,scale = <5000000>;
+ };
+
+ input_i@5 {
+ reg = <5>;
+ qcom,scale = <14285714>;
+ };
+
+ input_v@6 {
+ reg = <6>;
+ qcom,scale = <25000000>;
+ };
+
+ otg_i@7 {
+ reg = <7>;
+ qcom,scale = <5714286>;
+ };
+ };
+
+ smb1381_charger: qcom,smb1381-charger@1000 {
+ compatible = "qcom,smb138x-parallel-slave";
+ qcom,pmic-revid = <&smb138x_revid>;
+ reg = <0x1000 0x700>;
+ #address-cells = <1>;
+ #size-cells = <1>;
+ interrupt-parent = <&smb138x>;
+ io-channels =
+ <&smb138x_tadc 1>,
+ <&smb138x_tadc 2>,
+ <&smb138x_tadc 3>,
+ <&smb138x_tadc 14>,
+ <&smb138x_tadc 15>,
+ <&smb138x_tadc 16>,
+ <&smb138x_tadc 17>;
+ io-channel-names =
+ "connector_temp",
+ "charger_temp",
+ "batt_i",
+ "connector_temp_thr1",
+ "connector_temp_thr2",
+ "connector_temp_thr3",
+ "charger_temp_max";
+
+ qcom,chgr@1000 {
+ reg = <0x1000 0x100>;
+ interrupts = <0x10 0x1 IRQ_TYPE_EDGE_RISING>;
+ interrupt-names = "chg-state-change";
+ };
+
+ qcom,chgr-misc@1600 {
+ reg = <0x1600 0x100>;
+ interrupts = <0x16 0x1 IRQ_TYPE_EDGE_RISING>,
+ <0x16 0x6 IRQ_TYPE_EDGE_RISING>;
+ interrupt-names = "wdog-bark",
+ "temperature-change";
+ };
+ };
+ };
+};
+
+&smb1381_charger {
+ smb138x_vbus: qcom,smb138x-vbus {
+ status = "disabled";
+ regulator-name = "smb138x-vbus";
+ };
+};
diff --git a/arch/arm/boot/dts/qcom/sdx-audio-lpass.dtsi b/arch/arm/boot/dts/qcom/sdx-audio-lpass.dtsi
new file mode 100644
index 0000000..0fd3b34
--- /dev/null
+++ b/arch/arm/boot/dts/qcom/sdx-audio-lpass.dtsi
@@ -0,0 +1,261 @@
+/*
+ * Copyright (c) 2015-2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+&soc {
+ qcom,msm-adsp-loader {
+ compatible = "qcom,adsp-loader";
+ qcom,adsp-state = <0>;
+ qcom,proc-img-to-load = "modem";
+ };
+
+ qcom,msm-audio-ion {
+ compatible = "qcom,msm-audio-ion";
+ qcom,scm-mp-enabled;
+ memory-region = <&audio_mem>;
+ };
+
+ pcm0: qcom,msm-pcm {
+ compatible = "qcom,msm-pcm-dsp";
+ qcom,msm-pcm-dsp-id = <0>;
+ };
+
+ routing: qcom,msm-pcm-routing {
+ compatible = "qcom,msm-pcm-routing";
+ };
+
+ pcm1: qcom,msm-pcm-low-latency {
+ compatible = "qcom,msm-pcm-dsp";
+ qcom,msm-pcm-dsp-id = <1>;
+ qcom,msm-pcm-low-latency;
+ qcom,latency-level = "ultra";
+ };
+
+ qcom,msm-compr-dsp {
+ compatible = "qcom,msm-compr-dsp";
+ };
+
+ voip: qcom,msm-voip-dsp {
+ compatible = "qcom,msm-voip-dsp";
+ };
+
+ voice: qcom,msm-pcm-voice {
+ compatible = "qcom,msm-pcm-voice";
+ qcom,destroy-cvd;
+ };
+
+ stub_codec: qcom,msm-stub-codec {
+ compatible = "qcom,msm-stub-codec";
+ };
+
+ qcom,msm-dai-fe {
+ compatible = "qcom,msm-dai-fe";
+ };
+
+ afe: qcom,msm-pcm-afe {
+ compatible = "qcom,msm-pcm-afe";
+ };
+
+ hostless: qcom,msm-pcm-hostless {
+ compatible = "qcom,msm-pcm-hostless";
+ };
+
+ host_pcm: qcom,msm-voice-host-pcm {
+ compatible = "qcom,msm-voice-host-pcm";
+ };
+
+ loopback: qcom,msm-pcm-loopback {
+ compatible = "qcom,msm-pcm-loopback";
+ };
+
+ compress: qcom,msm-compress-dsp {
+ compatible = "qcom,msm-compress-dsp";
+ qcom,adsp-version = "MDSP 1.2";
+ };
+
+ qcom,msm-dai-stub {
+ compatible = "qcom,msm-dai-stub";
+ dtmf_tx: qcom,msm-dai-stub-dtmf-tx {
+ compatible = "qcom,msm-dai-stub-dev";
+ qcom,msm-dai-stub-dev-id = <4>;
+ };
+
+ rx_capture_tx: qcom,msm-dai-stub-host-rx-capture-tx {
+ compatible = "qcom,msm-dai-stub-dev";
+ qcom,msm-dai-stub-dev-id = <5>;
+ };
+
+ rx_playback_rx: qcom,msm-dai-stub-host-rx-playback-rx {
+ compatible = "qcom,msm-dai-stub-dev";
+ qcom,msm-dai-stub-dev-id = <6>;
+ };
+
+ tx_capture_tx: qcom,msm-dai-stub-host-tx-capture-tx {
+ compatible = "qcom,msm-dai-stub-dev";
+ qcom,msm-dai-stub-dev-id = <7>;
+ };
+
+ tx_playback_rx: qcom,msm-dai-stub-host-tx-playback-rx {
+ compatible = "qcom,msm-dai-stub-dev";
+ qcom,msm-dai-stub-dev-id = <8>;
+ };
+ };
+
+ qcom,msm-dai-q6 {
+ compatible = "qcom,msm-dai-q6";
+ afe_pcm_rx: qcom,msm-dai-q6-be-afe-pcm-rx {
+ compatible = "qcom,msm-dai-q6-dev";
+ qcom,msm-dai-q6-dev-id = <224>;
+ };
+
+ afe_pcm_tx: qcom,msm-dai-q6-be-afe-pcm-tx {
+ compatible = "qcom,msm-dai-q6-dev";
+ qcom,msm-dai-q6-dev-id = <225>;
+ };
+
+ afe_proxy_rx: qcom,msm-dai-q6-afe-proxy-rx {
+ compatible = "qcom,msm-dai-q6-dev";
+ qcom,msm-dai-q6-dev-id = <241>;
+ };
+
+ afe_proxy_tx: qcom,msm-dai-q6-afe-proxy-tx {
+ compatible = "qcom,msm-dai-q6-dev";
+ qcom,msm-dai-q6-dev-id = <240>;
+ };
+
+ incall_record_rx: qcom,msm-dai-q6-incall-record-rx {
+ compatible = "qcom,msm-dai-q6-dev";
+ qcom,msm-dai-q6-dev-id = <32771>;
+ };
+
+ incall_record_tx: qcom,msm-dai-q6-incall-record-tx {
+ compatible = "qcom,msm-dai-q6-dev";
+ qcom,msm-dai-q6-dev-id = <32772>;
+ };
+
+ incall_music_rx: qcom,msm-dai-q6-incall-music-rx {
+ compatible = "qcom,msm-dai-q6-dev";
+ qcom,msm-dai-q6-dev-id = <32773>;
+ };
+ };
+
+ pcm_dtmf: qcom,msm-pcm-dtmf {
+ compatible = "qcom,msm-pcm-dtmf";
+ };
+
+ cpu-pmu {
+ compatible = "arm,cortex-a7-pmu";
+ qcom,irq-is-percpu;
+ interrupts = <1 8 0x100>;
+ };
+
+ dai_pri_auxpcm: qcom,msm-pri-auxpcm {
+ compatible = "qcom,msm-auxpcm-dev";
+ qcom,msm-cpudai-auxpcm-mode = <0>, <0>;
+ qcom,msm-cpudai-auxpcm-sync = <1>, <1>;
+ qcom,msm-cpudai-auxpcm-frame = <5>, <4>;
+ qcom,msm-cpudai-auxpcm-quant = <2>, <2>;
+ qcom,msm-cpudai-auxpcm-num-slots = <1>, <1>;
+ qcom,msm-cpudai-auxpcm-slot-mapping = <1>, <1>;
+ qcom,msm-cpudai-auxpcm-data = <0>, <0>;
+ qcom,msm-cpudai-auxpcm-pcm-clk-rate = <2048000>, <2048000>;
+ qcom,msm-auxpcm-interface = "primary";
+ qcom,msm-cpudai-afe-clk-ver = <2>;
+ };
+
+ dai_sec_auxpcm: qcom,msm-sec-auxpcm {
+ compatible = "qcom,msm-auxpcm-dev";
+ qcom,msm-cpudai-auxpcm-mode = <0>, <0>;
+ qcom,msm-cpudai-auxpcm-sync = <1>, <1>;
+ qcom,msm-cpudai-auxpcm-frame = <5>, <4>;
+ qcom,msm-cpudai-auxpcm-quant = <2>, <2>;
+ qcom,msm-cpudai-auxpcm-num-slots = <1>, <1>;
+ qcom,msm-cpudai-auxpcm-slot-mapping = <1>, <1>;
+ qcom,msm-cpudai-auxpcm-data = <0>, <0>;
+ qcom,msm-cpudai-auxpcm-pcm-clk-rate = <2048000>, <2048000>;
+ qcom,msm-auxpcm-interface = "secondary";
+ qcom,msm-cpudai-afe-clk-ver = <2>;
+ };
+
+ qcom,msm-dai-mi2s {
+ compatible = "qcom,msm-dai-mi2s";
+ mi2s_prim: qcom,msm-dai-q6-mi2s-prim {
+ compatible = "qcom,msm-dai-q6-mi2s";
+ qcom,msm-dai-q6-mi2s-dev-id = <0>;
+ qcom,msm-mi2s-rx-lines = <2>;
+ qcom,msm-mi2s-tx-lines = <1>;
+ };
+ mi2s_sec: qcom,msm-dai-q6-mi2s-sec {
+ compatible = "qcom,msm-dai-q6-mi2s";
+ qcom,msm-dai-q6-mi2s-dev-id = <1>;
+ qcom,msm-mi2s-rx-lines = <2>;
+ qcom,msm-mi2s-tx-lines = <1>;
+ };
+
+ };
+
+ prim_master: prim_master_pinctrl {
+ compatible = "qcom,msm-cdc-pinctrl";
+ pinctrl-names = "aud_active", "aud_sleep";
+ pinctrl-0 = <&pri_ws_active_master
+ &pri_sck_active_master
+ &pri_dout_active
+ &pri_din_active>;
+ pinctrl-1 = <&pri_ws_sleep
+ &pri_sck_sleep
+ &pri_dout_sleep
+ &pri_din_sleep>;
+ qcom,mi2s-auxpcm-cdc-gpios;
+ };
+
+ prim_slave: prim_slave_pinctrl {
+ compatible = "qcom,msm-cdc-pinctrl";
+ pinctrl-names = "aud_active", "aud_sleep";
+ pinctrl-0 = <&pri_ws_active_slave
+ &pri_sck_active_slave
+ &pri_dout_active
+ &pri_din_active>;
+ pinctrl-1 = <&pri_ws_sleep
+ &pri_sck_sleep
+ &pri_dout_sleep
+ &pri_din_sleep>;
+ qcom,mi2s-auxpcm-cdc-gpios;
+ };
+
+ sec_master: sec_master_pinctrl {
+ compatible = "qcom,msm-cdc-pinctrl";
+ pinctrl-names = "aud_active", "aud_sleep";
+ pinctrl-0 = <&sec_ws_active_master
+ &sec_sck_active_master
+ &sec_dout_active
+ &sec_din_active>;
+ pinctrl-1 = <&sec_ws_sleep
+ &sec_sck_sleep
+ &sec_dout_sleep
+ &sec_din_sleep>;
+ qcom,mi2s-auxpcm-cdc-gpios;
+ };
+
+ sec_slave: sec_slave_pinctrl {
+ compatible = "qcom,msm-cdc-pinctrl";
+ pinctrl-names = "aud_active", "aud_sleep";
+ pinctrl-0 = <&sec_ws_active_slave
+ &sec_sck_active_slave
+ &sec_dout_active
+ &sec_din_active>;
+ pinctrl-1 = <&sec_ws_sleep
+ &sec_sck_sleep
+ &sec_dout_sleep
+ &sec_din_sleep>;
+ qcom,mi2s-auxpcm-cdc-gpios;
+ };
+};
diff --git a/arch/arm/boot/dts/qcom/sdx-wsa881x.dtsi b/arch/arm/boot/dts/qcom/sdx-wsa881x.dtsi
new file mode 100644
index 0000000..a294e6c
--- /dev/null
+++ b/arch/arm/boot/dts/qcom/sdx-wsa881x.dtsi
@@ -0,0 +1,45 @@
+/* Copyright (c) 2016-2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+&i2c_3 {
+ tavil_codec {
+ swr_master {
+ compatible = "qcom,swr-wcd";
+ #address-cells = <2>;
+ #size-cells = <0>;
+
+ wsa881x_0211: wsa881x@20170211 {
+ compatible = "qcom,wsa881x";
+ reg = <0x00 0x20170211>;
+ qcom,spkr-sd-n-node = <&wsa_spkr_wcd_sd1>;
+ };
+
+ wsa881x_0212: wsa881x@20170212 {
+ compatible = "qcom,wsa881x";
+ reg = <0x00 0x20170212>;
+ qcom,spkr-sd-n-node = <&wsa_spkr_wcd_sd2>;
+ };
+
+ wsa881x_0213: wsa881x@21170213 {
+ compatible = "qcom,wsa881x";
+ reg = <0x00 0x21170213>;
+ qcom,spkr-sd-n-node = <&wsa_spkr_wcd_sd1>;
+ };
+
+ wsa881x_0214: wsa881x@21170214 {
+ compatible = "qcom,wsa881x";
+ reg = <0x00 0x21170214>;
+ qcom,spkr-sd-n-node = <&wsa_spkr_wcd_sd2>;
+ };
+ };
+ };
+};
diff --git a/arch/arm/boot/dts/qcom/sdxpoorwills-audio-overlay.dtsi b/arch/arm/boot/dts/qcom/sdxpoorwills-audio-overlay.dtsi
new file mode 100644
index 0000000..f90bd7f
--- /dev/null
+++ b/arch/arm/boot/dts/qcom/sdxpoorwills-audio-overlay.dtsi
@@ -0,0 +1,143 @@
+/*
+ * Copyright (c) 2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include "sdxpoorwills-wcd.dtsi"
+#include "sdx-wsa881x.dtsi"
+#include <dt-bindings/clock/qcom,audio-ext-clk.h>
+
+&snd_934x {
+ qcom,audio-routing =
+ "AIF4 VI", "MCLK",
+ "RX_BIAS", "MCLK",
+ "MADINPUT", "MCLK",
+ "AMIC2", "MIC BIAS2",
+ "MIC BIAS2", "Headset Mic",
+ "AMIC3", "MIC BIAS2",
+ "MIC BIAS2", "ANCRight Headset Mic",
+ "AMIC4", "MIC BIAS2",
+ "MIC BIAS2", "ANCLeft Headset Mic",
+ "AMIC5", "MIC BIAS3",
+ "MIC BIAS3", "Handset Mic",
+ "DMIC0", "MIC BIAS1",
+ "MIC BIAS1", "Digital Mic0",
+ "DMIC1", "MIC BIAS1",
+ "MIC BIAS1", "Digital Mic1",
+ "DMIC2", "MIC BIAS3",
+ "MIC BIAS3", "Digital Mic2",
+ "DMIC3", "MIC BIAS3",
+ "MIC BIAS3", "Digital Mic3",
+ "DMIC4", "MIC BIAS4",
+ "MIC BIAS4", "Digital Mic4",
+ "DMIC5", "MIC BIAS4",
+ "MIC BIAS4", "Digital Mic5",
+ "SpkrLeft IN", "SPK1 OUT",
+ "SpkrRight IN", "SPK2 OUT";
+
+ qcom,msm-mbhc-hphl-swh = <1>;
+ qcom,msm-mbhc-gnd-swh = <1>;
+ qcom,msm-mbhc-hs-mic-max-threshold-mv = <1700>;
+ qcom,msm-mbhc-hs-mic-min-threshold-mv = <50>;
+ qcom,tavil-mclk-clk-freq = <12288000>;
+
+ asoc-codec = <&stub_codec>;
+ asoc-codec-names = "msm-stub-codec.1";
+
+ qcom,wsa-max-devs = <2>;
+ qcom,wsa-devs = <&wsa881x_0211>, <&wsa881x_0212>,
+ <&wsa881x_0213>, <&wsa881x_0214>;
+ qcom,wsa-aux-dev-prefix = "SpkrLeft", "SpkrRight",
+ "SpkrLeft", "SpkrRight";
+};
+
+&soc {
+ wcd9xxx_intc: wcd9xxx-irq {
+ status = "ok";
+ compatible = "qcom,wcd9xxx-irq";
+ interrupt-controller;
+ #interrupt-cells = <1>;
+ interrupt-parent = <&tlmm>;
+ qcom,gpio-connect = <&tlmm 71 0>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&wcd_intr_default>;
+ };
+
+ clock_audio_up: audio_ext_clk_up {
+ compatible = "qcom,audio-ref-clk";
+ qcom,codec-mclk-clk-freq = <12288000>;
+ pinctrl-names = "sleep", "active";
+ pinctrl-0 = <&i2s_mclk_sleep>;
+ pinctrl-1 = <&i2s_mclk_active>;
+ #clock-cells = <1>;
+ };
+
+ wcd_rst_gpio: msm_cdc_pinctrl@77 {
+ compatible = "qcom,msm-cdc-pinctrl";
+ qcom,cdc-rst-n-gpio = <&tlmm 77 0>;
+ pinctrl-names = "aud_active", "aud_sleep";
+ pinctrl-0 = <&cdc_reset_active>;
+ pinctrl-1 = <&cdc_reset_sleep>;
+ };
+};
+
+&i2c_3 {
+ wcd934x_cdc: tavil_codec {
+ compatible = "qcom,tavil-i2c-pgd";
+ elemental-addr = [00 01 50 02 17 02];
+
+ interrupt-parent = <&wcd9xxx_intc>;
+ interrupts = <0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
+ 17 18 19 20 21 22 23 24 25 26 27 28 29
+ 30 31>;
+
+ qcom,wcd-rst-gpio-node = <&wcd_rst_gpio>;
+
+ clock-names = "wcd_clk";
+ clocks = <&clock_audio_up AUDIO_LPASS_MCLK>;
+
+ cdc-vdd-buck-supply = <&pmxpoorwills_l6>;
+ qcom,cdc-vdd-buck-voltage = <1800000 1800000>;
+ qcom,cdc-vdd-buck-current = <650000>;
+
+ cdc-buck-sido-supply = <&pmxpoorwills_l6>;
+ qcom,cdc-buck-sido-voltage = <1800000 1800000>;
+ qcom,cdc-buck-sido-current = <250000>;
+
+ cdc-vdd-tx-h-supply = <&pmxpoorwills_l6>;
+ qcom,cdc-vdd-tx-h-voltage = <1800000 1800000>;
+ qcom,cdc-vdd-tx-h-current = <25000>;
+
+ cdc-vdd-rx-h-supply = <&pmxpoorwills_l6>;
+ qcom,cdc-vdd-rx-h-voltage = <1800000 1800000>;
+ qcom,cdc-vdd-rx-h-current = <25000>;
+
+ cdc-vddpx-1-supply = <&pmxpoorwills_l6>;
+ qcom,cdc-vddpx-1-voltage = <1800000 1800000>;
+ qcom,cdc-vddpx-1-current = <10000>;
+
+ qcom,cdc-static-supplies = "cdc-vdd-buck",
+ "cdc-buck-sido",
+ "cdc-vdd-tx-h",
+ "cdc-vdd-rx-h",
+ "cdc-vddpx-1";
+
+ qcom,cdc-micbias1-mv = <1800>;
+ qcom,cdc-micbias2-mv = <1800>;
+ qcom,cdc-micbias3-mv = <1800>;
+ qcom,cdc-micbias4-mv = <1800>;
+
+ qcom,cdc-mclk-clk-rate = <12288000>;
+ qcom,cdc-dmic-sample-rate = <4800000>;
+
+ qcom,wdsp-cmpnt-dev-name = "tavil_codec";
+ };
+};
diff --git a/arch/arm/boot/dts/qcom/sdxpoorwills-audio.dtsi b/arch/arm/boot/dts/qcom/sdxpoorwills-audio.dtsi
new file mode 100644
index 0000000..a3eba9a
--- /dev/null
+++ b/arch/arm/boot/dts/qcom/sdxpoorwills-audio.dtsi
@@ -0,0 +1,51 @@
+/* Copyright (c) 2015-2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include "sdx-audio-lpass.dtsi"
+
+&soc {
+ snd_934x: sound-tavil {
+ compatible = "qcom,sdx-asoc-snd-tavil";
+ qcom,model = "sdx-tavil-i2s-snd-card";
+ qcom,prim_mi2s_aux_master = <&prim_master>;
+ qcom,prim_mi2s_aux_slave = <&prim_slave>;
+ qcom,sec_mi2s_aux_master = <&sec_master>;
+ qcom,sec_mi2s_aux_slave = <&sec_slave>;
+
+ asoc-platform = <&pcm0>, <&pcm1>, <&voip>, <&voice>,
+ <&loopback>, <&hostless>, <&afe>, <&routing>,
+ <&pcm_dtmf>, <&host_pcm>, <&compress>;
+ asoc-platform-names = "msm-pcm-dsp.0", "msm-pcm-dsp.1",
+ "msm-voip-dsp", "msm-pcm-voice",
+ "msm-pcm-loopback", "msm-pcm-hostless",
+ "msm-pcm-afe", "msm-pcm-routing",
+ "msm-pcm-dtmf", "msm-voice-host-pcm",
+ "msm-compress-dsp";
+ asoc-cpu = <&dai_pri_auxpcm>, <&mi2s_prim>, <&mi2s_sec>,
+ <&dtmf_tx>,
+ <&rx_capture_tx>, <&rx_playback_rx>,
+ <&tx_capture_tx>, <&tx_playback_rx>,
+ <&afe_pcm_rx>, <&afe_pcm_tx>, <&afe_proxy_rx>,
+ <&afe_proxy_tx>, <&incall_record_rx>,
+ <&incall_record_tx>, <&incall_music_rx>,
+ <&dai_sec_auxpcm>;
+ asoc-cpu-names = "msm-dai-q6-auxpcm.1",
+ "msm-dai-q6-mi2s.0", "msm-dai-q6-mi2s.1",
+ "msm-dai-stub-dev.4", "msm-dai-stub-dev.5",
+ "msm-dai-stub-dev.6", "msm-dai-stub-dev.7",
+ "msm-dai-stub-dev.8", "msm-dai-q6-dev.224",
+ "msm-dai-q6-dev.225", "msm-dai-q6-dev.241",
+ "msm-dai-q6-dev.240", "msm-dai-q6-dev.32771",
+ "msm-dai-q6-dev.32772", "msm-dai-q6-dev.32773",
+ "msm-dai-q6-auxpcm.2";
+ };
+};
diff --git a/arch/arm/boot/dts/qcom/sdxpoorwills-cdp-audio-overlay.dtsi b/arch/arm/boot/dts/qcom/sdxpoorwills-cdp-audio-overlay.dtsi
new file mode 100644
index 0000000..a7943cd
--- /dev/null
+++ b/arch/arm/boot/dts/qcom/sdxpoorwills-cdp-audio-overlay.dtsi
@@ -0,0 +1,22 @@
+/*
+ * Copyright (c) 2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include "sdxpoorwills-audio-overlay.dtsi"
+
+&soc {
+ sound-tavil {
+ qcom,wsa-max-devs = <1>;
+ qcom,wsa-devs = <&wsa881x_0214>;
+ qcom,wsa-aux-dev-prefix = "SpkrRight";
+ };
+};
diff --git a/arch/arm/boot/dts/qcom/sdxpoorwills-pinctrl.dtsi b/arch/arm/boot/dts/qcom/sdxpoorwills-pinctrl.dtsi
index 2b0fa5c..b6c04ec 100644
--- a/arch/arm/boot/dts/qcom/sdxpoorwills-pinctrl.dtsi
+++ b/arch/arm/boot/dts/qcom/sdxpoorwills-pinctrl.dtsi
@@ -919,6 +919,361 @@
};
};
};
+
+ wcd9xxx_intr {
+ wcd_intr_default: wcd_intr_default {
+ mux {
+ pins = "gpio71";
+ function = "gpio";
+ };
+
+ config {
+ pins = "gpio71";
+ drive-strength = <2>; /* 2 mA */
+ bias-pull-down; /* pull down */
+ input-enable;
+ };
+ };
+ };
+
+ cdc_reset_ctrl {
+ cdc_reset_sleep: cdc_reset_sleep {
+ mux {
+ pins = "gpio77";
+ function = "gpio";
+ };
+ config {
+ pins = "gpio77";
+ drive-strength = <2>;
+ bias-disable;
+ output-low;
+ };
+ };
+
+ cdc_reset_active: cdc_reset_active {
+ mux {
+ pins = "gpio77";
+ function = "gpio";
+ };
+ config {
+ pins = "gpio77";
+ drive-strength = <8>;
+ bias-pull-down;
+ output-high;
+ };
+ };
+ };
+
+ i2s_mclk {
+ i2s_mclk_sleep: i2s_mclk_sleep {
+ mux {
+ pins = "gpio62";
+ function = "i2s_mclk";
+ };
+
+ config {
+ pins = "gpio62";
+ drive-strength = <2>; /* 2 mA */
+ bias-pull-down; /* PULL DOWN */
+ };
+ };
+
+ i2s_mclk_active: i2s_mclk_active {
+ mux {
+ pins = "gpio62";
+ function = "i2s_mclk";
+ };
+
+ config {
+ pins = "gpio62";
+ drive-strength = <8>; /* 8 mA */
+ bias-disable; /* NO PULL*/
+ output-high;
+ };
+ };
+ };
+
+ pmx_pri_mi2s_aux {
+ pri_ws_sleep: pri_ws_sleep {
+ mux {
+ pins = "gpio12";
+ function = "gpio";
+ };
+
+ config {
+ pins = "gpio12";
+ drive-strength = <2>; /* 2 mA */
+ bias-pull-down; /* PULL DOWN */
+ input-enable;
+ };
+ };
+
+ pri_sck_sleep: pri_sck_sleep {
+ mux {
+ pins = "gpio15";
+ function = "gpio";
+ };
+
+ config {
+ pins = "gpio15";
+ drive-strength = <2>; /* 2 mA */
+ bias-pull-down; /* PULL DOWN */
+ input-enable;
+ };
+ };
+
+ pri_dout_sleep: pri_dout_sleep {
+ mux {
+ pins = "gpio14";
+ function = "gpio";
+ };
+
+ config {
+ pins = "gpio14";
+ drive-strength = <2>; /* 2 mA */
+ bias-pull-down; /* PULL DOWN */
+ input-enable;
+ };
+ };
+
+ pri_ws_active_master: pri_ws_active_master {
+ mux {
+ pins = "gpio12";
+ function = "pri_mi2s_ws_a";
+ };
+
+ config {
+ pins = "gpio12";
+ drive-strength = <8>; /* 8 mA */
+ bias-disable; /* NO PULL*/
+ output-high;
+ };
+ };
+
+ pri_sck_active_master: pri_sck_active_master {
+ mux {
+ pins = "gpio15";
+ function = "pri_mi2s_sck_a";
+ };
+
+ config {
+ pins = "gpio15";
+ drive-strength = <8>; /* 8 mA */
+ bias-disable; /* NO PULL*/
+ output-high;
+ };
+ };
+
+ pri_ws_active_slave: pri_ws_active_slave {
+ mux {
+ pins = "gpio12";
+ function = "pri_mi2s_ws_a";
+ };
+
+ config {
+ pins = "gpio12";
+ drive-strength = <8>; /* 8 mA */
+ bias-disable; /* NO PULL*/
+ };
+ };
+
+ pri_sck_active_slave: pri_sck_active_slave {
+ mux {
+ pins = "gpio15";
+ function = "pri_mi2s_sck_a";
+ };
+
+ config {
+ pins = "gpio15";
+ drive-strength = <8>; /* 8 mA */
+ bias-disable; /* NO PULL*/
+ };
+ };
+
+ pri_dout_active: pri_dout_active {
+ mux {
+ pins = "gpio14";
+ function = "pri_mi2s_data1_a";
+ };
+
+ config {
+ pins = "gpio14";
+ drive-strength = <8>; /* 8 mA */
+ bias-disable; /* NO PULL*/
+ output-high;
+ };
+ };
+ };
+
+ pmx_pri_mi2s_aux_din {
+ pri_din_sleep: pri_din_sleep {
+ mux {
+ pins = "gpio13";
+ function = "gpio";
+ };
+
+ config {
+ pins = "gpio13";
+ drive-strength = <2>; /* 2 mA */
+ bias-pull-down; /* PULL DOWN */
+ input-enable;
+ };
+ };
+
+ pri_din_active: pri_din_active {
+ mux {
+ pins = "gpio13";
+ function = "pri_mi2s_data0_a";
+ };
+
+ config {
+ pins = "gpio13";
+ drive-strength = <8>; /* 8 mA */
+ bias-disable; /* NO PULL */
+ };
+ };
+ };
+
+ pmx_sec_mi2s_aux {
+ sec_ws_sleep: sec_ws_sleep {
+ mux {
+ pins = "gpio16";
+ function = "gpio";
+ };
+
+ config {
+ pins = "gpio16";
+ drive-strength = <2>; /* 2 mA */
+ bias-pull-down; /* PULL DOWN */
+ input-enable;
+ };
+ };
+
+ sec_sck_sleep: sec_sck_sleep {
+ mux {
+ pins = "gpio19";
+ function = "gpio";
+ };
+
+ config {
+ pins = "gpio19";
+ drive-strength = <2>; /* 2 mA */
+ bias-pull-down; /* PULL DOWN */
+ input-enable;
+ };
+ };
+
+ sec_dout_sleep: sec_dout_sleep {
+ mux {
+ pins = "gpio18";
+ function = "gpio";
+ };
+
+ config {
+ pins = "gpio18";
+ drive-strength = <2>; /* 2 mA */
+ bias-pull-down; /* PULL DOWN */
+ input-enable;
+ };
+ };
+
+ sec_ws_active_master: sec_ws_active_master {
+ mux {
+ pins = "gpio16";
+ function = "sec_mi2s_ws_a";
+ };
+
+ config {
+ pins = "gpio16";
+ drive-strength = <8>; /* 8 mA */
+ bias-disable; /* NO PULL*/
+ output-high;
+ };
+ };
+
+ sec_sck_active_master: sec_sck_active_master {
+ mux {
+ pins = "gpio19";
+ function = "sec_mi2s_sck_a";
+ };
+
+ config {
+ pins = "gpio19";
+ drive-strength = <8>; /* 8 mA */
+ bias-disable; /* NO PULL*/
+ output-high;
+ };
+ };
+
+ sec_ws_active_slave: sec_ws_active_slave {
+ mux {
+ pins = "gpio16";
+ function = "sec_mi2s_ws_a";
+ };
+
+ config {
+ pins = "gpio16";
+ drive-strength = <8>; /* 8 mA */
+ bias-disable; /* NO PULL*/
+ };
+ };
+
+ sec_sck_active_slave: sec_sck_active_slave {
+ mux {
+ pins = "gpio19";
+ function = "sec_mi2s_sck_a";
+ };
+
+ config {
+ pins = "gpio19";
+ drive-strength = <8>; /* 8 mA */
+ bias-disable; /* NO PULL*/
+ };
+ };
+
+ sec_dout_active: sec_dout_active {
+ mux {
+ pins = "gpio18";
+ function = "sec_mi2s_data1_a";
+ };
+
+ config {
+ pins = "gpio18";
+ drive-strength = <8>; /* 8 mA */
+ bias-disable; /* NO PULL*/
+ output-high;
+ };
+ };
+ };
+
+ pmx_sec_mi2s_aux_din {
+ sec_din_sleep: sec_din_sleep {
+ mux {
+ pins = "gpio17";
+ function = "gpio";
+ };
+
+ config {
+ pins = "gpio17";
+ drive-strength = <2>; /* 2 mA */
+ bias-pull-down; /* PULL DOWN */
+ input-enable;
+ };
+ };
+
+ sec_din_active: sec_din_active {
+ mux {
+ pins = "gpio17";
+ function = "sec_mi2s_data0_a";
+ };
+
+ config {
+ pins = "gpio17";
+ drive-strength = <8>; /* 8 mA */
+ bias-disable; /* NO PULL */
+ };
+ };
+ };
};
};
diff --git a/arch/arm/boot/dts/qcom/sdxpoorwills-regulator.dtsi b/arch/arm/boot/dts/qcom/sdxpoorwills-regulator.dtsi
index cc126f6..9947594 100644
--- a/arch/arm/boot/dts/qcom/sdxpoorwills-regulator.dtsi
+++ b/arch/arm/boot/dts/qcom/sdxpoorwills-regulator.dtsi
@@ -12,103 +12,324 @@
#include <dt-bindings/regulator/qcom,rpmh-regulator.h>
-/* Stub regulators */
-/ {
- pmxpoorwills_s1: regualtor-pmxpoorwills-s1 {
- compatible = "qcom,stub-regulator";
- regulator-name = "pmxpoorwills_s1";
- qcom,hpm-min-load = <100000>;
- regulator-min-microvolt = <752000>;
- regulator-max-microvolt = <752000>;
+&soc {
+ /* RPMh regulators */
+
+ /* pmxpoorwills S1 - VDD_MODEM supply */
+ rpmh-regulator-modemlvl {
+ compatible = "qcom,rpmh-arc-regulator";
+ mboxes = <&apps_rsc 0>;
+ qcom,resource-name = "mss.lvl";
+ pmxpoorwills_s1_level: regulator-pmxpoorwills-s1 {
+ regulator-name = "pmxpoorwills_s1_level";
+ qcom,set = <RPMH_REGULATOR_SET_ALL>;
+ regulator-min-microvolt = <RPMH_REGULATOR_LEVEL_OFF>;
+ regulator-max-microvolt = <RPMH_REGULATOR_LEVEL_MAX>;
+ };
};
- /* VDD CX supply */
- pmxpoorwills_s5_level: regualtor-pmxpoorwills-s5-level {
- compatible = "qcom,stub-regulator";
- regulator-name = "pmxpoorwills_s5_level";
- qcom,hpm-min-load = <100000>;
- regulator-min-microvolt = <RPMH_REGULATOR_LEVEL_OFF>;
- regulator-max-microvolt = <RPMH_REGULATOR_LEVEL_MAX>;
+ rpmh-regulator-smpa4 {
+ compatible = "qcom,rpmh-vrm-regulator";
+ mboxes = <&apps_rsc 0>;
+ qcom,resource-name = "smpa4";
+ pmxpoorwills_s4: regulator-pmxpoorwills-s4 {
+ regulator-name = "pmxpoorwills_s4";
+ qcom,set = <RPMH_REGULATOR_SET_ALL>;
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ qcom,init-voltage = <1800000>;
+ };
};
- pmxpoorwills_s5_level_ao: regualtor-pmxpoorwills-s5-level-ao {
- compatible = "qcom,stub-regulator";
- regulator-name = "pmxpoorwills_s5_level_ao";
- qcom,hpm-min-load = <100000>;
- regulator-min-microvolt = <RPMH_REGULATOR_LEVEL_OFF>;
- regulator-max-microvolt = <RPMH_REGULATOR_LEVEL_MAX>;
+ /* pmxpoorwills S5 - VDD_CX supply */
+ rpmh-regulator-cxlvl {
+ compatible = "qcom,rpmh-arc-regulator";
+ mboxes = <&apps_rsc 0>;
+ qcom,resource-name = "cx.lvl";
+ pmxpoorwills_s5_level-parent-supply = <&pmxpoorwills_l9_level>;
+ pmxpoorwills_s5_level_ao-parent-supply =
+ <&pmxpoorwills_l9_level_ao>;
+ pmxpoorwills_s5_level: regulator-pmxpoorwills-s5-level {
+ regulator-name = "pmxpoorwills_s5_level";
+ qcom,set = <RPMH_REGULATOR_SET_ALL>;
+ regulator-min-microvolt = <RPMH_REGULATOR_LEVEL_OFF>;
+ regulator-max-microvolt = <RPMH_REGULATOR_LEVEL_MAX>;
+ qcom,min-dropout-voltage-level = <(-1)>;
+ };
+
+ pmxpoorwills_s5_level_ao: regulator-pmxpoorwills-s5-level-ao {
+ regulator-name = "pmxpoorwills_s5_level_ao";
+ qcom,set = <RPMH_REGULATOR_SET_ACTIVE>;
+ regulator-min-microvolt = <RPMH_REGULATOR_LEVEL_OFF>;
+ regulator-max-microvolt = <RPMH_REGULATOR_LEVEL_MAX>;
+ qcom,min-dropout-voltage-level = <(-1)>;
+ };
};
- pmxpoorwills_l1: regualtor-pmxpoorwills-11 {
- compatible = "qcom,stub-regulator";
- regulator-name = "pmxpoorwills_l1";
- qcom,hpm-min-load = <10000>;
- regulator-min-microvolt = <1200000>;
- regulator-max-microvolt = <1200000>;
+ rpmh-regulator-ldoa1 {
+ compatible = "qcom,rpmh-vrm-regulator";
+ mboxes = <&apps_rsc 0>;
+ qcom,resource-name = "ldoa1";
+ qcom,supported-modes =
+ <RPMH_REGULATOR_MODE_LDO_LPM
+ RPMH_REGULATOR_MODE_LDO_HPM>;
+ qcom,mode-threshold-currents = <0 1>;
+ pmxpoorwills_l1: regulator-pmxpoorwills-l1 {
+ regulator-name = "pmxpoorwills_l1";
+ qcom,set = <RPMH_REGULATOR_SET_ALL>;
+ regulator-min-microvolt = <1200000>;
+ regulator-max-microvolt = <1200000>;
+ qcom,init-voltage = <1200000>;
+ qcom,init-mode = <RPMH_REGULATOR_MODE_LDO_LPM>;
+ };
};
- pmxpoorwills_l3: regualtor-pmxpoorwills-l3 {
- compatible = "qcom,stub-regulator";
- regulator-name = "pmxpoorwills_l3";
- qcom,hpm-min-load = <10000>;
- regulator-min-microvolt = <800000>;
- regulator-max-microvolt = <800000>;
+ rpmh-regulator-ldoa2 {
+ compatible = "qcom,rpmh-vrm-regulator";
+ mboxes = <&apps_rsc 0>;
+ qcom,resource-name = "ldoa2";
+ qcom,supported-modes =
+ <RPMH_REGULATOR_MODE_LDO_LPM
+ RPMH_REGULATOR_MODE_LDO_HPM>;
+ qcom,mode-threshold-currents = <0 1>;
+ pmxpoorwills_l2: regulator-pmxpoorwills-l2 {
+ regulator-name = "pmxpoorwills_l2";
+ qcom,set = <RPMH_REGULATOR_SET_ALL>;
+ regulator-min-microvolt = <1128000>;
+ regulator-max-microvolt = <1128000>;
+ qcom,init-voltage = <1128000>;
+ qcom,init-mode = <RPMH_REGULATOR_MODE_LDO_LPM>;
+ regulator-always-on;
+ };
};
- pmxpoorwills_l4: regualtor-pmxpoorwills-l4 {
- compatible = "qcom,stub-regulator";
- regulator-name = "pmxpoorwills_l4";
- qcom,hpm-min-load = <10000>;
- regulator-min-microvolt = <872000>;
- regulator-max-microvolt = <872000>;
+ rpmh-regulator-ldoa3 {
+ compatible = "qcom,rpmh-vrm-regulator";
+ mboxes = <&apps_rsc 0>;
+ qcom,resource-name = "ldoa3";
+ qcom,supported-modes =
+ <RPMH_REGULATOR_MODE_LDO_LPM
+ RPMH_REGULATOR_MODE_LDO_HPM>;
+ qcom,mode-threshold-currents = <0 1>;
+ pmxpoorwills_l3: regulator-pmxpoorwills-l3 {
+ regulator-name = "pmxpoorwills_l3";
+ qcom,set = <RPMH_REGULATOR_SET_ALL>;
+ regulator-min-microvolt = <800000>;
+ regulator-max-microvolt = <800000>;
+ qcom,init-voltage = <800000>;
+ qcom,init-mode = <RPMH_REGULATOR_MODE_LDO_LPM>;
+ };
};
- pmxpoorwills_l5: regualtor-pmxpoorwills-l5 {
- compatible = "qcom,stub-regulator";
- regulator-name = "pmxpoorwills_l5";
- qcom,hpm-min-load = <10000>;
- regulator-min-microvolt = <1800000>;
- regulator-max-microvolt = <1800000>;
+ rpmh-regulator-ldoa4 {
+ compatible = "qcom,rpmh-vrm-regulator";
+ mboxes = <&apps_rsc 0>;
+ qcom,resource-name = "ldoa4";
+ qcom,supported-modes =
+ <RPMH_REGULATOR_MODE_LDO_LPM
+ RPMH_REGULATOR_MODE_LDO_HPM>;
+ qcom,mode-threshold-currents = <0 1>;
+ pmxpoorwills_l4: regulator-pmxpoorwills-l4 {
+ regulator-name = "pmxpoorwills_l4";
+ qcom,set = <RPMH_REGULATOR_SET_ALL>;
+ regulator-min-microvolt = <872000>;
+ regulator-max-microvolt = <872000>;
+ qcom,init-voltage = <872000>;
+ qcom,init-mode = <RPMH_REGULATOR_MODE_LDO_LPM>;
+ };
};
- pmxpoorwills_l6: regualtor-pmxpoorwills-l6 {
- compatible = "qcom,stub-regulator";
- regulator-name = "pmxpoorwills_l6";
- qcom,hpm-min-load = <10000>;
- regulator-min-microvolt = <1800000>;
- regulator-max-microvolt = <1800000>;
+ rpmh-regulator-ldoa5 {
+ compatible = "qcom,rpmh-vrm-regulator";
+ mboxes = <&apps_rsc 0>;
+ qcom,resource-name = "ldoa5";
+ qcom,supported-modes =
+ <RPMH_REGULATOR_MODE_LDO_LPM
+ RPMH_REGULATOR_MODE_LDO_HPM>;
+ qcom,mode-threshold-currents = <0 1>;
+ pmxpoorwills_l5: regulator-pmxpoorwills-l5 {
+ regulator-name = "pmxpoorwills_l5";
+ qcom,set = <RPMH_REGULATOR_SET_ALL>;
+ regulator-min-microvolt = <1704000>;
+ regulator-max-microvolt = <1704000>;
+ qcom,init-voltage = <1704000>;
+ qcom,init-mode = <RPMH_REGULATOR_MODE_LDO_LPM>;
+ };
};
- pmxpoorwills_l8: regualtor-pmxpoorwills-l8 {
- compatible = "qcom,stub-regulator";
- regulator-name = "pmxpoorwills_l8";
- qcom,hpm-min-load = <10000>;
- regulator-min-microvolt = <800000>;
- regulator-max-microvolt = <800000>;
+ rpmh-regulator-ldoa7 {
+ compatible = "qcom,rpmh-vrm-regulator";
+ mboxes = <&apps_rsc 0>;
+ qcom,resource-name = "ldoa7";
+ qcom,supported-modes =
+ <RPMH_REGULATOR_MODE_LDO_LPM
+ RPMH_REGULATOR_MODE_LDO_HPM>;
+ qcom,mode-threshold-currents = <0 1>;
+ pmxpoorwills_l7: regulator-pmxpoorwills-l7 {
+ regulator-name = "pmxpoorwills_l7";
+ qcom,set = <RPMH_REGULATOR_SET_ALL>;
+ regulator-min-microvolt = <2952000>;
+ regulator-max-microvolt = <2952000>;
+ qcom,init-voltage = <2952000>;
+ qcom,init-mode = <RPMH_REGULATOR_MODE_LDO_LPM>;
+ };
};
- /* VDD MX supply */
- pmxpoorwills_l9_level: regualtor-pmxpoorwills-l9-level {
- compatible = "qcom,stub-regulator";
- regulator-name = "pmxpoorwills_l9_level";
- qcom,hpm-min-load = <10000>;
- regulator-min-microvolt = <RPMH_REGULATOR_LEVEL_OFF>;
- regulator-max-microvolt = <RPMH_REGULATOR_LEVEL_MAX>;
+ rpmh-regulator-ldoa8 {
+ compatible = "qcom,rpmh-vrm-regulator";
+ mboxes = <&apps_rsc 0>;
+ qcom,resource-name = "ldoa8";
+ qcom,supported-modes =
+ <RPMH_REGULATOR_MODE_LDO_LPM
+ RPMH_REGULATOR_MODE_LDO_HPM>;
+ qcom,mode-threshold-currents = <0 1>;
+ pmxpoorwills_l8: regulator-pmxpoorwills-l8 {
+ regulator-name = "pmxpoorwills_l8";
+ qcom,set = <RPMH_REGULATOR_SET_ALL>;
+ regulator-min-microvolt = <800000>;
+ regulator-max-microvolt = <800000>;
+ qcom,init-voltage = <800000>;
+ qcom,init-mode = <RPMH_REGULATOR_MODE_LDO_LPM>;
+ };
};
- pmxpoorwills_l9_level_ao: regualtor-pmxpoorwills-l9-level_ao {
- compatible = "qcom,stub-regulator";
- regulator-name = "pmxpoorwills_l9_level_ao";
- qcom,hpm-min-load = <10000>;
- regulator-min-microvolt = <RPMH_REGULATOR_LEVEL_OFF>;
- regulator-max-microvolt = <RPMH_REGULATOR_LEVEL_MAX>;
+ /* pmxpoorwills L9 - VDD_MX supply */
+ rpmh-regulator-mxlvl {
+ compatible = "qcom,rpmh-arc-regulator";
+ mboxes = <&apps_rsc 0>;
+ qcom,resource-name = "mx.lvl";
+ pmxpoorwills_l9_level: regulator-pmxpoorwills-l9-level {
+ regulator-name = "pmxpoorwills_l9_level";
+ qcom,set = <RPMH_REGULATOR_SET_ALL>;
+ regulator-min-microvolt = <RPMH_REGULATOR_LEVEL_OFF>;
+ regulator-max-microvolt = <RPMH_REGULATOR_LEVEL_MAX>;
+ };
+
+ pmxpoorwills_l9_level_ao: regulator-pmxpoorwills-l9-level-ao {
+ regulator-name = "pmxpoorwills_l9_level_ao";
+ qcom,set = <RPMH_REGULATOR_SET_ACTIVE>;
+ regulator-min-microvolt = <RPMH_REGULATOR_LEVEL_OFF>;
+ regulator-max-microvolt = <RPMH_REGULATOR_LEVEL_MAX>;
+ };
};
- pmxpoorwills_l10: regualtor-pmxpoorwills-l10 {
- compatible = "qcom,stub-regulator";
- regulator-name = "pmxpoorwills_l10";
- qcom,hpm-min-load = <10000>;
- regulator-min-microvolt = <3088000>;
- regulator-max-microvolt = <3088000>;
+ rpmh-regulator-ldoa10 {
+ compatible = "qcom,rpmh-vrm-regulator";
+ mboxes = <&apps_rsc 0>;
+ qcom,resource-name = "ldoa10";
+ qcom,supported-modes =
+ <RPMH_REGULATOR_MODE_LDO_LPM
+ RPMH_REGULATOR_MODE_LDO_HPM>;
+ qcom,mode-threshold-currents = <0 1>;
+ pmxpoorwills_l10: regulator-pmxpoorwills-l10 {
+ regulator-name = "pmxpoorwills_l10";
+ qcom,set = <RPMH_REGULATOR_SET_ALL>;
+ regulator-min-microvolt = <3088000>;
+ regulator-max-microvolt = <3088000>;
+ qcom,init-voltage = <3088000>;
+ qcom,init-mode = <RPMH_REGULATOR_MODE_LDO_LPM>;
+ };
+ };
+
+ rpmh-regulator-ldoa11 {
+ compatible = "qcom,rpmh-vrm-regulator";
+ mboxes = <&apps_rsc 0>;
+ qcom,resource-name = "ldoa11";
+ qcom,supported-modes =
+ <RPMH_REGULATOR_MODE_LDO_LPM
+ RPMH_REGULATOR_MODE_LDO_HPM>;
+ qcom,mode-threshold-currents = <0 1>;
+ pmxpoorwills_l11: regulator-pmxpoorwills-l11 {
+ regulator-name = "pmxpoorwills_l11";
+ qcom,set = <RPMH_REGULATOR_SET_ALL>;
+ regulator-min-microvolt = <1808000>;
+ regulator-max-microvolt = <1808000>;
+ qcom,init-voltage = <1808000>;
+ qcom,init-mode = <RPMH_REGULATOR_MODE_LDO_LPM>;
+ };
+ };
+
+ rpmh-regulator-ldoa12 {
+ compatible = "qcom,rpmh-vrm-regulator";
+ mboxes = <&apps_rsc 0>;
+ qcom,resource-name = "ldoa12";
+ qcom,supported-modes =
+ <RPMH_REGULATOR_MODE_LDO_LPM
+ RPMH_REGULATOR_MODE_LDO_HPM>;
+ qcom,mode-threshold-currents = <0 1>;
+ pmxpoorwills_l12: regulator-pmxpoorwills-l12 {
+ regulator-name = "pmxpoorwills_l12";
+ qcom,set = <RPMH_REGULATOR_SET_ALL>;
+ regulator-min-microvolt = <2704000>;
+ regulator-max-microvolt = <2704000>;
+ qcom,init-voltage = <2704000>;
+ qcom,init-mode = <RPMH_REGULATOR_MODE_LDO_LPM>;
+ };
+ };
+
+ rpmh-regulator-ldoa13 {
+ compatible = "qcom,rpmh-vrm-regulator";
+ mboxes = <&apps_rsc 0>;
+ qcom,resource-name = "ldoa13";
+ qcom,supported-modes =
+ <RPMH_REGULATOR_MODE_LDO_LPM
+ RPMH_REGULATOR_MODE_LDO_HPM>;
+ qcom,mode-threshold-currents = <0 1>;
+ pmxpoorwills_l13: regulator-pmxpoorwills-l13 {
+ regulator-name = "pmxpoorwills_l13";
+ qcom,set = <RPMH_REGULATOR_SET_ALL>;
+ regulator-min-microvolt = <1808000>;
+ regulator-max-microvolt = <1808000>;
+ qcom,init-voltage = <1808000>;
+ qcom,init-mode = <RPMH_REGULATOR_MODE_LDO_LPM>;
+ };
+ };
+
+ rpmh-regulator-ldoa14 {
+ compatible = "qcom,rpmh-vrm-regulator";
+ mboxes = <&apps_rsc 0>;
+ qcom,resource-name = "ldoa14";
+ qcom,supported-modes =
+ <RPMH_REGULATOR_MODE_LDO_LPM
+ RPMH_REGULATOR_MODE_LDO_HPM>;
+ qcom,mode-threshold-currents = <0 1>;
+ pmxpoorwills_l14: regulator-pmxpoorwills-l14 {
+ regulator-name = "pmxpoorwills_l14";
+ qcom,set = <RPMH_REGULATOR_SET_ALL>;
+ regulator-min-microvolt = <620000>;
+ regulator-max-microvolt = <620000>;
+ qcom,init-voltage = <620000>;
+ qcom,init-mode = <RPMH_REGULATOR_MODE_LDO_LPM>;
+ };
+ };
+
+ rpmh-regulator-ldoa16 {
+ compatible = "qcom,rpmh-vrm-regulator";
+ mboxes = <&apps_rsc 0>;
+ qcom,resource-name = "ldoa16";
+ qcom,supported-modes =
+ <RPMH_REGULATOR_MODE_LDO_LPM
+ RPMH_REGULATOR_MODE_LDO_HPM>;
+ qcom,mode-threshold-currents = <0 1>;
+ pmxpoorwills_l16: regulator-pmxpoorwills-l16 {
+ regulator-name = "pmxpoorwills_l16";
+ qcom,set = <RPMH_REGULATOR_SET_ALL>;
+ regulator-min-microvolt = <752000>;
+ regulator-max-microvolt = <752000>;
+ qcom,init-voltage = <752000>;
+ qcom,init-mode = <RPMH_REGULATOR_MODE_LDO_LPM>;
+ regulator-always-on;
+ };
+ };
+
+ /* VREF_RGMII */
+ rpmh-regulator-rgmii {
+ compatible = "qcom,rpmh-xob-regulator";
+ mboxes = <&apps_rsc 0>;
+ qcom,resource-name = "vrefa2";
+ vreg_rgmii: regulator-rgmii {
+ regulator-name = "vreg_rgmii";
+ qcom,set = <RPMH_REGULATOR_SET_ALL>;
+ };
};
};
diff --git a/arch/arm/boot/dts/qcom/sdxpoorwills-rumi.dts b/arch/arm/boot/dts/qcom/sdxpoorwills-rumi.dts
index 3aacd63..aa9e7f2 100644
--- a/arch/arm/boot/dts/qcom/sdxpoorwills-rumi.dts
+++ b/arch/arm/boot/dts/qcom/sdxpoorwills-rumi.dts
@@ -23,6 +23,30 @@
qcom,board-id = <15 0>;
};
+&soc {
+ /* Delete rpmh regulators */
+ /delete-node/ rpmh-regulator-modemlvl;
+ /delete-node/ rpmh-regulator-smpa4;
+ /delete-node/ rpmh-regulator-cxlvl;
+ /delete-node/ rpmh-regulator-ldoa1;
+ /delete-node/ rpmh-regulator-ldoa2;
+ /delete-node/ rpmh-regulator-ldoa3;
+ /delete-node/ rpmh-regulator-ldoa4;
+ /delete-node/ rpmh-regulator-ldoa5;
+ /delete-node/ rpmh-regulator-ldoa7;
+ /delete-node/ rpmh-regulator-ldoa8;
+ /delete-node/ rpmh-regulator-mxlvl;
+ /delete-node/ rpmh-regulator-ldoa10;
+ /delete-node/ rpmh-regulator-ldoa11;
+ /delete-node/ rpmh-regulator-ldoa12;
+ /delete-node/ rpmh-regulator-ldoa13;
+ /delete-node/ rpmh-regulator-ldoa14;
+ /delete-node/ rpmh-regulator-ldoa16;
+ /delete-node/ rpmh-regulator-rgmii;
+};
+
+#include "sdxpoorwills-stub-regulator.dtsi"
+
&blsp1_uart2 {
pinctrl-names = "default";
pinctrl-0 = <&uart2_console_active>;
diff --git a/arch/arm/boot/dts/qcom/sdxpoorwills-stub-regulator.dtsi b/arch/arm/boot/dts/qcom/sdxpoorwills-stub-regulator.dtsi
new file mode 100644
index 0000000..7c6b7b0
--- /dev/null
+++ b/arch/arm/boot/dts/qcom/sdxpoorwills-stub-regulator.dtsi
@@ -0,0 +1,176 @@
+/* Copyright (c) 2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <dt-bindings/regulator/qcom,rpmh-regulator.h>
+
+/* Stub regulators */
+/ {
+ pmxpoorwills_s1: regulator-pmxpoorwills-s1 {
+ compatible = "qcom,stub-regulator";
+ regulator-name = "pmxpoorwills_s1";
+ qcom,hpm-min-load = <100000>;
+ regulator-min-microvolt = <752000>;
+ regulator-max-microvolt = <752000>;
+ };
+
+ pmxpoorwills_s4: regulator-pmxpoorwills-s4 {
+ compatible = "qcom,stub-regulator";
+ regulator-name = "pmxpoorwills_s4";
+ qcom,hpm-min-load = <100000>;
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ };
+
+ /* VDD CX supply */
+ pmxpoorwills_s5_level: regulator-pmxpoorwills-s5-level {
+ compatible = "qcom,stub-regulator";
+ regulator-name = "pmxpoorwills_s5_level";
+ qcom,hpm-min-load = <100000>;
+ regulator-min-microvolt = <RPMH_REGULATOR_LEVEL_OFF>;
+ regulator-max-microvolt = <RPMH_REGULATOR_LEVEL_MAX>;
+ };
+
+	pmxpoorwills_s5_level_ao: regulator-pmxpoorwills-s5-level-ao {
+ compatible = "qcom,stub-regulator";
+ regulator-name = "pmxpoorwills_s5_level_ao";
+ qcom,hpm-min-load = <100000>;
+ regulator-min-microvolt = <RPMH_REGULATOR_LEVEL_OFF>;
+ regulator-max-microvolt = <RPMH_REGULATOR_LEVEL_MAX>;
+ };
+
+	pmxpoorwills_l1: regulator-pmxpoorwills-l1 {
+ compatible = "qcom,stub-regulator";
+ regulator-name = "pmxpoorwills_l1";
+ qcom,hpm-min-load = <10000>;
+ regulator-min-microvolt = <1200000>;
+ regulator-max-microvolt = <1200000>;
+ };
+
+	pmxpoorwills_l2: regulator-pmxpoorwills-l2 {
+ compatible = "qcom,stub-regulator";
+ regulator-name = "pmxpoorwills_l2";
+ qcom,hpm-min-load = <10000>;
+ regulator-min-microvolt = <1128000>;
+ regulator-max-microvolt = <1128000>;
+ };
+
+	pmxpoorwills_l3: regulator-pmxpoorwills-l3 {
+ compatible = "qcom,stub-regulator";
+ regulator-name = "pmxpoorwills_l3";
+ qcom,hpm-min-load = <10000>;
+ regulator-min-microvolt = <800000>;
+ regulator-max-microvolt = <800000>;
+ };
+
+	pmxpoorwills_l4: regulator-pmxpoorwills-l4 {
+ compatible = "qcom,stub-regulator";
+ regulator-name = "pmxpoorwills_l4";
+ qcom,hpm-min-load = <10000>;
+ regulator-min-microvolt = <872000>;
+ regulator-max-microvolt = <872000>;
+ };
+
+	pmxpoorwills_l5: regulator-pmxpoorwills-l5 {
+ compatible = "qcom,stub-regulator";
+ regulator-name = "pmxpoorwills_l5";
+ qcom,hpm-min-load = <10000>;
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ };
+
+	pmxpoorwills_l7: regulator-pmxpoorwills-l7 {
+ compatible = "qcom,stub-regulator";
+ regulator-name = "pmxpoorwills_l7";
+ qcom,hpm-min-load = <10000>;
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <2950000>;
+ };
+
+	pmxpoorwills_l8: regulator-pmxpoorwills-l8 {
+ compatible = "qcom,stub-regulator";
+ regulator-name = "pmxpoorwills_l8";
+ qcom,hpm-min-load = <10000>;
+ regulator-min-microvolt = <800000>;
+ regulator-max-microvolt = <800000>;
+ };
+
+ /* VDD MX supply */
+	pmxpoorwills_l9_level: regulator-pmxpoorwills-l9-level {
+ compatible = "qcom,stub-regulator";
+ regulator-name = "pmxpoorwills_l9_level";
+ qcom,hpm-min-load = <10000>;
+ regulator-min-microvolt = <RPMH_REGULATOR_LEVEL_OFF>;
+ regulator-max-microvolt = <RPMH_REGULATOR_LEVEL_MAX>;
+ };
+
+	pmxpoorwills_l9_level_ao: regulator-pmxpoorwills-l9-level-ao {
+ compatible = "qcom,stub-regulator";
+ regulator-name = "pmxpoorwills_l9_level_ao";
+ qcom,hpm-min-load = <10000>;
+ regulator-min-microvolt = <RPMH_REGULATOR_LEVEL_OFF>;
+ regulator-max-microvolt = <RPMH_REGULATOR_LEVEL_MAX>;
+ };
+
+	pmxpoorwills_l10: regulator-pmxpoorwills-l10 {
+ compatible = "qcom,stub-regulator";
+ regulator-name = "pmxpoorwills_l10";
+ qcom,hpm-min-load = <10000>;
+ regulator-min-microvolt = <3088000>;
+ regulator-max-microvolt = <3088000>;
+ };
+
+	pmxpoorwills_l11: regulator-pmxpoorwills-l11 {
+ compatible = "qcom,stub-regulator";
+ regulator-name = "pmxpoorwills_l11";
+ qcom,hpm-min-load = <10000>;
+ regulator-min-microvolt = <1808000>;
+ regulator-max-microvolt = <2848000>;
+ };
+
+	pmxpoorwills_l12: regulator-pmxpoorwills-l12 {
+ compatible = "qcom,stub-regulator";
+ regulator-name = "pmxpoorwills_l12";
+ qcom,hpm-min-load = <10000>;
+ regulator-min-microvolt = <2704000>;
+ regulator-max-microvolt = <2704000>;
+ };
+
+	pmxpoorwills_l13: regulator-pmxpoorwills-l13 {
+ compatible = "qcom,stub-regulator";
+ regulator-name = "pmxpoorwills_l13";
+ qcom,hpm-min-load = <10000>;
+ regulator-min-microvolt = <1808000>;
+ regulator-max-microvolt = <2848000>;
+ };
+
+	pmxpoorwills_l14: regulator-pmxpoorwills-l14 {
+ compatible = "qcom,stub-regulator";
+ regulator-name = "pmxpoorwills_l14";
+ qcom,hpm-min-load = <10000>;
+ regulator-min-microvolt = <620000>;
+ regulator-max-microvolt = <752000>;
+ };
+
+	pmxpoorwills_l16: regulator-pmxpoorwills-l16 {
+ compatible = "qcom,stub-regulator";
+ regulator-name = "pmxpoorwills_l16";
+ qcom,hpm-min-load = <10000>;
+ regulator-min-microvolt = <752000>;
+ regulator-max-microvolt = <752000>;
+ };
+
+ /* VREF_RGMII */
+ vreg_rgmii: rgmii-regulator {
+ compatible = "regulator-fixed";
+ regulator-name = "vreg_rgmii";
+ };
+};
diff --git a/arch/arm/boot/dts/qcom/sdxpoorwills-wcd.dtsi b/arch/arm/boot/dts/qcom/sdxpoorwills-wcd.dtsi
new file mode 100644
index 0000000..9303ed1
--- /dev/null
+++ b/arch/arm/boot/dts/qcom/sdxpoorwills-wcd.dtsi
@@ -0,0 +1,80 @@
+/* Copyright (c) 2016-2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+&i2c_3 {
+ tavil_codec {
+ wcd: wcd_pinctrl@5 {
+ compatible = "qcom,wcd-pinctrl";
+ qcom,num-gpios = <5>;
+ gpio-controller;
+ #gpio-cells = <2>;
+
+ spkr_1_wcd_en_active: spkr_1_wcd_en_active {
+ mux {
+ pins = "gpio2";
+ };
+
+ config {
+ pins = "gpio2";
+ output-high;
+ };
+ };
+
+ spkr_1_wcd_en_sleep: spkr_1_wcd_en_sleep {
+ mux {
+ pins = "gpio2";
+ };
+
+ config {
+ pins = "gpio2";
+ input-enable;
+ };
+ };
+
+ spkr_2_wcd_en_active: spkr_2_sd_n_active {
+ mux {
+ pins = "gpio3";
+ };
+
+ config {
+ pins = "gpio3";
+ output-high;
+ };
+ };
+
+ spkr_2_wcd_en_sleep: spkr_2_sd_n_sleep {
+ mux {
+ pins = "gpio3";
+ };
+
+ config {
+ pins = "gpio3";
+ input-enable;
+ };
+ };
+ };
+
+ wsa_spkr_wcd_sd1: msm_cdc_pinctrll {
+ compatible = "qcom,msm-cdc-pinctrl";
+ pinctrl-names = "aud_active", "aud_sleep";
+ pinctrl-0 = <&spkr_1_wcd_en_active>;
+ pinctrl-1 = <&spkr_1_wcd_en_sleep>;
+ };
+
+ wsa_spkr_wcd_sd2: msm_cdc_pinctrlr {
+ compatible = "qcom,msm-cdc-pinctrl";
+ pinctrl-names = "aud_active", "aud_sleep";
+ pinctrl-0 = <&spkr_2_wcd_en_active>;
+ pinctrl-1 = <&spkr_2_wcd_en_sleep>;
+ };
+ };
+};
diff --git a/arch/arm/boot/dts/qcom/sdxpoorwills.dtsi b/arch/arm/boot/dts/qcom/sdxpoorwills.dtsi
index 2b89ee8..3d2dc66 100644
--- a/arch/arm/boot/dts/qcom/sdxpoorwills.dtsi
+++ b/arch/arm/boot/dts/qcom/sdxpoorwills.dtsi
@@ -15,11 +15,12 @@
#include <dt-bindings/clock/qcom,rpmh.h>
#include <dt-bindings/clock/qcom,gcc-sdxpoorwills.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
+#include <dt-bindings/regulator/qcom,rpmh-regulator.h>
/ {
model = "Qualcomm Technologies, Inc. SDX POORWILLS";
compatible = "qcom,sdxpoorwills";
- qcom,msm-id = <334 0x0>;
+ qcom,msm-id = <334 0x0>, <335 0x0>;
interrupt-parent = <&intc>;
reserved-memory {
@@ -27,19 +28,37 @@
#size-cells = <1>;
ranges;
- peripheral2_mem: peripheral2_region@8fd00000 {
+ peripheral2_mem: peripheral2_region@8fe00000 {
compatible = "removed-dma-pool";
no-map;
- reg = <0x8fd00000 0x300000>;
+ reg = <0x8fe00000 0x200000>;
label = "peripheral2_mem";
};
- mss_mem: mss_region@87800000 {
+ sbl_region: sbl_region@8fd00000 {
+ no-map;
+ reg = <0x8fd00000 0x100000>;
+ label = "sbl_mem";
+ };
+
+ hyp_region: hyp_region@8fc00000 {
+ no-map;
+ reg = <0x8fc00000 0x80000>;
+ label = "hyp_mem";
+ };
+
+ mss_mem: mss_region@87400000 {
compatible = "removed-dma-pool";
no-map;
- reg = <0x87800000 0x8000000>;
+ reg = <0x87400000 0x8300000>;
label = "mss_mem";
};
+
+ audio_mem: audio_region@0 {
+ compatible = "shared-dma-pool";
+ reusable;
+ size = <0x400000>;
+ };
};
cpus {
@@ -151,22 +170,40 @@
};
clock_gcc: qcom,gcc@100000 {
- compatible = "qcom,dummycc";
- clock-output-names = "gcc_clocks";
+ compatible = "qcom,gcc-sdxpoorwills";
+ reg = <0x100000 0x1f0000>;
+ reg-names = "cc_base";
+ vdd_cx-supply = <&pmxpoorwills_s5_level>;
+ vdd_cx_ao-supply = <&pmxpoorwills_s5_level_ao>;
#clock-cells = <1>;
#reset-cells = <1>;
};
- clock_cpu: qcom,clock-a7@17810008 {
- compatible = "qcom,dummycc";
- clock-output-names = "cpu_clocks";
+ clock_cpu: qcom,clock-a7@17808100 {
+ compatible = "qcom,cpu-sdxpoorwills";
+ clocks = <&clock_rpmh RPMH_CXO_CLK_A>;
+ clock-names = "xo_ao";
+ qcom,a7cc-init-rate = <1497600000>;
+ reg = <0x17808100 0x7F10>;
+ reg-names = "apcs_pll";
+ qcom,rcg-reg-offset = <0x7F08>;
+
+ vdd_dig_ao-supply = <&pmxpoorwills_s5_level_ao>;
+ cpu-vdd-supply = <&pmxpoorwills_s5_level_ao>;
+ qcom,speed0-bin-v0 =
+ < 0 RPMH_REGULATOR_LEVEL_OFF>,
+ < 345600000 RPMH_REGULATOR_LEVEL_LOW_SVS>,
+ < 576000000 RPMH_REGULATOR_LEVEL_SVS>,
+ < 1094400000 RPMH_REGULATOR_LEVEL_NOM>,
+ < 1497600000 RPMH_REGULATOR_LEVEL_TURBO>;
#clock-cells = <1>;
};
clock_rpmh: qcom,rpmhclk {
- compatible = "qcom,dummycc";
- clock-output-names = "rpmh_clocks";
+ compatible = "qcom,rpmh-clk-sdxpoorwills";
#clock-cells = <1>;
+ mboxes = <&apps_rsc 0>;
+ mbox-names = "apps";
};
blsp1_uart2: serial@831000 {
@@ -183,7 +220,6 @@
compatible = "qcom,gdsc";
regulator-name = "gdsc_usb30";
reg = <0x0010b004 0x4>;
- status = "ok";
};
qcom,sps {
@@ -195,7 +231,12 @@
compatible = "qcom,gdsc";
regulator-name = "gdsc_pcie";
reg = <0x00137004 0x4>;
- status = "ok";
+ };
+
+ gdsc_emac: qcom,gdsc@147004 {
+ compatible = "qcom,gdsc";
+ regulator-name = "gdsc_emac";
+ reg = <0x00147004 0x4>;
};
qnand_1: nand@1b00000 {
@@ -222,10 +263,10 @@
status = "disabled";
};
- qcom,msm-imem@8600000 {
+ qcom,msm-imem@1468B000 {
compatible = "qcom,msm-imem";
- reg = <0x8600000 0x1000>; /* Address and size of IMEM */
- ranges = <0x0 0x8600000 0x1000>;
+ reg = <0x1468B000 0x1000>; /* Address and size of IMEM */
+ ranges = <0x0 0x1468B000 0x1000>;
#address-cells = <1>;
#size-cells = <1>;
@@ -243,7 +284,17 @@
compatible = "qcom,msm-imem-boot_stats";
reg = <0x6b0 32>;
};
- };
+
+ pil@94c {
+ compatible = "qcom,msm-imem-pil";
+ reg = <0x94c 200>;
+ };
+
+ diag_dload@c8 {
+ compatible = "qcom,msm-imem-diag-dload";
+ reg = <0xc8 200>;
+ };
+};
restart@4ab000 {
compatible = "qcom,pshold";
@@ -428,9 +479,9 @@
<CONTROL_TCS 1>;
};
- cmd_db: qcom,cmd-db@ca0000c {
+ cmd_db: qcom,cmd-db@c37000c {
compatible = "qcom,cmd-db";
- reg = <0xca0000c 8>;
+ reg = <0xc37000c 8>;
};
system_pm {
@@ -460,6 +511,19 @@
io-interface = "rgmii";
};
};
+
+ qmp_aop: qcom,qmp-aop@c300000 {
+ compatible = "qcom,qmp-mbox";
+ label = "aop";
+ reg = <0xc300000 0x400>,
+ <0x17811008 0x4>;
+ reg-names = "msgram", "irq-reg-base";
+ qcom,irq-mask = <0x1>;
+ interrupts = <GIC_SPI 221 IRQ_TYPE_EDGE_RISING>;
+ priority = <0>;
+ mbox-desc-offset = <0x0>;
+ #mbox-cells = <1>;
+ };
};
#include "pmxpoorwills.dtsi"
@@ -469,3 +533,4 @@
#include "sdxpoorwills-usb.dtsi"
#include "sdxpoorwills-bus.dtsi"
#include "sdxpoorwills-thermal.dtsi"
+#include "sdxpoorwills-audio.dtsi"
diff --git a/arch/arm/boot/dts/stih410.dtsi b/arch/arm/boot/dts/stih410.dtsi
index a3ef734..4d329b2 100644
--- a/arch/arm/boot/dts/stih410.dtsi
+++ b/arch/arm/boot/dts/stih410.dtsi
@@ -131,7 +131,7 @@
<&clk_s_d2_quadfs 0>;
assigned-clock-rates = <297000000>,
- <108000000>,
+ <297000000>,
<0>,
<400000000>,
<400000000>;
diff --git a/arch/arm/configs/msm8953-perf_defconfig b/arch/arm/configs/msm8953-perf_defconfig
new file mode 100644
index 0000000..067878b
--- /dev/null
+++ b/arch/arm/configs/msm8953-perf_defconfig
@@ -0,0 +1,473 @@
+CONFIG_LOCALVERSION="-perf"
+# CONFIG_LOCALVERSION_AUTO is not set
+# CONFIG_FHANDLE is not set
+CONFIG_AUDIT=y
+CONFIG_NO_HZ=y
+CONFIG_HIGH_RES_TIMERS=y
+CONFIG_IRQ_TIME_ACCOUNTING=y
+CONFIG_SCHED_WALT=y
+CONFIG_RCU_EXPERT=y
+CONFIG_RCU_FAST_NO_HZ=y
+CONFIG_RCU_NOCB_CPU=y
+CONFIG_RCU_NOCB_CPU_ALL=y
+CONFIG_IKCONFIG=y
+CONFIG_IKCONFIG_PROC=y
+CONFIG_LOG_CPU_MAX_BUF_SHIFT=17
+CONFIG_CGROUP_FREEZER=y
+CONFIG_CPUSETS=y
+CONFIG_CGROUP_CPUACCT=y
+CONFIG_CGROUP_SCHEDTUNE=y
+CONFIG_RT_GROUP_SCHED=y
+CONFIG_CGROUP_BPF=y
+CONFIG_SCHED_CORE_CTL=y
+CONFIG_NAMESPACES=y
+# CONFIG_UTS_NS is not set
+# CONFIG_PID_NS is not set
+CONFIG_SCHED_AUTOGROUP=y
+CONFIG_SCHED_TUNE=y
+CONFIG_BLK_DEV_INITRD=y
+# CONFIG_RD_XZ is not set
+# CONFIG_RD_LZO is not set
+# CONFIG_RD_LZ4 is not set
+CONFIG_KALLSYMS_ALL=y
+CONFIG_BPF_SYSCALL=y
+# CONFIG_AIO is not set
+# CONFIG_MEMBARRIER is not set
+CONFIG_EMBEDDED=y
+# CONFIG_COMPAT_BRK is not set
+CONFIG_PROFILING=y
+CONFIG_CC_STACKPROTECTOR_STRONG=y
+CONFIG_ARCH_MMAP_RND_BITS=16
+CONFIG_MODULES=y
+CONFIG_MODULE_UNLOAD=y
+CONFIG_MODULE_FORCE_UNLOAD=y
+CONFIG_MODVERSIONS=y
+CONFIG_MODULE_SIG=y
+CONFIG_MODULE_SIG_FORCE=y
+CONFIG_MODULE_SIG_SHA512=y
+CONFIG_PARTITION_ADVANCED=y
+# CONFIG_IOSCHED_DEADLINE is not set
+CONFIG_ARCH_QCOM=y
+CONFIG_ARCH_MSM8953=y
+CONFIG_ARCH_SDM450=y
+# CONFIG_VDSO is not set
+CONFIG_SMP=y
+CONFIG_SCHED_MC=y
+CONFIG_NR_CPUS=8
+CONFIG_ARM_PSCI=y
+CONFIG_PREEMPT=y
+CONFIG_AEABI=y
+CONFIG_HIGHMEM=y
+CONFIG_CMA=y
+CONFIG_CMA_DEBUGFS=y
+CONFIG_ZSMALLOC=y
+CONFIG_BALANCE_ANON_FILE_RECLAIM=y
+CONFIG_SECCOMP=y
+CONFIG_BUILD_ARM_APPENDED_DTB_IMAGE=y
+CONFIG_CPU_FREQ=y
+CONFIG_CPU_FREQ_GOV_POWERSAVE=y
+CONFIG_CPU_FREQ_GOV_USERSPACE=y
+CONFIG_CPU_FREQ_GOV_ONDEMAND=y
+CONFIG_CPU_FREQ_GOV_INTERACTIVE=y
+CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
+CONFIG_CPU_BOOST=y
+CONFIG_CPU_FREQ_GOV_SCHEDUTIL=y
+CONFIG_CPU_IDLE=y
+CONFIG_VFP=y
+CONFIG_NEON=y
+CONFIG_KERNEL_MODE_NEON=y
+# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
+CONFIG_PM_AUTOSLEEP=y
+CONFIG_PM_WAKELOCKS=y
+CONFIG_PM_WAKELOCKS_LIMIT=0
+# CONFIG_PM_WAKELOCKS_GC is not set
+CONFIG_NET=y
+CONFIG_PACKET=y
+CONFIG_UNIX=y
+CONFIG_XFRM_USER=y
+CONFIG_XFRM_STATISTICS=y
+CONFIG_NET_KEY=y
+CONFIG_INET=y
+CONFIG_IP_MULTICAST=y
+CONFIG_IP_ADVANCED_ROUTER=y
+CONFIG_IP_MULTIPLE_TABLES=y
+CONFIG_IP_ROUTE_VERBOSE=y
+CONFIG_IP_PNP=y
+CONFIG_IP_PNP_DHCP=y
+CONFIG_INET_AH=y
+CONFIG_INET_ESP=y
+CONFIG_INET_IPCOMP=y
+CONFIG_INET_DIAG_DESTROY=y
+CONFIG_IPV6_ROUTER_PREF=y
+CONFIG_IPV6_ROUTE_INFO=y
+CONFIG_IPV6_OPTIMISTIC_DAD=y
+CONFIG_INET6_AH=y
+CONFIG_INET6_ESP=y
+CONFIG_INET6_IPCOMP=y
+CONFIG_IPV6_MIP6=y
+CONFIG_IPV6_MULTIPLE_TABLES=y
+CONFIG_IPV6_SUBTREES=y
+CONFIG_NETFILTER=y
+CONFIG_NF_CONNTRACK=y
+CONFIG_NF_CONNTRACK_SECMARK=y
+CONFIG_NF_CONNTRACK_EVENTS=y
+CONFIG_NF_CT_PROTO_DCCP=y
+CONFIG_NF_CT_PROTO_SCTP=y
+CONFIG_NF_CT_PROTO_UDPLITE=y
+CONFIG_NF_CONNTRACK_AMANDA=y
+CONFIG_NF_CONNTRACK_FTP=y
+CONFIG_NF_CONNTRACK_H323=y
+CONFIG_NF_CONNTRACK_IRC=y
+CONFIG_NF_CONNTRACK_NETBIOS_NS=y
+CONFIG_NF_CONNTRACK_PPTP=y
+CONFIG_NF_CONNTRACK_SANE=y
+CONFIG_NF_CONNTRACK_TFTP=y
+CONFIG_NF_CT_NETLINK=y
+CONFIG_NETFILTER_XT_TARGET_CLASSIFY=y
+CONFIG_NETFILTER_XT_TARGET_CONNMARK=y
+CONFIG_NETFILTER_XT_TARGET_CONNSECMARK=y
+CONFIG_NETFILTER_XT_TARGET_IDLETIMER=y
+CONFIG_NETFILTER_XT_TARGET_HARDIDLETIMER=y
+CONFIG_NETFILTER_XT_TARGET_LOG=y
+CONFIG_NETFILTER_XT_TARGET_MARK=y
+CONFIG_NETFILTER_XT_TARGET_NFLOG=y
+CONFIG_NETFILTER_XT_TARGET_NFQUEUE=y
+CONFIG_NETFILTER_XT_TARGET_NOTRACK=y
+CONFIG_NETFILTER_XT_TARGET_TEE=y
+CONFIG_NETFILTER_XT_TARGET_TPROXY=y
+CONFIG_NETFILTER_XT_TARGET_TRACE=y
+CONFIG_NETFILTER_XT_TARGET_SECMARK=y
+CONFIG_NETFILTER_XT_TARGET_TCPMSS=y
+CONFIG_NETFILTER_XT_MATCH_COMMENT=y
+CONFIG_NETFILTER_XT_MATCH_CONNLIMIT=y
+CONFIG_NETFILTER_XT_MATCH_CONNMARK=y
+CONFIG_NETFILTER_XT_MATCH_CONNTRACK=y
+CONFIG_NETFILTER_XT_MATCH_DSCP=y
+CONFIG_NETFILTER_XT_MATCH_ESP=y
+CONFIG_NETFILTER_XT_MATCH_HASHLIMIT=y
+CONFIG_NETFILTER_XT_MATCH_HELPER=y
+CONFIG_NETFILTER_XT_MATCH_IPRANGE=y
+# CONFIG_NETFILTER_XT_MATCH_L2TP is not set
+CONFIG_NETFILTER_XT_MATCH_LENGTH=y
+CONFIG_NETFILTER_XT_MATCH_LIMIT=y
+CONFIG_NETFILTER_XT_MATCH_MAC=y
+CONFIG_NETFILTER_XT_MATCH_MARK=y
+CONFIG_NETFILTER_XT_MATCH_MULTIPORT=y
+CONFIG_NETFILTER_XT_MATCH_POLICY=y
+CONFIG_NETFILTER_XT_MATCH_PKTTYPE=y
+CONFIG_NETFILTER_XT_MATCH_QTAGUID=y
+CONFIG_NETFILTER_XT_MATCH_QUOTA=y
+CONFIG_NETFILTER_XT_MATCH_QUOTA2=y
+CONFIG_NETFILTER_XT_MATCH_SOCKET=y
+CONFIG_NETFILTER_XT_MATCH_STATE=y
+CONFIG_NETFILTER_XT_MATCH_STATISTIC=y
+CONFIG_NETFILTER_XT_MATCH_STRING=y
+CONFIG_NETFILTER_XT_MATCH_TIME=y
+CONFIG_NETFILTER_XT_MATCH_U32=y
+CONFIG_NF_CONNTRACK_IPV4=y
+CONFIG_IP_NF_IPTABLES=y
+CONFIG_IP_NF_MATCH_AH=y
+CONFIG_IP_NF_MATCH_ECN=y
+CONFIG_IP_NF_MATCH_RPFILTER=y
+CONFIG_IP_NF_MATCH_TTL=y
+CONFIG_IP_NF_FILTER=y
+CONFIG_IP_NF_TARGET_REJECT=y
+CONFIG_IP_NF_NAT=y
+CONFIG_IP_NF_TARGET_MASQUERADE=y
+CONFIG_IP_NF_TARGET_NETMAP=y
+CONFIG_IP_NF_TARGET_REDIRECT=y
+CONFIG_IP_NF_MANGLE=y
+CONFIG_IP_NF_RAW=y
+CONFIG_IP_NF_SECURITY=y
+CONFIG_IP_NF_ARPTABLES=y
+CONFIG_IP_NF_ARPFILTER=y
+CONFIG_IP_NF_ARP_MANGLE=y
+CONFIG_NF_CONNTRACK_IPV6=y
+CONFIG_IP6_NF_IPTABLES=y
+CONFIG_IP6_NF_MATCH_RPFILTER=y
+CONFIG_IP6_NF_FILTER=y
+CONFIG_IP6_NF_TARGET_REJECT=y
+CONFIG_IP6_NF_MANGLE=y
+CONFIG_IP6_NF_RAW=y
+CONFIG_BRIDGE_NF_EBTABLES=y
+CONFIG_BRIDGE_EBT_BROUTE=y
+CONFIG_L2TP=y
+CONFIG_L2TP_DEBUGFS=y
+CONFIG_L2TP_V3=y
+CONFIG_L2TP_IP=y
+CONFIG_L2TP_ETH=y
+CONFIG_BRIDGE=y
+CONFIG_NET_SCHED=y
+CONFIG_NET_SCH_HTB=y
+CONFIG_NET_SCH_PRIO=y
+CONFIG_NET_SCH_MULTIQ=y
+CONFIG_NET_SCH_INGRESS=y
+CONFIG_NET_CLS_FW=y
+CONFIG_NET_CLS_U32=y
+CONFIG_CLS_U32_MARK=y
+CONFIG_NET_CLS_FLOW=y
+CONFIG_NET_EMATCH=y
+CONFIG_NET_EMATCH_CMP=y
+CONFIG_NET_EMATCH_NBYTE=y
+CONFIG_NET_EMATCH_U32=y
+CONFIG_NET_EMATCH_META=y
+CONFIG_NET_EMATCH_TEXT=y
+CONFIG_NET_CLS_ACT=y
+CONFIG_NET_ACT_GACT=y
+CONFIG_NET_ACT_MIRRED=y
+CONFIG_NET_ACT_SKBEDIT=y
+CONFIG_RMNET_DATA=y
+CONFIG_RMNET_DATA_FC=y
+CONFIG_RMNET_DATA_DEBUG_PKT=y
+CONFIG_BT=y
+CONFIG_MSM_BT_POWER=y
+CONFIG_CFG80211=y
+CONFIG_CFG80211_INTERNAL_REGDB=y
+# CONFIG_CFG80211_CRDA_SUPPORT is not set
+CONFIG_RFKILL=y
+CONFIG_NFC_NQ=y
+CONFIG_IPC_ROUTER=y
+CONFIG_IPC_ROUTER_SECURITY=y
+CONFIG_FW_LOADER_USER_HELPER_FALLBACK=y
+CONFIG_DMA_CMA=y
+CONFIG_ZRAM=y
+CONFIG_BLK_DEV_LOOP=y
+CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_RAM_SIZE=8192
+CONFIG_QSEECOM=y
+CONFIG_MEMORY_STATE_TIME=y
+CONFIG_SCSI=y
+CONFIG_BLK_DEV_SD=y
+CONFIG_CHR_DEV_SG=y
+CONFIG_CHR_DEV_SCH=y
+CONFIG_SCSI_CONSTANTS=y
+CONFIG_SCSI_LOGGING=y
+CONFIG_SCSI_SCAN_ASYNC=y
+CONFIG_SCSI_UFSHCD=y
+CONFIG_SCSI_UFSHCD_PLATFORM=y
+CONFIG_SCSI_UFS_QCOM=y
+CONFIG_SCSI_UFS_QCOM_ICE=y
+CONFIG_SCSI_UFSHCD_CMD_LOGGING=y
+CONFIG_MD=y
+CONFIG_BLK_DEV_DM=y
+CONFIG_DM_DEBUG=y
+CONFIG_DM_CRYPT=y
+CONFIG_DM_REQ_CRYPT=y
+CONFIG_DM_UEVENT=y
+CONFIG_DM_VERITY=y
+CONFIG_DM_VERITY_FEC=y
+CONFIG_NETDEVICES=y
+CONFIG_DUMMY=y
+CONFIG_TUN=y
+CONFIG_PPP=y
+CONFIG_PPP_BSDCOMP=y
+CONFIG_PPP_DEFLATE=y
+CONFIG_PPP_FILTER=y
+CONFIG_PPP_MPPE=y
+CONFIG_PPP_MULTILINK=y
+CONFIG_PPPOE=y
+CONFIG_PPPOL2TP=y
+CONFIG_PPPOLAC=y
+CONFIG_PPPOPNS=y
+CONFIG_PPP_ASYNC=y
+CONFIG_PPP_SYNC_TTY=y
+CONFIG_WCNSS_MEM_PRE_ALLOC=y
+CONFIG_CLD_LL_CORE=y
+CONFIG_INPUT_EVDEV=y
+CONFIG_KEYBOARD_GPIO=y
+# CONFIG_INPUT_MOUSE is not set
+CONFIG_INPUT_JOYSTICK=y
+CONFIG_INPUT_TOUCHSCREEN=y
+CONFIG_INPUT_MISC=y
+CONFIG_INPUT_UINPUT=y
+# CONFIG_SERIO_SERPORT is not set
+# CONFIG_VT is not set
+# CONFIG_LEGACY_PTYS is not set
+CONFIG_SERIAL_MSM=y
+CONFIG_SERIAL_MSM_CONSOLE=y
+CONFIG_HW_RANDOM=y
+CONFIG_HW_RANDOM_MSM_LEGACY=y
+CONFIG_MSM_ADSPRPC=y
+CONFIG_MSM_RDBG=m
+CONFIG_I2C_CHARDEV=y
+CONFIG_SPI=y
+CONFIG_SPI_QUP=y
+CONFIG_SPI_SPIDEV=y
+CONFIG_SLIMBUS_MSM_NGD=y
+CONFIG_SPMI=y
+CONFIG_SPMI_MSM_PMIC_ARB_DEBUG=y
+CONFIG_PINCTRL_MSM8953=y
+CONFIG_PINCTRL_QCOM_SPMI_PMIC=y
+CONFIG_GPIO_SYSFS=y
+CONFIG_POWER_SUPPLY=y
+CONFIG_SMB135X_CHARGER=y
+CONFIG_SMB1351_USB_CHARGER=y
+CONFIG_SENSORS_QPNP_ADC_VOLTAGE=y
+CONFIG_THERMAL=y
+CONFIG_THERMAL_QPNP=y
+CONFIG_THERMAL_QPNP_ADC_TM=y
+CONFIG_THERMAL_TSENS=y
+CONFIG_MSM_BCL_PERIPHERAL_CTL=y
+CONFIG_QTI_THERMAL_LIMITS_DCVS=y
+CONFIG_REGULATOR=y
+CONFIG_REGULATOR_FIXED_VOLTAGE=y
+CONFIG_REGULATOR_PROXY_CONSUMER=y
+CONFIG_REGULATOR_CPR4_APSS=y
+CONFIG_REGULATOR_MEM_ACC=y
+CONFIG_REGULATOR_MSM_GFX_LDO=y
+CONFIG_REGULATOR_QPNP_LABIBB=y
+CONFIG_REGULATOR_QPNP=y
+CONFIG_MEDIA_SUPPORT=y
+CONFIG_MEDIA_CAMERA_SUPPORT=y
+CONFIG_MEDIA_CONTROLLER=y
+CONFIG_VIDEO_V4L2_SUBDEV_API=y
+CONFIG_V4L_PLATFORM_DRIVERS=y
+CONFIG_FB=y
+CONFIG_BACKLIGHT_LCD_SUPPORT=y
+CONFIG_BACKLIGHT_CLASS_DEVICE=y
+CONFIG_LOGO=y
+# CONFIG_LOGO_LINUX_MONO is not set
+# CONFIG_LOGO_LINUX_VGA16 is not set
+CONFIG_SOUND=y
+CONFIG_SND=y
+CONFIG_SND_DYNAMIC_MINORS=y
+CONFIG_SND_SOC=y
+CONFIG_UHID=y
+CONFIG_HID_APPLE=y
+CONFIG_HID_ELECOM=y
+CONFIG_HID_MAGICMOUSE=y
+CONFIG_HID_MICROSOFT=y
+CONFIG_HID_MULTITOUCH=y
+CONFIG_USB_DWC3=y
+CONFIG_NOP_USB_XCEIV=y
+CONFIG_DUAL_ROLE_USB_INTF=y
+CONFIG_USB_MSM_SSPHY_QMP=y
+CONFIG_MSM_QUSB_PHY=y
+CONFIG_USB_GADGET=y
+CONFIG_USB_GADGET_DEBUG_FILES=y
+CONFIG_USB_GADGET_DEBUG_FS=y
+CONFIG_USB_GADGET_VBUS_DRAW=500
+CONFIG_MMC=y
+CONFIG_MMC_PERF_PROFILING=y
+CONFIG_MMC_PARANOID_SD_INIT=y
+CONFIG_MMC_CLKGATE=y
+CONFIG_MMC_BLOCK_MINORS=32
+CONFIG_MMC_BLOCK_DEFERRED_RESUME=y
+CONFIG_MMC_TEST=y
+CONFIG_MMC_SDHCI=y
+CONFIG_MMC_SDHCI_PLTFM=y
+CONFIG_MMC_SDHCI_MSM=y
+CONFIG_NEW_LEDS=y
+CONFIG_LEDS_CLASS=y
+CONFIG_LEDS_QPNP=y
+CONFIG_LEDS_QPNP_WLED=y
+CONFIG_LEDS_TRIGGERS=y
+CONFIG_EDAC=y
+CONFIG_EDAC_MM_EDAC=y
+CONFIG_RTC_CLASS=y
+CONFIG_RTC_DRV_QPNP=y
+CONFIG_DMADEVICES=y
+CONFIG_QCOM_SPS_DMA=y
+CONFIG_UIO=y
+CONFIG_UIO_MSM_SHAREDMEM=y
+CONFIG_STAGING=y
+CONFIG_ASHMEM=y
+CONFIG_ANDROID_LOW_MEMORY_KILLER=y
+CONFIG_ION=y
+CONFIG_ION_MSM=y
+CONFIG_GSI=y
+CONFIG_IPA3=y
+CONFIG_RMNET_IPA3=y
+CONFIG_RNDIS_IPA=y
+CONFIG_SPS=y
+CONFIG_SPS_SUPPORT_NDP_BAM=y
+CONFIG_QPNP_COINCELL=y
+CONFIG_QPNP_REVID=y
+CONFIG_USB_BAM=y
+CONFIG_REMOTE_SPINLOCK_MSM=y
+CONFIG_MAILBOX=y
+CONFIG_MSM_BOOT_STATS=y
+CONFIG_QCOM_WATCHDOG_V2=y
+CONFIG_QCOM_MEMORY_DUMP_V2=y
+CONFIG_QCOM_SECURE_BUFFER=y
+CONFIG_QCOM_EARLY_RANDOM=y
+CONFIG_MSM_SMEM=y
+CONFIG_MSM_GLINK=y
+CONFIG_MSM_GLINK_LOOPBACK_SERVER=y
+CONFIG_MSM_GLINK_SMEM_NATIVE_XPRT=y
+CONFIG_MSM_GLINK_SPI_XPRT=y
+CONFIG_MSM_SMP2P=y
+CONFIG_MSM_IPC_ROUTER_GLINK_XPRT=y
+CONFIG_MSM_QMI_INTERFACE=y
+CONFIG_MSM_GLINK_PKT=y
+CONFIG_MSM_SUBSYSTEM_RESTART=y
+CONFIG_MSM_PIL=y
+CONFIG_MSM_PIL_SSR_GENERIC=y
+CONFIG_MSM_PIL_MSS_QDSP6V5=y
+CONFIG_ICNSS=y
+CONFIG_MSM_PERFORMANCE=y
+CONFIG_MSM_EVENT_TIMER=y
+CONFIG_MSM_PM=y
+CONFIG_QTI_RPM_STATS_LOG=y
+CONFIG_QCOM_FORCE_WDOG_BITE_ON_PANIC=y
+CONFIG_PWM=y
+CONFIG_PWM_QPNP=y
+CONFIG_ANDROID=y
+CONFIG_ANDROID_BINDER_IPC=y
+CONFIG_SENSORS_SSC=y
+CONFIG_MSM_TZ_LOG=y
+CONFIG_EXT2_FS=y
+CONFIG_EXT2_FS_XATTR=y
+CONFIG_EXT3_FS=y
+CONFIG_EXT4_FS_SECURITY=y
+CONFIG_QUOTA=y
+CONFIG_QUOTA_NETLINK_INTERFACE=y
+CONFIG_FUSE_FS=y
+CONFIG_MSDOS_FS=y
+CONFIG_VFAT_FS=y
+CONFIG_ECRYPT_FS=y
+CONFIG_ECRYPT_FS_MESSAGING=y
+# CONFIG_NETWORK_FILESYSTEMS is not set
+CONFIG_NLS_CODEPAGE_437=y
+CONFIG_NLS_ISO8859_1=y
+CONFIG_PRINTK_TIME=y
+CONFIG_DEBUG_INFO=y
+CONFIG_FRAME_WARN=2048
+CONFIG_MAGIC_SYSRQ=y
+CONFIG_PANIC_TIMEOUT=5
+CONFIG_SCHEDSTATS=y
+CONFIG_SCHED_STACK_END_CHECK=y
+# CONFIG_DEBUG_PREEMPT is not set
+CONFIG_IPC_LOGGING=y
+CONFIG_CPU_FREQ_SWITCH_PROFILER=y
+CONFIG_CORESIGHT=y
+CONFIG_CORESIGHT_REMOTE_ETM=y
+CONFIG_CORESIGHT_REMOTE_ETM_DEFAULT_ENABLE=0
+CONFIG_CORESIGHT_STM=y
+CONFIG_CORESIGHT_TPDA=y
+CONFIG_CORESIGHT_TPDM=y
+CONFIG_CORESIGHT_CTI=y
+CONFIG_CORESIGHT_EVENT=y
+CONFIG_CORESIGHT_HWEVENT=y
+CONFIG_SECURITY_PERF_EVENTS_RESTRICT=y
+CONFIG_SECURITY=y
+CONFIG_LSM_MMAP_MIN_ADDR=4096
+CONFIG_HARDENED_USERCOPY=y
+CONFIG_SECURITY_SELINUX=y
+CONFIG_SECURITY_SMACK=y
+CONFIG_CRYPTO_CTR=y
+CONFIG_CRYPTO_XCBC=y
+CONFIG_CRYPTO_MD4=y
+CONFIG_CRYPTO_TWOFISH=y
+CONFIG_CRYPTO_ANSI_CPRNG=y
+CONFIG_CRYPTO_DEV_QCOM_MSM_QCE=y
+CONFIG_CRYPTO_DEV_QCRYPTO=y
+CONFIG_CRYPTO_DEV_QCEDEV=y
+CONFIG_CRYPTO_DEV_OTA_CRYPTO=y
+CONFIG_CRYPTO_DEV_QCOM_ICE=y
+CONFIG_ARM_CRYPTO=y
+CONFIG_CRYPTO_SHA1_ARM_NEON=y
+CONFIG_CRYPTO_SHA2_ARM_CE=y
+CONFIG_CRYPTO_AES_ARM_BS=y
+CONFIG_CRYPTO_AES_ARM_CE=y
+CONFIG_QMI_ENCDEC=y
diff --git a/arch/arm/configs/msm8953_defconfig b/arch/arm/configs/msm8953_defconfig
new file mode 100644
index 0000000..46f3e0c
--- /dev/null
+++ b/arch/arm/configs/msm8953_defconfig
@@ -0,0 +1,533 @@
+# CONFIG_LOCALVERSION_AUTO is not set
+# CONFIG_FHANDLE is not set
+CONFIG_AUDIT=y
+CONFIG_NO_HZ=y
+CONFIG_HIGH_RES_TIMERS=y
+CONFIG_IRQ_TIME_ACCOUNTING=y
+CONFIG_SCHED_WALT=y
+CONFIG_TASKSTATS=y
+CONFIG_TASK_DELAY_ACCT=y
+CONFIG_TASK_XACCT=y
+CONFIG_TASK_IO_ACCOUNTING=y
+CONFIG_RCU_EXPERT=y
+CONFIG_RCU_FAST_NO_HZ=y
+CONFIG_RCU_NOCB_CPU=y
+CONFIG_RCU_NOCB_CPU_ALL=y
+CONFIG_IKCONFIG=y
+CONFIG_IKCONFIG_PROC=y
+CONFIG_LOG_CPU_MAX_BUF_SHIFT=17
+CONFIG_CGROUP_DEBUG=y
+CONFIG_CGROUP_FREEZER=y
+CONFIG_CPUSETS=y
+CONFIG_CGROUP_CPUACCT=y
+CONFIG_CGROUP_SCHEDTUNE=y
+CONFIG_RT_GROUP_SCHED=y
+CONFIG_CGROUP_BPF=y
+CONFIG_SCHED_CORE_CTL=y
+CONFIG_NAMESPACES=y
+# CONFIG_UTS_NS is not set
+# CONFIG_PID_NS is not set
+CONFIG_SCHED_AUTOGROUP=y
+CONFIG_SCHED_TUNE=y
+CONFIG_BLK_DEV_INITRD=y
+# CONFIG_RD_XZ is not set
+# CONFIG_RD_LZO is not set
+# CONFIG_RD_LZ4 is not set
+CONFIG_KALLSYMS_ALL=y
+CONFIG_BPF_SYSCALL=y
+# CONFIG_AIO is not set
+# CONFIG_MEMBARRIER is not set
+CONFIG_EMBEDDED=y
+# CONFIG_COMPAT_BRK is not set
+CONFIG_PROFILING=y
+CONFIG_OPROFILE=m
+CONFIG_KPROBES=y
+CONFIG_CC_STACKPROTECTOR_STRONG=y
+CONFIG_ARCH_MMAP_RND_BITS=16
+CONFIG_MODULES=y
+CONFIG_MODULE_UNLOAD=y
+CONFIG_MODULE_FORCE_UNLOAD=y
+CONFIG_MODVERSIONS=y
+CONFIG_MODULE_SIG=y
+CONFIG_MODULE_SIG_FORCE=y
+CONFIG_MODULE_SIG_SHA512=y
+CONFIG_PARTITION_ADVANCED=y
+# CONFIG_IOSCHED_DEADLINE is not set
+CONFIG_ARCH_QCOM=y
+CONFIG_ARCH_MSM8953=y
+CONFIG_ARCH_SDM450=y
+# CONFIG_VDSO is not set
+CONFIG_SMP=y
+CONFIG_SCHED_MC=y
+CONFIG_NR_CPUS=8
+CONFIG_ARM_PSCI=y
+CONFIG_PREEMPT=y
+CONFIG_AEABI=y
+CONFIG_HIGHMEM=y
+CONFIG_CMA=y
+CONFIG_CMA_DEBUGFS=y
+CONFIG_ZSMALLOC=y
+CONFIG_BALANCE_ANON_FILE_RECLAIM=y
+CONFIG_SECCOMP=y
+CONFIG_BUILD_ARM_APPENDED_DTB_IMAGE=y
+CONFIG_CPU_FREQ=y
+CONFIG_CPU_FREQ_GOV_POWERSAVE=y
+CONFIG_CPU_FREQ_GOV_USERSPACE=y
+CONFIG_CPU_FREQ_GOV_ONDEMAND=y
+CONFIG_CPU_FREQ_GOV_INTERACTIVE=y
+CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
+CONFIG_CPU_BOOST=y
+CONFIG_CPU_FREQ_GOV_SCHEDUTIL=y
+CONFIG_CPU_IDLE=y
+CONFIG_VFP=y
+CONFIG_NEON=y
+CONFIG_KERNEL_MODE_NEON=y
+# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
+CONFIG_PM_AUTOSLEEP=y
+CONFIG_PM_WAKELOCKS=y
+CONFIG_PM_WAKELOCKS_LIMIT=0
+# CONFIG_PM_WAKELOCKS_GC is not set
+CONFIG_PM_DEBUG=y
+CONFIG_NET=y
+CONFIG_PACKET=y
+CONFIG_UNIX=y
+CONFIG_XFRM_USER=y
+CONFIG_XFRM_STATISTICS=y
+CONFIG_NET_KEY=y
+CONFIG_INET=y
+CONFIG_IP_MULTICAST=y
+CONFIG_IP_ADVANCED_ROUTER=y
+CONFIG_IP_MULTIPLE_TABLES=y
+CONFIG_IP_ROUTE_VERBOSE=y
+CONFIG_IP_PNP=y
+CONFIG_IP_PNP_DHCP=y
+CONFIG_INET_AH=y
+CONFIG_INET_ESP=y
+CONFIG_INET_IPCOMP=y
+CONFIG_INET_DIAG_DESTROY=y
+CONFIG_IPV6_ROUTER_PREF=y
+CONFIG_IPV6_ROUTE_INFO=y
+CONFIG_IPV6_OPTIMISTIC_DAD=y
+CONFIG_INET6_AH=y
+CONFIG_INET6_ESP=y
+CONFIG_INET6_IPCOMP=y
+CONFIG_IPV6_MIP6=y
+CONFIG_IPV6_MULTIPLE_TABLES=y
+CONFIG_IPV6_SUBTREES=y
+CONFIG_NETFILTER=y
+CONFIG_NF_CONNTRACK=y
+CONFIG_NF_CONNTRACK_SECMARK=y
+CONFIG_NF_CONNTRACK_EVENTS=y
+CONFIG_NF_CT_PROTO_DCCP=y
+CONFIG_NF_CT_PROTO_SCTP=y
+CONFIG_NF_CT_PROTO_UDPLITE=y
+CONFIG_NF_CONNTRACK_AMANDA=y
+CONFIG_NF_CONNTRACK_FTP=y
+CONFIG_NF_CONNTRACK_H323=y
+CONFIG_NF_CONNTRACK_IRC=y
+CONFIG_NF_CONNTRACK_NETBIOS_NS=y
+CONFIG_NF_CONNTRACK_PPTP=y
+CONFIG_NF_CONNTRACK_SANE=y
+CONFIG_NF_CONNTRACK_TFTP=y
+CONFIG_NF_CT_NETLINK=y
+CONFIG_NETFILTER_XT_TARGET_CLASSIFY=y
+CONFIG_NETFILTER_XT_TARGET_CONNMARK=y
+CONFIG_NETFILTER_XT_TARGET_CONNSECMARK=y
+CONFIG_NETFILTER_XT_TARGET_IDLETIMER=y
+CONFIG_NETFILTER_XT_TARGET_HARDIDLETIMER=y
+CONFIG_NETFILTER_XT_TARGET_LOG=y
+CONFIG_NETFILTER_XT_TARGET_MARK=y
+CONFIG_NETFILTER_XT_TARGET_NFLOG=y
+CONFIG_NETFILTER_XT_TARGET_NFQUEUE=y
+CONFIG_NETFILTER_XT_TARGET_NOTRACK=y
+CONFIG_NETFILTER_XT_TARGET_TEE=y
+CONFIG_NETFILTER_XT_TARGET_TPROXY=y
+CONFIG_NETFILTER_XT_TARGET_TRACE=y
+CONFIG_NETFILTER_XT_TARGET_SECMARK=y
+CONFIG_NETFILTER_XT_TARGET_TCPMSS=y
+CONFIG_NETFILTER_XT_MATCH_COMMENT=y
+CONFIG_NETFILTER_XT_MATCH_CONNLIMIT=y
+CONFIG_NETFILTER_XT_MATCH_CONNMARK=y
+CONFIG_NETFILTER_XT_MATCH_CONNTRACK=y
+CONFIG_NETFILTER_XT_MATCH_DSCP=y
+CONFIG_NETFILTER_XT_MATCH_ESP=y
+CONFIG_NETFILTER_XT_MATCH_HASHLIMIT=y
+CONFIG_NETFILTER_XT_MATCH_HELPER=y
+CONFIG_NETFILTER_XT_MATCH_IPRANGE=y
+# CONFIG_NETFILTER_XT_MATCH_L2TP is not set
+CONFIG_NETFILTER_XT_MATCH_LENGTH=y
+CONFIG_NETFILTER_XT_MATCH_LIMIT=y
+CONFIG_NETFILTER_XT_MATCH_MAC=y
+CONFIG_NETFILTER_XT_MATCH_MARK=y
+CONFIG_NETFILTER_XT_MATCH_MULTIPORT=y
+CONFIG_NETFILTER_XT_MATCH_POLICY=y
+CONFIG_NETFILTER_XT_MATCH_PKTTYPE=y
+CONFIG_NETFILTER_XT_MATCH_QTAGUID=y
+CONFIG_NETFILTER_XT_MATCH_QUOTA=y
+CONFIG_NETFILTER_XT_MATCH_QUOTA2=y
+CONFIG_NETFILTER_XT_MATCH_SOCKET=y
+CONFIG_NETFILTER_XT_MATCH_STATE=y
+CONFIG_NETFILTER_XT_MATCH_STATISTIC=y
+CONFIG_NETFILTER_XT_MATCH_STRING=y
+CONFIG_NETFILTER_XT_MATCH_TIME=y
+CONFIG_NETFILTER_XT_MATCH_U32=y
+CONFIG_NF_CONNTRACK_IPV4=y
+CONFIG_IP_NF_IPTABLES=y
+CONFIG_IP_NF_MATCH_AH=y
+CONFIG_IP_NF_MATCH_ECN=y
+CONFIG_IP_NF_MATCH_RPFILTER=y
+CONFIG_IP_NF_MATCH_TTL=y
+CONFIG_IP_NF_FILTER=y
+CONFIG_IP_NF_TARGET_REJECT=y
+CONFIG_IP_NF_NAT=y
+CONFIG_IP_NF_TARGET_MASQUERADE=y
+CONFIG_IP_NF_TARGET_NETMAP=y
+CONFIG_IP_NF_TARGET_REDIRECT=y
+CONFIG_IP_NF_MANGLE=y
+CONFIG_IP_NF_RAW=y
+CONFIG_IP_NF_SECURITY=y
+CONFIG_IP_NF_ARPTABLES=y
+CONFIG_IP_NF_ARPFILTER=y
+CONFIG_IP_NF_ARP_MANGLE=y
+CONFIG_NF_CONNTRACK_IPV6=y
+CONFIG_IP6_NF_IPTABLES=y
+CONFIG_IP6_NF_MATCH_RPFILTER=y
+CONFIG_IP6_NF_FILTER=y
+CONFIG_IP6_NF_TARGET_REJECT=y
+CONFIG_IP6_NF_MANGLE=y
+CONFIG_IP6_NF_RAW=y
+CONFIG_BRIDGE_NF_EBTABLES=y
+CONFIG_BRIDGE_EBT_BROUTE=y
+CONFIG_L2TP=y
+CONFIG_L2TP_DEBUGFS=y
+CONFIG_L2TP_V3=y
+CONFIG_L2TP_IP=y
+CONFIG_L2TP_ETH=y
+CONFIG_BRIDGE=y
+CONFIG_NET_SCHED=y
+CONFIG_NET_SCH_HTB=y
+CONFIG_NET_SCH_PRIO=y
+CONFIG_NET_SCH_MULTIQ=y
+CONFIG_NET_SCH_INGRESS=y
+CONFIG_NET_CLS_FW=y
+CONFIG_NET_CLS_U32=y
+CONFIG_CLS_U32_MARK=y
+CONFIG_NET_CLS_FLOW=y
+CONFIG_NET_EMATCH=y
+CONFIG_NET_EMATCH_CMP=y
+CONFIG_NET_EMATCH_NBYTE=y
+CONFIG_NET_EMATCH_U32=y
+CONFIG_NET_EMATCH_META=y
+CONFIG_NET_EMATCH_TEXT=y
+CONFIG_NET_CLS_ACT=y
+CONFIG_NET_ACT_GACT=y
+CONFIG_NET_ACT_MIRRED=y
+CONFIG_NET_ACT_SKBEDIT=y
+CONFIG_DNS_RESOLVER=y
+CONFIG_RMNET_DATA=y
+CONFIG_RMNET_DATA_FC=y
+CONFIG_RMNET_DATA_DEBUG_PKT=y
+CONFIG_BT=y
+CONFIG_MSM_BT_POWER=y
+CONFIG_CFG80211=y
+CONFIG_CFG80211_INTERNAL_REGDB=y
+# CONFIG_CFG80211_CRDA_SUPPORT is not set
+CONFIG_RFKILL=y
+CONFIG_NFC_NQ=y
+CONFIG_IPC_ROUTER=y
+CONFIG_IPC_ROUTER_SECURITY=y
+CONFIG_FW_LOADER_USER_HELPER_FALLBACK=y
+CONFIG_DMA_CMA=y
+CONFIG_ZRAM=y
+CONFIG_BLK_DEV_LOOP=y
+CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_RAM_SIZE=8192
+CONFIG_HDCP_QSEECOM=y
+CONFIG_QSEECOM=y
+CONFIG_UID_SYS_STATS=y
+CONFIG_MEMORY_STATE_TIME=y
+CONFIG_SCSI=y
+CONFIG_BLK_DEV_SD=y
+CONFIG_CHR_DEV_SG=y
+CONFIG_CHR_DEV_SCH=y
+CONFIG_SCSI_CONSTANTS=y
+CONFIG_SCSI_LOGGING=y
+CONFIG_SCSI_SCAN_ASYNC=y
+CONFIG_SCSI_UFSHCD=y
+CONFIG_SCSI_UFSHCD_PLATFORM=y
+CONFIG_SCSI_UFS_QCOM=y
+CONFIG_SCSI_UFS_QCOM_ICE=y
+CONFIG_SCSI_UFSHCD_CMD_LOGGING=y
+CONFIG_MD=y
+CONFIG_BLK_DEV_DM=y
+CONFIG_DM_DEBUG=y
+CONFIG_DM_CRYPT=y
+CONFIG_DM_REQ_CRYPT=y
+CONFIG_DM_UEVENT=y
+CONFIG_DM_VERITY=y
+CONFIG_DM_VERITY_FEC=y
+CONFIG_NETDEVICES=y
+CONFIG_DUMMY=y
+CONFIG_TUN=y
+CONFIG_PPP=y
+CONFIG_PPP_BSDCOMP=y
+CONFIG_PPP_DEFLATE=y
+CONFIG_PPP_FILTER=y
+CONFIG_PPP_MPPE=y
+CONFIG_PPP_MULTILINK=y
+CONFIG_PPPOE=y
+CONFIG_PPPOL2TP=y
+CONFIG_PPPOLAC=y
+CONFIG_PPPOPNS=y
+CONFIG_PPP_ASYNC=y
+CONFIG_PPP_SYNC_TTY=y
+CONFIG_WCNSS_MEM_PRE_ALLOC=y
+CONFIG_CLD_LL_CORE=y
+CONFIG_INPUT_EVDEV=y
+CONFIG_KEYBOARD_GPIO=y
+# CONFIG_INPUT_MOUSE is not set
+CONFIG_INPUT_JOYSTICK=y
+CONFIG_INPUT_TOUCHSCREEN=y
+CONFIG_INPUT_MISC=y
+CONFIG_INPUT_UINPUT=y
+# CONFIG_SERIO_SERPORT is not set
+# CONFIG_VT is not set
+# CONFIG_LEGACY_PTYS is not set
+CONFIG_SERIAL_MSM=y
+CONFIG_SERIAL_MSM_CONSOLE=y
+CONFIG_HW_RANDOM=y
+CONFIG_HW_RANDOM_MSM_LEGACY=y
+CONFIG_MSM_ADSPRPC=y
+CONFIG_MSM_RDBG=m
+CONFIG_I2C_CHARDEV=y
+CONFIG_SPI=y
+CONFIG_SPI_QUP=y
+CONFIG_SPI_SPIDEV=y
+CONFIG_SLIMBUS_MSM_NGD=y
+CONFIG_SPMI=y
+CONFIG_SPMI_MSM_PMIC_ARB_DEBUG=y
+CONFIG_PINCTRL_MSM8953=y
+CONFIG_PINCTRL_QCOM_SPMI_PMIC=y
+CONFIG_GPIO_SYSFS=y
+CONFIG_POWER_SUPPLY=y
+CONFIG_SMB135X_CHARGER=y
+CONFIG_SMB1351_USB_CHARGER=y
+CONFIG_SENSORS_QPNP_ADC_VOLTAGE=y
+CONFIG_THERMAL=y
+CONFIG_THERMAL_QPNP=y
+CONFIG_THERMAL_QPNP_ADC_TM=y
+CONFIG_THERMAL_TSENS=y
+CONFIG_MSM_BCL_PERIPHERAL_CTL=y
+CONFIG_QTI_THERMAL_LIMITS_DCVS=y
+CONFIG_REGULATOR=y
+CONFIG_REGULATOR_FIXED_VOLTAGE=y
+CONFIG_REGULATOR_PROXY_CONSUMER=y
+CONFIG_REGULATOR_CPR4_APSS=y
+CONFIG_REGULATOR_MEM_ACC=y
+CONFIG_REGULATOR_MSM_GFX_LDO=y
+CONFIG_REGULATOR_QPNP_LABIBB=y
+CONFIG_REGULATOR_QPNP=y
+CONFIG_MEDIA_SUPPORT=y
+CONFIG_MEDIA_CAMERA_SUPPORT=y
+CONFIG_MEDIA_CONTROLLER=y
+CONFIG_VIDEO_V4L2_SUBDEV_API=y
+CONFIG_V4L_PLATFORM_DRIVERS=y
+CONFIG_FB=y
+CONFIG_FB_VIRTUAL=y
+CONFIG_BACKLIGHT_LCD_SUPPORT=y
+CONFIG_BACKLIGHT_CLASS_DEVICE=y
+CONFIG_LOGO=y
+# CONFIG_LOGO_LINUX_MONO is not set
+# CONFIG_LOGO_LINUX_VGA16 is not set
+CONFIG_SOUND=y
+CONFIG_SND=y
+CONFIG_SND_DYNAMIC_MINORS=y
+CONFIG_SND_SOC=y
+CONFIG_UHID=y
+CONFIG_HID_APPLE=y
+CONFIG_HID_ELECOM=y
+CONFIG_HID_MAGICMOUSE=y
+CONFIG_HID_MICROSOFT=y
+CONFIG_HID_MULTITOUCH=y
+CONFIG_USB_DWC3=y
+CONFIG_NOP_USB_XCEIV=y
+CONFIG_DUAL_ROLE_USB_INTF=y
+CONFIG_USB_MSM_SSPHY_QMP=y
+CONFIG_MSM_QUSB_PHY=y
+CONFIG_USB_GADGET=y
+CONFIG_USB_GADGET_DEBUG_FILES=y
+CONFIG_USB_GADGET_DEBUG_FS=y
+CONFIG_USB_GADGET_VBUS_DRAW=500
+CONFIG_MMC=y
+CONFIG_MMC_PERF_PROFILING=y
+CONFIG_MMC_RING_BUFFER=y
+CONFIG_MMC_PARANOID_SD_INIT=y
+CONFIG_MMC_CLKGATE=y
+CONFIG_MMC_BLOCK_MINORS=32
+CONFIG_MMC_BLOCK_DEFERRED_RESUME=y
+CONFIG_MMC_TEST=y
+CONFIG_MMC_SDHCI=y
+CONFIG_MMC_SDHCI_PLTFM=y
+CONFIG_MMC_SDHCI_MSM=y
+CONFIG_NEW_LEDS=y
+CONFIG_LEDS_CLASS=y
+CONFIG_LEDS_QPNP=y
+CONFIG_LEDS_QPNP_WLED=y
+CONFIG_LEDS_TRIGGERS=y
+CONFIG_EDAC=y
+CONFIG_EDAC_MM_EDAC=y
+CONFIG_RTC_CLASS=y
+CONFIG_RTC_DRV_QPNP=y
+CONFIG_DMADEVICES=y
+CONFIG_QCOM_SPS_DMA=y
+CONFIG_UIO=y
+CONFIG_UIO_MSM_SHAREDMEM=y
+CONFIG_STAGING=y
+CONFIG_ASHMEM=y
+CONFIG_ANDROID_LOW_MEMORY_KILLER=y
+CONFIG_ION=y
+CONFIG_ION_MSM=y
+CONFIG_GSI=y
+CONFIG_IPA3=y
+CONFIG_RMNET_IPA3=y
+CONFIG_RNDIS_IPA=y
+CONFIG_SPS=y
+CONFIG_SPS_SUPPORT_NDP_BAM=y
+CONFIG_QPNP_COINCELL=y
+CONFIG_QPNP_REVID=y
+CONFIG_USB_BAM=y
+CONFIG_MSM_EXT_DISPLAY=y
+CONFIG_REMOTE_SPINLOCK_MSM=y
+CONFIG_MAILBOX=y
+# CONFIG_IOMMU_SUPPORT is not set
+CONFIG_MSM_BOOT_STATS=y
+CONFIG_MSM_CORE_HANG_DETECT=y
+CONFIG_MSM_GLADIATOR_HANG_DETECT=y
+CONFIG_QCOM_WATCHDOG_V2=y
+CONFIG_QCOM_MEMORY_DUMP_V2=y
+CONFIG_QCOM_SECURE_BUFFER=y
+CONFIG_QCOM_EARLY_RANDOM=y
+CONFIG_MSM_SMEM=y
+CONFIG_MSM_GLINK=y
+CONFIG_MSM_GLINK_LOOPBACK_SERVER=y
+CONFIG_MSM_GLINK_SMEM_NATIVE_XPRT=y
+CONFIG_MSM_GLINK_SPI_XPRT=y
+CONFIG_TRACER_PKT=y
+CONFIG_MSM_SMP2P=y
+CONFIG_MSM_IPC_ROUTER_GLINK_XPRT=y
+CONFIG_MSM_QMI_INTERFACE=y
+CONFIG_MSM_GLINK_PKT=y
+CONFIG_MSM_SUBSYSTEM_RESTART=y
+CONFIG_MSM_PIL=y
+CONFIG_MSM_PIL_SSR_GENERIC=y
+CONFIG_MSM_PIL_MSS_QDSP6V5=y
+CONFIG_ICNSS=y
+CONFIG_MSM_PERFORMANCE=y
+CONFIG_MSM_EVENT_TIMER=y
+CONFIG_MSM_PM=y
+CONFIG_QTI_RPM_STATS_LOG=y
+CONFIG_QCOM_FORCE_WDOG_BITE_ON_PANIC=y
+CONFIG_PWM=y
+CONFIG_PWM_QPNP=y
+CONFIG_ANDROID=y
+CONFIG_ANDROID_BINDER_IPC=y
+CONFIG_SENSORS_SSC=y
+CONFIG_MSM_TZ_LOG=y
+CONFIG_EXT2_FS=y
+CONFIG_EXT2_FS_XATTR=y
+CONFIG_EXT3_FS=y
+CONFIG_EXT4_FS_SECURITY=y
+CONFIG_QUOTA=y
+CONFIG_QUOTA_NETLINK_INTERFACE=y
+CONFIG_FUSE_FS=y
+CONFIG_MSDOS_FS=y
+CONFIG_VFAT_FS=y
+CONFIG_TMPFS=y
+CONFIG_ECRYPT_FS=y
+CONFIG_ECRYPT_FS_MESSAGING=y
+# CONFIG_NETWORK_FILESYSTEMS is not set
+CONFIG_NLS_CODEPAGE_437=y
+CONFIG_NLS_ISO8859_1=y
+CONFIG_PRINTK_TIME=y
+CONFIG_DYNAMIC_DEBUG=y
+CONFIG_DEBUG_INFO=y
+CONFIG_FRAME_WARN=2048
+CONFIG_PAGE_OWNER=y
+CONFIG_PAGE_OWNER_ENABLE_DEFAULT=y
+CONFIG_MAGIC_SYSRQ=y
+CONFIG_DEBUG_PAGEALLOC=y
+CONFIG_SLUB_DEBUG_PANIC_ON=y
+CONFIG_DEBUG_PAGEALLOC_ENABLE_DEFAULT=y
+CONFIG_DEBUG_OBJECTS=y
+CONFIG_DEBUG_OBJECTS_FREE=y
+CONFIG_DEBUG_OBJECTS_TIMERS=y
+CONFIG_DEBUG_OBJECTS_WORK=y
+CONFIG_DEBUG_OBJECTS_RCU_HEAD=y
+CONFIG_DEBUG_OBJECTS_PERCPU_COUNTER=y
+CONFIG_SLUB_DEBUG_ON=y
+CONFIG_DEBUG_KMEMLEAK=y
+CONFIG_DEBUG_KMEMLEAK_EARLY_LOG_SIZE=4000
+CONFIG_DEBUG_KMEMLEAK_DEFAULT_OFF=y
+CONFIG_DEBUG_STACK_USAGE=y
+CONFIG_DEBUG_MEMORY_INIT=y
+CONFIG_LOCKUP_DETECTOR=y
+CONFIG_WQ_WATCHDOG=y
+CONFIG_PANIC_TIMEOUT=5
+CONFIG_PANIC_ON_SCHED_BUG=y
+CONFIG_PANIC_ON_RT_THROTTLING=y
+CONFIG_SCHEDSTATS=y
+CONFIG_SCHED_STACK_END_CHECK=y
+# CONFIG_DEBUG_PREEMPT is not set
+CONFIG_DEBUG_SPINLOCK=y
+CONFIG_DEBUG_MUTEXES=y
+CONFIG_DEBUG_ATOMIC_SLEEP=y
+CONFIG_DEBUG_LIST=y
+CONFIG_FAULT_INJECTION=y
+CONFIG_FAIL_PAGE_ALLOC=y
+CONFIG_FAULT_INJECTION_DEBUG_FS=y
+CONFIG_FAULT_INJECTION_STACKTRACE_FILTER=y
+CONFIG_IPC_LOGGING=y
+CONFIG_QCOM_RTB=y
+CONFIG_QCOM_RTB_SEPARATE_CPUS=y
+CONFIG_FUNCTION_TRACER=y
+CONFIG_IRQSOFF_TRACER=y
+CONFIG_PREEMPT_TRACER=y
+CONFIG_BLK_DEV_IO_TRACE=y
+CONFIG_CPU_FREQ_SWITCH_PROFILER=y
+CONFIG_LKDTM=y
+CONFIG_MEMTEST=y
+CONFIG_PANIC_ON_DATA_CORRUPTION=y
+CONFIG_DEBUG_USER=y
+CONFIG_PID_IN_CONTEXTIDR=y
+CONFIG_DEBUG_SET_MODULE_RONX=y
+CONFIG_CORESIGHT=y
+CONFIG_CORESIGHT_REMOTE_ETM=y
+CONFIG_CORESIGHT_REMOTE_ETM_DEFAULT_ENABLE=0
+CONFIG_CORESIGHT_STM=y
+CONFIG_CORESIGHT_TPDA=y
+CONFIG_CORESIGHT_TPDM=y
+CONFIG_CORESIGHT_CTI=y
+CONFIG_CORESIGHT_EVENT=y
+CONFIG_CORESIGHT_HWEVENT=y
+CONFIG_SECURITY_PERF_EVENTS_RESTRICT=y
+CONFIG_SECURITY=y
+CONFIG_LSM_MMAP_MIN_ADDR=4096
+CONFIG_HARDENED_USERCOPY=y
+CONFIG_SECURITY_SELINUX=y
+CONFIG_SECURITY_SMACK=y
+CONFIG_CRYPTO_CTR=y
+CONFIG_CRYPTO_XCBC=y
+CONFIG_CRYPTO_MD4=y
+CONFIG_CRYPTO_TWOFISH=y
+CONFIG_CRYPTO_ANSI_CPRNG=y
+CONFIG_CRYPTO_DEV_QCOM_MSM_QCE=y
+CONFIG_CRYPTO_DEV_QCRYPTO=y
+CONFIG_CRYPTO_DEV_QCEDEV=y
+CONFIG_CRYPTO_DEV_OTA_CRYPTO=y
+CONFIG_CRYPTO_DEV_QCOM_ICE=y
+CONFIG_ARM_CRYPTO=y
+CONFIG_CRYPTO_SHA1_ARM_NEON=y
+CONFIG_CRYPTO_SHA2_ARM_CE=y
+CONFIG_CRYPTO_AES_ARM_BS=y
+CONFIG_CRYPTO_AES_ARM_CE=y
+CONFIG_QMI_ENCDEC=y
diff --git a/arch/arm/configs/omap2plus_defconfig b/arch/arm/configs/omap2plus_defconfig
index 53e1a88..66d7196 100644
--- a/arch/arm/configs/omap2plus_defconfig
+++ b/arch/arm/configs/omap2plus_defconfig
@@ -216,6 +216,7 @@
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_SERIAL_8250_NR_UARTS=32
+CONFIG_SERIAL_8250_RUNTIME_UARTS=6
CONFIG_SERIAL_8250_EXTENDED=y
CONFIG_SERIAL_8250_MANY_PORTS=y
CONFIG_SERIAL_8250_SHARE_IRQ=y
diff --git a/arch/arm/configs/sdxpoorwills-perf_defconfig b/arch/arm/configs/sdxpoorwills-perf_defconfig
index ffaafd9..456ff5e 100644
--- a/arch/arm/configs/sdxpoorwills-perf_defconfig
+++ b/arch/arm/configs/sdxpoorwills-perf_defconfig
@@ -134,6 +134,9 @@
CONFIG_BRIDGE=y
CONFIG_NET_SCHED=y
CONFIG_NET_SCH_PRIO=y
+CONFIG_RMNET_DATA=y
+CONFIG_RMNET_DATA_FC=y
+CONFIG_RMNET_DATA_DEBUG_PKT=y
CONFIG_BT=y
CONFIG_BT_RFCOMM=y
CONFIG_BT_RFCOMM_TTY=y
@@ -232,8 +235,12 @@
CONFIG_REGULATOR=y
CONFIG_REGULATOR_FIXED_VOLTAGE=y
CONFIG_REGULATOR_QPNP=y
+CONFIG_REGULATOR_RPMH=y
CONFIG_SOUND=y
CONFIG_SND=y
+CONFIG_SND_DYNAMIC_MINORS=y
+CONFIG_SND_USB_AUDIO=y
+CONFIG_SND_USB_AUDIO_QMI=y
CONFIG_SND_SOC=y
CONFIG_UHID=y
CONFIG_HID_APPLE=y
@@ -272,9 +279,11 @@
CONFIG_USB_CONFIGFS_MASS_STORAGE=y
CONFIG_USB_CONFIGFS_F_FS=y
CONFIG_USB_CONFIGFS_UEVENT=y
+CONFIG_USB_CONFIGFS_F_UAC1=y
CONFIG_USB_CONFIGFS_F_DIAG=y
CONFIG_USB_CONFIGFS_F_CDEV=y
CONFIG_USB_CONFIGFS_F_GSI=y
+CONFIG_USB_CONFIGFS_F_QDSS=y
CONFIG_MMC=y
CONFIG_MMC_PARANOID_SD_INIT=y
CONFIG_MMC_BLOCK_MINORS=32
@@ -288,6 +297,8 @@
CONFIG_QCOM_SPS_DMA=y
CONFIG_UIO=y
CONFIG_STAGING=y
+CONFIG_ION=y
+CONFIG_ION_MSM=y
CONFIG_GSI=y
CONFIG_IPA3=y
CONFIG_RMNET_IPA3=y
@@ -298,8 +309,11 @@
CONFIG_SPS_SUPPORT_NDP_BAM=y
CONFIG_QPNP_REVID=y
CONFIG_USB_BAM=y
+CONFIG_MSM_CLK_RPMH=y
+CONFIG_MDM_GCC_SDXPOORWILLS=y
+CONFIG_MDM_CLOCK_CPU_SDXPOORWILLS=y
CONFIG_REMOTE_SPINLOCK_MSM=y
-CONFIG_MAILBOX=y
+CONFIG_MSM_QMP=y
CONFIG_QCOM_SCM=y
CONFIG_MSM_BOOT_STATS=y
CONFIG_MSM_SMEM=y
@@ -307,6 +321,7 @@
CONFIG_MSM_GLINK_LOOPBACK_SERVER=y
CONFIG_MSM_GLINK_SMEM_NATIVE_XPRT=y
CONFIG_TRACER_PKT=y
+CONFIG_QTI_RPMH_API=y
CONFIG_MSM_SMP2P=y
CONFIG_MSM_IPC_ROUTER_GLINK_XPRT=y
CONFIG_MSM_QMI_INTERFACE=y
@@ -314,6 +329,8 @@
CONFIG_MSM_SUBSYSTEM_RESTART=y
CONFIG_MSM_PIL=y
CONFIG_MSM_PIL_SSR_GENERIC=y
+CONFIG_QCOM_COMMAND_DB=y
+CONFIG_MSM_PM=y
CONFIG_IIO=y
CONFIG_PWM=y
CONFIG_PWM_QPNP=y
diff --git a/arch/arm/configs/sdxpoorwills_defconfig b/arch/arm/configs/sdxpoorwills_defconfig
index 3e2b495..f2b20b0 100644
--- a/arch/arm/configs/sdxpoorwills_defconfig
+++ b/arch/arm/configs/sdxpoorwills_defconfig
@@ -136,6 +136,9 @@
CONFIG_BRIDGE=y
CONFIG_NET_SCHED=y
CONFIG_NET_SCH_PRIO=y
+CONFIG_RMNET_DATA=y
+CONFIG_RMNET_DATA_FC=y
+CONFIG_RMNET_DATA_DEBUG_PKT=y
CONFIG_CFG80211=y
CONFIG_CFG80211_DEBUGFS=y
CONFIG_CFG80211_INTERNAL_REGDB=y
@@ -228,10 +231,14 @@
CONFIG_REGULATOR=y
CONFIG_REGULATOR_FIXED_VOLTAGE=y
CONFIG_REGULATOR_QPNP=y
+CONFIG_REGULATOR_RPMH=y
CONFIG_REGULATOR_STUB=y
CONFIG_FB=y
CONFIG_SOUND=y
CONFIG_SND=y
+CONFIG_SND_DYNAMIC_MINORS=y
+CONFIG_SND_USB_AUDIO=y
+CONFIG_SND_USB_AUDIO_QMI=y
CONFIG_SND_SOC=y
CONFIG_UHID=y
CONFIG_HID_APPLE=y
@@ -270,9 +277,11 @@
CONFIG_USB_CONFIGFS_MASS_STORAGE=y
CONFIG_USB_CONFIGFS_F_FS=y
CONFIG_USB_CONFIGFS_UEVENT=y
+CONFIG_USB_CONFIGFS_F_UAC1=y
CONFIG_USB_CONFIGFS_F_DIAG=y
CONFIG_USB_CONFIGFS_F_CDEV=y
CONFIG_USB_CONFIGFS_F_GSI=y
+CONFIG_USB_CONFIGFS_F_QDSS=y
CONFIG_MMC=y
CONFIG_MMC_PARANOID_SD_INIT=y
CONFIG_MMC_BLOCK_MINORS=32
@@ -285,6 +294,8 @@
CONFIG_QCOM_SPS_DMA=y
CONFIG_UIO=y
CONFIG_STAGING=y
+CONFIG_ION=y
+CONFIG_ION_MSM=y
CONFIG_GSI=y
CONFIG_IPA3=y
CONFIG_RMNET_IPA3=y
@@ -294,8 +305,11 @@
CONFIG_SPS=y
CONFIG_SPS_SUPPORT_NDP_BAM=y
CONFIG_QPNP_REVID=y
+CONFIG_MSM_CLK_RPMH=y
+CONFIG_MDM_GCC_SDXPOORWILLS=y
+CONFIG_MDM_CLOCK_CPU_SDXPOORWILLS=y
CONFIG_REMOTE_SPINLOCK_MSM=y
-CONFIG_MAILBOX=y
+CONFIG_MSM_QMP=y
CONFIG_QCOM_SCM=y
CONFIG_MSM_BOOT_STATS=y
CONFIG_MSM_SMEM=y
@@ -303,6 +317,7 @@
CONFIG_MSM_GLINK_LOOPBACK_SERVER=y
CONFIG_MSM_GLINK_SMEM_NATIVE_XPRT=y
CONFIG_TRACER_PKT=y
+CONFIG_QTI_RPMH_API=y
CONFIG_MSM_SMP2P=y
CONFIG_MSM_IPC_ROUTER_GLINK_XPRT=y
CONFIG_MSM_QMI_INTERFACE=y
@@ -310,6 +325,8 @@
CONFIG_MSM_SUBSYSTEM_RESTART=y
CONFIG_MSM_PIL=y
CONFIG_MSM_PIL_SSR_GENERIC=y
+CONFIG_QCOM_COMMAND_DB=y
+CONFIG_MSM_PM=y
CONFIG_DEVFREQ_GOV_SIMPLE_ONDEMAND=y
CONFIG_IIO=y
CONFIG_PWM=y
diff --git a/arch/arm/crypto/aesbs-glue.c b/arch/arm/crypto/aesbs-glue.c
index 0511a6c..5d934a0 100644
--- a/arch/arm/crypto/aesbs-glue.c
+++ b/arch/arm/crypto/aesbs-glue.c
@@ -363,7 +363,7 @@ static struct crypto_alg aesbs_algs[] = { {
}, {
.cra_name = "cbc(aes)",
.cra_driver_name = "cbc-aes-neonbs",
- .cra_priority = 300,
+ .cra_priority = 250,
.cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER|CRYPTO_ALG_ASYNC,
.cra_blocksize = AES_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct async_helper_ctx),
@@ -383,7 +383,7 @@ static struct crypto_alg aesbs_algs[] = { {
}, {
.cra_name = "ctr(aes)",
.cra_driver_name = "ctr-aes-neonbs",
- .cra_priority = 300,
+ .cra_priority = 250,
.cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER|CRYPTO_ALG_ASYNC,
.cra_blocksize = 1,
.cra_ctxsize = sizeof(struct async_helper_ctx),
@@ -403,7 +403,7 @@ static struct crypto_alg aesbs_algs[] = { {
}, {
.cra_name = "xts(aes)",
.cra_driver_name = "xts-aes-neonbs",
- .cra_priority = 300,
+ .cra_priority = 250,
.cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER|CRYPTO_ALG_ASYNC,
.cra_blocksize = AES_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct async_helper_ctx),
diff --git a/arch/arm/include/asm/Kbuild b/arch/arm/include/asm/Kbuild
index 55e0e3e..bd12b98 100644
--- a/arch/arm/include/asm/Kbuild
+++ b/arch/arm/include/asm/Kbuild
@@ -37,4 +37,3 @@
generic-y += termios.h
generic-y += timex.h
generic-y += trace_clock.h
-generic-y += unaligned.h
diff --git a/arch/arm/include/asm/topology.h b/arch/arm/include/asm/topology.h
index 9edea10..41e9107 100644
--- a/arch/arm/include/asm/topology.h
+++ b/arch/arm/include/asm/topology.h
@@ -32,6 +32,9 @@ unsigned long arch_get_cpu_efficiency(int cpu);
#define arch_scale_cpu_capacity scale_cpu_capacity
extern unsigned long scale_cpu_capacity(struct sched_domain *sd, int cpu);
+#define arch_update_cpu_capacity update_cpu_power_capacity
+extern void update_cpu_power_capacity(int cpu);
+
#else
static inline void init_cpu_topology(void) { }
diff --git a/arch/arm/include/asm/unaligned.h b/arch/arm/include/asm/unaligned.h
new file mode 100644
index 0000000..ab905ff
--- /dev/null
+++ b/arch/arm/include/asm/unaligned.h
@@ -0,0 +1,27 @@
+#ifndef __ASM_ARM_UNALIGNED_H
+#define __ASM_ARM_UNALIGNED_H
+
+/*
+ * We generally want to set CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS on ARMv6+,
+ * but we don't want to use linux/unaligned/access_ok.h since that can lead
+ * to traps on unaligned stm/ldm or strd/ldrd.
+ */
+#include <asm/byteorder.h>
+
+#if defined(__LITTLE_ENDIAN)
+# include <linux/unaligned/le_struct.h>
+# include <linux/unaligned/be_byteshift.h>
+# include <linux/unaligned/generic.h>
+# define get_unaligned __get_unaligned_le
+# define put_unaligned __put_unaligned_le
+#elif defined(__BIG_ENDIAN)
+# include <linux/unaligned/be_struct.h>
+# include <linux/unaligned/le_byteshift.h>
+# include <linux/unaligned/generic.h>
+# define get_unaligned __get_unaligned_be
+# define put_unaligned __put_unaligned_be
+#else
+# error need to define endianess
+#endif
+
+#endif /* __ASM_ARM_UNALIGNED_H */
diff --git a/arch/arm/kernel/topology.c b/arch/arm/kernel/topology.c
index 2b6c530..28dcd44 100644
--- a/arch/arm/kernel/topology.c
+++ b/arch/arm/kernel/topology.c
@@ -42,6 +42,16 @@
*/
static DEFINE_PER_CPU(unsigned long, cpu_scale) = SCHED_CAPACITY_SCALE;
+unsigned long arch_scale_freq_power(struct sched_domain *sd, int cpu)
+{
+ return per_cpu(cpu_scale, cpu);
+}
+
+static void set_power_scale(unsigned int cpu, unsigned long power)
+{
+ per_cpu(cpu_scale, cpu) = power;
+}
+
unsigned long scale_cpu_capacity(struct sched_domain *sd, int cpu)
{
#ifdef CONFIG_CPU_FREQ
@@ -397,6 +407,23 @@ const struct cpumask *cpu_corepower_mask(int cpu)
return &cpu_topology[cpu].thread_sibling;
}
+static void update_cpu_power(unsigned int cpu)
+{
+ if (!cpu_capacity(cpu))
+ return;
+
+ set_power_scale(cpu, cpu_capacity(cpu) / middle_capacity);
+
+ pr_info("CPU%u: update cpu_power %lu\n",
+ cpu, arch_scale_freq_power(NULL, cpu));
+}
+
+void update_cpu_power_capacity(int cpu)
+{
+ update_cpu_power(cpu);
+ update_cpu_capacity(cpu);
+}
+
static void update_siblings_masks(unsigned int cpuid)
{
struct cputopo_arm *cpu_topo, *cpuid_topo = &cpu_topology[cpuid];
diff --git a/arch/arm/kernel/traps.c b/arch/arm/kernel/traps.c
index 9688ec0..1b30489 100644
--- a/arch/arm/kernel/traps.c
+++ b/arch/arm/kernel/traps.c
@@ -152,30 +152,26 @@ static void dump_mem(const char *lvl, const char *str, unsigned long bottom,
set_fs(fs);
}
-static void dump_instr(const char *lvl, struct pt_regs *regs)
+static void __dump_instr(const char *lvl, struct pt_regs *regs)
{
unsigned long addr = instruction_pointer(regs);
const int thumb = thumb_mode(regs);
const int width = thumb ? 4 : 8;
- mm_segment_t fs;
char str[sizeof("00000000 ") * 5 + 2 + 1], *p = str;
int i;
/*
- * We need to switch to kernel mode so that we can use __get_user
- * to safely read from kernel space. Note that we now dump the
- * code first, just in case the backtrace kills us.
+ * Note that we now dump the code first, just in case the backtrace
+ * kills us.
*/
- fs = get_fs();
- set_fs(KERNEL_DS);
for (i = -4; i < 1 + !!thumb; i++) {
unsigned int val, bad;
if (thumb)
- bad = __get_user(val, &((u16 *)addr)[i]);
+ bad = get_user(val, &((u16 *)addr)[i]);
else
- bad = __get_user(val, &((u32 *)addr)[i]);
+ bad = get_user(val, &((u32 *)addr)[i]);
if (!bad)
p += sprintf(p, i == 0 ? "(%0*x) " : "%0*x ",
@@ -186,8 +182,20 @@ static void dump_instr(const char *lvl, struct pt_regs *regs)
}
}
printk("%sCode: %s\n", lvl, str);
+}
- set_fs(fs);
+static void dump_instr(const char *lvl, struct pt_regs *regs)
+{
+ mm_segment_t fs;
+
+ if (!user_mode(regs)) {
+ fs = get_fs();
+ set_fs(KERNEL_DS);
+ __dump_instr(lvl, regs);
+ set_fs(fs);
+ } else {
+ __dump_instr(lvl, regs);
+ }
}
#ifdef CONFIG_ARM_UNWIND
diff --git a/arch/arm/kvm/emulate.c b/arch/arm/kvm/emulate.c
index 0064b86..30a13647 100644
--- a/arch/arm/kvm/emulate.c
+++ b/arch/arm/kvm/emulate.c
@@ -227,7 +227,7 @@ void kvm_inject_undefined(struct kvm_vcpu *vcpu)
u32 return_offset = (is_thumb) ? 2 : 4;
kvm_update_psr(vcpu, UND_MODE);
- *vcpu_reg(vcpu, 14) = *vcpu_pc(vcpu) - return_offset;
+ *vcpu_reg(vcpu, 14) = *vcpu_pc(vcpu) + return_offset;
/* Branch to exception vector */
*vcpu_pc(vcpu) = exc_vector_base(vcpu) + vect_offset;
@@ -239,10 +239,8 @@ void kvm_inject_undefined(struct kvm_vcpu *vcpu)
*/
static void inject_abt(struct kvm_vcpu *vcpu, bool is_pabt, unsigned long addr)
{
- unsigned long cpsr = *vcpu_cpsr(vcpu);
- bool is_thumb = (cpsr & PSR_T_BIT);
u32 vect_offset;
- u32 return_offset = (is_thumb) ? 4 : 0;
+ u32 return_offset = (is_pabt) ? 4 : 8;
bool is_lpae;
kvm_update_psr(vcpu, ABT_MODE);
diff --git a/arch/arm/kvm/hyp/Makefile b/arch/arm/kvm/hyp/Makefile
index 8679405..92eab1d 100644
--- a/arch/arm/kvm/hyp/Makefile
+++ b/arch/arm/kvm/hyp/Makefile
@@ -2,7 +2,7 @@
# Makefile for Kernel-based Virtual Machine module, HYP part
#
-ccflags-y += -fno-stack-protector
+ccflags-y += -fno-stack-protector -DDISABLE_BRANCH_PROFILING
KVM=../../../../virt/kvm
diff --git a/arch/arm/mach-omap2/pdata-quirks.c b/arch/arm/mach-omap2/pdata-quirks.c
index 05e20aa..770216b 100644
--- a/arch/arm/mach-omap2/pdata-quirks.c
+++ b/arch/arm/mach-omap2/pdata-quirks.c
@@ -600,7 +600,6 @@ static void pdata_quirks_check(struct pdata_init *quirks)
if (of_machine_is_compatible(quirks->compatible)) {
if (quirks->fn)
quirks->fn();
- break;
}
quirks++;
}
diff --git a/arch/arm/mach-qcom/Kconfig b/arch/arm/mach-qcom/Kconfig
index f4d7965..4761bc5 100644
--- a/arch/arm/mach-qcom/Kconfig
+++ b/arch/arm/mach-qcom/Kconfig
@@ -51,5 +51,28 @@
select COMMON_CLK
select COMMON_CLK_QCOM
select QCOM_GDSC
+
+config ARCH_MSM8953
+ bool "Enable support for MSM8953"
+ select CPU_V7
+ select HAVE_ARM_ARCH_TIMER
+ select PINCTRL
+ select QCOM_SCM if SMP
+ select PM_DEVFREQ
+ select COMMON_CLK
+ select COMMON_CLK_QCOM
+ select QCOM_GDSC
+
+config ARCH_SDM450
+ bool "Enable support for SDM450"
+ select CPU_V7
+ select HAVE_ARM_ARCH_TIMER
+ select PINCTRL
+ select QCOM_SCM if SMP
+ select PM_DEVFREQ
+ select COMMON_CLK
+ select COMMON_CLK_QCOM
+ select QCOM_GDSC
+
endmenu
endif
diff --git a/arch/arm/mach-qcom/Makefile b/arch/arm/mach-qcom/Makefile
index d893b27..828e9c9 100644
--- a/arch/arm/mach-qcom/Makefile
+++ b/arch/arm/mach-qcom/Makefile
@@ -1,3 +1,5 @@
obj-$(CONFIG_USE_OF) += board-dt.o
obj-$(CONFIG_SMP) += platsmp.o
obj-$(CONFIG_ARCH_SDXPOORWILLS) += board-poorwills.o
+obj-$(CONFIG_ARCH_MSM8953) += board-msm8953.o
+obj-$(CONFIG_ARCH_SDM450) += board-sdm450.o
diff --git a/arch/arm/mach-qcom/board-msm8953.c b/arch/arm/mach-qcom/board-msm8953.c
new file mode 100644
index 0000000..cae3bf7
--- /dev/null
+++ b/arch/arm/mach-qcom/board-msm8953.c
@@ -0,0 +1,32 @@
+/* Copyright (c) 2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/kernel.h>
+#include "board-dt.h"
+#include <asm/mach/map.h>
+#include <asm/mach/arch.h>
+
+static const char *msm8953_dt_match[] __initconst = {
+ "qcom,msm8953",
+ NULL
+};
+
+static void __init msm8953_init(void)
+{
+ board_dt_populate(NULL);
+}
+
+DT_MACHINE_START(MSM8953_DT,
+ "Qualcomm Technologies, Inc. MSM8953 (Flattened Device Tree)")
+ .init_machine = msm8953_init,
+ .dt_compat = msm8953_dt_match,
+MACHINE_END
diff --git a/arch/arm/mach-qcom/board-sdm450.c b/arch/arm/mach-qcom/board-sdm450.c
new file mode 100644
index 0000000..5f68ede
--- /dev/null
+++ b/arch/arm/mach-qcom/board-sdm450.c
@@ -0,0 +1,32 @@
+/* Copyright (c) 2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/kernel.h>
+#include "board-dt.h"
+#include <asm/mach/map.h>
+#include <asm/mach/arch.h>
+
+static const char *sdm450_dt_match[] __initconst = {
+ "qcom,sdm450",
+ NULL
+};
+
+static void __init sdm450_init(void)
+{
+ board_dt_populate(NULL);
+}
+
+DT_MACHINE_START(SDM450_DT,
+ "Qualcomm Technologies, Inc. SDM450 (Flattened Device Tree)")
+ .init_machine = sdm450_init,
+ .dt_compat = sdm450_dt_match,
+MACHINE_END
diff --git a/arch/arm64/Kconfig.platforms b/arch/arm64/Kconfig.platforms
index 8edfbf2..6531949 100644
--- a/arch/arm64/Kconfig.platforms
+++ b/arch/arm64/Kconfig.platforms
@@ -144,6 +144,7 @@
depends on ARCH_QCOM
select COMMON_CLK_QCOM
select QCOM_GDSC
+ select CPU_FREQ_QCOM
help
This enables support for the MSM8953 chipset. If you do not
wish to build a kernel that runs on this chipset, say 'N' here.
@@ -153,6 +154,7 @@
depends on ARCH_QCOM
select COMMON_CLK_QCOM
select QCOM_GDSC
+ select CPU_FREQ_QCOM
help
This enables support for the sdm450 chipset. If you do not
wish to build a kernel that runs on this chipset, say 'N' here.
diff --git a/arch/arm64/boot/dts/broadcom/ns2.dtsi b/arch/arm64/boot/dts/broadcom/ns2.dtsi
index d95dc40..a16b1b3 100644
--- a/arch/arm64/boot/dts/broadcom/ns2.dtsi
+++ b/arch/arm64/boot/dts/broadcom/ns2.dtsi
@@ -30,6 +30,8 @@
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
+/memreserve/ 0x81000000 0x00200000;
+
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/clock/bcm-ns2.h>
diff --git a/arch/arm64/boot/dts/qcom/Makefile b/arch/arm64/boot/dts/qcom/Makefile
index 40a6aab..3df7439 100644
--- a/arch/arm64/boot/dts/qcom/Makefile
+++ b/arch/arm64/boot/dts/qcom/Makefile
@@ -7,10 +7,10 @@
sdm845-cdp-overlay.dtbo \
sdm845-mtp-overlay.dtbo \
sdm845-qrd-overlay.dtbo \
- sdm845-qvr-overlay.dtbo \
sdm845-4k-panel-mtp-overlay.dtbo \
sdm845-4k-panel-cdp-overlay.dtbo \
sdm845-4k-panel-qrd-overlay.dtbo \
+ sdm845-v2-qvr-overlay.dtbo \
sdm845-v2-cdp-overlay.dtbo \
sdm845-v2-mtp-overlay.dtbo \
sdm845-v2-qrd-overlay.dtbo \
@@ -46,11 +46,10 @@
sdm845-cdp-overlay.dtbo-base := sdm845.dtb
sdm845-mtp-overlay.dtbo-base := sdm845.dtb
sdm845-qrd-overlay.dtbo-base := sdm845.dtb
-sdm845-qvr-overlay.dtbo-base := sdm845-v2.dtb
-sdm845-qvr-overlay.dtbo-base := sdm845.dtb
sdm845-4k-panel-mtp-overlay.dtbo-base := sdm845.dtb
sdm845-4k-panel-cdp-overlay.dtbo-base := sdm845.dtb
sdm845-4k-panel-qrd-overlay.dtbo-base := sdm845.dtb
+sdm845-v2-qvr-overlay.dtbo-base := sdm845-v2.dtb
sdm845-v2-cdp-overlay.dtbo-base := sdm845-v2.dtb
sdm845-v2-mtp-overlay.dtbo-base := sdm845-v2.dtb
sdm845-v2-qrd-overlay.dtbo-base := sdm845-v2.dtb
@@ -92,7 +91,7 @@
sdm845-v2-cdp.dtb \
sdm845-qrd.dtb \
sdm845-v2-qrd.dtb \
- sdm845-qvr.dtb \
+ sdm845-v2-qvr.dtb \
sdm845-4k-panel-mtp.dtb \
sdm845-4k-panel-cdp.dtb \
sdm845-4k-panel-qrd.dtb \
@@ -180,6 +179,7 @@
sda670-cdp.dtb \
sda670-pm660a-mtp.dtb \
sda670-pm660a-cdp.dtb \
+ qcs605-360camera.dtb \
qcs605-mtp.dtb \
qcs605-cdp.dtb \
qcs605-external-codec-mtp.dtb
@@ -187,7 +187,29 @@
ifeq ($(CONFIG_BUILD_ARM64_DT_OVERLAY),y)
else
-dtb-$(CONFIG_ARCH_MSM8953) += msm8953-mtp.dtb
+dtb-$(CONFIG_ARCH_MSM8953) += msm8953-cdp.dtb \
+ msm8953-mtp.dtb \
+ msm8953-ext-codec-mtp.dtb \
+ msm8953-qrd-sku3.dtb \
+ msm8953-rcm.dtb \
+ apq8053-rcm.dtb \
+ msm8953-ext-codec-rcm.dtb \
+ apq8053-cdp.dtb \
+ apq8053-ipc.dtb \
+ msm8953-ipc.dtb \
+ apq8053-mtp.dtb \
+ apq8053-ext-audio-mtp.dtb \
+ apq8053-ext-codec-rcm.dtb \
+ msm8953-cdp-1200p.dtb \
+ msm8953-iot-mtp.dtb \
+ apq8053-iot-mtp.dtb \
+ msm8953-pmi8940-cdp.dtb \
+ msm8953-pmi8940-mtp.dtb \
+ msm8953-pmi8937-cdp.dtb \
+ msm8953-pmi8937-mtp.dtb \
+ msm8953-pmi8940-ext-codec-mtp.dtb \
+ msm8953-pmi8937-ext-codec-mtp.dtb
+
dtb-$(CONFIG_ARCH_SDM450) += sdm450-rcm.dtb \
sdm450-cdp.dtb \
sdm450-mtp.dtb \
diff --git a/arch/arm64/boot/dts/qcom/sdm845-qvr.dts b/arch/arm64/boot/dts/qcom/apq8053-cdp.dts
similarity index 60%
copy from arch/arm64/boot/dts/qcom/sdm845-qvr.dts
copy to arch/arm64/boot/dts/qcom/apq8053-cdp.dts
index 5513c92..5e89e4f 100644
--- a/arch/arm64/boot/dts/qcom/sdm845-qvr.dts
+++ b/arch/arm64/boot/dts/qcom/apq8053-cdp.dts
@@ -1,4 +1,5 @@
-/* Copyright (c) 2017, The Linux Foundation. All rights reserved.
+/*
+ * Copyright (c) 2015-2017, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -10,15 +11,15 @@
* GNU General Public License for more details.
*/
-
/dts-v1/;
-#include "sdm845-v2.dtsi"
-#include "sdm845-qvr.dtsi"
-#include "sdm845-camera-sensor-qvr.dtsi"
+#include "apq8053.dtsi"
+#include "msm8953-cdp.dtsi"
/ {
- model = "Qualcomm Technologies, Inc. SDM845 QVR";
- compatible = "qcom,sdm845-qvr", "qcom,sdm845", "qcom,qvr";
- qcom,board-id = <0x01000B 0x20>;
+ model = "Qualcomm Technologies, Inc. APQ8053 + PMI8950 CDP";
+ compatible = "qcom,apq8053-cdp", "qcom,apq8053", "qcom,cdp";
+ qcom,board-id= <1 0>;
+ qcom,pmic-id = <0x010016 0x010011 0x0 0x0>;
};
+
diff --git a/arch/arm64/boot/dts/qcom/apq8053-ext-audio-mtp.dts b/arch/arm64/boot/dts/qcom/apq8053-ext-audio-mtp.dts
new file mode 100644
index 0000000..2c7b228
--- /dev/null
+++ b/arch/arm64/boot/dts/qcom/apq8053-ext-audio-mtp.dts
@@ -0,0 +1,25 @@
+/*
+ * Copyright (c) 2016-2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+/dts-v1/;
+
+#include "apq8053.dtsi"
+#include "msm8953-mtp.dtsi"
+
+/ {
+ model = "Qualcomm Technologies, Inc. APQ8053 + PMI8950 Ext Codec MTP";
+ compatible = "qcom,apq8053-mtp", "qcom,apq8053", "qcom,mtp";
+ qcom,board-id= <8 1>;
+ qcom,pmic-id = <0x010016 0x010011 0x0 0x0>;
+};
+
diff --git a/arch/arm64/boot/dts/qcom/apq8053-ext-codec-rcm.dts b/arch/arm64/boot/dts/qcom/apq8053-ext-codec-rcm.dts
new file mode 100644
index 0000000..d026734
--- /dev/null
+++ b/arch/arm64/boot/dts/qcom/apq8053-ext-codec-rcm.dts
@@ -0,0 +1,25 @@
+/*
+ * Copyright (c) 2016-2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+/dts-v1/;
+
+#include "apq8053.dtsi"
+#include "msm8953-cdp.dtsi"
+
+/ {
+ model = "Qualcomm Technologies, Inc. APQ8053 + PMI8950 Ext Codec RCM";
+ compatible = "qcom,apq8053-cdp", "qcom,apq8053", "qcom,cdp";
+ qcom,board-id= <21 1>;
+ qcom,pmic-id = <0x010016 0x010011 0x0 0x0>;
+};
+
diff --git a/arch/arm64/boot/dts/qcom/sdm845-qvr.dts b/arch/arm64/boot/dts/qcom/apq8053-iot-mtp.dts
similarity index 60%
copy from arch/arm64/boot/dts/qcom/sdm845-qvr.dts
copy to arch/arm64/boot/dts/qcom/apq8053-iot-mtp.dts
index 5513c92..177e105 100644
--- a/arch/arm64/boot/dts/qcom/sdm845-qvr.dts
+++ b/arch/arm64/boot/dts/qcom/apq8053-iot-mtp.dts
@@ -1,4 +1,5 @@
-/* Copyright (c) 2017, The Linux Foundation. All rights reserved.
+/*
+ * Copyright (c) 2016-2017, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -10,15 +11,15 @@
* GNU General Public License for more details.
*/
-
/dts-v1/;
-#include "sdm845-v2.dtsi"
-#include "sdm845-qvr.dtsi"
-#include "sdm845-camera-sensor-qvr.dtsi"
+#include "apq8053.dtsi"
+#include "msm8953-mtp.dtsi"
/ {
- model = "Qualcomm Technologies, Inc. SDM845 QVR";
- compatible = "qcom,sdm845-qvr", "qcom,sdm845", "qcom,qvr";
- qcom,board-id = <0x01000B 0x20>;
+ model = "Qualcomm Technologies, Inc. APQ8053 + PMI8950 IOT MTP";
+ compatible = "qcom,apq8053-mtp", "qcom,apq8053", "qcom,mtp";
+ qcom,board-id= <8 2>;
+ qcom,pmic-id = <0x010016 0x010011 0x0 0x0>;
};
+
diff --git a/arch/arm64/boot/dts/qcom/sdm845-qvr.dts b/arch/arm64/boot/dts/qcom/apq8053-ipc.dts
similarity index 60%
copy from arch/arm64/boot/dts/qcom/sdm845-qvr.dts
copy to arch/arm64/boot/dts/qcom/apq8053-ipc.dts
index 5513c92..3381b2a 100644
--- a/arch/arm64/boot/dts/qcom/sdm845-qvr.dts
+++ b/arch/arm64/boot/dts/qcom/apq8053-ipc.dts
@@ -1,4 +1,5 @@
-/* Copyright (c) 2017, The Linux Foundation. All rights reserved.
+/*
+ * Copyright (c) 2016-2017, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -10,15 +11,14 @@
* GNU General Public License for more details.
*/
-
/dts-v1/;
-#include "sdm845-v2.dtsi"
-#include "sdm845-qvr.dtsi"
-#include "sdm845-camera-sensor-qvr.dtsi"
+#include "apq8053.dtsi"
+#include "msm8953-ipc.dtsi"
/ {
- model = "Qualcomm Technologies, Inc. SDM845 QVR";
- compatible = "qcom,sdm845-qvr", "qcom,sdm845", "qcom,qvr";
- qcom,board-id = <0x01000B 0x20>;
+ model = "Qualcomm Technologies, Inc. APQ8053 + PMI8950 IPC";
+ compatible = "qcom,apq8053-ipc", "qcom,apq8053", "qcom,ipc";
+ qcom,board-id= <12 0>;
+ qcom,pmic-id = <0x010016 0x010011 0x0 0x0>;
};
diff --git a/arch/arm64/boot/dts/qcom/sdm845-qvr.dts b/arch/arm64/boot/dts/qcom/apq8053-mtp.dts
similarity index 60%
copy from arch/arm64/boot/dts/qcom/sdm845-qvr.dts
copy to arch/arm64/boot/dts/qcom/apq8053-mtp.dts
index 5513c92..be544af 100644
--- a/arch/arm64/boot/dts/qcom/sdm845-qvr.dts
+++ b/arch/arm64/boot/dts/qcom/apq8053-mtp.dts
@@ -1,4 +1,5 @@
-/* Copyright (c) 2017, The Linux Foundation. All rights reserved.
+/*
+ * Copyright (c) 2015-2017, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -10,15 +11,15 @@
* GNU General Public License for more details.
*/
-
/dts-v1/;
-#include "sdm845-v2.dtsi"
-#include "sdm845-qvr.dtsi"
-#include "sdm845-camera-sensor-qvr.dtsi"
+#include "apq8053.dtsi"
+#include "msm8953-mtp.dtsi"
/ {
- model = "Qualcomm Technologies, Inc. SDM845 QVR";
- compatible = "qcom,sdm845-qvr", "qcom,sdm845", "qcom,qvr";
- qcom,board-id = <0x01000B 0x20>;
+ model = "Qualcomm Technologies, Inc. APQ8053 + PMI8950 MTP";
+ compatible = "qcom,apq8053-mtp", "qcom,apq8053", "qcom,mtp";
+ qcom,board-id= <8 0>;
+ qcom,pmic-id = <0x010016 0x010011 0x0 0x0>;
};
+
diff --git a/arch/arm64/boot/dts/qcom/sdm845-qvr.dts b/arch/arm64/boot/dts/qcom/apq8053-rcm.dts
similarity index 60%
copy from arch/arm64/boot/dts/qcom/sdm845-qvr.dts
copy to arch/arm64/boot/dts/qcom/apq8053-rcm.dts
index 5513c92..cc5bdaa 100644
--- a/arch/arm64/boot/dts/qcom/sdm845-qvr.dts
+++ b/arch/arm64/boot/dts/qcom/apq8053-rcm.dts
@@ -1,4 +1,5 @@
-/* Copyright (c) 2017, The Linux Foundation. All rights reserved.
+/*
+ * Copyright (c) 2016-2017, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -10,15 +11,14 @@
* GNU General Public License for more details.
*/
-
/dts-v1/;
-#include "sdm845-v2.dtsi"
-#include "sdm845-qvr.dtsi"
-#include "sdm845-camera-sensor-qvr.dtsi"
+#include "apq8053.dtsi"
+#include "msm8953-cdp.dtsi"
/ {
- model = "Qualcomm Technologies, Inc. SDM845 QVR";
- compatible = "qcom,sdm845-qvr", "qcom,sdm845", "qcom,qvr";
- qcom,board-id = <0x01000B 0x20>;
+ model = "Qualcomm Technologies, Inc. APQ8053 + PMI8950 RCM";
+ compatible = "qcom,apq8053-cdp", "qcom,apq8053", "qcom,cdp";
+ qcom,board-id = <21 0>;
+ qcom,pmic-id = <0x010016 0x010011 0x0 0x0>;
};
diff --git a/arch/arm64/boot/dts/qcom/apq8053.dtsi b/arch/arm64/boot/dts/qcom/apq8053.dtsi
new file mode 100644
index 0000000..15a1595
--- /dev/null
+++ b/arch/arm64/boot/dts/qcom/apq8053.dtsi
@@ -0,0 +1,23 @@
+/*
+ * Copyright (c) 2015-2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+#include "msm8953.dtsi"
+/ {
+ model = "Qualcomm Technologies, Inc. APQ8053";
+ compatible = "qcom,apq8053";
+ qcom,msm-id = <304 0x0>;
+};
+
+&secure_mem {
+ status = "disabled";
+};
+
diff --git a/arch/arm64/boot/dts/qcom/dsi-panel-nt35597-truly-dualmipi-wqxga-cmd.dtsi b/arch/arm64/boot/dts/qcom/dsi-panel-nt35597-truly-dualmipi-wqxga-cmd.dtsi
index 5b5fbb8..1a8ce91 100644
--- a/arch/arm64/boot/dts/qcom/dsi-panel-nt35597-truly-dualmipi-wqxga-cmd.dtsi
+++ b/arch/arm64/boot/dts/qcom/dsi-panel-nt35597-truly-dualmipi-wqxga-cmd.dtsi
@@ -210,6 +210,11 @@
15 01 00 00 00 00 02 E5 01
/* CMD mode(10) VDO mode(03) */
15 01 00 00 00 00 02 BB 10
+ /* NVT SDC */
+ 15 01 00 00 00 00 02 C0 00
+ /* GRAM Slide Parameter */
+ 29 01 00 00 00 00 0C C9 01 01 70
+ 00 0A 06 67 04 C5 12 18
/* Non Reload MTP */
15 01 00 00 00 00 02 FB 01
/* SlpOut + DispOn */
diff --git a/arch/arm64/boot/dts/qcom/msm-arm-smmu-sdm670.dtsi b/arch/arm64/boot/dts/qcom/msm-arm-smmu-sdm670.dtsi
index fc468f5..ae22a36 100644
--- a/arch/arm64/boot/dts/qcom/msm-arm-smmu-sdm670.dtsi
+++ b/arch/arm64/boot/dts/qcom/msm-arm-smmu-sdm670.dtsi
@@ -61,6 +61,7 @@
qcom,skip-init;
qcom,use-3-lvl-tables;
qcom,no-asid-retention;
+ qcom,disable-atos;
#global-interrupts = <1>;
#size-cells = <1>;
#address-cells = <1>;
diff --git a/arch/arm64/boot/dts/qcom/msm8953-cdp-1200p.dts b/arch/arm64/boot/dts/qcom/msm8953-cdp-1200p.dts
new file mode 100644
index 0000000..a685380
--- /dev/null
+++ b/arch/arm64/boot/dts/qcom/msm8953-cdp-1200p.dts
@@ -0,0 +1,25 @@
+/*
+ * Copyright (c) 2016-2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+/dts-v1/;
+
+#include "msm8953.dtsi"
+#include "msm8953-cdp.dtsi"
+
+/ {
+ model = "Qualcomm Technologies, Inc. MSM8953 + PMI8950 CDP 1200P";
+ compatible = "qcom,msm8953-cdp", "qcom,msm8953", "qcom,cdp";
+ qcom,board-id = <1 1>;
+ qcom,pmic-id = <0x010016 0x010011 0x0 0x0>;
+};
+
diff --git a/arch/arm64/boot/dts/qcom/sdm845-qvr.dts b/arch/arm64/boot/dts/qcom/msm8953-cdp.dts
similarity index 60%
copy from arch/arm64/boot/dts/qcom/sdm845-qvr.dts
copy to arch/arm64/boot/dts/qcom/msm8953-cdp.dts
index 5513c92..1f78902 100644
--- a/arch/arm64/boot/dts/qcom/sdm845-qvr.dts
+++ b/arch/arm64/boot/dts/qcom/msm8953-cdp.dts
@@ -1,4 +1,5 @@
-/* Copyright (c) 2017, The Linux Foundation. All rights reserved.
+/*
+ * Copyright (c) 2015-2017, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -10,15 +11,15 @@
* GNU General Public License for more details.
*/
-
/dts-v1/;
-#include "sdm845-v2.dtsi"
-#include "sdm845-qvr.dtsi"
-#include "sdm845-camera-sensor-qvr.dtsi"
+#include "msm8953.dtsi"
+#include "msm8953-cdp.dtsi"
/ {
- model = "Qualcomm Technologies, Inc. SDM845 QVR";
- compatible = "qcom,sdm845-qvr", "qcom,sdm845", "qcom,qvr";
- qcom,board-id = <0x01000B 0x20>;
+ model = "Qualcomm Technologies, Inc. MSM8953 + PMI8950 CDP";
+ compatible = "qcom,msm8953-cdp", "qcom,msm8953", "qcom,cdp";
+ qcom,board-id = <1 0>;
+ qcom,pmic-id = <0x010016 0x010011 0x0 0x0>;
};
+
diff --git a/arch/arm64/boot/dts/qcom/msm8953-ext-codec-mtp.dts b/arch/arm64/boot/dts/qcom/msm8953-ext-codec-mtp.dts
new file mode 100644
index 0000000..3dfd848
--- /dev/null
+++ b/arch/arm64/boot/dts/qcom/msm8953-ext-codec-mtp.dts
@@ -0,0 +1,25 @@
+/*
+ * Copyright (c) 2015-2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+/dts-v1/;
+
+#include "msm8953.dtsi"
+#include "msm8953-mtp.dtsi"
+
+/ {
+ model = "Qualcomm Technologies, Inc. MSM8953 + PMI8950 Ext Codec MTP";
+ compatible = "qcom,msm8953-mtp", "qcom,msm8953", "qcom,mtp";
+ qcom,board-id = <8 1>;
+ qcom,pmic-id = <0x010016 0x010011 0x0 0x0>;
+};
+
diff --git a/arch/arm64/boot/dts/qcom/msm8953-ext-codec-rcm.dts b/arch/arm64/boot/dts/qcom/msm8953-ext-codec-rcm.dts
new file mode 100644
index 0000000..a81e212
--- /dev/null
+++ b/arch/arm64/boot/dts/qcom/msm8953-ext-codec-rcm.dts
@@ -0,0 +1,25 @@
+/*
+ * Copyright (c) 2015-2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+/dts-v1/;
+
+#include "msm8953.dtsi"
+#include "msm8953-cdp.dtsi"
+
+/ {
+ model = "Qualcomm Technologies, Inc. MSM8953 + PMI8950 Ext Codec RCM";
+ compatible = "qcom,msm8953-cdp", "qcom,msm8953", "qcom,cdp";
+ qcom,board-id = <21 1>;
+ qcom,pmic-id = <0x010016 0x010011 0x0 0x0>;
+};
+
diff --git a/arch/arm64/boot/dts/qcom/sdm845-qvr.dts b/arch/arm64/boot/dts/qcom/msm8953-iot-mtp.dts
similarity index 60%
copy from arch/arm64/boot/dts/qcom/sdm845-qvr.dts
copy to arch/arm64/boot/dts/qcom/msm8953-iot-mtp.dts
index 5513c92..524e7ca 100644
--- a/arch/arm64/boot/dts/qcom/sdm845-qvr.dts
+++ b/arch/arm64/boot/dts/qcom/msm8953-iot-mtp.dts
@@ -1,4 +1,5 @@
-/* Copyright (c) 2017, The Linux Foundation. All rights reserved.
+/*
+ * Copyright (c) 2016-2017, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -10,15 +11,15 @@
* GNU General Public License for more details.
*/
-
/dts-v1/;
-#include "sdm845-v2.dtsi"
-#include "sdm845-qvr.dtsi"
-#include "sdm845-camera-sensor-qvr.dtsi"
+#include "msm8953.dtsi"
+#include "msm8953-mtp.dtsi"
/ {
- model = "Qualcomm Technologies, Inc. SDM845 QVR";
- compatible = "qcom,sdm845-qvr", "qcom,sdm845", "qcom,qvr";
- qcom,board-id = <0x01000B 0x20>;
+ model = "Qualcomm Technologies, Inc. MSM8953 + PMI8950 IOT MTP";
+ compatible = "qcom,msm8953-mtp", "qcom,msm8953", "qcom,mtp";
+ qcom,board-id = <8 2>;
+ qcom,pmic-id = <0x010016 0x010011 0x0 0x0>;
};
+
diff --git a/arch/arm64/boot/dts/qcom/sdm845-qvr.dts b/arch/arm64/boot/dts/qcom/msm8953-ipc.dts
similarity index 60%
copy from arch/arm64/boot/dts/qcom/sdm845-qvr.dts
copy to arch/arm64/boot/dts/qcom/msm8953-ipc.dts
index 5513c92..89a54af 100644
--- a/arch/arm64/boot/dts/qcom/sdm845-qvr.dts
+++ b/arch/arm64/boot/dts/qcom/msm8953-ipc.dts
@@ -1,4 +1,5 @@
-/* Copyright (c) 2017, The Linux Foundation. All rights reserved.
+/*
+ * Copyright (c) 2016-2017, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -10,15 +11,15 @@
* GNU General Public License for more details.
*/
-
/dts-v1/;
-#include "sdm845-v2.dtsi"
-#include "sdm845-qvr.dtsi"
-#include "sdm845-camera-sensor-qvr.dtsi"
+#include "msm8953.dtsi"
+#include "msm8953-ipc.dtsi"
/ {
- model = "Qualcomm Technologies, Inc. SDM845 QVR";
- compatible = "qcom,sdm845-qvr", "qcom,sdm845", "qcom,qvr";
- qcom,board-id = <0x01000B 0x20>;
+ model = "Qualcomm Technologies, Inc. MSM8953 + PMI8950 IPC";
+ compatible = "qcom,msm8953-ipc", "qcom,msm8953", "qcom,ipc";
+ qcom,board-id = <12 0>;
+ qcom,pmic-id = <0x010016 0x010011 0x0 0x0>;
};
+
diff --git a/arch/arm64/boot/dts/qcom/msm8953-ipc.dtsi b/arch/arm64/boot/dts/qcom/msm8953-ipc.dtsi
new file mode 100644
index 0000000..26f4338
--- /dev/null
+++ b/arch/arm64/boot/dts/qcom/msm8953-ipc.dtsi
@@ -0,0 +1,18 @@
+/*
+ * Copyright (c) 2016-2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+&blsp1_uart0 {
+ status = "ok";
+ pinctrl-names = "default";
+ pinctrl-0 = <&uart_console_active>;
+};
diff --git a/arch/arm64/boot/dts/qcom/msm8953-pm.dtsi b/arch/arm64/boot/dts/qcom/msm8953-pm.dtsi
new file mode 100644
index 0000000..0cbb0f2
--- /dev/null
+++ b/arch/arm64/boot/dts/qcom/msm8953-pm.dtsi
@@ -0,0 +1,267 @@
+/* Copyright (c) 2015-2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <dt-bindings/msm/pm.h>
+
+&soc {
+ qcom,spm@b1d2000 {
+ compatible = "qcom,spm-v2";
+ #address-cells = <1>;
+ #size-cells = <1>;
+ reg = <0xb1d2000 0x1000>;
+ reg-names = "saw-base";
+ qcom,name = "system-cci";
+ qcom,saw2-ver-reg = <0xfd0>;
+ qcom,saw2-cfg = <0x14>;
+ qcom,saw2-spm-dly = <0x3C102800>;
+ qcom,saw2-spm-ctl = <0xe>;
+ qcom,saw2-avs-ctl = <0x10>;
+ qcom,cpu-vctl-list = <&CPU0 &CPU1 &CPU2 &CPU3
+ &CPU4 &CPU5 &CPU6 &CPU7>;
+ qcom,vctl-timeout-us = <500>;
+ qcom,vctl-port = <0x0>;
+ qcom,phase-port = <0x1>;
+ qcom,pfm-port = <0x2>;
+ };
+
+ qcom,lpm-levels {
+ compatible = "qcom,lpm-levels";
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ qcom,pm-cluster@0 {
+ reg = <0>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ label = "system";
+ qcom,psci-mode-shift = <8>;
+ qcom,psci-mode-mask = <0xf>;
+
+ qcom,pm-cluster-level@0 {
+ reg = <0>;
+ label = "system-active";
+ qcom,psci-mode = <0>;
+ qcom,latency-us = <423>;
+ qcom,ss-power = <349>;
+ qcom,energy-overhead = <299110>;
+ qcom,time-overhead = <719>;
+ };
+
+ qcom,pm-cluster-level@1 {
+ reg = <1>;
+ label = "system-wfi";
+ qcom,psci-mode = <1>;
+ qcom,latency-us = <480>;
+ qcom,ss-power = <348>;
+ qcom,energy-overhead = <313989>;
+ qcom,time-overhead = <753>;
+ qcom,min-child-idx = <3>;
+ };
+
+ qcom,pm-cluster-level@2 {
+ reg = <2>;
+ label = "system-pc";
+ qcom,psci-mode = <3>;
+ qcom,latency-us = <11027>;
+ qcom,ss-power = <340>;
+ qcom,energy-overhead = <616035>;
+ qcom,time-overhead = <1495>;
+ qcom,min-child-idx = <3>;
+ qcom,notify-rpm;
+ qcom,is-reset;
+ qcom,reset-level = <LPM_RESET_LVL_PC>;
+ };
+
+ qcom,pm-cluster@0 {
+ reg = <0>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ label = "pwr";
+ qcom,psci-mode-shift = <4>;
+ qcom,psci-mode-mask = <0xf>;
+
+ qcom,pm-cluster-level@0 {
+ reg = <0>;
+ label = "pwr-l2-wfi";
+ qcom,psci-mode = <1>;
+ qcom,latency-us = <180>;
+ qcom,ss-power = <361>;
+ qcom,energy-overhead = <113215>;
+ qcom,time-overhead = <270>;
+ };
+
+ qcom,pm-cluster-level@1 {
+ reg = <1>;
+ label = "pwr-l2-retention";
+ qcom,psci-mode = <3>;
+ qcom,latency-us = <188>;
+ qcom,ss-power = <360>;
+ qcom,energy-overhead = <121271>;
+ qcom,time-overhead = <307>;
+ qcom,min-child-idx = <1>;
+ qcom,reset-level =
+ <LPM_RESET_LVL_RET>;
+ };
+
+ qcom,pm-cluster-level@2 {
+ reg = <2>;
+ label = "pwr-l2-gdhs";
+ qcom,psci-mode = <4>;
+ qcom,latency-us = <212>;
+ qcom,ss-power = <354>;
+ qcom,energy-overhead = <150748>;
+ qcom,time-overhead = <363>;
+ qcom,min-child-idx = <1>;
+ qcom,reset-level =
+ <LPM_RESET_LVL_GDHS>;
+ };
+
+ qcom,pm-cluster-level@3 {
+ reg = <3>;
+ label = "pwr-l2-pc";
+ qcom,psci-mode = <5>;
+ qcom,latency-us = <421>;
+ qcom,ss-power = <350>;
+ qcom,energy-overhead = <277479>;
+ qcom,time-overhead = <690>;
+ qcom,min-child-idx = <1>;
+ qcom,is-reset;
+ qcom,reset-level =
+ <LPM_RESET_LVL_PC>;
+ };
+
+ qcom,pm-cpu {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ qcom,psci-mode-shift = <0>;
+ qcom,psci-mode-mask = <0xf>;
+ qcom,cpu = <&CPU0 &CPU1 &CPU2 &CPU3>;
+
+ qcom,pm-cpu-level@0 {
+ reg = <0>;
+ label = "wfi";
+ qcom,psci-cpu-mode = <1>;
+ qcom,latency-us = <1>;
+ qcom,ss-power = <395>;
+ qcom,energy-overhead = <30424>;
+ qcom,time-overhead = <61>;
+ };
+
+ qcom,pm-cpu-level@1 {
+ reg = <1>;
+ label = "pc";
+ qcom,psci-cpu-mode = <3>;
+ qcom,latency-us = <180>;
+ qcom,ss-power = <361>;
+ qcom,energy-overhead = <113215>;
+ qcom,time-overhead = <270>;
+ qcom,use-broadcast-timer;
+ qcom,is-reset;
+ qcom,reset-level =
+ <LPM_RESET_LVL_PC>;
+ };
+ };
+ };
+
+ qcom,pm-cluster@1 {
+ reg = <1>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ label = "perf";
+ qcom,psci-mode-shift = <4>;
+ qcom,psci-mode-mask = <0xf>;
+
+ qcom,pm-cluster-level@0 {
+ reg = <0>;
+ label = "perf-l2-wfi";
+ qcom,psci-mode = <1>;
+ qcom,latency-us = <179>;
+ qcom,ss-power = <362>;
+ qcom,energy-overhead = <113242>;
+ qcom,time-overhead = <268>;
+ };
+
+ qcom,pm-cluster-level@1 {
+ reg = <1>;
+ label = "perf-l2-retention";
+ qcom,psci-mode = <3>;
+ qcom,latency-us = <186>;
+ qcom,ss-power = <361>;
+ qcom,energy-overhead = <125982>;
+ qcom,time-overhead = <306>;
+ qcom,min-child-idx = <1>;
+ qcom,reset-level =
+ <LPM_RESET_LVL_RET>;
+ };
+
+ qcom,pm-cluster-level@2 {
+ reg = <2>;
+ label = "perf-l2-gdhs";
+ qcom,psci-mode = <4>;
+ qcom,latency-us = <210>;
+ qcom,ss-power = <355>;
+ qcom,energy-overhead = <155728>;
+ qcom,time-overhead = <361>;
+ qcom,min-child-idx = <1>;
+ qcom,reset-level =
+ <LPM_RESET_LVL_GDHS>;
+ };
+
+ qcom,pm-cluster-level@3 {
+ reg = <3>;
+ label = "perf-l2-pc";
+ qcom,psci-mode = <5>;
+ qcom,latency-us = <423>;
+ qcom,ss-power = <349>;
+ qcom,energy-overhead = <299110>;
+ qcom,time-overhead = <719>;
+ qcom,min-child-idx = <1>;
+ qcom,is-reset;
+ qcom,reset-level =
+ <LPM_RESET_LVL_PC>;
+ };
+
+ qcom,pm-cpu {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ qcom,psci-mode-shift = <0>;
+ qcom,psci-mode-mask = <0xf>;
+ qcom,cpu = <&CPU4 &CPU5 &CPU6 &CPU7>;
+
+ qcom,pm-cpu-level@0 {
+ reg = <0>;
+ label = "wfi";
+ qcom,psci-cpu-mode = <1>;
+ qcom,latency-us = <1>;
+ qcom,ss-power = <397>;
+ qcom,energy-overhead = <30424>;
+ qcom,time-overhead = <61>;
+ };
+
+ qcom,pm-cpu-level@1 {
+ reg = <1>;
+ label = "pc";
+ qcom,psci-cpu-mode = <3>;
+ qcom,latency-us = <179>;
+ qcom,ss-power = <362>;
+ qcom,energy-overhead = <113242>;
+ qcom,time-overhead = <268>;
+ qcom,use-broadcast-timer;
+ qcom,is-reset;
+ qcom,reset-level =
+ <LPM_RESET_LVL_PC>;
+ };
+ };
+ };
+ };
+ };
+};
diff --git a/arch/arm64/boot/dts/qcom/sdm845-qvr.dts b/arch/arm64/boot/dts/qcom/msm8953-pmi8937-cdp.dts
similarity index 60%
copy from arch/arm64/boot/dts/qcom/sdm845-qvr.dts
copy to arch/arm64/boot/dts/qcom/msm8953-pmi8937-cdp.dts
index 5513c92..a751d5d 100644
--- a/arch/arm64/boot/dts/qcom/sdm845-qvr.dts
+++ b/arch/arm64/boot/dts/qcom/msm8953-pmi8937-cdp.dts
@@ -1,4 +1,5 @@
-/* Copyright (c) 2017, The Linux Foundation. All rights reserved.
+/*
+ * Copyright (c) 2016-2017, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -10,15 +11,15 @@
* GNU General Public License for more details.
*/
-
/dts-v1/;
-#include "sdm845-v2.dtsi"
-#include "sdm845-qvr.dtsi"
-#include "sdm845-camera-sensor-qvr.dtsi"
+#include "msm8953.dtsi"
+#include "msm8953-cdp.dtsi"
/ {
- model = "Qualcomm Technologies, Inc. SDM845 QVR";
- compatible = "qcom,sdm845-qvr", "qcom,sdm845", "qcom,qvr";
- qcom,board-id = <0x01000B 0x20>;
+ model = "Qualcomm Technologies, Inc. MSM8953 + PMI8937 CDP";
+ compatible = "qcom,msm8953-cdp", "qcom,msm8953", "qcom,cdp";
+ qcom,board-id = <1 0>;
+ qcom,pmic-id = <0x010016 0x020037 0x0 0x0>;
};
+
diff --git a/arch/arm64/boot/dts/qcom/msm8953-pmi8937-ext-codec-mtp.dts b/arch/arm64/boot/dts/qcom/msm8953-pmi8937-ext-codec-mtp.dts
new file mode 100644
index 0000000..13aba62
--- /dev/null
+++ b/arch/arm64/boot/dts/qcom/msm8953-pmi8937-ext-codec-mtp.dts
@@ -0,0 +1,25 @@
+/*
+ * Copyright (c) 2015-2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+/dts-v1/;
+
+#include "msm8953.dtsi"
+#include "msm8953-mtp.dtsi"
+
+/ {
+ model = "Qualcomm Technologies, Inc. MSM8953 + PMI8937 Ext Codec MTP";
+ compatible = "qcom,msm8953-mtp", "qcom,msm8953", "qcom,mtp";
+ qcom,board-id = <8 1>;
+ qcom,pmic-id = <0x010016 0x020037 0x0 0x0>;
+};
+
diff --git a/arch/arm64/boot/dts/qcom/sdm845-qvr.dts b/arch/arm64/boot/dts/qcom/msm8953-pmi8937-mtp.dts
similarity index 60%
copy from arch/arm64/boot/dts/qcom/sdm845-qvr.dts
copy to arch/arm64/boot/dts/qcom/msm8953-pmi8937-mtp.dts
index 5513c92..9d6be47 100644
--- a/arch/arm64/boot/dts/qcom/sdm845-qvr.dts
+++ b/arch/arm64/boot/dts/qcom/msm8953-pmi8937-mtp.dts
@@ -1,4 +1,5 @@
-/* Copyright (c) 2017, The Linux Foundation. All rights reserved.
+/*
+ * Copyright (c) 2016-2017, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -10,15 +11,15 @@
* GNU General Public License for more details.
*/
-
/dts-v1/;
-#include "sdm845-v2.dtsi"
-#include "sdm845-qvr.dtsi"
-#include "sdm845-camera-sensor-qvr.dtsi"
+#include "msm8953.dtsi"
+#include "msm8953-mtp.dtsi"
/ {
- model = "Qualcomm Technologies, Inc. SDM845 QVR";
- compatible = "qcom,sdm845-qvr", "qcom,sdm845", "qcom,qvr";
- qcom,board-id = <0x01000B 0x20>;
+ model = "Qualcomm Technologies, Inc. MSM8953 + PMI8937 MTP";
+ compatible = "qcom,msm8953-mtp", "qcom,msm8953", "qcom,mtp";
+ qcom,board-id = <8 0>;
+ qcom,pmic-id = <0x010016 0x020037 0x0 0x0>;
};
+
diff --git a/arch/arm64/boot/dts/qcom/sdm845-qvr.dts b/arch/arm64/boot/dts/qcom/msm8953-pmi8940-cdp.dts
similarity index 60%
copy from arch/arm64/boot/dts/qcom/sdm845-qvr.dts
copy to arch/arm64/boot/dts/qcom/msm8953-pmi8940-cdp.dts
index 5513c92..d2bb465 100644
--- a/arch/arm64/boot/dts/qcom/sdm845-qvr.dts
+++ b/arch/arm64/boot/dts/qcom/msm8953-pmi8940-cdp.dts
@@ -1,4 +1,5 @@
-/* Copyright (c) 2017, The Linux Foundation. All rights reserved.
+/*
+ * Copyright (c) 2016-2017, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -10,15 +11,15 @@
* GNU General Public License for more details.
*/
-
/dts-v1/;
-#include "sdm845-v2.dtsi"
-#include "sdm845-qvr.dtsi"
-#include "sdm845-camera-sensor-qvr.dtsi"
+#include "msm8953.dtsi"
+#include "msm8953-cdp.dtsi"
/ {
- model = "Qualcomm Technologies, Inc. SDM845 QVR";
- compatible = "qcom,sdm845-qvr", "qcom,sdm845", "qcom,qvr";
- qcom,board-id = <0x01000B 0x20>;
+ model = "Qualcomm Technologies, Inc. MSM8953 + PMI8940 CDP";
+ compatible = "qcom,msm8953-cdp", "qcom,msm8953", "qcom,cdp";
+ qcom,board-id = <1 0>;
+ qcom,pmic-id = <0x010016 0x020040 0x0 0x0>;
};
+
diff --git a/arch/arm64/boot/dts/qcom/msm8953-pmi8940-ext-codec-mtp.dts b/arch/arm64/boot/dts/qcom/msm8953-pmi8940-ext-codec-mtp.dts
new file mode 100644
index 0000000..dbbb6b8
--- /dev/null
+++ b/arch/arm64/boot/dts/qcom/msm8953-pmi8940-ext-codec-mtp.dts
@@ -0,0 +1,25 @@
+/*
+ * Copyright (c) 2015-2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+/dts-v1/;
+
+#include "msm8953.dtsi"
+#include "msm8953-mtp.dtsi"
+
+/ {
+ model = "Qualcomm Technologies, Inc. MSM8953 + PMI8940 Ext Codec MTP";
+ compatible = "qcom,msm8953-mtp", "qcom,msm8953", "qcom,mtp";
+ qcom,board-id = <8 1>;
+ qcom,pmic-id = <0x010016 0x020040 0x0 0x0>;
+};
+
diff --git a/arch/arm64/boot/dts/qcom/sdm845-qvr.dts b/arch/arm64/boot/dts/qcom/msm8953-pmi8940-mtp.dts
similarity index 60%
copy from arch/arm64/boot/dts/qcom/sdm845-qvr.dts
copy to arch/arm64/boot/dts/qcom/msm8953-pmi8940-mtp.dts
index 5513c92..0fb793b 100644
--- a/arch/arm64/boot/dts/qcom/sdm845-qvr.dts
+++ b/arch/arm64/boot/dts/qcom/msm8953-pmi8940-mtp.dts
@@ -1,4 +1,5 @@
-/* Copyright (c) 2017, The Linux Foundation. All rights reserved.
+/*
+ * Copyright (c) 2016-2017, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -10,15 +11,15 @@
* GNU General Public License for more details.
*/
-
/dts-v1/;
-#include "sdm845-v2.dtsi"
-#include "sdm845-qvr.dtsi"
-#include "sdm845-camera-sensor-qvr.dtsi"
+#include "msm8953.dtsi"
+#include "msm8953-mtp.dtsi"
/ {
- model = "Qualcomm Technologies, Inc. SDM845 QVR";
- compatible = "qcom,sdm845-qvr", "qcom,sdm845", "qcom,qvr";
- qcom,board-id = <0x01000B 0x20>;
+ model = "Qualcomm Technologies, Inc. MSM8953 + PMI8940 MTP";
+ compatible = "qcom,msm8953-mtp", "qcom,msm8953", "qcom,mtp";
+ qcom,board-id = <8 0>;
+ qcom,pmic-id = <0x010016 0x020040 0x0 0x0>;
};
+
diff --git a/arch/arm64/boot/dts/qcom/msm8953-qrd-sku3.dts b/arch/arm64/boot/dts/qcom/msm8953-qrd-sku3.dts
new file mode 100644
index 0000000..5d892fd
--- /dev/null
+++ b/arch/arm64/boot/dts/qcom/msm8953-qrd-sku3.dts
@@ -0,0 +1,26 @@
+/*
+ * Copyright (c) 2015-2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+/dts-v1/;
+
+#include "msm8953.dtsi"
+#include "msm8953-qrd-sku3.dtsi"
+
+/ {
+ model = "Qualcomm Technologies, Inc. MSM8953 + PMI8950 QRD SKU3";
+ compatible = "qcom,msm8953-qrd-sku3",
+ "qcom,msm8953-qrd", "qcom,msm8953", "qcom,qrd";
+ qcom,board-id = <0x2000b 0>;
+ qcom,pmic-id = <0x010016 0x010011 0x0 0x0>;
+};
+
diff --git a/arch/arm64/boot/dts/qcom/msm8953-qrd-sku3.dtsi b/arch/arm64/boot/dts/qcom/msm8953-qrd-sku3.dtsi
new file mode 100644
index 0000000..96e185b
--- /dev/null
+++ b/arch/arm64/boot/dts/qcom/msm8953-qrd-sku3.dtsi
@@ -0,0 +1,15 @@
+/*
+ * Copyright (c) 2015-2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include "msm8953-qrd.dtsi"
+
diff --git a/arch/arm64/boot/dts/qcom/sdm845-qvr.dts b/arch/arm64/boot/dts/qcom/msm8953-rcm.dts
similarity index 60%
copy from arch/arm64/boot/dts/qcom/sdm845-qvr.dts
copy to arch/arm64/boot/dts/qcom/msm8953-rcm.dts
index 5513c92..a3117ed 100644
--- a/arch/arm64/boot/dts/qcom/sdm845-qvr.dts
+++ b/arch/arm64/boot/dts/qcom/msm8953-rcm.dts
@@ -1,4 +1,5 @@
-/* Copyright (c) 2017, The Linux Foundation. All rights reserved.
+/*
+ * Copyright (c) 2015-2017, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -10,15 +11,15 @@
* GNU General Public License for more details.
*/
-
/dts-v1/;
-#include "sdm845-v2.dtsi"
-#include "sdm845-qvr.dtsi"
-#include "sdm845-camera-sensor-qvr.dtsi"
+#include "msm8953.dtsi"
+#include "msm8953-cdp.dtsi"
/ {
- model = "Qualcomm Technologies, Inc. SDM845 QVR";
- compatible = "qcom,sdm845-qvr", "qcom,sdm845", "qcom,qvr";
- qcom,board-id = <0x01000B 0x20>;
+ model = "Qualcomm Technologies, Inc. MSM8953 + PMI8950 RCM";
+ compatible = "qcom,msm8953-cdp", "qcom,msm8953", "qcom,cdp";
+ qcom,board-id = <21 0>;
+ qcom,pmic-id = <0x010016 0x010011 0x0 0x0>;
};
+
diff --git a/arch/arm64/boot/dts/qcom/msm8953.dtsi b/arch/arm64/boot/dts/qcom/msm8953.dtsi
index e90c30b..12b39a9 100644
--- a/arch/arm64/boot/dts/qcom/msm8953.dtsi
+++ b/arch/arm64/boot/dts/qcom/msm8953.dtsi
@@ -103,6 +103,17 @@
aliases {
/* smdtty devices */
+ smd1 = &smdtty_apps_fm;
+ smd2 = &smdtty_apps_riva_bt_acl;
+ smd3 = &smdtty_apps_riva_bt_cmd;
+ smd4 = &smdtty_mbalbridge;
+ smd5 = &smdtty_apps_riva_ant_cmd;
+ smd6 = &smdtty_apps_riva_ant_data;
+ smd7 = &smdtty_data1;
+ smd8 = &smdtty_data4;
+ smd11 = &smdtty_data11;
+ smd21 = &smdtty_data21;
+ smd36 = &smdtty_loopback;
sdhc1 = &sdhc_1; /* SDC1 eMMC slot */
sdhc2 = &sdhc_2; /* SDC2 for SD card */
};
@@ -113,6 +124,7 @@
#include "msm8953-pinctrl.dtsi"
#include "msm8953-cpu.dtsi"
+#include "msm8953-pm.dtsi"
&soc {
@@ -454,6 +466,106 @@
};
};
+ qcom,smdtty {
+ compatible = "qcom,smdtty";
+
+ smdtty_apps_fm: qcom,smdtty-apps-fm {
+ qcom,smdtty-remote = "wcnss";
+ qcom,smdtty-port-name = "APPS_FM";
+ };
+
+ smdtty_apps_riva_bt_acl: smdtty-apps-riva-bt-acl {
+ qcom,smdtty-remote = "wcnss";
+ qcom,smdtty-port-name = "APPS_RIVA_BT_ACL";
+ };
+
+ smdtty_apps_riva_bt_cmd: qcom,smdtty-apps-riva-bt-cmd {
+ qcom,smdtty-remote = "wcnss";
+ qcom,smdtty-port-name = "APPS_RIVA_BT_CMD";
+ };
+
+ smdtty_mbalbridge: qcom,smdtty-mbalbridge {
+ qcom,smdtty-remote = "modem";
+ qcom,smdtty-port-name = "MBALBRIDGE";
+ };
+
+ smdtty_apps_riva_ant_cmd: smdtty-apps-riva-ant-cmd {
+ qcom,smdtty-remote = "wcnss";
+ qcom,smdtty-port-name = "APPS_RIVA_ANT_CMD";
+ };
+
+ smdtty_apps_riva_ant_data: smdtty-apps-riva-ant-data {
+ qcom,smdtty-remote = "wcnss";
+ qcom,smdtty-port-name = "APPS_RIVA_ANT_DATA";
+ };
+
+ smdtty_data1: qcom,smdtty-data1 {
+ qcom,smdtty-remote = "modem";
+ qcom,smdtty-port-name = "DATA1";
+ };
+
+ smdtty_data4: qcom,smdtty-data4 {
+ qcom,smdtty-remote = "modem";
+ qcom,smdtty-port-name = "DATA4";
+ };
+
+ smdtty_data11: qcom,smdtty-data11 {
+ qcom,smdtty-remote = "modem";
+ qcom,smdtty-port-name = "DATA11";
+ };
+
+ smdtty_data21: qcom,smdtty-data21 {
+ qcom,smdtty-remote = "modem";
+ qcom,smdtty-port-name = "DATA21";
+ };
+
+ smdtty_loopback: smdtty-loopback {
+ qcom,smdtty-remote = "modem";
+ qcom,smdtty-port-name = "LOOPBACK";
+ qcom,smdtty-dev-name = "LOOPBACK_TTY";
+ };
+ };
+
+ qcom,smdpkt {
+ compatible = "qcom,smdpkt";
+
+ qcom,smdpkt-data5-cntl {
+ qcom,smdpkt-remote = "modem";
+ qcom,smdpkt-port-name = "DATA5_CNTL";
+ qcom,smdpkt-dev-name = "smdcntl0";
+ };
+
+ qcom,smdpkt-data22 {
+ qcom,smdpkt-remote = "modem";
+ qcom,smdpkt-port-name = "DATA22";
+ qcom,smdpkt-dev-name = "smd22";
+ };
+
+ qcom,smdpkt-data40-cntl {
+ qcom,smdpkt-remote = "modem";
+ qcom,smdpkt-port-name = "DATA40_CNTL";
+ qcom,smdpkt-dev-name = "smdcntl8";
+ };
+
+ qcom,smdpkt-apr-apps2 {
+ qcom,smdpkt-remote = "adsp";
+ qcom,smdpkt-port-name = "apr_apps2";
+ qcom,smdpkt-dev-name = "apr_apps2";
+ };
+
+ qcom,smdpkt-loopback {
+ qcom,smdpkt-remote = "modem";
+ qcom,smdpkt-port-name = "LOOPBACK";
+ qcom,smdpkt-dev-name = "smd_pkt_loopback";
+ };
+ };
+
+ rpm_bus: qcom,rpm-smd {
+ compatible = "qcom,rpm-smd";
+ rpm-channel-name = "rpm_requests";
+ rpm-channel-type = <15>; /* SMD_APPS_RPM */
+ };
+
qcom,wdt@b017000 {
compatible = "qcom,msm-watchdog";
reg = <0xb017000 0x1000>;
@@ -490,6 +602,11 @@
reg = <0x10 8>;
};
+ dload_type@18 {
+ compatible = "qcom,msm-imem-dload-type";
+ reg = <0x18 4>;
+ };
+
restart_reason@65c {
compatible = "qcom,msm-imem-restart_reason";
reg = <0x65c 4>;
@@ -640,10 +757,10 @@
interrupts = <GIC_SPI 190 IRQ_TYPE_NONE>;
qcom,ee = <0>;
qcom,channel = <0>;
- #address-cells = <1>;
+ #address-cells = <2>;
#size-cells = <0>;
interrupt-controller;
- #interrupt-cells = <3>;
+ #interrupt-cells = <4>;
cell-index = <0>;
};
};
diff --git a/arch/arm64/boot/dts/qcom/sdm845-qvr.dts b/arch/arm64/boot/dts/qcom/mtp8953-ipc.dts
similarity index 60%
copy from arch/arm64/boot/dts/qcom/sdm845-qvr.dts
copy to arch/arm64/boot/dts/qcom/mtp8953-ipc.dts
index 5513c92..481e576 100644
--- a/arch/arm64/boot/dts/qcom/sdm845-qvr.dts
+++ b/arch/arm64/boot/dts/qcom/mtp8953-ipc.dts
@@ -1,4 +1,5 @@
-/* Copyright (c) 2017, The Linux Foundation. All rights reserved.
+/*
+ * Copyright (c) 2016-2017, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -10,15 +11,14 @@
* GNU General Public License for more details.
*/
-
/dts-v1/;
-#include "sdm845-v2.dtsi"
-#include "sdm845-qvr.dtsi"
-#include "sdm845-camera-sensor-qvr.dtsi"
+#include "msm8953.dtsi"
+#include "msm8953-ipc.dtsi"
/ {
- model = "Qualcomm Technologies, Inc. SDM845 QVR";
- compatible = "qcom,sdm845-qvr", "qcom,sdm845", "qcom,qvr";
- qcom,board-id = <0x01000B 0x20>;
+ model = "Qualcomm Technologies, Inc. MSM8953 + PMI8950 IPC";
+ compatible = "qcom,msm8953-ipc", "qcom,msm8953", "qcom,ipc";
+ qcom,board-id = <12 0>;
+ qcom,pmic-id = <0x010016 0x010011 0x0 0x0>;
};
diff --git a/arch/arm64/boot/dts/qcom/pmi8950.dtsi b/arch/arm64/boot/dts/qcom/pmi8950.dtsi
index 0ec1f0b..4223cfe 100644
--- a/arch/arm64/boot/dts/qcom/pmi8950.dtsi
+++ b/arch/arm64/boot/dts/qcom/pmi8950.dtsi
@@ -434,6 +434,7 @@
#address-cells = <1>;
#size-cells = <1>;
qcom,pmic-revid = <&pmi8950_revid>;
+ qcom,qpnp-labibb-mode = "lcd";
ibb_regulator: qcom,ibb@dc00 {
reg = <0xdc00 0x100>;
diff --git a/arch/arm64/boot/dts/qcom/pmi8998.dtsi b/arch/arm64/boot/dts/qcom/pmi8998.dtsi
index c65430b1..5b48c14 100644
--- a/arch/arm64/boot/dts/qcom/pmi8998.dtsi
+++ b/arch/arm64/boot/dts/qcom/pmi8998.dtsi
@@ -483,6 +483,10 @@
regulator-min-microvolt = <4600000>;
regulator-max-microvolt = <6000000>;
+ interrupts = <0x3 0xdc 0x2
+ IRQ_TYPE_EDGE_RISING>;
+ interrupt-names = "ibb-sc-err";
+
qcom,qpnp-ibb-min-voltage = <1400000>;
qcom,qpnp-ibb-step-size = <100000>;
qcom,qpnp-ibb-slew-rate = <2000000>;
@@ -516,8 +520,11 @@
regulator-max-microvolt = <6000000>;
interrupts = <0x3 0xde 0x0
+ IRQ_TYPE_EDGE_RISING>,
+ <0x3 0xde 0x1
IRQ_TYPE_EDGE_RISING>;
- interrupt-names = "lab-vreg-ok";
+ interrupt-names = "lab-vreg-ok", "lab-sc-err";
+
qcom,qpnp-lab-min-voltage = <4600000>;
qcom,qpnp-lab-step-size = <100000>;
qcom,qpnp-lab-slew-rate = <5000>;
@@ -681,6 +688,14 @@
qcom,led-mask = <4>;
qcom,default-led-trigger = "switch1_trigger";
};
+
+ pmi8998_switch2: qcom,led_switch_2 {
+ label = "switch";
+ qcom,led-name = "led:switch_2";
+ qcom,led-mask = <4>;
+ qcom,default-led-trigger = "switch2_trigger";
+ };
+
};
pmi8998_haptics: qcom,haptics@c000 {
diff --git a/arch/arm64/boot/dts/qcom/qcs605-360camera.dts b/arch/arm64/boot/dts/qcom/qcs605-360camera.dts
new file mode 100644
index 0000000..8caad4b
--- /dev/null
+++ b/arch/arm64/boot/dts/qcom/qcs605-360camera.dts
@@ -0,0 +1,27 @@
+/*
+ * Copyright (c) 2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+/dts-v1/;
+
+#include "qcs605.dtsi"
+#include "qcs605-360camera.dtsi"
+
+/ {
+ model = "Qualcomm Technologies, Inc. QCS605 PM660 + PM660L 360camera";
+ compatible = "qcom,qcs605-mtp", "qcom,qcs605", "qcom,mtp";
+ qcom,board-id = <0x0000000b 1>;
+ qcom,pmic-id = <0x0001001b 0x0101011a 0x0 0x0>,
+ <0x0001001b 0x0102001a 0x0 0x0>,
+ <0x0001001b 0x0201011a 0x0 0x0>;
+};
diff --git a/arch/arm64/boot/dts/qcom/qcs605-360camera.dtsi b/arch/arm64/boot/dts/qcom/qcs605-360camera.dtsi
new file mode 100644
index 0000000..6670edd
--- /dev/null
+++ b/arch/arm64/boot/dts/qcom/qcs605-360camera.dtsi
@@ -0,0 +1,285 @@
+/*
+ * Copyright (c) 2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include "sdm670-mtp.dtsi"
+#include "sdm670-camera-sensor-360camera.dtsi"
+#include "sdm670-audio-overlay.dtsi"
+
+&qupv3_se3_i2c {
+ status = "disabled";
+};
+
+&qupv3_se10_i2c {
+ status = "okay";
+};
+
+&qupv3_se12_2uart {
+ status = "okay";
+};
+
+&qupv3_se6_4uart {
+ status = "okay";
+};
+
+&qupv3_se13_i2c {
+ status = "disabled";
+};
+
+&qupv3_se13_spi {
+ status = "disabled";
+};
+
+&int_codec {
+ qcom,model = "sdm670-360cam-snd-card";
+ qcom,audio-routing =
+ "RX_BIAS", "INT_MCLK0",
+ "SPK_RX_BIAS", "INT_MCLK0",
+ "INT_LDO_H", "INT_MCLK0",
+ "DMIC1", "MIC BIAS External",
+ "MIC BIAS External", "Digital Mic1",
+ "DMIC2", "MIC BIAS External",
+ "MIC BIAS External", "Digital Mic2",
+ "DMIC3", "MIC BIAS External2",
+ "MIC BIAS External2", "Digital Mic3",
+ "DMIC4", "MIC BIAS External2",
+ "MIC BIAS External2", "Digital Mic4",
+ "PDM_IN_RX1", "PDM_OUT_RX1",
+ "PDM_IN_RX2", "PDM_OUT_RX2",
+ "PDM_IN_RX3", "PDM_OUT_RX3",
+ "ADC1_IN", "ADC1_OUT",
+ "ADC2_IN", "ADC2_OUT",
+ "ADC3_IN", "ADC3_OUT";
+ qcom,wsa-max-devs = <0>;
+};
+
+&tlmm {
+ pwr_led_green_default: pwr_led_green_default {
+ mux {
+ pins = "gpio106";
+ function = "gpio";
+ };
+ config {
+ pins = "gpio106";
+ drive-strength = <8>; /* 8 mA */
+ bias-disable;
+ output-low;
+ };
+ };
+
+ pwr_led_red_default: pwr_led_red_default {
+ mux {
+ pins = "gpio111";
+ function = "gpio";
+ };
+ config {
+ pins = "gpio111";
+ drive-strength = <8>; /* 8 mA */
+ bias-disable;
+ output-low;
+ };
+ };
+
+ wifi_led_green_default: wifi_led_green_default {
+ mux {
+ pins = "gpio114";
+ function = "gpio";
+ };
+ config {
+ pins = "gpio114";
+ drive-strength = <8>; /* 8 mA */
+ bias-disable;
+ output-low;
+ };
+ };
+
+ wifi_led_red_default: wifi_led_red_default {
+ mux {
+ pins = "gpio115";
+ function = "gpio";
+ };
+ config {
+ pins = "gpio115";
+ drive-strength = <8>; /* 8 mA */
+ bias-disable;
+ output-low;
+ };
+ };
+
+ key_wcnss_default: key_wcnss_default {
+ mux {
+ pins = "gpio120";
+ function = "gpio";
+ };
+ config {
+ pins = "gpio120";
+ drive-strength = <8>; /* 8 mA */
+ bias-pull-up;
+ input-enable;
+ };
+ };
+
+ key_record_default: key_record_default {
+ mux {
+ pins = "gpio119";
+ function = "gpio";
+ };
+ config {
+ pins = "gpio119";
+ drive-strength = <8>; /* 8 mA */
+ bias-pull-up;
+ input-enable;
+ };
+ };
+
+ key_snapshot_default: key_snapshot_default {
+ mux {
+ pins = "gpio91";
+ function = "gpio";
+ };
+ config {
+ pins = "gpio91";
+ drive-strength = <8>; /* 8 mA */
+ bias-pull-up;
+ input-enable;
+ };
+ };
+};
+
+&soc {
+ gpio-leds {
+ compatible = "gpio-leds";
+ pinctrl-names = "default";
+ pinctrl-0 = <&pwr_led_green_default
+ &pwr_led_red_default
+ &wifi_led_green_default
+ &wifi_led_red_default>;
+ status = "okay";
+
+ led@1 {
+ label = "PWR_LED:red:106";
+ gpios = <&tlmm 106 GPIO_ACTIVE_HIGH>;
+ linux,default-trigger = "wlan";
+ default-state = "off";
+ };
+
+ led@2 {
+ label = "PWR_LED:green:111";
+ gpios = <&tlmm 111 GPIO_ACTIVE_HIGH>;
+ linux,default-trigger = "wlan";
+ default-state = "on";
+ };
+
+ led@3 {
+ label = "WIFI_LED:red:114";
+ gpios = <&tlmm 114 GPIO_ACTIVE_HIGH>;
+ linux,default-trigger = "wlan";
+ default-state = "on";
+ };
+
+ led@4 {
+ label = "WIFI_LED:green:115";
+ gpios = <&tlmm 115 GPIO_ACTIVE_HIGH>;
+ linux,default-trigger = "wlan";
+ default-state = "off";
+ };
+ };
+
+ gpio_keys {
+ compatible = "gpio-keys";
+ label = "gpio-keys";
+
+ pinctrl-names = "default";
+ pinctrl-0 = <&key_snapshot_default
+ &key_record_default
+ &key_wcnss_default>;
+ status = "okay";
+ cam_snapshot {
+ label = "cam_snapshot";
+ gpios = <&tlmm 91 GPIO_ACTIVE_LOW>;
+ linux,input-type = <1>;
+ linux,code = <766>;
+ gpio-key,wakeup;
+ debounce-interval = <15>;
+ linux,can-disable;
+ };
+
+ cam_record {
+ label = "cam_record";
+ gpios = <&tlmm 119 GPIO_ACTIVE_LOW>;
+ linux,input-type = <1>;
+ linux,code = <766>;
+ gpio-key,wakeup;
+ debounce-interval = <15>;
+ linux,can-disable;
+ };
+
+ wcnss_key {
+ label = "wcnss_key";
+ gpios = <&tlmm 120 GPIO_ACTIVE_LOW>;
+ linux,input-type = <1>;
+ linux,code = <528>;
+ gpio-key,wakeup;
+ debounce-interval = <15>;
+ linux,can-disable;
+ };
+ };
+};
+&pm660_vadc {
+
+ chan@4e {
+ label = "emmc_therm";
+ reg = <0x4e>;
+ qcom,decimation = <2>;
+ qcom,pre-div-channel-scaling = <0>;
+ qcom,calibration-type = "ratiometric";
+ qcom,scale-function = <2>;
+ qcom,hw-settle-time = <2>;
+ qcom,fast-avg-setup = <0>;
+ };
+
+ chan@4f {
+ label = "pa_therm0";
+ reg = <0x4f>;
+ qcom,decimation = <2>;
+ qcom,pre-div-channel-scaling = <0>;
+ qcom,calibration-type = "ratiometric";
+ qcom,scale-function = <2>;
+ qcom,hw-settle-time = <2>;
+ qcom,fast-avg-setup = <0>;
+ };
+};
+
+&pm660_adc_tm {
+
+ chan@4e {
+ label = "emmc_therm";
+ reg = <0x4e>;
+ qcom,pre-div-channel-scaling = <0>;
+ qcom,calibration-type = "ratiometric";
+ qcom,scale-function = <2>;
+ qcom,hw-settle-time = <2>;
+ qcom,btm-channel-number = <0x80>;
+ qcom,thermal-node;
+ };
+
+ chan@4f {
+ label = "pa_therm0";
+ reg = <0x4f>;
+ qcom,pre-div-channel-scaling = <0>;
+ qcom,calibration-type = "ratiometric";
+ qcom,scale-function = <2>;
+ qcom,hw-settle-time = <2>;
+ qcom,btm-channel-number = <0x88>;
+ qcom,thermal-node;
+ };
+};
diff --git a/arch/arm64/boot/dts/qcom/qcs605-mtp-overlay.dts b/arch/arm64/boot/dts/qcom/qcs605-mtp-overlay.dts
index 7955242..65be275 100644
--- a/arch/arm64/boot/dts/qcom/qcs605-mtp-overlay.dts
+++ b/arch/arm64/boot/dts/qcom/qcs605-mtp-overlay.dts
@@ -32,3 +32,47 @@
<0x0001001b 0x0102001a 0x0 0x0>,
<0x0001001b 0x0201011a 0x0 0x0>;
};
+
+&cam_cci {
+ /delete-node/ qcom,cam-sensor@1;
+ qcom,cam-sensor@1 {
+ cell-index = <1>;
+ compatible = "qcom,cam-sensor";
+ reg = <0x1>;
+ csiphy-sd-index = <1>;
+ sensor-position-roll = <90>;
+ sensor-position-pitch = <0>;
+ sensor-position-yaw = <180>;
+ eeprom-src = <&eeprom_rear_aux>;
+ cam_vio-supply = <&camera_vio_ldo>;
+ cam_vana-supply = <&camera_vana_ldo>;
+ cam_vdig-supply = <&camera_ldo>;
+ cam_clk-supply = <&titan_top_gdsc>;
+ regulator-names = "cam_vdig", "cam_vio", "cam_vana",
+ "cam_clk";
+ rgltr-cntrl-support;
+ rgltr-min-voltage = <1352000 1800000 2850000 0>;
+ rgltr-max-voltage = <1352000 1800000 2850000 0>;
+ rgltr-load-current = <105000 0 80000 0>;
+ gpio-no-mux = <0>;
+ pinctrl-names = "cam_default", "cam_suspend";
+ pinctrl-0 = <&cam_sensor_mclk0_active
+ &cam_sensor_rear2_active>;
+ pinctrl-1 = <&cam_sensor_mclk0_suspend
+ &cam_sensor_rear2_suspend>;
+ gpios = <&tlmm 13 0>,
+ <&tlmm 28 0>;
+ gpio-reset = <1>;
+ gpio-req-tbl-num = <0 1>;
+ gpio-req-tbl-flags = <1 0>;
+ gpio-req-tbl-label = "CAMIF_MCLK0",
+ "CAM_RESET1";
+ sensor-mode = <0>;
+ cci-master = <1>;
+ status = "ok";
+ clocks = <&clock_camcc CAM_CC_MCLK0_CLK>;
+ clock-names = "cam_clk";
+ clock-cntl-level = "turbo";
+ clock-rates = <24000000>;
+ };
+};
diff --git a/arch/arm64/boot/dts/qcom/qcs605-mtp.dts b/arch/arm64/boot/dts/qcom/qcs605-mtp.dts
index dc3c7ce..b0ca9a3 100644
--- a/arch/arm64/boot/dts/qcom/qcs605-mtp.dts
+++ b/arch/arm64/boot/dts/qcom/qcs605-mtp.dts
@@ -26,3 +26,47 @@
<0x0001001b 0x0102001a 0x0 0x0>,
<0x0001001b 0x0201011a 0x0 0x0>;
};
+
+&cam_cci {
+ /delete-node/ qcom,cam-sensor@1;
+ qcom,cam-sensor@1 {
+ cell-index = <1>;
+ compatible = "qcom,cam-sensor";
+ reg = <0x1>;
+ csiphy-sd-index = <1>;
+ sensor-position-roll = <90>;
+ sensor-position-pitch = <0>;
+ sensor-position-yaw = <180>;
+ eeprom-src = <&eeprom_rear_aux>;
+ cam_vio-supply = <&camera_vio_ldo>;
+ cam_vana-supply = <&camera_vana_ldo>;
+ cam_vdig-supply = <&camera_ldo>;
+ cam_clk-supply = <&titan_top_gdsc>;
+ regulator-names = "cam_vdig", "cam_vio", "cam_vana",
+ "cam_clk";
+ rgltr-cntrl-support;
+ rgltr-min-voltage = <1352000 1800000 2850000 0>;
+ rgltr-max-voltage = <1352000 1800000 2850000 0>;
+ rgltr-load-current = <105000 0 80000 0>;
+ gpio-no-mux = <0>;
+ pinctrl-names = "cam_default", "cam_suspend";
+ pinctrl-0 = <&cam_sensor_mclk0_active
+ &cam_sensor_rear2_active>;
+ pinctrl-1 = <&cam_sensor_mclk0_suspend
+ &cam_sensor_rear2_suspend>;
+ gpios = <&tlmm 13 0>,
+ <&tlmm 28 0>;
+ gpio-reset = <1>;
+ gpio-req-tbl-num = <0 1>;
+ gpio-req-tbl-flags = <1 0>;
+ gpio-req-tbl-label = "CAMIF_MCLK0",
+ "CAM_RESET1";
+ sensor-mode = <0>;
+ cci-master = <1>;
+ status = "ok";
+ clocks = <&clock_camcc CAM_CC_MCLK0_CLK>;
+ clock-names = "cam_clk";
+ clock-cntl-level = "turbo";
+ clock-rates = <24000000>;
+ };
+};
diff --git a/arch/arm64/boot/dts/qcom/qcs605.dtsi b/arch/arm64/boot/dts/qcom/qcs605.dtsi
index 12da650..66493d1 100644
--- a/arch/arm64/boot/dts/qcom/qcs605.dtsi
+++ b/arch/arm64/boot/dts/qcom/qcs605.dtsi
@@ -17,3 +17,13 @@
model = "Qualcomm Technologies, Inc. QCS605";
qcom,msm-id = <347 0x0>;
};
+
+&soc {
+ qcom,rmnet-ipa {
+ status = "disabled";
+ };
+};
+
+&ipa_hw {
+ status = "disabled";
+};
diff --git a/arch/arm64/boot/dts/qcom/sda845-v2-hdk-overlay.dts b/arch/arm64/boot/dts/qcom/sda845-v2-hdk-overlay.dts
index de20f87..6357886 100644
--- a/arch/arm64/boot/dts/qcom/sda845-v2-hdk-overlay.dts
+++ b/arch/arm64/boot/dts/qcom/sda845-v2-hdk-overlay.dts
@@ -29,3 +29,32 @@
qcom,msm-id = <341 0x20000>;
qcom,board-id = <0x01001F 0x00>;
};
+
+&dsi_dual_nt36850_truly_cmd {
+ qcom,panel-supply-entries = <&dsi_panel_pwr_supply>;
+ qcom,mdss-dsi-bl-pmic-control-type = "bl_ctrl_wled";
+ qcom,mdss-dsi-bl-min-level = <1>;
+ qcom,mdss-dsi-bl-max-level = <4095>;
+ qcom,mdss-dsi-mode-sel-gpio-state = "dual_port";
+ qcom,panel-mode-gpio = <&tlmm 52 0>;
+ qcom,platform-te-gpio = <&tlmm 10 0>;
+ qcom,platform-reset-gpio = <&tlmm 6 0>;
+};
+
+&dsi_dual_nt36850_truly_cmd_display {
+ qcom,dsi-display-active;
+};
+
+&labibb {
+ status = "ok";
+ qcom,qpnp-labibb-mode = "lcd";
+};
+
+&pmi8998_wled {
+ status = "okay";
+ qcom,led-strings-list = [01 02];
+};
+
+&mdss_mdp {
+ connectors = <&sde_rscc &sde_wb &sde_dp>;
+};
diff --git a/arch/arm64/boot/dts/qcom/sda845-v2-hdk.dtsi b/arch/arm64/boot/dts/qcom/sda845-v2-hdk.dtsi
index d212554..26a73b0 100644
--- a/arch/arm64/boot/dts/qcom/sda845-v2-hdk.dtsi
+++ b/arch/arm64/boot/dts/qcom/sda845-v2-hdk.dtsi
@@ -22,3 +22,19 @@
&sdhc_2 {
cd-gpios = <&tlmm 126 GPIO_ACTIVE_LOW>;
};
+
+&usb1 {
+ status = "ok";
+ dwc3@a800000 {
+ maximum-speed = "high-speed";
+ dr_mode = "host";
+ };
+};
+
+&qusb_phy1 {
+ status = "ok";
+};
+
+&usb_qmp_phy {
+ status = "ok";
+};
diff --git a/arch/arm64/boot/dts/qcom/sdm450-cdp.dts b/arch/arm64/boot/dts/qcom/sdm450-cdp.dts
index 41a1d1a..3e06872 100644
--- a/arch/arm64/boot/dts/qcom/sdm450-cdp.dts
+++ b/arch/arm64/boot/dts/qcom/sdm450-cdp.dts
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2015-2017, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2017, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
diff --git a/arch/arm64/boot/dts/qcom/sdm450-iot-mtp.dts b/arch/arm64/boot/dts/qcom/sdm450-iot-mtp.dts
index 8762b60..7fac030 100644
--- a/arch/arm64/boot/dts/qcom/sdm450-iot-mtp.dts
+++ b/arch/arm64/boot/dts/qcom/sdm450-iot-mtp.dts
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2015-2017, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2017, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
diff --git a/arch/arm64/boot/dts/qcom/sdm450-mtp.dts b/arch/arm64/boot/dts/qcom/sdm450-mtp.dts
index e503f16..2524b80 100644
--- a/arch/arm64/boot/dts/qcom/sdm450-mtp.dts
+++ b/arch/arm64/boot/dts/qcom/sdm450-mtp.dts
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2015-2017, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2017, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
diff --git a/arch/arm64/boot/dts/qcom/sdm450-pmi8937-mtp.dts b/arch/arm64/boot/dts/qcom/sdm450-pmi8937-mtp.dts
index 23ec75c..6a6a09e 100644
--- a/arch/arm64/boot/dts/qcom/sdm450-pmi8937-mtp.dts
+++ b/arch/arm64/boot/dts/qcom/sdm450-pmi8937-mtp.dts
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2015-2017, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2016-2017, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
diff --git a/arch/arm64/boot/dts/qcom/sdm450-pmi8940-mtp.dts b/arch/arm64/boot/dts/qcom/sdm450-pmi8940-mtp.dts
index 26dd008..3c4e802 100644
--- a/arch/arm64/boot/dts/qcom/sdm450-pmi8940-mtp.dts
+++ b/arch/arm64/boot/dts/qcom/sdm450-pmi8940-mtp.dts
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2015-2017, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2016-2017, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
diff --git a/arch/arm64/boot/dts/qcom/sdm450-qrd.dts b/arch/arm64/boot/dts/qcom/sdm450-qrd.dts
index 16d8878..3c2e25b 100644
--- a/arch/arm64/boot/dts/qcom/sdm450-qrd.dts
+++ b/arch/arm64/boot/dts/qcom/sdm450-qrd.dts
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2015-2017, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2017, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
diff --git a/arch/arm64/boot/dts/qcom/sdm450-rcm.dts b/arch/arm64/boot/dts/qcom/sdm450-rcm.dts
index 0771801..4ab131a 100644
--- a/arch/arm64/boot/dts/qcom/sdm450-rcm.dts
+++ b/arch/arm64/boot/dts/qcom/sdm450-rcm.dts
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2015-2017, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2017, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
diff --git a/arch/arm64/boot/dts/qcom/sdm450.dtsi b/arch/arm64/boot/dts/qcom/sdm450.dtsi
index b7581b8..8087399 100644
--- a/arch/arm64/boot/dts/qcom/sdm450.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm450.dtsi
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2015-2017, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2017, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
diff --git a/arch/arm64/boot/dts/qcom/sdm670-audio-overlay.dtsi b/arch/arm64/boot/dts/qcom/sdm670-audio-overlay.dtsi
index 58c290d..5dd5c0d 100644
--- a/arch/arm64/boot/dts/qcom/sdm670-audio-overlay.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm670-audio-overlay.dtsi
@@ -50,8 +50,8 @@
qcom,hph-en0-gpio = <&tavil_hph_en0>;
qcom,hph-en1-gpio = <&tavil_hph_en1>;
qcom,msm-mclk-freq = <9600000>;
- asoc-codec = <&stub_codec>;
- asoc-codec-names = "msm-stub-codec.1";
+ asoc-codec = <&stub_codec>, <&ext_disp_audio_codec>;
+ asoc-codec-names = "msm-stub-codec.1", "msm-ext-disp-audio-codec-rx";
qcom,wsa-max-devs = <2>;
qcom,wsa-devs = <&wsa881x_0211>, <&wsa881x_0212>,
<&wsa881x_0213>, <&wsa881x_0214>;
@@ -100,9 +100,11 @@
qcom,cdc-dmic-gpios = <&cdc_dmic_gpios>;
asoc-codec = <&stub_codec>, <&msm_digital_codec>,
- <&pmic_analog_codec>, <&msm_sdw_codec>;
+ <&pmic_analog_codec>, <&msm_sdw_codec>,
+ <&ext_disp_audio_codec>;
asoc-codec-names = "msm-stub-codec.1", "msm-dig-codec",
- "analog-codec", "msm_sdw_codec";
+ "analog-codec", "msm_sdw_codec",
+ "msm-ext-disp-audio-codec-rx";
qcom,wsa-max-devs = <2>;
qcom,wsa-devs = <&wsa881x_211_en>, <&wsa881x_212_en>,
diff --git a/arch/arm64/boot/dts/qcom/sdm670-audio.dtsi b/arch/arm64/boot/dts/qcom/sdm670-audio.dtsi
index b26ec5c..bda44cc 100644
--- a/arch/arm64/boot/dts/qcom/sdm670-audio.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm670-audio.dtsi
@@ -39,6 +39,7 @@
qcom,wcn-btfm;
qcom,mi2s-audio-intf;
qcom,auxpcm-audio-intf;
+ qcom,ext-disp-audio-rx;
asoc-platform = <&pcm0>, <&pcm1>, <&pcm2>, <&voip>, <&voice>,
<&loopback>, <&compress>, <&hostless>,
<&afe>, <&lsm>, <&routing>, <&cpe>, <&compr>,
@@ -50,7 +51,7 @@
"msm-pcm-afe", "msm-lsm-client",
"msm-pcm-routing", "msm-cpe-lsm",
"msm-compr-dsp", "msm-pcm-dsp-noirq";
- asoc-cpu = <&dai_mi2s0>, <&dai_mi2s1>,
+ asoc-cpu = <&dai_dp>, <&dai_mi2s0>, <&dai_mi2s1>,
<&dai_mi2s2>, <&dai_mi2s3>, <&dai_mi2s4>,
<&dai_pri_auxpcm>, <&dai_sec_auxpcm>,
<&dai_tert_auxpcm>, <&dai_quat_auxpcm>,
@@ -70,7 +71,8 @@
<&dai_tert_tdm_rx_0>, <&dai_tert_tdm_tx_0>,
<&dai_quat_tdm_rx_0>, <&dai_quat_tdm_tx_0>,
<&dai_quin_tdm_rx_0>, <&dai_quin_tdm_tx_0>;
- asoc-cpu-names = "msm-dai-q6-mi2s.0", "msm-dai-q6-mi2s.1",
+ asoc-cpu-names = "msm-dai-q6-dp.24608",
+ "msm-dai-q6-mi2s.0", "msm-dai-q6-mi2s.1",
"msm-dai-q6-mi2s.2", "msm-dai-q6-mi2s.3",
"msm-dai-q6-mi2s.4",
"msm-dai-q6-auxpcm.1", "msm-dai-q6-auxpcm.2",
@@ -102,6 +104,7 @@
compatible = "qcom,sdm670-asoc-snd";
qcom,model = "sdm670-mtp-snd-card";
qcom,wcn-btfm;
+ qcom,ext-disp-audio-rx;
qcom,mi2s-audio-intf;
qcom,auxpcm-audio-intf;
asoc-platform = <&pcm0>, <&pcm1>, <&pcm2>, <&voip>, <&voice>,
@@ -115,7 +118,7 @@
"msm-pcm-afe", "msm-lsm-client",
"msm-pcm-routing", "msm-compr-dsp",
"msm-pcm-dsp-noirq";
- asoc-cpu = <&dai_mi2s0>, <&dai_mi2s1>,
+ asoc-cpu = <&dai_dp>, <&dai_mi2s0>, <&dai_mi2s1>,
<&dai_mi2s2>, <&dai_mi2s3>, <&dai_mi2s4>,
<&dai_int_mi2s0>, <&dai_int_mi2s1>,
<&dai_int_mi2s2>, <&dai_int_mi2s3>,
@@ -134,7 +137,8 @@
<&dai_tert_tdm_rx_0>, <&dai_tert_tdm_tx_0>,
<&dai_quat_tdm_rx_0>, <&dai_quat_tdm_tx_0>,
<&dai_quin_tdm_rx_0>, <&dai_quin_tdm_tx_0>;
- asoc-cpu-names = "msm-dai-q6-mi2s.0", "msm-dai-q6-mi2s.1",
+ asoc-cpu-names = "msm-dai-q6-dp.24608",
+ "msm-dai-q6-mi2s.0", "msm-dai-q6-mi2s.1",
"msm-dai-q6-mi2s.2", "msm-dai-q6-mi2s.3",
"msm-dai-q6-mi2s.4",
"msm-dai-q6-mi2s.7", "msm-dai-q6-mi2s.8",
diff --git a/arch/arm64/boot/dts/qcom/sdm670-camera-sensor-360camera.dtsi b/arch/arm64/boot/dts/qcom/sdm670-camera-sensor-360camera.dtsi
new file mode 100644
index 0000000..18b0cd8
--- /dev/null
+++ b/arch/arm64/boot/dts/qcom/sdm670-camera-sensor-360camera.dtsi
@@ -0,0 +1,382 @@
+/*
+ * Copyright (c) 2016-2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+&soc {
+ led_flash_rear: qcom,camera-flash@0 {
+ cell-index = <0>;
+ reg = <0x00 0x00>;
+ compatible = "qcom,camera-flash";
+ flash-source = <&pm660l_flash0 &pm660l_flash1>;
+ torch-source = <&pm660l_torch0 &pm660l_torch1>;
+ switch-source = <&pm660l_switch0>;
+ status = "ok";
+ };
+
+ led_flash_front: qcom,camera-flash@1 {
+ cell-index = <1>;
+ reg = <0x01 0x00>;
+ compatible = "qcom,camera-flash";
+ flash-source = <&pm660l_flash2>;
+ torch-source = <&pm660l_torch2>;
+ switch-source = <&pm660l_switch1>;
+ status = "ok";
+ };
+
+ actuator_regulator: gpio-regulator@0 {
+ compatible = "regulator-fixed";
+ reg = <0x00 0x00>;
+ regulator-name = "actuator_regulator";
+ regulator-min-microvolt = <2800000>;
+ regulator-max-microvolt = <2800000>;
+ regulator-enable-ramp-delay = <100>;
+ enable-active-high;
+ gpio = <&tlmm 27 0>;
+ };
+
+ camera_ldo: gpio-regulator@2 {
+ compatible = "regulator-fixed";
+ reg = <0x02 0x00>;
+ regulator-name = "camera_ldo";
+ regulator-min-microvolt = <1352000>;
+ regulator-max-microvolt = <1352000>;
+ regulator-enable-ramp-delay = <233>;
+ enable-active-high;
+ gpio = <&pm660l_gpios 4 0>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&camera_dvdd_en_default>;
+ vin-supply = <&pm660_s6>;
+ };
+
+ camera_rear_ldo: gpio-regulator@1 {
+ compatible = "regulator-fixed";
+ reg = <0x01 0x00>;
+ regulator-name = "camera_rear_ldo";
+ regulator-min-microvolt = <1352000>;
+ regulator-max-microvolt = <1352000>;
+ regulator-enable-ramp-delay = <135>;
+ enable-active-high;
+ gpio = <&pm660l_gpios 4 0>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&camera_rear_dvdd_en_default>;
+ vin-supply = <&pm660_s6>;
+ };
+
+ camera_vio_ldo: gpio-regulator@3 {
+ compatible = "regulator-fixed";
+ reg = <0x03 0x00>;
+ regulator-name = "camera_vio_ldo";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ regulator-enable-ramp-delay = <233>;
+ enable-active-high;
+ gpio = <&tlmm 29 0>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&cam_sensor_rear_vio>;
+ vin-supply = <&pm660_s4>;
+ };
+
+ camera_vana_ldo: gpio-regulator@4 {
+ compatible = "regulator-fixed";
+ reg = <0x04 0x00>;
+ regulator-name = "camera_vana_ldo";
+ regulator-min-microvolt = <2850000>;
+ regulator-max-microvolt = <2850000>;
+ regulator-enable-ramp-delay = <233>;
+ enable-active-high;
+ gpio = <&tlmm 8 0>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&cam_sensor_rear_vana>;
+ vin-supply = <&pm660l_bob>;
+ };
+};
+
+&cam_cci {
+ actuator_rear: qcom,actuator@0 {
+ cell-index = <0>;
+ reg = <0x0>;
+ compatible = "qcom,actuator";
+ cci-master = <0>;
+ cam_vaf-supply = <&actuator_regulator>;
+ regulator-names = "cam_vaf";
+ rgltr-cntrl-support;
+ rgltr-min-voltage = <2800000>;
+ rgltr-max-voltage = <2800000>;
+ rgltr-load-current = <0>;
+ };
+
+ actuator_front: qcom,actuator@1 {
+ cell-index = <1>;
+ reg = <0x1>;
+ compatible = "qcom,actuator";
+ cci-master = <1>;
+ cam_vaf-supply = <&actuator_regulator>;
+ regulator-names = "cam_vaf";
+ rgltr-cntrl-support;
+ rgltr-min-voltage = <2800000>;
+ rgltr-max-voltage = <2800000>;
+ rgltr-load-current = <0>;
+ };
+
+ ois_rear: qcom,ois@0 {
+ cell-index = <0>;
+ reg = <0x0>;
+ compatible = "qcom,ois";
+ cci-master = <0>;
+ cam_vaf-supply = <&actuator_regulator>;
+ regulator-names = "cam_vaf";
+ rgltr-cntrl-support;
+ rgltr-min-voltage = <2800000>;
+ rgltr-max-voltage = <2800000>;
+ rgltr-load-current = <0>;
+ status = "disabled";
+ };
+
+ eeprom_rear: qcom,eeprom@0 {
+ cell-index = <0>;
+ reg = <0>;
+ compatible = "qcom,eeprom";
+ cam_vio-supply = <&camera_vio_ldo>;
+ cam_vana-supply = <&camera_vana_ldo>;
+ cam_vdig-supply = <&camera_rear_ldo>;
+ cam_clk-supply = <&titan_top_gdsc>;
+ cam_vaf-supply = <&actuator_regulator>;
+ regulator-names = "cam_vio", "cam_vana", "cam_vdig",
+ "cam_clk", "cam_vaf";
+ rgltr-cntrl-support;
+ rgltr-min-voltage = <1800000 2850000 1352000 0 2800000>;
+ rgltr-max-voltage = <1800000 2850000 1352000 0 2800000>;
+ rgltr-load-current = <0 80000 105000 0 0>;
+ gpio-no-mux = <0>;
+ pinctrl-names = "cam_default", "cam_suspend";
+ pinctrl-0 = <&cam_sensor_mclk0_active
+ &cam_sensor_rear_active>;
+ pinctrl-1 = <&cam_sensor_mclk0_suspend
+ &cam_sensor_rear_suspend>;
+ gpios = <&tlmm 13 0>,
+ <&tlmm 30 0>;
+ gpio-reset = <1>;
+ gpio-req-tbl-num = <0 1>;
+ gpio-req-tbl-flags = <1 0>;
+ gpio-req-tbl-label = "CAMIF_MCLK0",
+ "CAM_RESET0";
+ sensor-mode = <0>;
+ cci-master = <0>;
+ status = "ok";
+ clocks = <&clock_camcc CAM_CC_MCLK0_CLK>;
+ clock-names = "cam_clk";
+ clock-cntl-level = "turbo";
+ clock-rates = <24000000>;
+ };
+
+ eeprom_rear_aux: qcom,eeprom@1 {
+ cell-index = <1>;
+ reg = <0x1>;
+ compatible = "qcom,eeprom";
+ cam_vio-supply = <&camera_vio_ldo>;
+ cam_vana-supply = <&camera_vana_ldo>;
+ cam_vdig-supply = <&camera_ldo>;
+ cam_clk-supply = <&titan_top_gdsc>;
+ cam_vaf-supply = <&actuator_regulator>;
+ regulator-names = "cam_vdig", "cam_vio", "cam_vana",
+ "cam_clk", "cam_vaf";
+ rgltr-cntrl-support;
+ rgltr-min-voltage = <1352000 1800000 2850000 0 2800000>;
+ rgltr-max-voltage = <1352000 1800000 2850000 0 2800000>;
+ rgltr-load-current = <105000 0 80000 0 0>;
+ gpio-no-mux = <0>;
+ pinctrl-names = "cam_default", "cam_suspend";
+ pinctrl-0 = <&cam_sensor_mclk1_active
+ &cam_sensor_rear2_active>;
+ pinctrl-1 = <&cam_sensor_mclk1_suspend
+ &cam_sensor_rear2_suspend>;
+ gpios = <&tlmm 14 0>,
+ <&tlmm 28 0>;
+ gpio-reset = <1>;
+ gpio-req-tbl-num = <0 1>;
+ gpio-req-tbl-flags = <1 0>;
+ gpio-req-tbl-label = "CAMIF_MCLK1",
+ "CAM_RESET1";
+ sensor-position = <0>;
+ sensor-mode = <0>;
+ cci-master = <1>;
+ status = "ok";
+ clocks = <&clock_camcc CAM_CC_MCLK1_CLK>;
+ clock-names = "cam_clk";
+ clock-cntl-level = "turbo";
+ clock-rates = <24000000>;
+ };
+
+ eeprom_front: qcom,eeprom@2 {
+ cell-index = <2>;
+ reg = <0x2>;
+ compatible = "qcom,eeprom";
+ cam_vio-supply = <&camera_vio_ldo>;
+ cam_vana-supply = <&camera_vana_ldo>;
+ cam_vdig-supply = <&camera_ldo>;
+ cam_clk-supply = <&titan_top_gdsc>;
+ cam_vaf-supply = <&actuator_regulator>;
+ regulator-names = "cam_vio", "cam_vana", "cam_vdig",
+ "cam_clk", "cam_vaf";
+ rgltr-cntrl-support;
+ rgltr-min-voltage = <1800000 2850000 1352000 0 2800000>;
+ rgltr-max-voltage = <1800000 2850000 1352000 0 2800000>;
+ rgltr-load-current = <0 80000 105000 0 0>;
+ gpio-no-mux = <0>;
+ pinctrl-names = "cam_default", "cam_suspend";
+ pinctrl-0 = <&cam_sensor_mclk2_active
+ &cam_sensor_front_active>;
+ pinctrl-1 = <&cam_sensor_mclk2_suspend
+ &cam_sensor_front_suspend>;
+ gpios = <&tlmm 15 0>,
+ <&tlmm 9 0>;
+ gpio-reset = <1>;
+ gpio-req-tbl-num = <0 1>;
+ gpio-req-tbl-flags = <1 0>;
+ gpio-req-tbl-label = "CAMIF_MCLK2",
+ "CAM_RESET2";
+ sensor-mode = <0>;
+ cci-master = <1>;
+ status = "ok";
+ clocks = <&clock_camcc CAM_CC_MCLK2_CLK>;
+ clock-names = "cam_clk";
+ clock-cntl-level = "turbo";
+ clock-rates = <24000000>;
+ };
+
+ qcom,cam-sensor@0 {
+ cell-index = <0>;
+ compatible = "qcom,cam-sensor";
+ reg = <0x0>;
+ csiphy-sd-index = <0>;
+ sensor-position-roll = <270>;
+ sensor-position-pitch = <0>;
+ sensor-position-yaw = <180>;
+ led-flash-src = <&led_flash_rear>;
+ actuator-src = <&actuator_rear>;
+ ois-src = <&ois_rear>;
+ eeprom-src = <&eeprom_rear>;
+ cam_vio-supply = <&camera_vio_ldo>;
+ cam_vana-supply = <&camera_vana_ldo>;
+ cam_vdig-supply = <&camera_rear_ldo>;
+ cam_clk-supply = <&titan_top_gdsc>;
+ regulator-names = "cam_vio", "cam_vana", "cam_vdig",
+ "cam_clk";
+ rgltr-cntrl-support;
+ rgltr-min-voltage = <1800000 2850000 1352000 0>;
+ rgltr-max-voltage = <1800000 2850000 1352000 0>;
+ rgltr-load-current = <0 80000 105000 0>;
+ gpio-no-mux = <0>;
+ pinctrl-names = "cam_default", "cam_suspend";
+ pinctrl-0 = <&cam_sensor_mclk0_active
+ &cam_sensor_rear_active>;
+ pinctrl-1 = <&cam_sensor_mclk0_suspend
+ &cam_sensor_rear_suspend>;
+ gpios = <&tlmm 13 0>,
+ <&tlmm 30 0>;
+ gpio-reset = <1>;
+ gpio-req-tbl-num = <0 1>;
+ gpio-req-tbl-flags = <1 0>;
+ gpio-req-tbl-label = "CAMIF_MCLK0",
+ "CAM_RESET0";
+ sensor-mode = <0>;
+ cci-master = <0>;
+ status = "ok";
+ clocks = <&clock_camcc CAM_CC_MCLK0_CLK>;
+ clock-names = "cam_clk";
+ clock-cntl-level = "turbo";
+ clock-rates = <24000000>;
+ };
+
+ qcom,cam-sensor@1 {
+ cell-index = <1>;
+ compatible = "qcom,cam-sensor";
+ reg = <0x1>;
+ csiphy-sd-index = <1>;
+ sensor-position-roll = <90>;
+ sensor-position-pitch = <0>;
+ sensor-position-yaw = <180>;
+ eeprom-src = <&eeprom_rear_aux>;
+ cam_vio-supply = <&camera_vio_ldo>;
+ cam_vana-supply = <&camera_vana_ldo>;
+ cam_vdig-supply = <&camera_ldo>;
+ cam_clk-supply = <&titan_top_gdsc>;
+ regulator-names = "cam_vdig", "cam_vio", "cam_vana",
+ "cam_clk";
+ rgltr-cntrl-support;
+ rgltr-min-voltage = <1352000 1800000 2850000 0>;
+ rgltr-max-voltage = <1352000 1800000 2850000 0>;
+ rgltr-load-current = <105000 0 80000 0>;
+ gpio-no-mux = <0>;
+ pinctrl-names = "cam_default", "cam_suspend";
+ pinctrl-0 = <&cam_sensor_mclk1_active
+ &cam_sensor_rear2_active>;
+ pinctrl-1 = <&cam_sensor_mclk1_suspend
+ &cam_sensor_rear2_suspend>;
+ gpios = <&tlmm 14 0>,
+ <&tlmm 28 0>;
+ gpio-reset = <1>;
+ gpio-req-tbl-num = <0 1>;
+ gpio-req-tbl-flags = <1 0>;
+ gpio-req-tbl-label = "CAMIF_MCLK1",
+ "CAM_RESET1";
+ sensor-mode = <0>;
+ cci-master = <1>;
+ status = "ok";
+ clocks = <&clock_camcc CAM_CC_MCLK1_CLK>;
+ clock-names = "cam_clk";
+ clock-cntl-level = "turbo";
+ clock-rates = <24000000>;
+ };
+
+ qcom,cam-sensor@2 {
+ cell-index = <2>;
+ compatible = "qcom,cam-sensor";
+ reg = <0x02>;
+ csiphy-sd-index = <2>;
+ sensor-position-roll = <270>;
+ sensor-position-pitch = <0>;
+ sensor-position-yaw = <0>;
+ eeprom-src = <&eeprom_front>;
+ actuator-src = <&actuator_front>;
+ led-flash-src = <&led_flash_front>;
+ cam_vio-supply = <&camera_vio_ldo>;
+ cam_vana-supply = <&camera_vana_ldo>;
+ cam_vdig-supply = <&camera_ldo>;
+ cam_clk-supply = <&titan_top_gdsc>;
+ regulator-names = "cam_vio", "cam_vana", "cam_vdig",
+ "cam_clk";
+ rgltr-cntrl-support;
+ rgltr-min-voltage = <1800000 2850000 1352000 0>;
+ rgltr-max-voltage = <1800000 2850000 1352000 0>;
+ rgltr-load-current = <0 80000 105000 0>;
+ gpio-no-mux = <0>;
+ pinctrl-names = "cam_default", "cam_suspend";
+ pinctrl-0 = <&cam_sensor_mclk2_active
+ &cam_sensor_front_active>;
+ pinctrl-1 = <&cam_sensor_mclk2_suspend
+ &cam_sensor_front_suspend>;
+ gpios = <&tlmm 15 0>,
+ <&tlmm 9 0>;
+ gpio-reset = <1>;
+ gpio-req-tbl-num = <0 1>;
+ gpio-req-tbl-flags = <1 0>;
+ gpio-req-tbl-label = "CAMIF_MCLK2",
+ "CAM_RESET2";
+ sensor-mode = <0>;
+ cci-master = <1>;
+ status = "ok";
+ clocks = <&clock_camcc CAM_CC_MCLK2_CLK>;
+ clock-names = "cam_clk";
+ clock-cntl-level = "turbo";
+ clock-rates = <24000000>;
+ };
+};
diff --git a/arch/arm64/boot/dts/qcom/sdm670-camera-sensor-cdp.dtsi b/arch/arm64/boot/dts/qcom/sdm670-camera-sensor-cdp.dtsi
index c4ca6c5..8b94ca2 100644
--- a/arch/arm64/boot/dts/qcom/sdm670-camera-sensor-cdp.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm670-camera-sensor-cdp.dtsi
@@ -197,7 +197,7 @@
rgltr-cntrl-support;
rgltr-min-voltage = <1352000 1800000 2850000 0 2800000>;
rgltr-max-voltage = <1352000 1800000 2850000 0 2800000>;
- rgltr-load-current = <105000 0 80000 0>;
+ rgltr-load-current = <105000 0 80000 0 0>;
gpio-no-mux = <0>;
pinctrl-names = "cam_default", "cam_suspend";
pinctrl-0 = <&cam_sensor_mclk1_active
@@ -234,7 +234,7 @@
rgltr-cntrl-support;
rgltr-min-voltage = <1800000 2850000 1352000 0 2800000>;
rgltr-max-voltage = <1800000 2850000 1352000 0 2800000>;
- rgltr-load-current = <0 80000 105000 0>;
+ rgltr-load-current = <0 80000 105000 0 0>;
gpio-no-mux = <0>;
pinctrl-names = "cam_default", "cam_suspend";
pinctrl-0 = <&cam_sensor_mclk2_active
diff --git a/arch/arm64/boot/dts/qcom/sdm670-camera-sensor-mtp.dtsi b/arch/arm64/boot/dts/qcom/sdm670-camera-sensor-mtp.dtsi
index c4ca6c5..8b94ca2 100644
--- a/arch/arm64/boot/dts/qcom/sdm670-camera-sensor-mtp.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm670-camera-sensor-mtp.dtsi
@@ -197,7 +197,7 @@
rgltr-cntrl-support;
rgltr-min-voltage = <1352000 1800000 2850000 0 2800000>;
rgltr-max-voltage = <1352000 1800000 2850000 0 2800000>;
- rgltr-load-current = <105000 0 80000 0>;
+ rgltr-load-current = <105000 0 80000 0 0>;
gpio-no-mux = <0>;
pinctrl-names = "cam_default", "cam_suspend";
pinctrl-0 = <&cam_sensor_mclk1_active
@@ -234,7 +234,7 @@
rgltr-cntrl-support;
rgltr-min-voltage = <1800000 2850000 1352000 0 2800000>;
rgltr-max-voltage = <1800000 2850000 1352000 0 2800000>;
- rgltr-load-current = <0 80000 105000 0>;
+ rgltr-load-current = <0 80000 105000 0 0>;
gpio-no-mux = <0>;
pinctrl-names = "cam_default", "cam_suspend";
pinctrl-0 = <&cam_sensor_mclk2_active
diff --git a/arch/arm64/boot/dts/qcom/sdm670-camera.dtsi b/arch/arm64/boot/dts/qcom/sdm670-camera.dtsi
index 34b8740..110e626 100644
--- a/arch/arm64/boot/dts/qcom/sdm670-camera.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm670-camera.dtsi
@@ -230,6 +230,33 @@
compatible = "qcom,msm-cam-smmu";
status = "ok";
+ msm_cam_smmu_lrme {
+ compatible = "qcom,msm-cam-smmu-cb";
+ iommus = <&apps_smmu 0x1038 0x0>,
+ <&apps_smmu 0x1058 0x0>,
+ <&apps_smmu 0x1039 0x0>,
+ <&apps_smmu 0x1059 0x0>;
+ label = "lrme";
+ lrme_iova_mem_map: iova-mem-map {
+ iova-mem-region-shared {
+ /* Shared region is 100MB long */
+ iova-region-name = "shared";
+ iova-region-start = <0x7400000>;
+ iova-region-len = <0x6400000>;
+ iova-region-id = <0x1>;
+ status = "ok";
+ };
+ /* IO region is approximately 3.3 GB */
+ iova-mem-region-io {
+ iova-region-name = "io";
+ iova-region-start = <0xd800000>;
+ iova-region-len = <0xd2800000>;
+ iova-region-id = <0x3>;
+ status = "ok";
+ };
+ };
+ };
+
msm_cam_smmu_ife {
compatible = "qcom,msm-cam-smmu-cb";
iommus = <&apps_smmu 0x808 0x0>,
@@ -435,13 +462,14 @@
"csid0", "csid1", "csid2",
"ife0", "ife1", "ife2", "ipe0",
"ipe1", "cam-cdm-intf0", "cpas-cdm0", "bps0",
- "icp0", "jpeg-dma0", "jpeg-enc0", "fd0";
+ "icp0", "jpeg-dma0", "jpeg-enc0", "fd0", "lrmecpas0";
client-axi-port-names =
"cam_hf_1", "cam_hf_2", "cam_hf_2", "cam_sf_1",
"cam_hf_1", "cam_hf_2", "cam_hf_2",
"cam_hf_1", "cam_hf_2", "cam_hf_2", "cam_sf_1",
"cam_sf_1", "cam_sf_1", "cam_sf_1", "cam_sf_1",
- "cam_sf_1", "cam_sf_1", "cam_sf_1", "cam_sf_1";
+ "cam_sf_1", "cam_sf_1", "cam_sf_1", "cam_sf_1",
+ "cam_sf_1";
client-bus-camnoc-based;
qcom,axi-port-list {
qcom,axi-port1 {
@@ -530,7 +558,8 @@
cdm-client-names = "vfe",
"jpegdma",
"jpegenc",
- "fd";
+ "fd",
+ "lrmecdm";
status = "ok";
};
@@ -1062,4 +1091,43 @@
<0 0 0 0 0 600000000 0 0>;
status = "ok";
};
+
+ qcom,cam-lrme {
+ compatible = "qcom,cam-lrme";
+ arch-compat = "lrme";
+ status = "ok";
+ };
+
+ cam_lrme: qcom,lrme@ac6b000 {
+ cell-index = <0>;
+ compatible = "qcom,lrme";
+ reg-names = "lrme";
+ reg = <0xac6b000 0xa00>;
+ reg-cam-base = <0x6b000>;
+ interrupt-names = "lrme";
+ interrupts = <0 476 0>;
+ regulator-names = "camss";
+ camss-supply = <&titan_top_gdsc>;
+ clock-names = "camera_ahb",
+ "camera_axi",
+ "soc_ahb_clk",
+ "cpas_ahb_clk",
+ "camnoc_axi_clk",
+ "lrme_clk_src",
+ "lrme_clk";
+ clocks = <&clock_gcc GCC_CAMERA_AHB_CLK>,
+ <&clock_gcc GCC_CAMERA_AXI_CLK>,
+ <&clock_camcc CAM_CC_SOC_AHB_CLK>,
+ <&clock_camcc CAM_CC_CPAS_AHB_CLK>,
+ <&clock_camcc CAM_CC_CAMNOC_AXI_CLK>,
+ <&clock_camcc CAM_CC_LRME_CLK_SRC>,
+ <&clock_camcc CAM_CC_LRME_CLK>;
+ clock-rates = <0 0 0 0 0 200000000 200000000>,
+ <0 0 0 0 0 269000000 269000000>,
+ <0 0 0 0 0 320000000 320000000>,
+ <0 0 0 0 0 400000000 400000000>;
+ clock-cntl-level = "lowsvs", "svs", "svs_l1", "turbo";
+ src-clock-name = "lrme_clk_src";
+ status = "ok";
+ };
};
diff --git a/arch/arm64/boot/dts/qcom/sdm670-cdp.dtsi b/arch/arm64/boot/dts/qcom/sdm670-cdp.dtsi
index 163420a..521b048 100644
--- a/arch/arm64/boot/dts/qcom/sdm670-cdp.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm670-cdp.dtsi
@@ -41,6 +41,12 @@
status = "ok";
};
+&pm660l_switch1 {
+ pinctrl-names = "led_enable", "led_disable";
+ pinctrl-0 = <&flash_led3_front_en>;
+ pinctrl-1 = <&flash_led3_front_dis>;
+};
+
&qupv3_se9_2uart {
status = "disabled";
};
@@ -267,9 +273,7 @@
};
&dsi_rm67195_amoled_fhd_cmd {
- qcom,mdss-dsi-bl-pmic-control-type = "bl_ctrl_wled";
- qcom,mdss-dsi-bl-min-level = <1>;
- qcom,mdss-dsi-bl-max-level = <4095>;
+ qcom,mdss-dsi-bl-pmic-control-type = "bl_ctrl_dcs";
qcom,panel-supply-entries = <&dsi_panel_pwr_supply_labibb_amoled>;
qcom,platform-reset-gpio = <&tlmm 75 0>;
qcom,platform-te-gpio = <&tlmm 10 0>;
diff --git a/arch/arm64/boot/dts/qcom/sdm670-coresight.dtsi b/arch/arm64/boot/dts/qcom/sdm670-coresight.dtsi
index 34fe19f..8323476 100644
--- a/arch/arm64/boot/dts/qcom/sdm670-coresight.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm670-coresight.dtsi
@@ -537,6 +537,58 @@
<&funnel_apss_merg_out_funnel_in2>;
};
};
+ port@4 {
+ reg = <6>;
+ funnel_in2_in_funnel_gfx: endpoint {
+ slave-mode;
+ remote-endpoint =
+ <&funnel_gfx_out_funnel_in2>;
+ };
+ };
+ };
+ };
+
+ funnel_gfx: funnel@0x6943000 {
+ compatible = "arm,primecell";
+ arm,primecell-periphid = <0x0003b908>;
+
+ reg = <0x6943000 0x1000>;
+ reg-names = "funnel-base";
+
+ coresight-name = "coresight-funnel-gfx";
+
+ clocks = <&clock_aop QDSS_CLK>;
+ clock-names = "apb_pclk";
+
+ ports {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ port@0 {
+ reg = <0>;
+ funnel_gfx_out_funnel_in2: endpoint {
+ remote-endpoint =
+ <&funnel_in2_in_funnel_gfx>;
+ };
+ };
+
+ port@1 {
+ reg = <0>;
+ funnel_in2_in_gfx: endpoint {
+ slave-mode;
+ remote-endpoint =
+ <&gfx_out_funnel_in2>;
+ };
+ };
+
+ port@2 {
+ reg = <1>;
+ funnel_in2_in_gfx_cx: endpoint {
+ slave-mode;
+ remote-endpoint =
+ <&gfx_cx_out_funnel_in2>;
+ };
+ };
};
};
@@ -1336,7 +1388,7 @@
reg = <0x69e1000 0x1000>;
reg-names = "cti-base";
- coresight-name = "coresight-cti0-ddr0";
+ coresight-name = "coresight-cti-ddr_dl_0_cti0";
clocks = <&clock_aop QDSS_CLK>;
clock-names = "apb_pclk";
@@ -1348,7 +1400,7 @@
reg = <0x69e4000 0x1000>;
reg-names = "cti-base";
- coresight-name = "coresight-cti0-ddr1";
+ coresight-name = "coresight-cti-ddr_dl_1_cti0";
clocks = <&clock_aop QDSS_CLK>;
clock-names = "apb_pclk";
@@ -1360,7 +1412,7 @@
reg = <0x69e5000 0x1000>;
reg-names = "cti-base";
- coresight-name = "coresight-cti1-ddr1";
+ coresight-name = "coresight-cti-ddr_dl_1_cti1";
clocks = <&clock_aop QDSS_CLK>;
clock-names = "apb_pclk";
@@ -1372,7 +1424,7 @@
reg = <0x6c09000 0x1000>;
reg-names = "cti-base";
- coresight-name = "coresight-cti0-dlmm";
+ coresight-name = "coresight-cti-dlmm_cti0";
clocks = <&clock_aop QDSS_CLK>;
clock-names = "apb_pclk";
@@ -1384,7 +1436,7 @@
reg = <0x6c0a000 0x1000>;
reg-names = "cti-base";
- coresight-name = "coresight-cti1-dlmm";
+ coresight-name = "coresight-cti-dlmm_cti1";
clocks = <&clock_aop QDSS_CLK>;
clock-names = "apb_pclk";
@@ -1396,7 +1448,7 @@
reg = <0x6c29000 0x1000>;
reg-names = "cti-base";
- coresight-name = "coresight-cti0-dlct";
+ coresight-name = "coresight-cti-dlct_cti0";
clocks = <&clock_aop QDSS_CLK>;
clock-names = "apb_pclk";
@@ -1408,7 +1460,7 @@
reg = <0x6c2a000 0x1000>;
reg-names = "cti-base";
- coresight-name = "coresight-cti1-dlct";
+ coresight-name = "coresight-cti-dlct_cti1";
clocks = <&clock_aop QDSS_CLK>;
clock-names = "apb_pclk";
@@ -1420,7 +1472,8 @@
reg = <0x69a4000 0x1000>;
reg-names = "cti-base";
- coresight-name = "coresight-cti0-wcss";
+ coresight-name = "coresight-cti-wcss_cti0";
+ status = "disabled";
clocks = <&clock_aop QDSS_CLK>;
clock-names = "apb_pclk";
@@ -1432,7 +1485,8 @@
reg = <0x69a5000 0x1000>;
reg-names = "cti-base";
- coresight-name = "coresight-cti1-wcss";
+ coresight-name = "coresight-cti-wcss_cti1";
+ status = "disabled";
clocks = <&clock_aop QDSS_CLK>;
clock-names = "apb_pclk";
@@ -1444,7 +1498,8 @@
reg = <0x69a6000 0x1000>;
reg-names = "cti-base";
- coresight-name = "coresight-cti2-wcss";
+ coresight-name = "coresight-cti-wcss_cti2";
+ status = "disabled";
clocks = <&clock_aop QDSS_CLK>;
clock-names = "apb_pclk";
@@ -1480,7 +1535,8 @@
reg = <0x6b10000 0x1000>;
reg-names = "cti-base";
- coresight-name = "coresight-cti2-ssc_sdc";
+ coresight-name = "coresight-cti-ssc_sdc_cti2";
+ status = "disabled";
clocks = <&clock_aop QDSS_CLK>;
clock-names = "apb_pclk";
@@ -1492,7 +1548,8 @@
reg = <0x6b11000 0x1000>;
reg-names = "cti-base";
- coresight-name = "coresight-cti1-ssc";
+ coresight-name = "coresight-cti-ssc_cti1";
+ status = "disabled";
clocks = <&clock_aop QDSS_CLK>;
clock-names = "apb_pclk";
@@ -1504,7 +1561,8 @@
reg = <0x6b1b000 0x1000>;
reg-names = "cti-base";
- coresight-name = "coresight-cti0-ssc-q6";
+ coresight-name = "coresight-cti-ssc_q6_cti0";
+ status = "disabled";
clocks = <&clock_aop QDSS_CLK>;
clock-names = "apb_pclk";
@@ -1516,7 +1574,8 @@
reg = <0x6b1e000 0x1000>;
reg-names = "cti-base";
- coresight-name = "coresight-cti-ssc-noc";
+ coresight-name = "coresight-cti-ssc_noc";
+ status = "disabled";
clocks = <&clock_aop QDSS_CLK>;
clock-names = "apb_pclk";
@@ -1528,7 +1587,8 @@
reg = <0x6b1f000 0x1000>;
reg-names = "cti-base";
- coresight-name = "coresight-cti6-ssc-noc";
+ coresight-name = "coresight-cti-ssc_noc_cti6";
+ status = "disabled";
clocks = <&clock_aop QDSS_CLK>;
clock-names = "apb_pclk";
@@ -1540,7 +1600,7 @@
reg = <0x6b04000 0x1000>;
reg-names = "cti-base";
- coresight-name = "coresight-cti0-swao";
+ coresight-name = "coresight-cti-swao_cti0";
clocks = <&clock_aop QDSS_CLK>;
clock-names = "apb_pclk";
@@ -1552,7 +1612,7 @@
reg = <0x6b05000 0x1000>;
reg-names = "cti-base";
- coresight-name = "coresight-cti1-swao";
+ coresight-name = "coresight-cti-swao_cti1";
clocks = <&clock_aop QDSS_CLK>;
clock-names = "apb_pclk";
@@ -1564,7 +1624,7 @@
reg = <0x6b06000 0x1000>;
reg-names = "cti-base";
- coresight-name = "coresight-cti2-swao";
+ coresight-name = "coresight-cti-swao_cti2";
clocks = <&clock_aop QDSS_CLK>;
clock-names = "apb_pclk";
@@ -1576,7 +1636,7 @@
reg = <0x6b07000 0x1000>;
reg-names = "cti-base";
- coresight-name = "coresight-cti3-swao";
+ coresight-name = "coresight-cti-swao_cti3";
clocks = <&clock_aop QDSS_CLK>;
clock-names = "apb_pclk";
@@ -1624,7 +1684,7 @@
reg = <0x78e0000 0x1000>;
reg-names = "cti-base";
- coresight-name = "coresight-cti0-apss";
+ coresight-name = "coresight-cti-apss_cti0";
clocks = <&clock_aop QDSS_CLK>;
clock-names = "apb_pclk";
@@ -1636,7 +1696,7 @@
reg = <0x78f0000 0x1000>;
reg-names = "cti-base";
- coresight-name = "coresight-cti1-apss";
+ coresight-name = "coresight-cti-apss_cti1";
clocks = <&clock_aop QDSS_CLK>;
clock-names = "apb_pclk";
@@ -1648,7 +1708,7 @@
reg = <0x7900000 0x1000>;
reg-names = "cti-base";
- coresight-name = "coresight-cti2-apss";
+ coresight-name = "coresight-cti-apss_cti2";
clocks = <&clock_aop QDSS_CLK>;
clock-names = "apb_pclk";
diff --git a/arch/arm64/boot/dts/qcom/sdm670-gpu.dtsi b/arch/arm64/boot/dts/qcom/sdm670-gpu.dtsi
index 41a66e9..9e75ee0 100644
--- a/arch/arm64/boot/dts/qcom/sdm670-gpu.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm670-gpu.dtsi
@@ -117,8 +117,8 @@
cache-slices = <&llcc 12>, <&llcc 11>;
/* CPU latency parameter */
- qcom,pm-qos-active-latency = <660>;
- qcom,pm-qos-wakeup-latency = <460>;
+ qcom,pm-qos-active-latency = <914>;
+ qcom,pm-qos-wakeup-latency = <899>;
/* Enable context aware freq. scaling */
qcom,enable-ca-jump;
@@ -129,6 +129,36 @@
qcom,gpu-speed-bin = <0x41a0 0x1fe00000 21>;
+ qcom,gpu-coresights {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ compatible = "qcom,gpu-coresight";
+
+ qcom,gpu-coresight@0 {
+ reg = <0>;
+ coresight-name = "coresight-gfx";
+ coresight-atid = <50>;
+ port {
+ gfx_out_funnel_in2: endpoint {
+ remote-endpoint =
+ <&funnel_in2_in_gfx>;
+ };
+ };
+ };
+
+ qcom,gpu-coresight@1 {
+ reg = <1>;
+ coresight-name = "coresight-gfx-cx";
+ coresight-atid = <51>;
+ port {
+ gfx_cx_out_funnel_in2: endpoint {
+ remote-endpoint =
+ <&funnel_in2_in_gfx_cx>;
+ };
+ };
+ };
+ };
+
/* GPU Mempools */
qcom,gpu-mempools {
#address-cells = <1>;
@@ -364,6 +394,60 @@
};
+ qcom,gpu-pwrlevels-3 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ qcom,speed-bin = <163>;
+
+ qcom,initial-pwrlevel = <3>;
+
+ /* SVS_L1 */
+ qcom,gpu-pwrlevel@0 {
+ reg = <0>;
+ qcom,gpu-freq = <430000000>;
+ qcom,bus-freq = <11>;
+ qcom,bus-min = <8>;
+ qcom,bus-max = <11>;
+ };
+
+ /* SVS */
+ qcom,gpu-pwrlevel@1 {
+ reg = <1>;
+ qcom,gpu-freq = <355000000>;
+ qcom,bus-freq = <8>;
+ qcom,bus-min = <5>;
+ qcom,bus-max = <9>;
+ };
+
+ /* LOW SVS */
+ qcom,gpu-pwrlevel@2 {
+ reg = <2>;
+ qcom,gpu-freq = <267000000>;
+ qcom,bus-freq = <6>;
+ qcom,bus-min = <4>;
+ qcom,bus-max = <8>;
+ };
+
+ /* MIN SVS */
+ qcom,gpu-pwrlevel@3 {
+ reg = <3>;
+ qcom,gpu-freq = <180000000>;
+ qcom,bus-freq = <4>;
+ qcom,bus-min = <3>;
+ qcom,bus-max = <4>;
+ };
+
+ /* XO */
+ qcom,gpu-pwrlevel@4 {
+ reg = <4>;
+ qcom,gpu-freq = <0>;
+ qcom,bus-freq = <0>;
+ qcom,bus-min = <0>;
+ qcom,bus-max = <0>;
+ };
+ };
+
};
};
@@ -394,7 +478,7 @@
gfx3d_secure: gfx3d_secure {
compatible = "qcom,smmu-kgsl-cb";
- iommus = <&kgsl_smmu 2>;
+ iommus = <&kgsl_smmu 2>, <&kgsl_smmu 1>;
};
};
@@ -404,12 +488,10 @@
reg =
<0x506a000 0x31000>,
- <0xb200000 0x300000>,
- <0xc200000 0x10000>;
+ <0xb200000 0x300000>;
reg-names =
"kgsl_gmu_reg",
- "kgsl_gmu_pdc_reg",
- "kgsl_gmu_cpr_reg";
+ "kgsl_gmu_pdc_reg";
interrupts = <0 304 0>, <0 305 0>;
interrupt-names = "kgsl_hfi_irq", "kgsl_gmu_irq";
diff --git a/arch/arm64/boot/dts/qcom/sdm670-mtp.dtsi b/arch/arm64/boot/dts/qcom/sdm670-mtp.dtsi
index 307444d..e9924e2 100644
--- a/arch/arm64/boot/dts/qcom/sdm670-mtp.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm670-mtp.dtsi
@@ -42,6 +42,12 @@
status = "ok";
};
+&pm660l_switch1 {
+ pinctrl-names = "led_enable", "led_disable";
+ pinctrl-0 = <&flash_led3_front_en>;
+ pinctrl-1 = <&flash_led3_front_dis>;
+};
+
&qupv3_se9_2uart {
status = "disabled";
};
@@ -322,9 +328,7 @@
};
&dsi_rm67195_amoled_fhd_cmd {
- qcom,mdss-dsi-bl-pmic-control-type = "bl_ctrl_wled";
- qcom,mdss-dsi-bl-min-level = <1>;
- qcom,mdss-dsi-bl-max-level = <4095>;
+ qcom,mdss-dsi-bl-pmic-control-type = "bl_ctrl_dcs";
qcom,panel-supply-entries = <&dsi_panel_pwr_supply_labibb_amoled>;
qcom,platform-reset-gpio = <&tlmm 75 0>;
qcom,platform-te-gpio = <&tlmm 10 0>;
@@ -359,3 +363,12 @@
&mdss_mdp {
#cooling-cells = <2>;
};
+
+&thermal_zones {
+ xo-therm-cpu-step {
+ status = "disabled";
+ };
+ xo-therm-mdm-step {
+ status = "disabled";
+ };
+};
diff --git a/arch/arm64/boot/dts/qcom/sdm670-pinctrl.dtsi b/arch/arm64/boot/dts/qcom/sdm670-pinctrl.dtsi
index 188da58..d4953c1 100644
--- a/arch/arm64/boot/dts/qcom/sdm670-pinctrl.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm670-pinctrl.dtsi
@@ -1558,6 +1558,36 @@
};
};
+ flash_led3_front {
+ flash_led3_front_en: flash_led3_front_en {
+ mux {
+ pins = "gpio21";
+ function = "gpio";
+ };
+
+ config {
+ pins = "gpio21";
+ drive_strength = <2>;
+ output-high;
+ bias-disable;
+ };
+ };
+
+ flash_led3_front_dis: flash_led3_front_dis {
+ mux {
+ pins = "gpio21";
+ function = "gpio";
+ };
+
+ config {
+ pins = "gpio21";
+ drive_strength = <2>;
+ output-low;
+ bias-disable;
+ };
+ };
+ };
+
/* Pinctrl setting for CAMERA GPIO key */
key_cam_snapshot {
key_cam_snapshot_default: key_cam_snapshot_default {
diff --git a/arch/arm64/boot/dts/qcom/sdm670-pm.dtsi b/arch/arm64/boot/dts/qcom/sdm670-pm.dtsi
index a459a9d..dd35a36 100644
--- a/arch/arm64/boot/dts/qcom/sdm670-pm.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm670-pm.dtsi
@@ -28,56 +28,45 @@
reg = <0>;
label = "l3-wfi";
qcom,psci-mode = <0x1>;
- qcom,latency-us = <51>;
- qcom,ss-power = <452>;
- qcom,energy-overhead = <69355>;
- qcom,time-overhead = <99>;
+ qcom,latency-us = <600>;
+ qcom,ss-power = <420>;
+ qcom,energy-overhead = <4254140>;
+ qcom,time-overhead = <1260>;
};
- qcom,pm-cluster-level@1 { /* D2 */
+ qcom,pm-cluster-level@1 { /* D4, D3 is not supported */
reg = <1>;
- label = "l3-dyn-ret";
- qcom,psci-mode = <0x2>;
- qcom,latency-us = <659>;
- qcom,ss-power = <434>;
- qcom,energy-overhead = <465725>;
- qcom,time-overhead = <976>;
- qcom,min-child-idx = <1>;
- };
-
- qcom,pm-cluster-level@2 { /* D4, D3 is not supported */
- reg = <2>;
label = "l3-pc";
qcom,psci-mode = <0x4>;
- qcom,latency-us = <3201>;
- qcom,ss-power = <408>;
- qcom,energy-overhead = <2421840>;
- qcom,time-overhead = <5376>;
- qcom,min-child-idx = <2>;
+ qcom,latency-us = <3048>;
+ qcom,ss-power = <329>;
+ qcom,energy-overhead = <6189829>;
+ qcom,time-overhead = <5800>;
+ qcom,min-child-idx = <3>;
qcom,is-reset;
};
- qcom,pm-cluster-level@3 { /* Cx off */
- reg = <3>;
+ qcom,pm-cluster-level@2 { /* Cx off */
+ reg = <2>;
label = "cx-off";
qcom,psci-mode = <0x224>;
- qcom,latency-us = <5562>;
- qcom,ss-power = <308>;
- qcom,energy-overhead = <2521840>;
- qcom,time-overhead = <6376>;
+ qcom,latency-us = <4562>;
+ qcom,ss-power = <290>;
+ qcom,energy-overhead = <6989829>;
+ qcom,time-overhead = <8200>;
qcom,min-child-idx = <3>;
qcom,is-reset;
qcom,notify-rpm;
};
- qcom,pm-cluster-level@4 { /* AOSS sleep */
- reg = <4>;
+ qcom,pm-cluster-level@3 { /* AOSS sleep */
+ reg = <3>;
label = "llcc-off";
qcom,psci-mode = <0xC24>;
qcom,latency-us = <6562>;
- qcom,ss-power = <108>;
- qcom,energy-overhead = <2621840>;
- qcom,time-overhead = <7376>;
+ qcom,ss-power = <165>;
+ qcom,energy-overhead = <7000029>;
+ qcom,time-overhead = <9825>;
qcom,min-child-idx = <3>;
qcom,is-reset;
qcom,notify-rpm;
@@ -88,6 +77,7 @@
#size-cells = <0>;
qcom,psci-mode-shift = <0>;
qcom,psci-mode-mask = <0xf>;
+ qcom,use-prediction;
qcom,cpu = <&CPU0 &CPU1 &CPU2 &CPU3 &CPU4
&CPU5>;
@@ -95,30 +85,30 @@
reg = <0>;
label = "wfi";
qcom,psci-cpu-mode = <0x1>;
- qcom,latency-us = <43>;
- qcom,ss-power = <454>;
- qcom,energy-overhead = <38639>;
- qcom,time-overhead = <83>;
+ qcom,latency-us = <60>;
+ qcom,ss-power = <383>;
+ qcom,energy-overhead = <64140>;
+ qcom,time-overhead = <121>;
};
qcom,pm-cpu-level@1 { /* C2D */
reg = <1>;
label = "ret";
qcom,psci-cpu-mode = <0x2>;
- qcom,latency-us = <119>;
- qcom,ss-power = <449>;
- qcom,energy-overhead = <78456>;
- qcom,time-overhead = <167>;
+ qcom,latency-us = <282>;
+ qcom,ss-power = <370>;
+ qcom,energy-overhead = <238600>;
+ qcom,time-overhead = <559>;
};
qcom,pm-cpu-level@2 { /* C3 */
reg = <2>;
label = "pc";
qcom,psci-cpu-mode = <0x3>;
- qcom,latency-us = <461>;
- qcom,ss-power = <436>;
- qcom,energy-overhead = <418225>;
- qcom,time-overhead = <885>;
+ qcom,latency-us = <901>;
+ qcom,ss-power = <364>;
+ qcom,energy-overhead = <579285>;
+ qcom,time-overhead = <1450>;
qcom,is-reset;
qcom,use-broadcast-timer;
};
@@ -127,10 +117,10 @@
reg = <3>;
label = "rail-pc";
qcom,psci-cpu-mode = <0x4>;
- qcom,latency-us = <531>;
- qcom,ss-power = <400>;
- qcom,energy-overhead = <428225>;
- qcom,time-overhead = <1000>;
+ qcom,latency-us = <915>;
+ qcom,ss-power = <353>;
+ qcom,energy-overhead = <666292>;
+ qcom,time-overhead = <1617>;
qcom,is-reset;
qcom,use-broadcast-timer;
};
@@ -141,36 +131,37 @@
#size-cells = <0>;
qcom,psci-mode-shift = <0>;
qcom,psci-mode-mask = <0xf>;
+ qcom,use-prediction;
qcom,cpu = <&CPU6 &CPU7>;
qcom,pm-cpu-level@0 { /* C1 */
reg = <0>;
label = "wfi";
qcom,psci-cpu-mode = <0x1>;
- qcom,latency-us = <43>;
- qcom,ss-power = <454>;
- qcom,energy-overhead = <38639>;
- qcom,time-overhead = <83>;
+ qcom,latency-us = <66>;
+ qcom,ss-power = <427>;
+ qcom,energy-overhead = <68410>;
+ qcom,time-overhead = <121>;
};
qcom,pm-cpu-level@1 { /* C2D */
reg = <1>;
label = "ret";
qcom,psci-cpu-mode = <0x2>;
- qcom,latency-us = <116>;
- qcom,ss-power = <449>;
- qcom,energy-overhead = <78456>;
- qcom,time-overhead = <167>;
+ qcom,latency-us = <282>;
+ qcom,ss-power = <388>;
+ qcom,energy-overhead = <281755>;
+ qcom,time-overhead = <553>;
};
qcom,pm-cpu-level@2 { /* C3 */
reg = <2>;
label = "pc";
qcom,psci-cpu-mode = <0x3>;
- qcom,latency-us = <621>;
- qcom,ss-power = <436>;
- qcom,energy-overhead = <418225>;
- qcom,time-overhead = <885>;
+ qcom,latency-us = <1244>;
+ qcom,ss-power = <373>;
+ qcom,energy-overhead = <795006>;
+ qcom,time-overhead = <1767>;
qcom,is-reset;
qcom,use-broadcast-timer;
};
@@ -179,10 +170,10 @@
reg = <3>;
label = "rail-pc";
qcom,psci-cpu-mode = <0x4>;
- qcom,latency-us = <1061>;
- qcom,ss-power = <400>;
- qcom,energy-overhead = <428225>;
- qcom,time-overhead = <1000>;
+ qcom,latency-us = <1854>;
+ qcom,ss-power = <359>;
+ qcom,energy-overhead = <1068095>;
+ qcom,time-overhead = <2380>;
qcom,is-reset;
qcom,use-broadcast-timer;
};
diff --git a/arch/arm64/boot/dts/qcom/sdm670-pm660a-cdp-overlay.dts b/arch/arm64/boot/dts/qcom/sdm670-pm660a-cdp-overlay.dts
index 5b67765..3ea4fa7 100644
--- a/arch/arm64/boot/dts/qcom/sdm670-pm660a-cdp-overlay.dts
+++ b/arch/arm64/boot/dts/qcom/sdm670-pm660a-cdp-overlay.dts
@@ -33,3 +33,42 @@
<0x0001001b 0x0202001a 0x0 0x0>;
};
+&dsi_dual_nt35597_truly_video_display {
+ /delete-property/ qcom,dsi-display-active;
+};
+
+&dsi_panel_pwr_supply_labibb_amoled {
+ qcom,panel-supply-entry@2 {
+ reg = <2>;
+ qcom,supply-name = "lab";
+ qcom,supply-min-voltage = <4600000>;
+ qcom,supply-max-voltage = <6100000>;
+ qcom,supply-enable-load = <100000>;
+ qcom,supply-disable-load = <100>;
+ };
+
+ qcom,panel-supply-entry@3 {
+ reg = <3>;
+ qcom,supply-name = "ibb";
+ qcom,supply-min-voltage = <4000000>;
+ qcom,supply-max-voltage = <6300000>;
+ qcom,supply-enable-load = <100000>;
+ qcom,supply-disable-load = <100>;
+ };
+
+ qcom,panel-supply-entry@4 {
+ reg = <4>;
+ qcom,supply-name = "oledb";
+ qcom,supply-min-voltage = <5000000>;
+ qcom,supply-max-voltage = <8100000>;
+ qcom,supply-enable-load = <100000>;
+ qcom,supply-disable-load = <100>;
+ };
+};
+
+&dsi_rm67195_amoled_fhd_cmd_display {
+ qcom,dsi-display-active;
+ lab-supply = <&lab_regulator>;
+ ibb-supply = <&ibb_regulator>;
+ oledb-supply = <&pm660a_oledb>;
+};
diff --git a/arch/arm64/boot/dts/qcom/sdm670-pm660a-cdp.dts b/arch/arm64/boot/dts/qcom/sdm670-pm660a-cdp.dts
index 26f5e78..64133dd 100644
--- a/arch/arm64/boot/dts/qcom/sdm670-pm660a-cdp.dts
+++ b/arch/arm64/boot/dts/qcom/sdm670-pm660a-cdp.dts
@@ -26,3 +26,43 @@
<0x0001001b 0x0002001a 0x0 0x0>,
<0x0001001b 0x0202001a 0x0 0x0>;
};
+
+&dsi_dual_nt35597_truly_video_display {
+ /delete-property/ qcom,dsi-display-active;
+};
+
+&dsi_panel_pwr_supply_labibb_amoled {
+ qcom,panel-supply-entry@2 {
+ reg = <2>;
+ qcom,supply-name = "lab";
+ qcom,supply-min-voltage = <4600000>;
+ qcom,supply-max-voltage = <6100000>;
+ qcom,supply-enable-load = <100000>;
+ qcom,supply-disable-load = <100>;
+ };
+
+ qcom,panel-supply-entry@3 {
+ reg = <3>;
+ qcom,supply-name = "ibb";
+ qcom,supply-min-voltage = <4000000>;
+ qcom,supply-max-voltage = <6300000>;
+ qcom,supply-enable-load = <100000>;
+ qcom,supply-disable-load = <100>;
+ };
+
+ qcom,panel-supply-entry@4 {
+ reg = <4>;
+ qcom,supply-name = "oledb";
+ qcom,supply-min-voltage = <5000000>;
+ qcom,supply-max-voltage = <8100000>;
+ qcom,supply-enable-load = <100000>;
+ qcom,supply-disable-load = <100>;
+ };
+};
+
+&dsi_rm67195_amoled_fhd_cmd_display {
+ qcom,dsi-display-active;
+ lab-supply = <&lab_regulator>;
+ ibb-supply = <&ibb_regulator>;
+ oledb-supply = <&pm660a_oledb>;
+};
diff --git a/arch/arm64/boot/dts/qcom/sdm670-pmic-overlay.dtsi b/arch/arm64/boot/dts/qcom/sdm670-pmic-overlay.dtsi
index 220487a..0a7e25d 100644
--- a/arch/arm64/boot/dts/qcom/sdm670-pmic-overlay.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm670-pmic-overlay.dtsi
@@ -380,5 +380,8 @@
};
&usb0 {
- extcon = <&pm660_pdphy>, <&pm660_pdphy>, <&eud>;
+ extcon = <&pm660_pdphy>, <&pm660_pdphy>, <&eud>,
+ <&pm660_charger>, <&pm660_charger>;
+ vbus_dwc3-supply = <&smb2_vbus>;
+ qcom,no-vbus-vote-with-type-C;
};
diff --git a/arch/arm64/boot/dts/qcom/sdm670-qrd.dtsi b/arch/arm64/boot/dts/qcom/sdm670-qrd.dtsi
index 93e4c51..f3c6b00 100644
--- a/arch/arm64/boot/dts/qcom/sdm670-qrd.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm670-qrd.dtsi
@@ -68,6 +68,10 @@
};
};
+&eud {
+ vdda33-supply = <&pm660l_l7>;
+};
+
&pm660_fg {
qcom,battery-data = <&qrd_batterydata>;
qcom,fg-bmd-en-delay-ms = <300>;
@@ -142,6 +146,29 @@
};
};
+&qusb_phy0 {
+ qcom,qusb-phy-init-seq =
+ /* <value reg_offset> */
+ <0x23 0x210 /* PWR_CTRL1 */
+ 0x03 0x04 /* PLL_ANALOG_CONTROLS_TWO */
+ 0x7c 0x18c /* PLL_CLOCK_INVERTERS */
+ 0x80 0x2c /* PLL_CMODE */
+ 0x0a 0x184 /* PLL_LOCK_DELAY */
+ 0x19 0xb4 /* PLL_DIGITAL_TIMERS_TWO */
+ 0x40 0x194 /* PLL_BIAS_CONTROL_1 */
+ 0x20 0x198 /* PLL_BIAS_CONTROL_2 */
+ 0x21 0x214 /* PWR_CTRL2 */
+ 0x07 0x220 /* IMP_CTRL1 */
+ 0x58 0x224 /* IMP_CTRL2 */
+ 0x77 0x240 /* TUNE1 */
+ 0x29 0x244 /* TUNE2 */
+ 0xca 0x248 /* TUNE3 */
+ 0x04 0x24c /* TUNE4 */
+ 0x03 0x250 /* TUNE5 */
+ 0x00 0x23c /* CHG_CTRL2 */
+ 0x22 0x210>; /* PWR_CTRL1 */
+};
+
&pm660_haptics {
qcom,vmax-mv = <1800>;
qcom,wave-play-rate-us = <4255>;
@@ -151,6 +178,7 @@
&int_codec {
qcom,model = "sdm670-skuw-snd-card";
+ qcom,msm-micbias1-ext-cap;
qcom,audio-routing =
"RX_BIAS", "INT_MCLK0",
"SPK_RX_BIAS", "INT_MCLK0",
@@ -279,7 +307,7 @@
&pm660l_wled {
status = "okay";
- qcom,led-strings-list = [01 02];
+ qcom,led-strings-list = [00 01];
};
&mdss_mdp {
diff --git a/arch/arm64/boot/dts/qcom/sdm670-regulator.dtsi b/arch/arm64/boot/dts/qcom/sdm670-regulator.dtsi
index 24b8dd6..3c84314 100644
--- a/arch/arm64/boot/dts/qcom/sdm670-regulator.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm670-regulator.dtsi
@@ -46,9 +46,9 @@
pm660_s4: regulator-pm660-s4 {
regulator-name = "pm660_s4";
qcom,set = <RPMH_REGULATOR_SET_ALL>;
- regulator-min-microvolt = <2040000>;
+ regulator-min-microvolt = <1808000>;
regulator-max-microvolt = <2040000>;
- qcom,init-voltage = <2040000>;
+ qcom,init-voltage = <1808000>;
};
};
@@ -72,9 +72,9 @@
pm660_s6: regulator-pm660-s6 {
regulator-name = "pm660_s6";
qcom,set = <RPMH_REGULATOR_SET_ALL>;
- regulator-min-microvolt = <1352000>;
+ regulator-min-microvolt = <1224000>;
regulator-max-microvolt = <1352000>;
- qcom,init-voltage = <1352000>;
+ qcom,init-voltage = <1224000>;
};
};
@@ -162,11 +162,14 @@
<RPMH_REGULATOR_MODE_LDO_LPM
RPMH_REGULATOR_MODE_LDO_HPM>;
qcom,mode-threshold-currents = <0 1>;
+ proxy-supply = <&pm660_l1>;
pm660_l1: regulator-pm660-l1 {
regulator-name = "pm660_l1";
qcom,set = <RPMH_REGULATOR_SET_ALL>;
regulator-min-microvolt = <1200000>;
regulator-max-microvolt = <1250000>;
+ qcom,proxy-consumer-enable;
+ qcom,proxy-consumer-current = <43600>;
qcom,init-voltage = <1200000>;
qcom,init-mode = <RPMH_REGULATOR_MODE_LDO_LPM>;
};
@@ -237,9 +240,9 @@
pm660_l6: regulator-pm660-l6 {
regulator-name = "pm660_l6";
qcom,set = <RPMH_REGULATOR_SET_ALL>;
- regulator-min-microvolt = <1304000>;
+ regulator-min-microvolt = <1248000>;
regulator-max-microvolt = <1304000>;
- qcom,init-voltage = <1304000>;
+ qcom,init-voltage = <1248000>;
qcom,init-mode = <RPMH_REGULATOR_MODE_LDO_LPM>;
};
};
@@ -324,11 +327,14 @@
<RPMH_REGULATOR_MODE_LDO_LPM
RPMH_REGULATOR_MODE_LDO_HPM>;
qcom,mode-threshold-currents = <0 1>;
+ proxy-supply = <&pm660_l11>;
pm660_l11: regulator-pm660-l11 {
regulator-name = "pm660_l11";
qcom,set = <RPMH_REGULATOR_SET_ALL>;
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
+ qcom,proxy-consumer-enable;
+ qcom,proxy-consumer-current = <115000>;
qcom,init-voltage = <1800000>;
qcom,init-mode = <RPMH_REGULATOR_MODE_LDO_LPM>;
};
@@ -468,11 +474,14 @@
<RPMH_REGULATOR_MODE_LDO_LPM
RPMH_REGULATOR_MODE_LDO_HPM>;
qcom,mode-threshold-currents = <0 1>;
+ proxy-supply = <&pm660l_l1>;
pm660l_l1: regulator-pm660l-l1 {
regulator-name = "pm660l_l1";
qcom,set = <RPMH_REGULATOR_SET_ALL>;
regulator-min-microvolt = <880000>;
regulator-max-microvolt = <900000>;
+ qcom,proxy-consumer-enable;
+ qcom,proxy-consumer-current = <72000>;
qcom,init-voltage = <880000>;
qcom,init-mode = <RPMH_REGULATOR_MODE_LDO_LPM>;
};
diff --git a/arch/arm64/boot/dts/qcom/sdm670-rumi.dts b/arch/arm64/boot/dts/qcom/sdm670-rumi.dts
index e137705..6201488 100644
--- a/arch/arm64/boot/dts/qcom/sdm670-rumi.dts
+++ b/arch/arm64/boot/dts/qcom/sdm670-rumi.dts
@@ -16,7 +16,6 @@
#include "sdm670.dtsi"
#include "sdm670-rumi.dtsi"
-#include "sdm670-audio-overlay.dtsi"
/ {
model = "Qualcomm Technologies, Inc. SDM670 RUMI";
compatible = "qcom,sdm670-rumi", "qcom,sdm670", "qcom,rumi";
diff --git a/arch/arm64/boot/dts/qcom/sdm670-sde-display.dtsi b/arch/arm64/boot/dts/qcom/sdm670-sde-display.dtsi
index 6f4dd33..de125e2 100644
--- a/arch/arm64/boot/dts/qcom/sdm670-sde-display.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm670-sde-display.dtsi
@@ -109,9 +109,9 @@
qcom,panel-supply-entry@0 {
reg = <0>;
- qcom,supply-name = "wqhd-vddio";
+ qcom,supply-name = "vddio";
qcom,supply-min-voltage = <1800000>;
- qcom,supply-max-voltage = <1950000>;
+ qcom,supply-max-voltage = <1800000>;
qcom,supply-enable-load = <32000>;
qcom,supply-disable-load = <80>;
};
@@ -124,33 +124,6 @@
qcom,supply-enable-load = <13200>;
qcom,supply-disable-load = <80>;
};
-
- qcom,panel-supply-entry@2 {
- reg = <2>;
- qcom,supply-name = "lab";
- qcom,supply-min-voltage = <4600000>;
- qcom,supply-max-voltage = <6100000>;
- qcom,supply-enable-load = <100000>;
- qcom,supply-disable-load = <100>;
- };
-
- qcom,panel-supply-entry@3 {
- reg = <3>;
- qcom,supply-name = "ibb";
- qcom,supply-min-voltage = <4000000>;
- qcom,supply-max-voltage = <6300000>;
- qcom,supply-enable-load = <100000>;
- qcom,supply-disable-load = <100>;
- };
-
- qcom,panel-supply-entry@4 {
- reg = <4>;
- qcom,supply-name = "oledb";
- qcom,supply-min-voltage = <5000000>;
- qcom,supply-max-voltage = <8100000>;
- qcom,supply-enable-load = <100000>;
- qcom,supply-disable-load = <100>;
- };
};
dsi_dual_nt35597_truly_video_display: qcom,dsi-display@0 {
@@ -421,8 +394,7 @@
qcom,dsi-panel = <&dsi_rm67195_amoled_fhd_cmd>;
vddio-supply = <&pm660_l11>;
- lab-supply = <&lcdb_ldo_vreg>;
- ibb-supply = <&lcdb_ncp_vreg>;
+ vdda-3p3-supply = <&pm660l_l6>;
};
dsi_nt35695b_truly_fhd_video_display: qcom,dsi-display@13 {
@@ -527,6 +499,18 @@
&dsi_dual_nt35597_truly_video {
qcom,mdss-dsi-t-clk-post = <0x0D>;
qcom,mdss-dsi-t-clk-pre = <0x2D>;
+ qcom,mdss-dsi-min-refresh-rate = <53>;
+ qcom,mdss-dsi-max-refresh-rate = <60>;
+ qcom,mdss-dsi-pan-enable-dynamic-fps;
+ qcom,mdss-dsi-pan-fps-update =
+ "dfps_immediate_porch_mode_vfp";
+ qcom,esd-check-enabled;
+ qcom,mdss-dsi-panel-status-check-mode = "reg_read";
+ qcom,mdss-dsi-panel-status-command = [06 01 00 01 00 00 01 0a];
+ qcom,mdss-dsi-panel-status-command-state = "dsi_hs_mode";
+ qcom,mdss-dsi-panel-status-value = <0x9c>;
+ qcom,mdss-dsi-panel-on-check-value = <0x9c>;
+ qcom,mdss-dsi-panel-status-read-length = <1>;
qcom,mdss-dsi-display-timings {
timing@0{
qcom,mdss-dsi-panel-phy-timings = [00 1c 07 07 23 21 07
@@ -541,6 +525,14 @@
&dsi_dual_nt35597_truly_cmd {
qcom,mdss-dsi-t-clk-post = <0x0D>;
qcom,mdss-dsi-t-clk-pre = <0x2D>;
+ qcom,ulps-enabled;
+ qcom,esd-check-enabled;
+ qcom,mdss-dsi-panel-status-check-mode = "reg_read";
+ qcom,mdss-dsi-panel-status-command = [06 01 00 01 00 00 01 0a];
+ qcom,mdss-dsi-panel-status-command-state = "dsi_hs_mode";
+ qcom,mdss-dsi-panel-status-value = <0x9c>;
+ qcom,mdss-dsi-panel-on-check-value = <0x9c>;
+ qcom,mdss-dsi-panel-status-read-length = <1>;
qcom,mdss-dsi-display-timings {
timing@0{
qcom,mdss-dsi-panel-phy-timings = [00 1c 07 07 23 21 07
@@ -548,6 +540,8 @@
qcom,display-topology = <2 0 2>,
<1 0 2>;
qcom,default-topology-index = <0>;
+ qcom,partial-update-enabled = "single_roi";
+ qcom,panel-roi-alignment = <720 128 720 128 1440 128>;
};
};
};
@@ -555,6 +549,14 @@
&dsi_nt35597_truly_dsc_cmd {
qcom,mdss-dsi-t-clk-post = <0x0b>;
qcom,mdss-dsi-t-clk-pre = <0x23>;
+ qcom,ulps-enabled;
+ qcom,esd-check-enabled;
+ qcom,mdss-dsi-panel-status-check-mode = "reg_read";
+ qcom,mdss-dsi-panel-status-command = [06 01 00 01 00 00 01 0a];
+ qcom,mdss-dsi-panel-status-command-state = "dsi_hs_mode";
+ qcom,mdss-dsi-panel-status-value = <0x9c>;
+ qcom,mdss-dsi-panel-on-check-value = <0x9c>;
+ qcom,mdss-dsi-panel-status-read-length = <1>;
qcom,mdss-dsi-display-timings {
timing@0{
qcom,mdss-dsi-panel-phy-timings = [00 15 05 05 20 1f 05
@@ -570,6 +572,18 @@
&dsi_nt35597_truly_dsc_video {
qcom,mdss-dsi-t-clk-post = <0x0b>;
qcom,mdss-dsi-t-clk-pre = <0x23>;
+ qcom,mdss-dsi-min-refresh-rate = <53>;
+ qcom,mdss-dsi-max-refresh-rate = <60>;
+ qcom,mdss-dsi-pan-enable-dynamic-fps;
+ qcom,mdss-dsi-pan-fps-update =
+ "dfps_immediate_porch_mode_vfp";
+ qcom,esd-check-enabled;
+ qcom,mdss-dsi-panel-status-check-mode = "reg_read";
+ qcom,mdss-dsi-panel-status-command = [06 01 00 01 00 00 01 0a];
+ qcom,mdss-dsi-panel-status-command-state = "dsi_hs_mode";
+ qcom,mdss-dsi-panel-status-value = <0x9c>;
+ qcom,mdss-dsi-panel-on-check-value = <0x9c>;
+ qcom,mdss-dsi-panel-status-read-length = <1>;
qcom,mdss-dsi-display-timings {
timing@0{
qcom,mdss-dsi-panel-phy-timings = [00 15 05 05 20 1f 05
@@ -707,6 +721,7 @@
&dsi_dual_nt35597_cmd {
qcom,mdss-dsi-t-clk-post = <0x0d>;
qcom,mdss-dsi-t-clk-pre = <0x2d>;
+ qcom,ulps-enabled;
qcom,mdss-dsi-display-timings {
timing@0 {
qcom,mdss-dsi-panel-timings = [00 1c 08 07 23 22 07 07
@@ -714,6 +729,8 @@
qcom,display-topology = <2 0 2>,
<1 0 2>;
qcom,default-topology-index = <0>;
+ qcom,partial-update-enabled = "single_roi";
+ qcom,panel-roi-alignment = <720 128 720 128 1440 128>;
};
};
};
@@ -734,6 +751,11 @@
&dsi_nt35695b_truly_fhd_video {
qcom,mdss-dsi-t-clk-post = <0x07>;
qcom,mdss-dsi-t-clk-pre = <0x1c>;
+ qcom,mdss-dsi-min-refresh-rate = <48>;
+ qcom,mdss-dsi-max-refresh-rate = <60>;
+ qcom,mdss-dsi-pan-enable-dynamic-fps;
+ qcom,mdss-dsi-pan-fps-update =
+ "dfps_immediate_porch_mode_vfp";
qcom,mdss-dsi-display-timings {
timing@0 {
qcom,mdss-dsi-panel-phy-timings = [00 1c 05 06 0b 0c
@@ -747,6 +769,7 @@
&dsi_nt35695b_truly_fhd_cmd {
qcom,mdss-dsi-t-clk-post = <0x07>;
qcom,mdss-dsi-t-clk-pre = <0x1c>;
+ qcom,ulps-enabled;
qcom,mdss-dsi-display-timings {
timing@0 {
qcom,mdss-dsi-panel-phy-timings = [00 1c 05 06 0b 0c
diff --git a/arch/arm64/boot/dts/qcom/sdm670-sde.dtsi b/arch/arm64/boot/dts/qcom/sdm670-sde.dtsi
index 3ab149e..a918687 100644
--- a/arch/arm64/boot/dts/qcom/sdm670-sde.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm670-sde.dtsi
@@ -69,6 +69,11 @@
qcom,sde-dspp-off = <0x55000 0x57000>;
qcom,sde-dspp-size = <0x17e0>;
+ qcom,sde-dest-scaler-top-off = <0x00061000>;
+ qcom,sde-dest-scaler-top-size = <0xc>;
+ qcom,sde-dest-scaler-off = <0x800 0x1000>;
+ qcom,sde-dest-scaler-size = <0x800>;
+
qcom,sde-wb-off = <0x66000>;
qcom,sde-wb-size = <0x2c8>;
qcom,sde-wb-xin-id = <6>;
@@ -126,13 +131,17 @@
qcom,sde-mixer-blendstages = <0xb>;
qcom,sde-highest-bank-bit = <0x1>;
qcom,sde-ubwc-version = <0x200>;
+ qcom,sde-smart-panel-align-mode = <0xc>;
qcom,sde-panic-per-pipe;
qcom,sde-has-cdp;
qcom,sde-has-src-split;
qcom,sde-has-dim-layer;
qcom,sde-has-idle-pc;
- qcom,sde-max-bw-low-kbps = <9600000>;
- qcom,sde-max-bw-high-kbps = <9600000>;
+ qcom,sde-has-dest-scaler;
+ qcom,sde-max-dest-scaler-input-linewidth = <2048>;
+ qcom,sde-max-dest-scaler-output-linewidth = <2560>;
+ qcom,sde-max-bw-low-kbps = <6800000>;
+ qcom,sde-max-bw-high-kbps = <6800000>;
qcom,sde-dram-channels = <2>;
qcom,sde-num-nrt-paths = <0>;
qcom,sde-dspp-ad-version = <0x00040000>;
@@ -561,7 +570,10 @@
vdda-1p2-supply = <&pm660_l1>;
vdda-0p9-supply = <&pm660l_l1>;
- reg = <0xae90000 0xa84>,
+ reg = <0xae90000 0x0dc>,
+ <0xae90200 0x0c0>,
+ <0xae90400 0x508>,
+ <0xae90a00 0x094>,
<0x88eaa00 0x200>,
<0x88ea200 0x200>,
<0x88ea600 0x200>,
@@ -570,7 +582,9 @@
<0x88ea030 0x10>,
<0x88e8000 0x20>,
<0x0aee1000 0x034>;
- reg-names = "dp_ctrl", "dp_phy", "dp_ln_tx0", "dp_ln_tx1",
+ /* dp_ctrl: dp_ahb, dp_aux, dp_link, dp_p0 */
+ reg-names = "dp_ahb", "dp_aux", "dp_link",
+ "dp_p0", "dp_phy", "dp_ln_tx0", "dp_ln_tx1",
"dp_mmss_cc", "qfprom_physical", "dp_pll",
"usb3_dp_com", "hdcp_physical";
diff --git a/arch/arm64/boot/dts/qcom/sdm670-thermal.dtsi b/arch/arm64/boot/dts/qcom/sdm670-thermal.dtsi
index 1ce8dba..6324b64 100644
--- a/arch/arm64/boot/dts/qcom/sdm670-thermal.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm670-thermal.dtsi
@@ -147,7 +147,7 @@
};
};
- cpu4-silver-usr {
+ cpuss-0-usr {
polling-delay-passive = <0>;
polling-delay = <0>;
thermal-sensors = <&tsens0 5>;
@@ -161,7 +161,7 @@
};
};
- cpu5-silver-usr {
+ cpuss-1-usr {
polling-delay-passive = <0>;
polling-delay = <0>;
thermal-sensors = <&tsens0 6>;
@@ -175,7 +175,7 @@
};
};
- kryo-l3-0-usr {
+ cpu4-silver-usr {
polling-delay-passive = <0>;
polling-delay = <0>;
thermal-sensors = <&tsens0 7>;
@@ -189,7 +189,7 @@
};
};
- kryo-l3-1-usr {
+ cpu5-silver-usr {
polling-delay-passive = <0>;
polling-delay = <0>;
thermal-sensors = <&tsens0 8>;
@@ -462,15 +462,16 @@
cooling-maps {
cpu0_vdd_cdev {
trip = <&aoss0_trip>;
- cooling-device = <&CPU0 4 4>;
+ cooling-device = <&CPU0 2 2>;
};
cpu6_vdd_cdev {
trip = <&aoss0_trip>;
- cooling-device = <&CPU6 9 9>;
+ cooling-device = <&CPU6 (THERMAL_MAX_LIMIT-8)
+ (THERMAL_MAX_LIMIT-8)>;
};
gpu_vdd_cdev {
trip = <&aoss0_trip>;
- cooling-device = <&msm_gpu 1 1>;
+ cooling-device = <&msm_gpu 4 4>;
};
cx_vdd_cdev {
trip = <&aoss0_trip>;
@@ -511,15 +512,16 @@
cooling-maps {
cpu0_vdd_cdev {
trip = <&cpu0_trip>;
- cooling-device = <&CPU0 4 4>;
+ cooling-device = <&CPU0 2 2>;
};
cpu6_vdd_cdev {
trip = <&cpu0_trip>;
- cooling-device = <&CPU6 9 9>;
+ cooling-device = <&CPU6 (THERMAL_MAX_LIMIT-8)
+ (THERMAL_MAX_LIMIT-8)>;
};
gpu_vdd_cdev {
trip = <&cpu0_trip>;
- cooling-device = <&msm_gpu 1 1>;
+ cooling-device = <&msm_gpu 4 4>;
};
cx_vdd_cdev {
trip = <&cpu0_trip>;
@@ -560,15 +562,16 @@
cooling-maps {
cpu0_vdd_cdev {
trip = <&cpu1_trip>;
- cooling-device = <&CPU0 4 4>;
+ cooling-device = <&CPU0 2 2>;
};
cpu6_vdd_cdev {
trip = <&cpu1_trip>;
- cooling-device = <&CPU6 9 9>;
+ cooling-device = <&CPU6 (THERMAL_MAX_LIMIT-8)
+ (THERMAL_MAX_LIMIT-8)>;
};
gpu_vdd_cdev {
trip = <&cpu1_trip>;
- cooling-device = <&msm_gpu 1 1>;
+ cooling-device = <&msm_gpu 4 4>;
};
cx_vdd_cdev {
trip = <&cpu1_trip>;
@@ -609,15 +612,16 @@
cooling-maps {
cpu0_vdd_cdev {
trip = <&cpu2_trip>;
- cooling-device = <&CPU0 4 4>;
+ cooling-device = <&CPU0 2 2>;
};
cpu6_vdd_cdev {
trip = <&cpu2_trip>;
- cooling-device = <&CPU6 9 9>;
+ cooling-device = <&CPU6 (THERMAL_MAX_LIMIT-8)
+ (THERMAL_MAX_LIMIT-8)>;
};
gpu_vdd_cdev {
trip = <&cpu2_trip>;
- cooling-device = <&msm_gpu 1 1>;
+ cooling-device = <&msm_gpu 4 4>;
};
cx_vdd_cdev {
trip = <&cpu2_trip>;
@@ -658,15 +662,16 @@
cooling-maps {
cpu0_vdd_cdev {
trip = <&cpu3_trip>;
- cooling-device = <&CPU0 4 4>;
+ cooling-device = <&CPU0 2 2>;
};
cpu6_vdd_cdev {
trip = <&cpu3_trip>;
- cooling-device = <&CPU6 9 9>;
+ cooling-device = <&CPU6 (THERMAL_MAX_LIMIT-8)
+ (THERMAL_MAX_LIMIT-8)>;
};
gpu_vdd_cdev {
trip = <&cpu3_trip>;
- cooling-device = <&msm_gpu 1 1>;
+ cooling-device = <&msm_gpu 4 4>;
};
cx_vdd_cdev {
trip = <&cpu3_trip>;
@@ -691,13 +696,113 @@
};
};
- cpu4-silver-lowf {
+ cpuss-0-lowf {
polling-delay-passive = <0>;
polling-delay = <0>;
thermal-governor = "low_limits_floor";
thermal-sensors = <&tsens0 5>;
tracks-low;
trips {
+ l3_0_trip: l3-0-trip {
+ temperature = <5000>;
+ hysteresis = <5000>;
+ type = "passive";
+ };
+ };
+ cooling-maps {
+ cpu0_vdd_cdev {
+ trip = <&l3_0_trip>;
+ cooling-device = <&CPU0 2 2>;
+ };
+ cpu6_vdd_cdev {
+ trip = <&l3_0_trip>;
+ cooling-device = <&CPU6 (THERMAL_MAX_LIMIT-8)
+ (THERMAL_MAX_LIMIT-8)>;
+ };
+ gpu_vdd_cdev {
+ trip = <&l3_0_trip>;
+ cooling-device = <&msm_gpu 4 4>;
+ };
+ cx_vdd_cdev {
+ trip = <&l3_0_trip>;
+ cooling-device = <&cx_cdev 0 0>;
+ };
+ mx_vdd_cdev {
+ trip = <&l3_0_trip>;
+ cooling-device = <&mx_cdev 0 0>;
+ };
+ modem_vdd_cdev {
+ trip = <&l3_0_trip>;
+ cooling-device = <&modem_vdd 0 0>;
+ };
+ adsp_vdd_cdev {
+ trip = <&l3_0_trip>;
+ cooling-device = <&adsp_vdd 0 0>;
+ };
+ cdsp_vdd_cdev {
+ trip = <&l3_0_trip>;
+ cooling-device = <&cdsp_vdd 0 0>;
+ };
+ };
+ };
+
+ cpuss-1-lowf {
+ polling-delay-passive = <0>;
+ polling-delay = <0>;
+ thermal-governor = "low_limits_floor";
+ thermal-sensors = <&tsens0 6>;
+ tracks-low;
+ trips {
+ l3_1_trip: l3-1-trip {
+ temperature = <5000>;
+ hysteresis = <5000>;
+ type = "passive";
+ };
+ };
+ cooling-maps {
+ cpu0_vdd_cdev {
+ trip = <&l3_1_trip>;
+ cooling-device = <&CPU0 2 2>;
+ };
+ cpu6_vdd_cdev {
+ trip = <&l3_1_trip>;
+ cooling-device = <&CPU6 (THERMAL_MAX_LIMIT-8)
+ (THERMAL_MAX_LIMIT-8)>;
+ };
+ gpu_vdd_cdev {
+ trip = <&l3_1_trip>;
+ cooling-device = <&msm_gpu 4 4>;
+ };
+ cx_vdd_cdev {
+ trip = <&l3_1_trip>;
+ cooling-device = <&cx_cdev 0 0>;
+ };
+ mx_vdd_cdev {
+ trip = <&l3_1_trip>;
+ cooling-device = <&mx_cdev 0 0>;
+ };
+ modem_vdd_cdev {
+ trip = <&l3_1_trip>;
+ cooling-device = <&modem_vdd 0 0>;
+ };
+ adsp_vdd_cdev {
+ trip = <&l3_1_trip>;
+ cooling-device = <&adsp_vdd 0 0>;
+ };
+ cdsp_vdd_cdev {
+ trip = <&l3_1_trip>;
+ cooling-device = <&cdsp_vdd 0 0>;
+ };
+ };
+ };
+
+ cpu4-silver-lowf {
+ polling-delay-passive = <0>;
+ polling-delay = <0>;
+ thermal-governor = "low_limits_floor";
+ thermal-sensors = <&tsens0 7>;
+ tracks-low;
+ trips {
cpu4_trip: cpu4-trip {
temperature = <5000>;
hysteresis = <5000>;
@@ -707,15 +812,16 @@
cooling-maps {
cpu0_vdd_cdev {
trip = <&cpu4_trip>;
- cooling-device = <&CPU0 4 4>;
+ cooling-device = <&CPU0 2 2>;
};
cpu6_vdd_cdev {
trip = <&cpu4_trip>;
- cooling-device = <&CPU6 9 9>;
+ cooling-device = <&CPU6 (THERMAL_MAX_LIMIT-8)
+ (THERMAL_MAX_LIMIT-8)>;
};
gpu_vdd_cdev {
trip = <&cpu4_trip>;
- cooling-device = <&msm_gpu 1 1>;
+ cooling-device = <&msm_gpu 4 4>;
};
cx_vdd_cdev {
trip = <&cpu4_trip>;
@@ -744,7 +850,7 @@
polling-delay-passive = <0>;
polling-delay = <0>;
thermal-governor = "low_limits_floor";
- thermal-sensors = <&tsens0 6>;
+ thermal-sensors = <&tsens0 8>;
tracks-low;
trips {
cpu5_trip: cpu5-trip {
@@ -756,15 +862,16 @@
cooling-maps {
cpu0_vdd_cdev {
trip = <&cpu5_trip>;
- cooling-device = <&CPU0 4 4>;
+ cooling-device = <&CPU0 2 2>;
};
cpu6_vdd_cdev {
trip = <&cpu5_trip>;
- cooling-device = <&CPU6 9 9>;
+ cooling-device = <&CPU6 (THERMAL_MAX_LIMIT-8)
+ (THERMAL_MAX_LIMIT-8)>;
};
gpu_vdd_cdev {
trip = <&cpu5_trip>;
- cooling-device = <&msm_gpu 1 1>;
+ cooling-device = <&msm_gpu 4 4>;
};
cx_vdd_cdev {
trip = <&cpu5_trip>;
@@ -789,104 +896,6 @@
};
};
- kryo-l3-0-lowf {
- polling-delay-passive = <0>;
- polling-delay = <0>;
- thermal-governor = "low_limits_floor";
- thermal-sensors = <&tsens0 7>;
- tracks-low;
- trips {
- l3_0_trip: l3-0-trip {
- temperature = <5000>;
- hysteresis = <5000>;
- type = "passive";
- };
- };
- cooling-maps {
- cpu0_vdd_cdev {
- trip = <&l3_0_trip>;
- cooling-device = <&CPU0 4 4>;
- };
- cpu6_vdd_cdev {
- trip = <&l3_0_trip>;
- cooling-device = <&CPU6 9 9>;
- };
- gpu_vdd_cdev {
- trip = <&l3_0_trip>;
- cooling-device = <&msm_gpu 1 1>;
- };
- cx_vdd_cdev {
- trip = <&l3_0_trip>;
- cooling-device = <&cx_cdev 0 0>;
- };
- mx_vdd_cdev {
- trip = <&l3_0_trip>;
- cooling-device = <&mx_cdev 0 0>;
- };
- modem_vdd_cdev {
- trip = <&l3_0_trip>;
- cooling-device = <&modem_vdd 0 0>;
- };
- adsp_vdd_cdev {
- trip = <&l3_0_trip>;
- cooling-device = <&adsp_vdd 0 0>;
- };
- cdsp_vdd_cdev {
- trip = <&l3_0_trip>;
- cooling-device = <&cdsp_vdd 0 0>;
- };
- };
- };
-
- kryo-l3-1-lowf {
- polling-delay-passive = <0>;
- polling-delay = <0>;
- thermal-governor = "low_limits_floor";
- thermal-sensors = <&tsens0 8>;
- tracks-low;
- trips {
- l3_1_trip: l3-1-trip {
- temperature = <5000>;
- hysteresis = <5000>;
- type = "passive";
- };
- };
- cooling-maps {
- cpu0_vdd_cdev {
- trip = <&l3_1_trip>;
- cooling-device = <&CPU0 4 4>;
- };
- cpu6_vdd_cdev {
- trip = <&l3_1_trip>;
- cooling-device = <&CPU6 9 9>;
- };
- gpu_vdd_cdev {
- trip = <&l3_1_trip>;
- cooling-device = <&msm_gpu 1 1>;
- };
- cx_vdd_cdev {
- trip = <&l3_1_trip>;
- cooling-device = <&cx_cdev 0 0>;
- };
- mx_vdd_cdev {
- trip = <&l3_1_trip>;
- cooling-device = <&mx_cdev 0 0>;
- };
- modem_vdd_cdev {
- trip = <&l3_1_trip>;
- cooling-device = <&modem_vdd 0 0>;
- };
- adsp_vdd_cdev {
- trip = <&l3_1_trip>;
- cooling-device = <&adsp_vdd 0 0>;
- };
- cdsp_vdd_cdev {
- trip = <&l3_1_trip>;
- cooling-device = <&cdsp_vdd 0 0>;
- };
- };
- };
-
cpu0-gold-lowf {
polling-delay-passive = <0>;
polling-delay = <0>;
@@ -903,15 +912,16 @@
cooling-maps {
cpu0_vdd_cdev {
trip = <&cpug0_trip>;
- cooling-device = <&CPU0 4 4>;
+ cooling-device = <&CPU0 2 2>;
};
cpu6_vdd_cdev {
trip = <&cpug0_trip>;
- cooling-device = <&CPU6 9 9>;
+ cooling-device = <&CPU6 (THERMAL_MAX_LIMIT-8)
+ (THERMAL_MAX_LIMIT-8)>;
};
gpu_vdd_cdev {
trip = <&cpug0_trip>;
- cooling-device = <&msm_gpu 1 1>;
+ cooling-device = <&msm_gpu 4 4>;
};
cx_vdd_cdev {
trip = <&cpug0_trip>;
@@ -952,15 +962,16 @@
cooling-maps {
cpu0_vdd_cdev {
trip = <&cpug1_trip>;
- cooling-device = <&CPU0 4 4>;
+ cooling-device = <&CPU0 2 2>;
};
cpu6_vdd_cdev {
trip = <&cpug1_trip>;
- cooling-device = <&CPU6 9 9>;
+ cooling-device = <&CPU6 (THERMAL_MAX_LIMIT-8)
+ (THERMAL_MAX_LIMIT-8)>;
};
gpu_vdd_cdev {
trip = <&cpug1_trip>;
- cooling-device = <&msm_gpu 1 1>;
+ cooling-device = <&msm_gpu 4 4>;
};
cx_vdd_cdev {
trip = <&cpug1_trip>;
@@ -1001,15 +1012,16 @@
cooling-maps {
cpu0_vdd_cdev {
trip = <&gpu0_trip_l>;
- cooling-device = <&CPU0 4 4>;
+ cooling-device = <&CPU0 2 2>;
};
cpu6_vdd_cdev {
trip = <&gpu0_trip_l>;
- cooling-device = <&CPU6 9 9>;
+ cooling-device = <&CPU6 (THERMAL_MAX_LIMIT-8)
+ (THERMAL_MAX_LIMIT-8)>;
};
gpu_vdd_cdev {
trip = <&gpu0_trip_l>;
- cooling-device = <&msm_gpu 1 1>;
+ cooling-device = <&msm_gpu 4 4>;
};
cx_vdd_cdev {
trip = <&gpu0_trip_l>;
@@ -1050,15 +1062,16 @@
cooling-maps {
cpu0_vdd_cdev {
trip = <&gpu1_trip_l>;
- cooling-device = <&CPU0 4 4>;
+ cooling-device = <&CPU0 2 2>;
};
cpu6_vdd_cdev {
trip = <&gpu1_trip_l>;
- cooling-device = <&CPU6 9 9>;
+ cooling-device = <&CPU6 (THERMAL_MAX_LIMIT-8)
+ (THERMAL_MAX_LIMIT-8)>;
};
gpu_vdd_cdev {
trip = <&gpu1_trip_l>;
- cooling-device = <&msm_gpu 1 1>;
+ cooling-device = <&msm_gpu 4 4>;
};
cx_vdd_cdev {
trip = <&gpu1_trip_l>;
@@ -1099,15 +1112,16 @@
cooling-maps {
cpu0_vdd_cdev {
trip = <&aoss1_trip>;
- cooling-device = <&CPU0 4 4>;
+ cooling-device = <&CPU0 2 2>;
};
cpu6_vdd_cdev {
trip = <&aoss1_trip>;
- cooling-device = <&CPU6 9 9>;
+ cooling-device = <&CPU6 (THERMAL_MAX_LIMIT-8)
+ (THERMAL_MAX_LIMIT-8)>;
};
gpu_vdd_cdev {
trip = <&aoss1_trip>;
- cooling-device = <&msm_gpu 1 1>;
+ cooling-device = <&msm_gpu 4 4>;
};
cx_vdd_cdev {
trip = <&aoss1_trip>;
@@ -1148,15 +1162,16 @@
cooling-maps {
cpu0_vdd_cdev {
trip = <&dsp_trip>;
- cooling-device = <&CPU0 4 4>;
+ cooling-device = <&CPU0 2 2>;
};
cpu6_vdd_cdev {
trip = <&dsp_trip>;
- cooling-device = <&CPU6 9 9>;
+ cooling-device = <&CPU6 (THERMAL_MAX_LIMIT-8)
+ (THERMAL_MAX_LIMIT-8)>;
};
gpu_vdd_cdev {
trip = <&dsp_trip>;
- cooling-device = <&msm_gpu 1 1>;
+ cooling-device = <&msm_gpu 4 4>;
};
cx_vdd_cdev {
trip = <&dsp_trip>;
@@ -1197,15 +1212,16 @@
cooling-maps {
cpu0_vdd_cdev {
trip = <&ddr_trip>;
- cooling-device = <&CPU0 4 4>;
+ cooling-device = <&CPU0 2 2>;
};
cpu6_vdd_cdev {
trip = <&ddr_trip>;
- cooling-device = <&CPU6 9 9>;
+ cooling-device = <&CPU6 (THERMAL_MAX_LIMIT-8)
+ (THERMAL_MAX_LIMIT-8)>;
};
gpu_vdd_cdev {
trip = <&ddr_trip>;
- cooling-device = <&msm_gpu 1 1>;
+ cooling-device = <&msm_gpu 4 4>;
};
cx_vdd_cdev {
trip = <&ddr_trip>;
@@ -1246,15 +1262,16 @@
cooling-maps {
cpu0_vdd_cdev {
trip = <&wlan_trip>;
- cooling-device = <&CPU0 4 4>;
+ cooling-device = <&CPU0 2 2>;
};
cpu6_vdd_cdev {
trip = <&wlan_trip>;
- cooling-device = <&CPU6 9 9>;
+ cooling-device = <&CPU6 (THERMAL_MAX_LIMIT-8)
+ (THERMAL_MAX_LIMIT-8)>;
};
gpu_vdd_cdev {
trip = <&wlan_trip>;
- cooling-device = <&msm_gpu 1 1>;
+ cooling-device = <&msm_gpu 4 4>;
};
cx_vdd_cdev {
trip = <&wlan_trip>;
@@ -1295,15 +1312,16 @@
cooling-maps {
cpu0_vdd_cdev {
trip = <&hvx_trip>;
- cooling-device = <&CPU0 4 4>;
+ cooling-device = <&CPU0 2 2>;
};
cpu6_vdd_cdev {
trip = <&hvx_trip>;
- cooling-device = <&CPU6 9 9>;
+ cooling-device = <&CPU6 (THERMAL_MAX_LIMIT-8)
+ (THERMAL_MAX_LIMIT-8)>;
};
gpu_vdd_cdev {
trip = <&hvx_trip>;
- cooling-device = <&msm_gpu 1 1>;
+ cooling-device = <&msm_gpu 4 4>;
};
cx_vdd_cdev {
trip = <&hvx_trip>;
@@ -1344,15 +1362,16 @@
cooling-maps {
cpu0_vdd_cdev {
trip = <&camera_trip>;
- cooling-device = <&CPU0 4 4>;
+ cooling-device = <&CPU0 2 2>;
};
cpu6_vdd_cdev {
trip = <&camera_trip>;
- cooling-device = <&CPU6 9 9>;
+ cooling-device = <&CPU6 (THERMAL_MAX_LIMIT-8)
+ (THERMAL_MAX_LIMIT-8)>;
};
gpu_vdd_cdev {
trip = <&camera_trip>;
- cooling-device = <&msm_gpu 1 1>;
+ cooling-device = <&msm_gpu 4 4>;
};
cx_vdd_cdev {
trip = <&camera_trip>;
@@ -1393,15 +1412,16 @@
cooling-maps {
cpu0_vdd_cdev {
trip = <&mmss_trip>;
- cooling-device = <&CPU0 4 4>;
+ cooling-device = <&CPU0 2 2>;
};
cpu6_vdd_cdev {
trip = <&mmss_trip>;
- cooling-device = <&CPU6 9 9>;
+ cooling-device = <&CPU6 (THERMAL_MAX_LIMIT-8)
+ (THERMAL_MAX_LIMIT-8)>;
};
gpu_vdd_cdev {
trip = <&mmss_trip>;
- cooling-device = <&msm_gpu 1 1>;
+ cooling-device = <&msm_gpu 4 4>;
};
cx_vdd_cdev {
trip = <&mmss_trip>;
@@ -1442,15 +1462,16 @@
cooling-maps {
cpu0_vdd_cdev {
trip = <&mdm_trip>;
- cooling-device = <&CPU0 4 4>;
+ cooling-device = <&CPU0 2 2>;
};
cpu6_vdd_cdev {
trip = <&mdm_trip>;
- cooling-device = <&CPU6 9 9>;
+ cooling-device = <&CPU6 (THERMAL_MAX_LIMIT-8)
+ (THERMAL_MAX_LIMIT-8)>;
};
gpu_vdd_cdev {
trip = <&mdm_trip>;
- cooling-device = <&msm_gpu 1 1>;
+ cooling-device = <&msm_gpu 4 4>;
};
cx_vdd_cdev {
trip = <&mdm_trip>;
@@ -1504,4 +1525,118 @@
};
};
};
+
+ xo-therm-cpu-step {
+ polling-delay-passive = <2000>;
+ polling-delay = <0>;
+ thermal-sensors = <&pm660_adc_tm 0x4c>;
+ thermal-governor = "step_wise";
+
+ trips {
+ gold_trip0: gold-trip0 {
+ temperature = <45000>;
+ hysteresis = <0>;
+ type = "passive";
+ };
+ silver_trip1: silver-trip1 {
+ temperature = <48000>;
+ hysteresis = <0>;
+ type = "passive";
+ };
+ };
+
+ cooling-maps {
+ skin_cpu6 {
+ trip = <&gold_trip0>;
+ cooling-device =
+ /* throttle from fmax to 1747200KHz */
+ <&CPU6 THERMAL_NO_LIMIT
+ (THERMAL_MAX_LIMIT-8)>;
+ };
+ skin_cpu7 {
+ trip = <&gold_trip0>;
+ cooling-device =
+ <&CPU7 THERMAL_NO_LIMIT
+ (THERMAL_MAX_LIMIT-8)>;
+ };
+ skin_cpu0 {
+ trip = <&silver_trip1>;
+ /* throttle from fmax to 1516800KHz */
+ cooling-device = <&CPU0 THERMAL_NO_LIMIT 2>;
+ };
+ skin_cpu1 {
+ trip = <&silver_trip1>;
+ cooling-device = <&CPU1 THERMAL_NO_LIMIT 2>;
+ };
+ skin_cpu2 {
+ trip = <&silver_trip1>;
+ cooling-device = <&CPU2 THERMAL_NO_LIMIT 2>;
+ };
+ skin_cpu3 {
+ trip = <&silver_trip1>;
+ cooling-device = <&CPU3 THERMAL_NO_LIMIT 2>;
+ };
+ skin_cpu4 {
+ trip = <&silver_trip1>;
+ cooling-device = <&CPU4 THERMAL_NO_LIMIT 2>;
+ };
+ skin_cpu5 {
+ trip = <&silver_trip1>;
+ cooling-device = <&CPU5 THERMAL_NO_LIMIT 2>;
+ };
+ };
+ };
+
+ xo-therm-mdm-step {
+ polling-delay-passive = <0>;
+ polling-delay = <0>;
+ thermal-sensors = <&pm660_adc_tm 0x4c>;
+ thermal-governor = "step_wise";
+
+ trips {
+ modem_trip0: modem-trip0 {
+ temperature = <44000>;
+ hysteresis = <4000>;
+ type = "passive";
+ };
+ modem_trip1: modem-trip1 {
+ temperature = <46000>;
+ hysteresis = <3000>;
+ type = "passive";
+ };
+ modem_trip2: modem-trip2 {
+ temperature = <48000>;
+ hysteresis = <2000>;
+ type = "passive";
+ };
+ modem_trip3: modem-trip3 {
+ temperature = <55000>;
+ hysteresis = <5000>;
+ type = "passive";
+ };
+ };
+
+ cooling-maps {
+ modem_lvl1 {
+ trip = <&modem_trip1>;
+ cooling-device = <&modem_pa 1 1>;
+ };
+ modem_lvl2 {
+ trip = <&modem_trip2>;
+ cooling-device = <&modem_pa 2 2>;
+ };
+ modem_lvl3 {
+ trip = <&modem_trip3>;
+ cooling-device = <&modem_pa 3 3>;
+ };
+ modem_proc_lvl1 {
+ trip = <&modem_trip0>;
+ cooling-device = <&modem_proc 1 1>;
+ };
+ modem_proc_lvl3 {
+ trip = <&modem_trip3>;
+ cooling-device = <&modem_proc 3 3>;
+ };
+ };
+ };
};
diff --git a/arch/arm64/boot/dts/qcom/sdm670-usb.dtsi b/arch/arm64/boot/dts/qcom/sdm670-usb.dtsi
index 6a69e29..84c7459 100644
--- a/arch/arm64/boot/dts/qcom/sdm670-usb.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm670-usb.dtsi
@@ -11,7 +11,7 @@
* GNU General Public License for more details.
*/
-#include "sdm845-usb.dtsi"
+#include "sdm845-670-usb-common.dtsi"
&soc {
/delete-node/ ssusb@a800000;
@@ -29,7 +29,7 @@
&usb0 {
/delete-property/ iommus;
/delete-property/ qcom,smmu-s1-bypass;
- extcon = <0>, <0>, <0> /* <&eud> */;
+ extcon = <0>, <0>, <&eud>, <0>, <0>;
};
&qusb_phy0 {
diff --git a/arch/arm64/boot/dts/qcom/sdm670.dtsi b/arch/arm64/boot/dts/qcom/sdm670.dtsi
index 7c0f0ef..7ee700d 100644
--- a/arch/arm64/boot/dts/qcom/sdm670.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm670.dtsi
@@ -690,6 +690,32 @@
clock-frequency = <19200000>;
};
+ qcom,memshare {
+ compatible = "qcom,memshare";
+
+ qcom,client_1 {
+ compatible = "qcom,memshare-peripheral";
+ qcom,peripheral-size = <0x0>;
+ qcom,client-id = <0>;
+ qcom,allocate-boot-time;
+ label = "modem";
+ };
+
+ qcom,client_2 {
+ compatible = "qcom,memshare-peripheral";
+ qcom,peripheral-size = <0x0>;
+ qcom,client-id = <2>;
+ label = "modem";
+ };
+
+ mem_client_3_size: qcom,client_3 {
+ compatible = "qcom,memshare-peripheral";
+ qcom,peripheral-size = <0x500000>;
+ qcom,client-id = <1>;
+ label = "modem";
+ };
+ };
+
qcom,sps {
compatible = "qcom,msm_sps_4k";
qcom,pipe-attr-ee;
@@ -721,8 +747,8 @@
qcom,ce-opp-freq = <171430000>;
qcom,request-bw-before-clk;
qcom,smmu-s1-enable;
- iommus = <&apps_smmu 0x706 0x3>,
- <&apps_smmu 0x716 0x3>;
+ iommus = <&apps_smmu 0x706 0x1>,
+ <&apps_smmu 0x716 0x1>;
};
qcom_crypto: qcrypto@1de0000 {
@@ -758,8 +784,8 @@
qcom,use-sw-ahash-algo;
qcom,use-sw-hmac-algo;
qcom,smmu-s1-enable;
- iommus = <&apps_smmu 0x704 0x3>,
- <&apps_smmu 0x714 0x3>;
+ iommus = <&apps_smmu 0x704 0x1>,
+ <&apps_smmu 0x714 0x1>;
};
qcom,qbt1000 {
@@ -780,6 +806,7 @@
qcom,disk-encrypt-pipe-pair = <2>;
qcom,support-fde;
qcom,no-clock-support;
+ qcom,fde-key-size;
qcom,appsbl-qseecom-support;
qcom,msm-bus,name = "qseecom-noc";
qcom,msm-bus,num-cases = <4>;
@@ -1007,9 +1034,14 @@
compatible = "qcom,clk-cpu-osm-sdm670";
reg = <0x17d41000 0x1400>,
<0x17d43000 0x1400>,
- <0x17d45800 0x1400>;
- reg-names = "osm_l3_base", "osm_pwrcl_base", "osm_perfcl_base";
+ <0x17d45800 0x1400>,
+ <0x784248 0x4>;
+ reg-names = "osm_l3_base", "osm_pwrcl_base", "osm_perfcl_base",
+ "cpr_rc";
+ vdd_l3_mx_ao-supply = <&pm660l_s1_level_ao>;
+ vdd_pwrcl_mx_ao-supply = <&pm660l_s1_level_ao>;
+ qcom,mx-turbo-freq = <1478400000 1689600000 3300000001>;
l3-devs = <&l3_cpu0 &l3_cpu6>;
clock-names = "xo_ao";
@@ -1106,6 +1138,11 @@
reg = <0x10 8>;
};
+ dload_type@1c {
+ compatible = "qcom,msm-imem-dload-type";
+ reg = <0x1c 0x4>;
+ };
+
restart_reason@65c {
compatible = "qcom,msm-imem-restart_reason";
reg = <0x65c 4>;
@@ -1280,52 +1317,52 @@
compatible = "qcom,mem-dump";
memory-region = <&dump_mem>;
- rpmh_dump {
+ rpmh {
qcom,dump-size = <0x2000000>;
qcom,dump-id = <0xec>;
};
- rpm_sw_dump {
+ rpm_sw {
qcom,dump-size = <0x28000>;
qcom,dump-id = <0xea>;
};
- pmic_dump {
+ pmic {
qcom,dump-size = <0x10000>;
qcom,dump-id = <0xe4>;
};
- tmc_etf_dump {
+ tmc_etf {
qcom,dump-size = <0x10000>;
qcom,dump-id = <0xf0>;
};
- tmc_etf_swao_dump {
+ tmc_etfswao {
qcom,dump-size = <0x8400>;
qcom,dump-id = <0xf1>;
};
- tmc_etr_reg_dump {
+ tmc_etr_reg {
qcom,dump-size = <0x1000>;
qcom,dump-id = <0x100>;
};
- tmc_etf_reg_dump {
+ tmc_etf_reg {
qcom,dump-size = <0x1000>;
qcom,dump-id = <0x101>;
};
- tmc_etf_swao_reg_dump {
+ etfswao_reg {
qcom,dump-size = <0x1000>;
qcom,dump-id = <0x102>;
};
- misc_data_dump {
+ misc_data {
qcom,dump-size = <0x1000>;
qcom,dump-id = <0xe8>;
};
- power_regs_data_dump {
+ power_regs {
qcom,dump-size = <0x100000>;
qcom,dump-id = <0xed>;
};
@@ -1617,6 +1654,10 @@
qcom,dump-size = <0x80000>;
};
+ qcom,llcc-perfmon {
+ compatible = "qcom,llcc-perfmon";
+ };
+
qcom,llcc-erp {
compatible = "qcom,llcc-erp";
interrupt-names = "ecc_irq";
@@ -2049,6 +2090,8 @@
vdd_cx-voltage = <RPMH_REGULATOR_LEVEL_TURBO>;
vdd_mx-supply = <&pm660l_s1_level>;
vdd_mx-uV = <RPMH_REGULATOR_LEVEL_TURBO>;
+ vdd_mss-supply = <&pm660_s5_level>;
+ vdd_mss-uV = <RPMH_REGULATOR_LEVEL_TURBO>;
qcom,firmware-name = "modem";
qcom,pil-self-auth;
qcom,sysmon-id = <0>;
@@ -2177,6 +2220,8 @@
qcom,clk-rates = <400000 20000000 25000000 50000000 100000000
192000000 384000000>;
+ qcom,bus-aggr-clk-rates = <50000000 50000000 50000000 50000000
+ 100000000 200000000 200000000>;
qcom,bus-speed-mode = "HS400_1p8v", "HS200_1p8v", "DDR_1p8v";
qcom,devfreq,freq-table = <50000000 200000000>;
@@ -2198,10 +2243,10 @@
<1 782 100000 100000>,
/* 50 MB/s */
<150 512 130718 200000>,
- <1 782 133320 133320>,
+ <1 782 100000 100000>,
/* 100 MB/s */
<150 512 130718 200000>,
- <1 782 150000 150000>,
+ <1 782 130000 130000>,
/* 200 MB/s */
<150 512 261438 400000>,
<1 782 300000 300000>,
@@ -2234,7 +2279,6 @@
qcom,nonremovable;
- qcom,scaling-lower-bus-speed-mode = "DDR52";
status = "disabled";
};
@@ -2273,10 +2317,10 @@
<1 608 100000 100000>,
/* 50 MB/s */
<81 512 130718 200000>,
- <1 608 133320 133320>,
+ <1 608 100000 100000>,
/* 100 MB/s */
<81 512 261438 200000>,
- <1 608 150000 150000>,
+ <1 608 130000 130000>,
/* 200 MB/s */
<81 512 261438 400000>,
<1 608 300000 300000>,
@@ -2311,6 +2355,7 @@
qcom,msm_fastrpc {
compatible = "qcom,msm-fastrpc-compute";
+ qcom,adsp-remoteheap-vmid = <37>;
qcom,msm_fastrpc_compute_cb1 {
compatible = "qcom,msm-fastrpc-compute-cb";
@@ -2459,6 +2504,8 @@
qcom,count-unit = <0x10000>;
qcom,hw-timer-hz = <19200000>;
qcom,target-dev = <&cpubw>;
+ qcom,byte-mid-mask = <0xe000>;
+ qcom,byte-mid-match = <0xe000>;
};
memlat_cpu0: qcom,memlat-cpu0 {
@@ -2500,6 +2547,16 @@
< MHZ_TO_MBPS(1804, 4) >; /* 6881 MB/s */
};
+ snoc_cnoc_keepalive: qcom,snoc_cnoc_keepalive {
+ compatible = "qcom,devbw";
+ governor = "powersave";
+ qcom,src-dst-ports = <139 627>;
+ qcom,active-only;
+ status = "ok";
+ qcom,bw-tbl =
+ < 1 >;
+ };
+
devfreq_memlat_0: qcom,cpu0-memlat-mon {
compatible = "qcom,arm-memlat-mon";
qcom,cpulist = <&CPU0 &CPU1 &CPU2 &CPU3 &CPU4 &CPU5>;
@@ -2748,6 +2805,8 @@
&mdss_core_gdsc {
status = "ok";
+ proxy-supply = <&mdss_core_gdsc>;
+ qcom,proxy-consumer-enable;
};
&gpu_cx_gdsc {
diff --git a/arch/arm64/boot/dts/qcom/sdm845-usb.dtsi b/arch/arm64/boot/dts/qcom/sdm845-670-usb-common.dtsi
similarity index 97%
rename from arch/arm64/boot/dts/qcom/sdm845-usb.dtsi
rename to arch/arm64/boot/dts/qcom/sdm845-670-usb-common.dtsi
index cb26b61..f6fa948 100644
--- a/arch/arm64/boot/dts/qcom/sdm845-usb.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm845-670-usb-common.dtsi
@@ -36,7 +36,7 @@
qcom,dwc-usb3-msm-tx-fifo-size = <21288>;
qcom,num-gsi-evt-buffs = <0x3>;
qcom,use-pdc-interrupts;
- extcon = <0>, <0>, <&eud>;
+ extcon = <0>, <0>, <&eud>, <0>, <0>;
clocks = <&clock_gcc GCC_USB30_PRIM_MASTER_CLK>,
<&clock_gcc GCC_CFG_NOC_USB3_PRIM_AXI_CLK>,
@@ -136,7 +136,9 @@
0x230 /* QUSB2PHY_INTR_CTRL */
0x0a8 /* QUSB2PHY_PLL_CORE_INPUT_OVERRIDE */
0x254 /* QUSB2PHY_TEST1 */
- 0x198>; /* PLL_BIAS_CONTROL_2 */
+ 0x198 /* PLL_BIAS_CONTROL_2 */
+ 0x228 /* QUSB2PHY_SQ_CTRL1 */
+ 0x22c>; /* QUSB2PHY_SQ_CTRL2 */
qcom,qusb-phy-init-seq =
/* <value reg_offset> */
@@ -224,6 +226,8 @@
0x14fc 0x80 0x00 /* RXA_RX_OFFSET_ADAPTOR_CNTRL2 */
0x1504 0x03 0x00 /* RXA_SIGDET_CNTRL */
0x150c 0x16 0x00 /* RXA_SIGDET_DEGLITCH_CNTRL */
+ 0x1564 0x05 0x00 /* RXA_RX_MODE_00 */
+ 0x14c0 0x03 0x00 /* RXA_VGA_CAL_CNTRL2 */
0x1830 0x0b 0x00 /* RXB_UCDR_FASTLOCK_FO_GAIN */
0x18d4 0x0f 0x00 /* RXB_RX_EQU_ADAPTOR_CNTRL2 */
0x18d8 0x4e 0x00 /* RXB_RX_EQU_ADAPTOR_CNTRL3 */
@@ -232,6 +236,8 @@
0x18fc 0x80 0x00 /* RXB_RX_OFFSET_ADAPTOR_CNTRL2 */
0x1904 0x03 0x00 /* RXB_SIGDET_CNTRL */
0x190c 0x16 0x00 /* RXB_SIGDET_DEGLITCH_CNTRL */
+ 0x1964 0x05 0x00 /* RXB_RX_MODE_00 */
+ 0x18c0 0x03 0x00 /* RXB_VGA_CAL_CNTRL2 */
0x1260 0x10 0x00 /* TXA_HIGHZ_DRVR_EN */
0x12a4 0x12 0x00 /* TXA_RCV_DETECT_LVL_2 */
0x128c 0x16 0x00 /* TXA_LANE_MODE_1 */
@@ -272,6 +278,8 @@
0x1c48 0x0d 0x00 /* PCS_TXDEEMPH_M3P5DB_V4 */
0x1c4c 0x15 0x00 /* PCS_TXDEEMPH_M6DB_LS */
0x1c50 0x0d 0x00 /* PCS_TXDEEMPH_M3P5DB_LS */
+ 0x1e0c 0x21 0x00 /* PCS_REFGEN_REQ_CONFIG1 */
+ 0x1e10 0x60 0x00 /* PCS_REFGEN_REQ_CONFIG2 */
0x1c5c 0x02 0x00 /* PCS_RATE_SLEW_CNTRL */
0x1ca0 0x04 0x00 /* PCS_PWRUP_RESET_DLY_TIME_AUXCLK */
0x1c8c 0x44 0x00 /* PCS_TSYNC_RSYNC_TIME */
@@ -282,6 +290,7 @@
0x1cb8 0x75 0x00 /* PCS_RXEQTRAINING_WAIT_TIME */
0x1cb0 0x86 0x00 /* PCS_LFPS_TX_ECSTART_EQTLOCK */
0x1cbc 0x13 0x00 /* PCS_RXEQTRAINING_RUN_TIME */
+ 0x1cac 0x04 0x00 /* PCS_LFPS_DET_HIGH_COUNT_VAL */
0xffffffff 0xffffffff 0x00>;
qcom,qmp-phy-reg-offset =
@@ -414,7 +423,9 @@
0x230 /* QUSB2PHY_INTR_CTRL */
0x0a8 /* QUSB2PHY_PLL_CORE_INPUT_OVERRIDE */
0x254 /* QUSB2PHY_TEST1 */
- 0x198>; /* PLL_BIAS_CONTROL_2 */
+ 0x198 /* PLL_BIAS_CONTROL_2 */
+ 0x228 /* QUSB2PHY_SQ_CTRL1 */
+ 0x22c>; /* QUSB2PHY_SQ_CTRL2 */
qcom,qusb-phy-init-seq =
/* <value reg_offset> */
diff --git a/arch/arm64/boot/dts/qcom/sdm845-camera-sensor-cdp.dtsi b/arch/arm64/boot/dts/qcom/sdm845-camera-sensor-cdp.dtsi
index d8a6dc3..31cfdd6 100644
--- a/arch/arm64/boot/dts/qcom/sdm845-camera-sensor-cdp.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm845-camera-sensor-cdp.dtsi
@@ -27,7 +27,7 @@
reg = <0x01 0x00>;
compatible = "qcom,camera-flash";
flash-source = <&pmi8998_flash0 &pmi8998_flash1>;
- torch-source = <&pmi8998_torch0 &pmi8998_torch1 >;
+ torch-source = <&pmi8998_torch0 &pmi8998_torch1>;
switch-source = <&pmi8998_switch0>;
status = "ok";
};
@@ -42,6 +42,16 @@
status = "ok";
};
+ led_flash_iris: qcom,camera-flash@3 {
+ cell-index = <3>;
+ reg = <0x03 0x00>;
+ compatible = "qcom,camera-flash";
+ flash-source = <&pmi8998_flash2>;
+ torch-source = <&pmi8998_torch2>;
+ switch-source = <&pmi8998_switch2>;
+ status = "ok";
+ };
+
actuator_regulator: gpio-regulator@0 {
compatible = "regulator-fixed";
reg = <0x00 0x00>;
@@ -412,6 +422,7 @@
sensor-position-roll = <270>;
sensor-position-pitch = <0>;
sensor-position-yaw = <0>;
+ led-flash-src = <&led_flash_iris>;
cam_vio-supply = <&pm8998_lvs1>;
cam_vana-supply = <&pmi8998_bob>;
cam_vdig-supply = <&camera_ldo>;
diff --git a/arch/arm64/boot/dts/qcom/sdm845-camera-sensor-mtp.dtsi b/arch/arm64/boot/dts/qcom/sdm845-camera-sensor-mtp.dtsi
index 952ba29..d7f25977 100644
--- a/arch/arm64/boot/dts/qcom/sdm845-camera-sensor-mtp.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm845-camera-sensor-mtp.dtsi
@@ -42,6 +42,16 @@
status = "ok";
};
+ led_flash_iris: qcom,camera-flash@3 {
+ cell-index = <3>;
+ reg = <0x03 0x00>;
+ compatible = "qcom,camera-flash";
+ flash-source = <&pmi8998_flash2>;
+ torch-source = <&pmi8998_torch2>;
+ switch-source = <&pmi8998_switch2>;
+ status = "ok";
+ };
+
actuator_regulator: gpio-regulator@0 {
compatible = "regulator-fixed";
reg = <0x00 0x00>;
@@ -403,6 +413,7 @@
clock-cntl-level = "turbo";
clock-rates = <24000000>;
};
+
qcom,cam-sensor@3 {
cell-index = <3>;
compatible = "qcom,cam-sensor";
@@ -411,6 +422,7 @@
sensor-position-roll = <270>;
sensor-position-pitch = <0>;
sensor-position-yaw = <0>;
+ led-flash-src = <&led_flash_iris>;
cam_vio-supply = <&pm8998_lvs1>;
cam_vana-supply = <&pmi8998_bob>;
cam_vdig-supply = <&camera_ldo>;
@@ -445,5 +457,4 @@
clock-cntl-level = "turbo";
clock-rates = <24000000>;
};
-
};
diff --git a/arch/arm64/boot/dts/qcom/sdm845-camera.dtsi b/arch/arm64/boot/dts/qcom/sdm845-camera.dtsi
index 17bcf0955..35a7774 100644
--- a/arch/arm64/boot/dts/qcom/sdm845-camera.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm845-camera.dtsi
@@ -434,13 +434,14 @@
"csid0", "csid1", "csid2",
"ife0", "ife1", "ife2", "ipe0",
"ipe1", "cam-cdm-intf0", "cpas-cdm0", "bps0",
- "icp0", "jpeg-dma0", "jpeg-enc0", "fd0";
+ "icp0", "jpeg-dma0", "jpeg-enc0", "fd0", "lrmecpas";
client-axi-port-names =
"cam_hf_1", "cam_hf_2", "cam_hf_2", "cam_sf_1",
"cam_hf_1", "cam_hf_2", "cam_hf_2",
"cam_hf_1", "cam_hf_2", "cam_hf_2", "cam_sf_1",
"cam_sf_1", "cam_sf_1", "cam_sf_1", "cam_sf_1",
- "cam_sf_1", "cam_sf_1", "cam_sf_1", "cam_sf_1";
+ "cam_sf_1", "cam_sf_1", "cam_sf_1", "cam_sf_1",
+ "cam_sf_1";
client-bus-camnoc-based;
qcom,axi-port-list {
qcom,axi-port1 {
@@ -529,7 +530,8 @@
cdm-client-names = "vfe",
"jpegdma",
"jpegenc",
- "fd";
+ "fd",
+ "lrmecdm";
status = "ok";
};
@@ -775,7 +777,7 @@
clock-rates =
<0 0 0 0 0 0 384000000 0 0 0 404000000 0>,
<0 0 0 0 0 0 538000000 0 0 0 600000000 0>;
- clock-cntl-level = "svs";
+ clock-cntl-level = "svs", "turbo";
src-clock-name = "ife_csid_clk_src";
status = "ok";
};
diff --git a/arch/arm64/boot/dts/qcom/sdm845-coresight.dtsi b/arch/arm64/boot/dts/qcom/sdm845-coresight.dtsi
index a61d96e..fcfab09 100644
--- a/arch/arm64/boot/dts/qcom/sdm845-coresight.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm845-coresight.dtsi
@@ -602,6 +602,7 @@
<13 32>;
qcom,cmb-elem-size = <3 64>,
<7 64>,
+ <9 64>,
<13 64>;
clocks = <&clock_aop QDSS_CLK>;
@@ -674,6 +675,15 @@
};
port@7 {
+ reg = <9>;
+ tpda_in_tpdm_prng: endpoint {
+ slave-mode;
+ remote-endpoint =
+ <&tpdm_prng_out_tpda>;
+ };
+ };
+
+ port@8 {
reg = <10>;
tpda_in_tpdm_qm: endpoint {
slave-mode;
@@ -682,7 +692,7 @@
};
};
- port@8 {
+ port@9 {
reg = <11>;
tpda_in_tpdm_north: endpoint {
slave-mode;
@@ -691,7 +701,7 @@
};
};
- port@9 {
+ port@10 {
reg = <13>;
tpda_in_tpdm_pimem: endpoint {
slave-mode;
@@ -1329,6 +1339,24 @@
};
};
+ tpdm_prng: tpdm@684c000 {
+ compatible = "arm,primecell";
+ arm,primecell-periphid = <0x0003b968>;
+ reg = <0x684c000 0x1000>;
+ reg-names = "tpdm-base";
+
+ coresight-name = "coresight-tpdm-prng";
+
+ clocks = <&clock_aop QDSS_CLK>;
+ clock-names = "apb_pclk";
+
+ port{
+ tpdm_prng_out_tpda: endpoint {
+ remote-endpoint = <&tpda_in_tpdm_prng>;
+ };
+ };
+ };
+
tpdm_vsense: tpdm@6840000 {
compatible = "arm,primecell";
arm,primecell-periphid = <0x0003b968>;
@@ -1556,7 +1584,7 @@
reg = <0x69e1000 0x1000>;
reg-names = "cti-base";
- coresight-name = "coresight-cti-DDR_DL_0_CTI";
+ coresight-name = "coresight-cti-ddr_dl_0_cti";
clocks = <&clock_aop QDSS_CLK>;
clock-names = "apb_pclk";
@@ -1568,7 +1596,7 @@
reg = <0x69e4000 0x1000>;
reg-names = "cti-base";
- coresight-name = "coresight-cti-DDR_DL_1_CTI0";
+ coresight-name = "coresight-cti-ddr_dl_1_cti0";
clocks = <&clock_aop QDSS_CLK>;
clock-names = "apb_pclk";
@@ -1580,7 +1608,7 @@
reg = <0x69e5000 0x1000>;
reg-names = "cti-base";
- coresight-name = "coresight-cti-DDR_DL_1_CTI1";
+ coresight-name = "coresight-cti-ddr_dl_1_cti1";
clocks = <&clock_aop QDSS_CLK>;
clock-names = "apb_pclk";
@@ -1592,7 +1620,7 @@
reg = <0x6c09000 0x1000>;
reg-names = "cti-base";
- coresight-name = "coresight-cti-DLMM_CTI0";
+ coresight-name = "coresight-cti-dlmm_cti0";
clocks = <&clock_aop QDSS_CLK>;
clock-names = "apb_pclk";
@@ -1604,7 +1632,7 @@
reg = <0x6c0a000 0x1000>;
reg-names = "cti-base";
- coresight-name = "coresight-cti-DLMM_CTI1";
+ coresight-name = "coresight-cti-dlmm_cti1";
clocks = <&clock_aop QDSS_CLK>;
clock-names = "apb_pclk";
@@ -1616,7 +1644,7 @@
reg = <0x78e0000 0x1000>;
reg-names = "cti-base";
- coresight-name = "coresight-cti-APSS_CTI0";
+ coresight-name = "coresight-cti-apss_cti0";
clocks = <&clock_aop QDSS_CLK>;
clock-names = "apb_pclk";
@@ -1628,7 +1656,7 @@
reg = <0x78f0000 0x1000>;
reg-names = "cti-base";
- coresight-name = "coresight-cti-APSS_CTI1";
+ coresight-name = "coresight-cti-apss_cti1";
clocks = <&clock_aop QDSS_CLK>;
clock-names = "apb_pclk";
@@ -1640,7 +1668,7 @@
reg = <0x7900000 0x1000>;
reg-names = "cti-base";
- coresight-name = "coresight-cti-APSS_CTI2";
+ coresight-name = "coresight-cti-apss_cti2";
clocks = <&clock_aop QDSS_CLK>;
clock-names = "apb_pclk";
@@ -1968,7 +1996,7 @@
reg = <0x6b04000 0x1000>;
reg-names = "cti-base";
- coresight-name = "coresight-cti-SWAO_CTI0";
+ coresight-name = "coresight-cti-swao_cti0";
clocks = <&clock_aop QDSS_CLK>;
clock-names = "apb_pclk";
diff --git a/arch/arm64/boot/dts/qcom/sdm845-interposer-pm660.dtsi b/arch/arm64/boot/dts/qcom/sdm845-interposer-pm660.dtsi
index f38f5f8..c16e1d8 100644
--- a/arch/arm64/boot/dts/qcom/sdm845-interposer-pm660.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm845-interposer-pm660.dtsi
@@ -66,6 +66,12 @@
ibb-supply = <&lcdb_ncp_vreg>;
};
+&dsi_dual_nt36850_truly_cmd_display {
+ vddio-supply = <&pm660_l11>;
+ lab-supply = <&lcdb_ldo_vreg>;
+ ibb-supply = <&lcdb_ncp_vreg>;
+};
+
&sde_dp {
status = "disabled";
/delete-property/ vdda-1p2-supply;
@@ -166,6 +172,12 @@
/delete-property/ switch-source;
};
+&led_flash_iris {
+ /delete-property/ flash-source;
+ /delete-property/ torch-source;
+ /delete-property/ switch-source;
+};
+
&actuator_regulator {
/delete-property/ vin-supply;
};
@@ -236,6 +248,11 @@
/delete-property/ vdd_gfx-supply;
};
+&clock_cpucc {
+ /delete-property/ vdd_l3_mx_ao-supply;
+ /delete-property/ vdd_pwrcl_mx_ao-supply;
+};
+
&pil_modem {
/delete-property/ vdd_cx-supply;
/delete-property/ vdd_mx-supply;
diff --git a/arch/arm64/boot/dts/qcom/sdm845-mtp.dtsi b/arch/arm64/boot/dts/qcom/sdm845-mtp.dtsi
index d01149b..825f121 100644
--- a/arch/arm64/boot/dts/qcom/sdm845-mtp.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm845-mtp.dtsi
@@ -263,6 +263,12 @@
pinctrl-1 = <&flash_led3_front_dis>;
};
+&pmi8998_switch2 {
+ pinctrl-names = "led_enable", "led_disable";
+ pinctrl-0 = <&flash_led3_iris_en>;
+ pinctrl-1 = <&flash_led3_iris_dis>;
+};
+
&vendor {
mtp_batterydata: qcom,battery-data {
qcom,batt-id-range-pct = <15>;
diff --git a/arch/arm64/boot/dts/qcom/sdm845-pinctrl.dtsi b/arch/arm64/boot/dts/qcom/sdm845-pinctrl.dtsi
index 5035c9f..244ac1d 100644
--- a/arch/arm64/boot/dts/qcom/sdm845-pinctrl.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm845-pinctrl.dtsi
@@ -97,6 +97,37 @@
};
};
+ flash_led3_iris {
+ flash_led3_iris_en: flash_led3_iris_en {
+ mux {
+ pins = "gpio23";
+ function = "gpio";
+ };
+
+ config {
+ pins = "gpio23";
+ drive_strength = <2>;
+ output-high;
+ bias-disable;
+ };
+ };
+
+ flash_led3_iris_dis: flash_led3_iris_dis {
+ mux {
+ pins = "gpio23";
+ function = "gpio";
+ };
+
+ config {
+ pins = "gpio23";
+ drive_strength = <2>;
+ output-low;
+ bias-disable;
+ };
+ };
+ };
+
+
wcd9xxx_intr {
wcd_intr_default: wcd_intr_default{
mux {
diff --git a/arch/arm64/boot/dts/qcom/sdm845-qvr.dtsi b/arch/arm64/boot/dts/qcom/sdm845-qvr.dtsi
index 54d25e1..00f0650 100644
--- a/arch/arm64/boot/dts/qcom/sdm845-qvr.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm845-qvr.dtsi
@@ -159,3 +159,7 @@
status = "ok";
};
+
+&wil6210 {
+ status = "ok";
+};
diff --git a/arch/arm64/boot/dts/qcom/sdm845-sde-display.dtsi b/arch/arm64/boot/dts/qcom/sdm845-sde-display.dtsi
index 1e8c943..d12a954 100644
--- a/arch/arm64/boot/dts/qcom/sdm845-sde-display.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm845-sde-display.dtsi
@@ -27,6 +27,7 @@
#include "dsi-panel-s6e3ha3-amoled-dualmipi-wqhd-cmd.dtsi"
#include "dsi-panel-nt35597-dualmipi-wqxga-video.dtsi"
#include "dsi-panel-nt35597-dualmipi-wqxga-cmd.dtsi"
+#include "dsi-panel-nt36850-truly-dualmipi-wqhd-cmd.dtsi"
#include <dt-bindings/clock/mdss-10nm-pll-clk.h>
&soc {
@@ -451,6 +452,30 @@
ibb-supply = <&ibb_regulator>;
};
+ dsi_dual_nt36850_truly_cmd_display: qcom,dsi-display@16 {
+ compatible = "qcom,dsi-display";
+ label = "dsi_dual_nt36850_truly_cmd_display";
+ qcom,display-type = "primary";
+
+ qcom,dsi-ctrl = <&mdss_dsi0 &mdss_dsi1>;
+ qcom,dsi-phy = <&mdss_dsi_phy0 &mdss_dsi_phy1>;
+ clocks = <&mdss_dsi0_pll BYTECLK_MUX_0_CLK>,
+ <&mdss_dsi0_pll PCLK_MUX_0_CLK>;
+ clock-names = "src_byte_clk", "src_pixel_clk";
+
+ pinctrl-names = "panel_active", "panel_suspend";
+ pinctrl-0 = <&sde_dsi_active &sde_te_active>;
+ pinctrl-1 = <&sde_dsi_suspend &sde_te_suspend>;
+ qcom,platform-te-gpio = <&tlmm 10 0>;
+ qcom,platform-reset-gpio = <&tlmm 6 0>;
+ qcom,panel-mode-gpio = <&tlmm 52 0>;
+
+ qcom,dsi-panel = <&dsi_dual_nt36850_truly_cmd>;
+ vddio-supply = <&pm8998_l14>;
+ lab-supply = <&lab_regulator>;
+ ibb-supply = <&ibb_regulator>;
+ };
+
sde_wb: qcom,wb-display@0 {
compatible = "qcom,wb-display";
cell-index = <0>;
@@ -490,6 +515,13 @@
qcom,mdss-dsi-pan-enable-dynamic-fps;
qcom,mdss-dsi-pan-fps-update =
"dfps_immediate_porch_mode_vfp";
+ qcom,esd-check-enabled;
+ qcom,mdss-dsi-panel-status-check-mode = "reg_read";
+ qcom,mdss-dsi-panel-status-command = [06 01 00 01 00 00 01 0a];
+ qcom,mdss-dsi-panel-status-command-state = "dsi_hs_mode";
+ qcom,mdss-dsi-panel-status-value = <0x9c>;
+ qcom,mdss-dsi-panel-on-check-value = <0x9c>;
+ qcom,mdss-dsi-panel-status-read-length = <1>;
qcom,mdss-dsi-display-timings {
timing@0{
qcom,mdss-dsi-panel-phy-timings = [00 1c 07 07 23 21 07
@@ -553,6 +585,13 @@
qcom,mdss-dsi-pan-enable-dynamic-fps;
qcom,mdss-dsi-pan-fps-update =
"dfps_immediate_porch_mode_vfp";
+ qcom,esd-check-enabled;
+ qcom,mdss-dsi-panel-status-check-mode = "reg_read";
+ qcom,mdss-dsi-panel-status-command = [06 01 00 01 00 00 01 0a];
+ qcom,mdss-dsi-panel-status-command-state = "dsi_hs_mode";
+ qcom,mdss-dsi-panel-status-value = <0x9c>;
+ qcom,mdss-dsi-panel-on-check-value = <0x9c>;
+ qcom,mdss-dsi-panel-status-read-length = <1>;
qcom,mdss-dsi-display-timings {
timing@0{
qcom,mdss-dsi-panel-phy-timings = [00 15 05 05 20 1f 05
@@ -774,3 +813,17 @@
};
};
};
+
+&dsi_dual_nt36850_truly_cmd {
+ qcom,mdss-dsi-t-clk-post = <0x0E>;
+ qcom,mdss-dsi-t-clk-pre = <0x30>;
+ qcom,mdss-dsi-display-timings {
+ timing@0 {
+ qcom,mdss-dsi-panel-phy-timings = [00 1f 08 08 24 23 08
+ 08 05 03 04 00];
+ qcom,display-topology = <2 0 2>,
+ <1 0 2>;
+ qcom,default-topology-index = <0>;
+ };
+ };
+};
diff --git a/arch/arm64/boot/dts/qcom/sdm845-sde.dtsi b/arch/arm64/boot/dts/qcom/sdm845-sde.dtsi
index 23ed2bc..4194e67 100644
--- a/arch/arm64/boot/dts/qcom/sdm845-sde.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm845-sde.dtsi
@@ -134,6 +134,7 @@
qcom,sde-mixer-blendstages = <0xb>;
qcom,sde-highest-bank-bit = <0x2>;
qcom,sde-ubwc-version = <0x200>;
+ qcom,sde-smart-panel-align-mode = <0xc>;
qcom,sde-panic-per-pipe;
qcom,sde-has-cdp;
qcom,sde-has-src-split;
@@ -578,7 +579,10 @@
vdda-1p2-supply = <&pm8998_l26>;
vdda-0p9-supply = <&pm8998_l1>;
- reg = <0xae90000 0xa84>,
+ reg = <0xae90000 0x0dc>,
+ <0xae90200 0x0c0>,
+ <0xae90400 0x508>,
+ <0xae90a00 0x094>,
<0x88eaa00 0x200>,
<0x88ea200 0x200>,
<0x88ea600 0x200>,
@@ -587,7 +591,9 @@
<0x88ea030 0x10>,
<0x88e8000 0x20>,
<0x0aee1000 0x034>;
- reg-names = "dp_ctrl", "dp_phy", "dp_ln_tx0", "dp_ln_tx1",
+ /* dp_ctrl: dp_ahb, dp_aux, dp_link, dp_p0 */
+ reg-names = "dp_ahb", "dp_aux", "dp_link",
+ "dp_p0", "dp_phy", "dp_ln_tx0", "dp_ln_tx1",
"dp_mmss_cc", "qfprom_physical", "dp_pll",
"usb3_dp_com", "hdcp_physical";
diff --git a/arch/arm64/boot/dts/qcom/sdm845-v2-camera.dtsi b/arch/arm64/boot/dts/qcom/sdm845-v2-camera.dtsi
index d867129..d2ee9eb 100644
--- a/arch/arm64/boot/dts/qcom/sdm845-v2-camera.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm845-v2-camera.dtsi
@@ -157,6 +157,33 @@
compatible = "qcom,msm-cam-smmu";
status = "ok";
+ msm_cam_smmu_lrme {
+ compatible = "qcom,msm-cam-smmu-cb";
+ iommus = <&apps_smmu 0x1038 0x0>,
+ <&apps_smmu 0x1058 0x0>,
+ <&apps_smmu 0x1039 0x0>,
+ <&apps_smmu 0x1059 0x0>;
+ label = "lrme";
+ lrme_iova_mem_map: iova-mem-map {
+ iova-mem-region-shared {
+ /* Shared region is 100MB long */
+ iova-region-name = "shared";
+ iova-region-start = <0x7400000>;
+ iova-region-len = <0x6400000>;
+ iova-region-id = <0x1>;
+ status = "ok";
+ };
+ /* IO region is approximately 3.3 GB */
+ iova-mem-region-io {
+ iova-region-name = "io";
+ iova-region-start = <0xd800000>;
+ iova-region-len = <0xd2800000>;
+ iova-region-id = <0x3>;
+ status = "ok";
+ };
+ };
+ };
+
msm_cam_smmu_ife {
compatible = "qcom,msm-cam-smmu-cb";
iommus = <&apps_smmu 0x808 0x0>,
@@ -329,13 +356,14 @@
"csid0", "csid1", "csid2",
"ife0", "ife1", "ife2", "ipe0",
"ipe1", "cam-cdm-intf0", "cpas-cdm0", "bps0",
- "icp0", "jpeg-dma0", "jpeg-enc0", "fd0";
+ "icp0", "jpeg-dma0", "jpeg-enc0", "fd0", "lrmecpas0";
client-axi-port-names =
"cam_hf_1", "cam_hf_2", "cam_hf_2", "cam_hf_2",
"cam_sf_1", "cam_hf_1", "cam_hf_2", "cam_hf_2",
"cam_hf_1", "cam_hf_2", "cam_hf_2", "cam_sf_1",
"cam_sf_1", "cam_sf_1", "cam_sf_1", "cam_sf_1",
- "cam_sf_1", "cam_sf_1", "cam_sf_1", "cam_sf_1";
+ "cam_sf_1", "cam_sf_1", "cam_sf_1", "cam_sf_1",
+ "cam_sf_1";
client-bus-camnoc-based;
qcom,axi-port-list {
qcom,axi-port1 {
@@ -415,4 +443,44 @@
};
};
};
+
+ qcom,cam-lrme {
+ compatible = "qcom,cam-lrme";
+ arch-compat = "lrme";
+ status = "ok";
+ };
+
+ cam_lrme: qcom,lrme@ac6b000 {
+ cell-index = <0>;
+ compatible = "qcom,lrme";
+ reg-names = "lrme";
+ reg = <0xac6b000 0xa00>;
+ reg-cam-base = <0x6b000>;
+ interrupt-names = "lrme";
+ interrupts = <0 476 0>;
+ regulator-names = "camss";
+ camss-supply = <&titan_top_gdsc>;
+ clock-names = "camera_ahb",
+ "camera_axi",
+ "soc_ahb_clk",
+ "cpas_ahb_clk",
+ "camnoc_axi_clk",
+ "lrme_clk_src",
+ "lrme_clk";
+ clocks = <&clock_gcc GCC_CAMERA_AHB_CLK>,
+ <&clock_gcc GCC_CAMERA_AXI_CLK>,
+ <&clock_camcc CAM_CC_SOC_AHB_CLK>,
+ <&clock_camcc CAM_CC_CPAS_AHB_CLK>,
+ <&clock_camcc CAM_CC_CAMNOC_AXI_CLK>,
+ <&clock_camcc CAM_CC_LRME_CLK_SRC>,
+ <&clock_camcc CAM_CC_LRME_CLK>;
+ clock-rates = <0 0 0 0 0 200000000 200000000>,
+ <0 0 0 0 0 269000000 269000000>,
+ <0 0 0 0 0 320000000 320000000>,
+ <0 0 0 0 0 400000000 400000000>;
+
+ clock-cntl-level = "lowsvs", "svs", "svs_l1", "turbo";
+ src-clock-name = "lrme_clk_src";
+ status = "ok";
+ };
};
diff --git a/arch/arm64/boot/dts/qcom/sdm845-qvr-overlay.dts b/arch/arm64/boot/dts/qcom/sdm845-v2-qvr-overlay.dts
similarity index 95%
rename from arch/arm64/boot/dts/qcom/sdm845-qvr-overlay.dts
rename to arch/arm64/boot/dts/qcom/sdm845-v2-qvr-overlay.dts
index 58f5782..e1ec364 100644
--- a/arch/arm64/boot/dts/qcom/sdm845-qvr-overlay.dts
+++ b/arch/arm64/boot/dts/qcom/sdm845-v2-qvr-overlay.dts
@@ -25,7 +25,7 @@
#include "sdm845-camera-sensor-qvr.dtsi"
/ {
- model = "Qualcomm Technologies, Inc. SDM845 v2 QVR";
+ model = "Qualcomm Technologies, Inc. SDM845 V2 QVR";
compatible = "qcom,sdm845-qvr", "qcom,sdm845", "qcom,qvr";
qcom,msm-id = <321 0x20000>;
qcom,board-id = <0x01000B 0x20>;
diff --git a/arch/arm64/boot/dts/qcom/sdm845-qvr.dts b/arch/arm64/boot/dts/qcom/sdm845-v2-qvr.dts
similarity index 92%
rename from arch/arm64/boot/dts/qcom/sdm845-qvr.dts
rename to arch/arm64/boot/dts/qcom/sdm845-v2-qvr.dts
index 5513c92..0a56c79 100644
--- a/arch/arm64/boot/dts/qcom/sdm845-qvr.dts
+++ b/arch/arm64/boot/dts/qcom/sdm845-v2-qvr.dts
@@ -18,7 +18,7 @@
#include "sdm845-camera-sensor-qvr.dtsi"
/ {
- model = "Qualcomm Technologies, Inc. SDM845 QVR";
+ model = "Qualcomm Technologies, Inc. SDM845 V2 QVR";
compatible = "qcom,sdm845-qvr", "qcom,sdm845", "qcom,qvr";
qcom,board-id = <0x01000B 0x20>;
};
diff --git a/arch/arm64/boot/dts/qcom/sdm845-v2.dtsi b/arch/arm64/boot/dts/qcom/sdm845-v2.dtsi
index 5c2a10c..1fcf893 100644
--- a/arch/arm64/boot/dts/qcom/sdm845-v2.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm845-v2.dtsi
@@ -81,6 +81,12 @@
&clock_cpucc {
compatible = "qcom,clk-cpu-osm-v2";
+ reg = <0x17d41000 0x1400>,
+ <0x17d43000 0x1400>,
+ <0x17d45800 0x1400>,
+ <0x78425c 0x4>;
+ reg-names = "osm_l3_base", "osm_pwrcl_base", "osm_perfcl_base",
+ "cpr_rc";
};
&pcie1 {
@@ -588,7 +594,7 @@
qcom,gpu-freq = <520000000>;
qcom,bus-freq = <9>;
qcom,bus-min = <8>;
- qcom,bus-max = <10>;
+ qcom,bus-max = <11>;
};
qcom,gpu-pwrlevel@4 {
@@ -662,7 +668,7 @@
0x40 0x194 /* PLL_BIAS_CONTROL_1 */
0x20 0x198 /* PLL_BIAS_CONTROL_2 */
0x21 0x214 /* PWR_CTRL2 */
- 0x07 0x220 /* IMP_CTRL1 */
+ 0x08 0x220 /* IMP_CTRL1 */
0x58 0x224 /* IMP_CTRL2 */
0x45 0x240 /* TUNE1 */
0x29 0x244 /* TUNE2 */
diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
index e8e9ce7..5b050c5 100644
--- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
@@ -1229,9 +1229,14 @@
compatible = "qcom,clk-cpu-osm";
reg = <0x17d41000 0x1400>,
<0x17d43000 0x1400>,
- <0x17d45800 0x1400>;
- reg-names = "osm_l3_base", "osm_pwrcl_base", "osm_perfcl_base";
+ <0x17d45800 0x1400>,
+ <0x784248 0x4>;
+ reg-names = "osm_l3_base", "osm_pwrcl_base", "osm_perfcl_base",
+ "cpr_rc";
+ vdd_l3_mx_ao-supply = <&pm8998_s6_level_ao>;
+ vdd_pwrcl_mx_ao-supply = <&pm8998_s6_level_ao>;
+ qcom,mx-turbo-freq = <1478400000 1689600000 3300000001>;
l3-devs = <&l3_cpu0 &l3_cpu4 &l3_cdsp>;
clock-names = "xo_ao";
@@ -2492,7 +2497,7 @@
qcom,msm-bus,num-paths = <1>;
qcom,msm-bus,vectors-KBps =
<1 618 0 0>, /* No vote */
- <1 618 0 800>; /* 100 KHz */
+ <1 618 0 300000>; /* 75 MHz */
clocks = <&clock_gcc GCC_PRNG_AHB_CLK>;
clock-names = "iface_clk";
};
@@ -3498,6 +3503,182 @@
};
};
+ cpu0-silver-step {
+ polling-delay-passive = <100>;
+ polling-delay = <0>;
+ thermal-sensors = <&tsens0 1>;
+ thermal-governor = "step_wise";
+ trips {
+ emerg_config0: emerg-config0 {
+ temperature = <110000>;
+ hysteresis = <10000>;
+ type = "passive";
+ };
+ };
+ cooling-maps {
+ emerg_cdev0 {
+ trip = <&emerg_config0>;
+ cooling-device =
+ <&CPU0 THERMAL_MAX_LIMIT
+ THERMAL_MAX_LIMIT>;
+ };
+ };
+ };
+
+ cpu1-silver-step {
+ polling-delay-passive = <100>;
+ polling-delay = <0>;
+ thermal-sensors = <&tsens0 2>;
+ thermal-governor = "step_wise";
+ trips {
+ emerg_config1: emerg-config1 {
+ temperature = <110000>;
+ hysteresis = <10000>;
+ type = "passive";
+ };
+ };
+ cooling-maps {
+ emerg_cdev1 {
+ trip = <&emerg_config1>;
+ cooling-device =
+ <&CPU1 THERMAL_MAX_LIMIT
+ THERMAL_MAX_LIMIT>;
+ };
+ };
+ };
+
+ cpu2-silver-step {
+ polling-delay-passive = <100>;
+ polling-delay = <0>;
+ thermal-sensors = <&tsens0 3>;
+ thermal-governor = "step_wise";
+ trips {
+ emerg_config2: emerg-config2 {
+ temperature = <110000>;
+ hysteresis = <10000>;
+ type = "passive";
+ };
+ };
+ cooling-maps {
+ emerg_cdev2 {
+ trip = <&emerg_config2>;
+ cooling-device =
+ <&CPU2 THERMAL_MAX_LIMIT
+ THERMAL_MAX_LIMIT>;
+ };
+ };
+ };
+
+ cpu3-silver-step {
+ polling-delay-passive = <100>;
+ polling-delay = <0>;
+ thermal-sensors = <&tsens0 4>;
+ thermal-governor = "step_wise";
+ trips {
+ emerg_config3: emerg-config3 {
+ temperature = <110000>;
+ hysteresis = <10000>;
+ type = "passive";
+ };
+ };
+ cooling-maps {
+ emerg_cdev3 {
+ trip = <&emerg_config3>;
+ cooling-device =
+ <&CPU3 THERMAL_MAX_LIMIT
+ THERMAL_MAX_LIMIT>;
+ };
+ };
+ };
+
+ cpu0-gold-step {
+ polling-delay-passive = <100>;
+ polling-delay = <0>;
+ thermal-sensors = <&tsens0 7>;
+ thermal-governor = "step_wise";
+ trips {
+ emerg_config4: emerg-config4 {
+ temperature = <110000>;
+ hysteresis = <10000>;
+ type = "passive";
+ };
+ };
+ cooling-maps {
+ emerg_cdev4 {
+ trip = <&emerg_config4>;
+ cooling-device =
+ <&CPU4 THERMAL_MAX_LIMIT
+ THERMAL_MAX_LIMIT>;
+ };
+ };
+ };
+
+ cpu1-gold-step {
+ polling-delay-passive = <100>;
+ polling-delay = <0>;
+ thermal-sensors = <&tsens0 8>;
+ thermal-governor = "step_wise";
+ trips {
+ emerg_config5: emerg-config5 {
+ temperature = <110000>;
+ hysteresis = <10000>;
+ type = "passive";
+ };
+ };
+ cooling-maps {
+ emerg_cdev5 {
+ trip = <&emerg_config5>;
+ cooling-device =
+ <&CPU5 THERMAL_MAX_LIMIT
+ THERMAL_MAX_LIMIT>;
+ };
+ };
+ };
+
+ cpu2-gold-step {
+ polling-delay-passive = <100>;
+ polling-delay = <0>;
+ thermal-sensors = <&tsens0 9>;
+ thermal-governor = "step_wise";
+ trips {
+ emerg_config6: emerg-config6 {
+ temperature = <110000>;
+ hysteresis = <10000>;
+ type = "passive";
+ };
+ };
+ cooling-maps {
+ emerg_cdev6 {
+ trip = <&emerg_config6>;
+ cooling-device =
+ <&CPU6 THERMAL_MAX_LIMIT
+ THERMAL_MAX_LIMIT>;
+ };
+ };
+ };
+
+ cpu3-gold-step {
+ polling-delay-passive = <100>;
+ polling-delay = <0>;
+ thermal-sensors = <&tsens0 10>;
+ thermal-governor = "step_wise";
+ trips {
+ emerg_config7: emerg-config7 {
+ temperature = <110000>;
+ hysteresis = <10000>;
+ type = "passive";
+ };
+ };
+ cooling-maps {
+ emerg_cdev7 {
+ trip = <&emerg_config7>;
+ cooling-device =
+ <&CPU7 THERMAL_MAX_LIMIT
+ THERMAL_MAX_LIMIT>;
+ };
+ };
+ };
+
lmh-dcvs-01 {
polling-delay-passive = <0>;
polling-delay = <0>;
@@ -3605,6 +3786,11 @@
qcom,dump-size = <0x1000>;
qcom,dump-id = <0xe8>;
};
+
+ tpdm_swao_dump {
+ qcom,dump-size = <0x512>;
+ qcom,dump-id = <0xf2>;
+ };
};
gpi_dma0: qcom,gpi-dma@0x800000 {
@@ -3866,7 +4052,7 @@
#include "sdm845-pcie.dtsi"
#include "sdm845-audio.dtsi"
#include "sdm845-gpu.dtsi"
-#include "sdm845-usb.dtsi"
+#include "sdm845-670-usb-common.dtsi"
&pm8998_temp_alarm {
cooling-maps {
diff --git a/arch/arm64/configs/msm8953-perf_defconfig b/arch/arm64/configs/msm8953-perf_defconfig
index 12365b3..db96773 100644
--- a/arch/arm64/configs/msm8953-perf_defconfig
+++ b/arch/arm64/configs/msm8953-perf_defconfig
@@ -284,9 +284,10 @@
# CONFIG_LEGACY_PTYS is not set
CONFIG_SERIAL_MSM=y
CONFIG_SERIAL_MSM_CONSOLE=y
+CONFIG_SERIAL_MSM_SMD=y
CONFIG_HW_RANDOM=y
CONFIG_HW_RANDOM_MSM_LEGACY=y
-CONFIG_MSM_ADSPRPC=y
+CONFIG_MSM_SMD_PKT=y
CONFIG_MSM_RDBG=m
CONFIG_I2C_CHARDEV=y
CONFIG_SPI=y
@@ -352,6 +353,10 @@
CONFIG_USB_GADGET_DEBUG_FILES=y
CONFIG_USB_GADGET_DEBUG_FS=y
CONFIG_USB_GADGET_VBUS_DRAW=500
+CONFIG_USB_CONFIGFS=y
+CONFIG_USB_CONFIGFS_F_FS=y
+CONFIG_USB_CONFIGFS_UEVENT=y
+CONFIG_USB_CONFIGFS_F_DIAG=y
CONFIG_MMC=y
CONFIG_MMC_PERF_PROFILING=y
CONFIG_MMC_PARANOID_SD_INIT=y
@@ -398,17 +403,15 @@
CONFIG_QCOM_EUD=y
CONFIG_QCOM_WATCHDOG_V2=y
CONFIG_QCOM_MEMORY_DUMP_V2=y
+CONFIG_MSM_RPM_SMD=y
CONFIG_QCOM_SECURE_BUFFER=y
CONFIG_QCOM_EARLY_RANDOM=y
CONFIG_MSM_SMEM=y
-CONFIG_MSM_GLINK=y
-CONFIG_MSM_GLINK_LOOPBACK_SERVER=y
-CONFIG_MSM_GLINK_SMEM_NATIVE_XPRT=y
-CONFIG_MSM_GLINK_SPI_XPRT=y
+CONFIG_MSM_SMD=y
+CONFIG_MSM_SMD_DEBUG=y
CONFIG_MSM_SMP2P=y
-CONFIG_MSM_IPC_ROUTER_GLINK_XPRT=y
+CONFIG_MSM_IPC_ROUTER_SMD_XPRT=y
CONFIG_MSM_QMI_INTERFACE=y
-CONFIG_MSM_GLINK_PKT=y
CONFIG_MSM_SUBSYSTEM_RESTART=y
CONFIG_MSM_PIL=y
CONFIG_MSM_PIL_SSR_GENERIC=y
diff --git a/arch/arm64/configs/msm8953_defconfig b/arch/arm64/configs/msm8953_defconfig
index 8757cc3..d893fd0 100644
--- a/arch/arm64/configs/msm8953_defconfig
+++ b/arch/arm64/configs/msm8953_defconfig
@@ -294,9 +294,10 @@
# CONFIG_LEGACY_PTYS is not set
CONFIG_SERIAL_MSM=y
CONFIG_SERIAL_MSM_CONSOLE=y
+CONFIG_SERIAL_MSM_SMD=y
CONFIG_HW_RANDOM=y
CONFIG_HW_RANDOM_MSM_LEGACY=y
-CONFIG_MSM_ADSPRPC=y
+CONFIG_MSM_SMD_PKT=y
CONFIG_MSM_RDBG=m
CONFIG_I2C_CHARDEV=y
CONFIG_SPI=y
@@ -363,6 +364,10 @@
CONFIG_USB_GADGET_DEBUG_FILES=y
CONFIG_USB_GADGET_DEBUG_FS=y
CONFIG_USB_GADGET_VBUS_DRAW=500
+CONFIG_USB_CONFIGFS=y
+CONFIG_USB_CONFIGFS_F_FS=y
+CONFIG_USB_CONFIGFS_UEVENT=y
+CONFIG_USB_CONFIGFS_F_DIAG=y
CONFIG_MMC=y
CONFIG_MMC_PERF_PROFILING=y
CONFIG_MMC_RING_BUFFER=y
@@ -414,18 +419,16 @@
CONFIG_QCOM_EUD=y
CONFIG_QCOM_WATCHDOG_V2=y
CONFIG_QCOM_MEMORY_DUMP_V2=y
+CONFIG_MSM_RPM_SMD=y
CONFIG_QCOM_SECURE_BUFFER=y
CONFIG_QCOM_EARLY_RANDOM=y
CONFIG_MSM_SMEM=y
-CONFIG_MSM_GLINK=y
-CONFIG_MSM_GLINK_LOOPBACK_SERVER=y
-CONFIG_MSM_GLINK_SMEM_NATIVE_XPRT=y
-CONFIG_MSM_GLINK_SPI_XPRT=y
+CONFIG_MSM_SMD=y
+CONFIG_MSM_SMD_DEBUG=y
CONFIG_TRACER_PKT=y
CONFIG_MSM_SMP2P=y
-CONFIG_MSM_IPC_ROUTER_GLINK_XPRT=y
+CONFIG_MSM_IPC_ROUTER_SMD_XPRT=y
CONFIG_MSM_QMI_INTERFACE=y
-CONFIG_MSM_GLINK_PKT=y
CONFIG_MSM_SUBSYSTEM_RESTART=y
CONFIG_MSM_PIL=y
CONFIG_MSM_PIL_SSR_GENERIC=y
diff --git a/arch/arm64/configs/sdm670-perf_defconfig b/arch/arm64/configs/sdm670-perf_defconfig
index 371c77e..9a43bb6 100644
--- a/arch/arm64/configs/sdm670-perf_defconfig
+++ b/arch/arm64/configs/sdm670-perf_defconfig
@@ -21,6 +21,8 @@
CONFIG_CPUSETS=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_CGROUP_SCHEDTUNE=y
+CONFIG_MEMCG=y
+CONFIG_MEMCG_SWAP=y
CONFIG_BLK_CGROUP=y
CONFIG_RT_GROUP_SCHED=y
CONFIG_CGROUP_BPF=y
@@ -65,12 +67,14 @@
CONFIG_CMA=y
CONFIG_ZSMALLOC=y
CONFIG_BALANCE_ANON_FILE_RECLAIM=y
+CONFIG_PROCESS_RECLAIM=y
CONFIG_SECCOMP=y
CONFIG_ARMV8_DEPRECATED=y
CONFIG_SWP_EMULATION=y
CONFIG_CP15_BARRIER_EMULATION=y
CONFIG_SETEND_EMULATION=y
# CONFIG_ARM64_VHE is not set
+CONFIG_RANDOMIZE_BASE=y
# CONFIG_EFI is not set
CONFIG_BUILD_ARM64_APPENDED_DTB_IMAGE=y
# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
@@ -347,6 +351,7 @@
CONFIG_MFD_I2C_PMIC=y
CONFIG_MFD_SPMI_PMIC=y
CONFIG_REGULATOR_FIXED_VOLTAGE=y
+CONFIG_REGULATOR_PROXY_CONSUMER=y
CONFIG_REGULATOR_CPRH_KBSS=y
CONFIG_REGULATOR_QPNP_LABIBB=y
CONFIG_REGULATOR_QPNP_LCDB=y
@@ -370,8 +375,7 @@
CONFIG_MSM_SDE_ROTATOR_EVTLOG_DEBUG=y
CONFIG_DVB_MPQ=m
CONFIG_DVB_MPQ_DEMUX=m
-CONFIG_DVB_MPQ_TSPP1=y
-CONFIG_TSPP=m
+CONFIG_DVB_MPQ_SW=y
CONFIG_QCOM_KGSL=y
CONFIG_DRM=y
CONFIG_DRM_SDE_EVTLOG_DEBUG=y
@@ -426,8 +430,10 @@
CONFIG_USB_CONFIGFS_F_ACC=y
CONFIG_USB_CONFIGFS_F_AUDIO_SRC=y
CONFIG_USB_CONFIGFS_UEVENT=y
+CONFIG_USB_CONFIGFS_F_UAC2=y
CONFIG_USB_CONFIGFS_F_MIDI=y
CONFIG_USB_CONFIGFS_F_HID=y
+CONFIG_USB_CONFIGFS_F_UVC=y
CONFIG_USB_CONFIGFS_F_DIAG=y
CONFIG_USB_CONFIGFS_F_CDEV=y
CONFIG_USB_CONFIGFS_F_CCID=y
@@ -447,6 +453,7 @@
CONFIG_MMC_CQ_HCI=y
CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=y
+CONFIG_LEDS_GPIO=y
CONFIG_LEDS_QPNP=y
CONFIG_LEDS_QPNP_FLASH_V2=y
CONFIG_LEDS_QPNP_WLED=y
@@ -497,6 +504,7 @@
CONFIG_QCOM_RUN_QUEUE_STATS=y
CONFIG_QCOM_LLCC=y
CONFIG_QCOM_SDM670_LLCC=y
+CONFIG_QCOM_LLCC_PERFMON=m
CONFIG_MSM_SERVICE_LOCATOR=y
CONFIG_MSM_SERVICE_NOTIFIER=y
CONFIG_MSM_BOOT_STATS=y
@@ -534,9 +542,11 @@
CONFIG_MSM_EVENT_TIMER=y
CONFIG_MSM_PM=y
CONFIG_MSM_QBT1000=y
+CONFIG_QCOM_DCC_V2=y
CONFIG_QTI_RPM_STATS_LOG=y
CONFIG_QCOM_FORCE_WDOG_BITE_ON_PANIC=y
CONFIG_QMP_DEBUGFS_CLIENT=y
+CONFIG_MEM_SHARE_QMI_SERVICE=y
CONFIG_MSM_REMOTEQDSS=y
CONFIG_QCOM_BIMC_BWMON=y
CONFIG_ARM_MEMLAT_MON=y
diff --git a/arch/arm64/configs/sdm670_defconfig b/arch/arm64/configs/sdm670_defconfig
index f6c3ec7..822324d 100644
--- a/arch/arm64/configs/sdm670_defconfig
+++ b/arch/arm64/configs/sdm670_defconfig
@@ -22,6 +22,8 @@
CONFIG_CPUSETS=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_CGROUP_SCHEDTUNE=y
+CONFIG_MEMCG=y
+CONFIG_MEMCG_SWAP=y
CONFIG_BLK_CGROUP=y
CONFIG_DEBUG_BLK_CGROUP=y
CONFIG_RT_GROUP_SCHED=y
@@ -70,12 +72,14 @@
CONFIG_CMA_DEBUGFS=y
CONFIG_ZSMALLOC=y
CONFIG_BALANCE_ANON_FILE_RECLAIM=y
+CONFIG_PROCESS_RECLAIM=y
CONFIG_SECCOMP=y
CONFIG_ARMV8_DEPRECATED=y
CONFIG_SWP_EMULATION=y
CONFIG_CP15_BARRIER_EMULATION=y
CONFIG_SETEND_EMULATION=y
# CONFIG_ARM64_VHE is not set
+CONFIG_RANDOMIZE_BASE=y
CONFIG_BUILD_ARM64_APPENDED_DTB_IMAGE=y
# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
CONFIG_COMPAT=y
@@ -352,6 +356,7 @@
CONFIG_MFD_I2C_PMIC=y
CONFIG_MFD_SPMI_PMIC=y
CONFIG_REGULATOR_FIXED_VOLTAGE=y
+CONFIG_REGULATOR_PROXY_CONSUMER=y
CONFIG_REGULATOR_CPRH_KBSS=y
CONFIG_REGULATOR_QPNP_LABIBB=y
CONFIG_REGULATOR_QPNP_LCDB=y
@@ -362,6 +367,7 @@
CONFIG_REGULATOR_STUB=y
CONFIG_MEDIA_SUPPORT=y
CONFIG_MEDIA_CAMERA_SUPPORT=y
+CONFIG_MEDIA_DIGITAL_TV_SUPPORT=y
CONFIG_MEDIA_CONTROLLER=y
CONFIG_VIDEO_V4L2_SUBDEV_API=y
CONFIG_VIDEO_ADV_DEBUG=y
@@ -372,6 +378,9 @@
CONFIG_MSM_VIDC_GOVERNORS=y
CONFIG_MSM_SDE_ROTATOR=y
CONFIG_MSM_SDE_ROTATOR_EVTLOG_DEBUG=y
+CONFIG_DVB_MPQ=m
+CONFIG_DVB_MPQ_DEMUX=m
+CONFIG_DVB_MPQ_SW=y
CONFIG_QCOM_KGSL=y
CONFIG_DRM=y
CONFIG_DRM_SDE_EVTLOG_DEBUG=y
@@ -425,8 +434,10 @@
CONFIG_USB_CONFIGFS_F_ACC=y
CONFIG_USB_CONFIGFS_F_AUDIO_SRC=y
CONFIG_USB_CONFIGFS_UEVENT=y
+CONFIG_USB_CONFIGFS_F_UAC2=y
CONFIG_USB_CONFIGFS_F_MIDI=y
CONFIG_USB_CONFIGFS_F_HID=y
+CONFIG_USB_CONFIGFS_F_UVC=y
CONFIG_USB_CONFIGFS_F_DIAG=y
CONFIG_USB_CONFIGFS_F_CDEV=y
CONFIG_USB_CONFIGFS_F_CCID=y
@@ -447,6 +458,7 @@
CONFIG_MMC_CQ_HCI=y
CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=y
+CONFIG_LEDS_GPIO=y
CONFIG_LEDS_QPNP=y
CONFIG_LEDS_QPNP_FLASH_V2=y
CONFIG_LEDS_QPNP_WLED=y
@@ -504,6 +516,7 @@
CONFIG_QCOM_RUN_QUEUE_STATS=y
CONFIG_QCOM_LLCC=y
CONFIG_QCOM_SDM670_LLCC=y
+CONFIG_QCOM_LLCC_PERFMON=m
CONFIG_MSM_SERVICE_LOCATOR=y
CONFIG_MSM_SERVICE_NOTIFIER=y
CONFIG_MSM_BOOT_STATS=y
@@ -549,6 +562,7 @@
CONFIG_QTI_RPM_STATS_LOG=y
CONFIG_QCOM_FORCE_WDOG_BITE_ON_PANIC=y
CONFIG_QMP_DEBUGFS_CLIENT=y
+CONFIG_MEM_SHARE_QMI_SERVICE=y
CONFIG_MSM_REMOTEQDSS=y
CONFIG_QCOM_BIMC_BWMON=y
CONFIG_ARM_MEMLAT_MON=y
diff --git a/arch/arm64/configs/sdm845-perf_defconfig b/arch/arm64/configs/sdm845-perf_defconfig
index 1cfa935..357a6b2 100644
--- a/arch/arm64/configs/sdm845-perf_defconfig
+++ b/arch/arm64/configs/sdm845-perf_defconfig
@@ -559,6 +559,9 @@
CONFIG_EXT2_FS_XATTR=y
CONFIG_EXT3_FS=y
CONFIG_EXT4_FS_SECURITY=y
+CONFIG_EXT4_ENCRYPTION=y
+CONFIG_EXT4_FS_ENCRYPTION=y
+CONFIG_EXT4_FS_ICE_ENCRYPTION=y
CONFIG_QUOTA=y
CONFIG_QUOTA_NETLINK_INTERFACE=y
CONFIG_QFMT_V2=y
@@ -590,13 +593,13 @@
CONFIG_CORESIGHT_EVENT=y
CONFIG_CORESIGHT_HWEVENT=y
CONFIG_CORESIGHT_DUMMY=y
+CONFIG_PFK=y
CONFIG_SECURITY_PERF_EVENTS_RESTRICT=y
CONFIG_SECURITY=y
CONFIG_HARDENED_USERCOPY=y
CONFIG_FORTIFY_SOURCE=y
CONFIG_SECURITY_SELINUX=y
CONFIG_SECURITY_SMACK=y
-CONFIG_CRYPTO_CTR=y
CONFIG_CRYPTO_XCBC=y
CONFIG_CRYPTO_MD4=y
CONFIG_CRYPTO_TWOFISH=y
diff --git a/arch/arm64/configs/sdm845_defconfig b/arch/arm64/configs/sdm845_defconfig
index eceb4be..d0a32e7 100644
--- a/arch/arm64/configs/sdm845_defconfig
+++ b/arch/arm64/configs/sdm845_defconfig
@@ -575,6 +575,9 @@
CONFIG_EXT2_FS_XATTR=y
CONFIG_EXT3_FS=y
CONFIG_EXT4_FS_SECURITY=y
+CONFIG_EXT4_ENCRYPTION=y
+CONFIG_EXT4_FS_ENCRYPTION=y
+CONFIG_EXT4_FS_ICE_ENCRYPTION=y
CONFIG_QUOTA=y
CONFIG_QUOTA_NETLINK_INTERFACE=y
CONFIG_QFMT_V2=y
@@ -655,13 +658,13 @@
CONFIG_CORESIGHT_TGU=y
CONFIG_CORESIGHT_HWEVENT=y
CONFIG_CORESIGHT_DUMMY=y
+CONFIG_PFK=y
CONFIG_SECURITY_PERF_EVENTS_RESTRICT=y
CONFIG_SECURITY=y
CONFIG_HARDENED_USERCOPY=y
CONFIG_FORTIFY_SOURCE=y
CONFIG_SECURITY_SELINUX=y
CONFIG_SECURITY_SMACK=y
-CONFIG_CRYPTO_CTR=y
CONFIG_CRYPTO_XCBC=y
CONFIG_CRYPTO_MD4=y
CONFIG_CRYPTO_TWOFISH=y
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index f3a142e..8fe5ffc 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -25,6 +25,7 @@
#include <linux/const.h>
#include <linux/types.h>
#include <asm/bug.h>
+#include <asm/page.h>
#include <asm/sizes.h>
/*
@@ -92,15 +93,26 @@
#define KERNEL_END _end
/*
- * The size of the KASAN shadow region. This should be 1/8th of the
- * size of the entire kernel virtual address space.
+ * KASAN requires 1/8th of the kernel virtual address space for the shadow
+ * region. KASAN can bloat the stack significantly, so double the (minimum)
+ * stack size when KASAN is in use.
*/
#ifdef CONFIG_KASAN
#define KASAN_SHADOW_SIZE (UL(1) << (VA_BITS - 3))
+#define KASAN_THREAD_SHIFT 1
#else
#define KASAN_SHADOW_SIZE (0)
+#define KASAN_THREAD_SHIFT 0
#endif
+#define THREAD_SHIFT (14 + KASAN_THREAD_SHIFT)
+
+#if THREAD_SHIFT >= PAGE_SHIFT
+#define THREAD_SIZE_ORDER (THREAD_SHIFT - PAGE_SHIFT)
+#endif
+
+#define THREAD_SIZE (UL(1) << THREAD_SHIFT)
+
/*
* Physical vs virtual RAM address space conversion. These are
* private definitions which should NOT be used outside memory.h
diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
index ebd18b7..ba3a69a 100644
--- a/arch/arm64/include/asm/thread_info.h
+++ b/arch/arm64/include/asm/thread_info.h
@@ -23,19 +23,13 @@
#include <linux/compiler.h>
-#ifdef CONFIG_ARM64_4K_PAGES
-#define THREAD_SIZE_ORDER 2
-#elif defined(CONFIG_ARM64_16K_PAGES)
-#define THREAD_SIZE_ORDER 0
-#endif
-
-#define THREAD_SIZE 16384
#define THREAD_START_SP (THREAD_SIZE - 16)
#ifndef __ASSEMBLY__
struct task_struct;
+#include <asm/memory.h>
#include <asm/stack_pointer.h>
#include <asm/types.h>
diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index c1e932d..52710f1 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -779,8 +779,8 @@ static irqreturn_t armv8pmu_handle_irq(int irq_num, void *dev)
struct perf_event *event = cpuc->events[idx];
struct hw_perf_event *hwc;
- /* Ignore if we don't have an event. */
- if (!event)
+ /* Ignore if we don't have an event or if it's a zombie event */
+ if (!event || event->state == PERF_EVENT_STATE_ZOMBIE)
continue;
/*
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 2437f15..623dd48 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -54,6 +54,7 @@
#include <asm/tlbflush.h>
#include <asm/ptrace.h>
#include <asm/virt.h>
+#include <soc/qcom/minidump.h>
#define CREATE_TRACE_POINTS
#include <trace/events/ipi.h>
@@ -844,6 +845,7 @@ static void ipi_cpu_stop(unsigned int cpu, struct pt_regs *regs)
pr_crit("CPU%u: stopping\n", cpu);
show_regs(regs);
dump_stack();
+ dump_stack_minidump(regs->sp);
raw_spin_unlock(&stop_lock);
}
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index 900c1ec..f7ce3d2 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -176,7 +176,8 @@ void save_stack_trace_regs(struct pt_regs *regs, struct stack_trace *trace)
trace->entries[trace->nr_entries++] = ULONG_MAX;
}
-void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
+static noinline void __save_stack_trace(struct task_struct *tsk,
+ struct stack_trace *trace, unsigned int nosched)
{
struct stack_trace_data data;
struct stackframe frame;
@@ -186,17 +187,18 @@ void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
data.trace = trace;
data.skip = trace->skip;
+ data.no_sched_functions = nosched;
if (tsk != current) {
- data.no_sched_functions = 1;
frame.fp = thread_saved_fp(tsk);
frame.sp = thread_saved_sp(tsk);
frame.pc = thread_saved_pc(tsk);
} else {
- data.no_sched_functions = 0;
+		/* We don't want this function nor its caller */
+ data.skip += 2;
frame.fp = (unsigned long)__builtin_frame_address(0);
frame.sp = current_stack_pointer;
- frame.pc = (unsigned long)save_stack_trace_tsk;
+ frame.pc = (unsigned long)__save_stack_trace;
}
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
frame.graph = tsk->curr_ret_stack;
@@ -210,9 +212,15 @@ void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
}
EXPORT_SYMBOL(save_stack_trace_tsk);
+void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
+{
+ __save_stack_trace(tsk, trace, 1);
+}
+
void save_stack_trace(struct stack_trace *trace)
{
- save_stack_trace_tsk(current, trace);
+ __save_stack_trace(current, trace, 0);
}
+
EXPORT_SYMBOL_GPL(save_stack_trace);
#endif
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index 5620500..19f3515 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -114,7 +114,7 @@ static void __dump_instr(const char *lvl, struct pt_regs *regs)
for (i = -4; i < 1; i++) {
unsigned int val, bad;
- bad = __get_user(val, &((u32 *)addr)[i]);
+ bad = get_user(val, &((u32 *)addr)[i]);
if (!bad)
p += sprintf(p, i == 0 ? "(%08x) " : "%08x ", val);
diff --git a/arch/arm64/kvm/hyp/Makefile b/arch/arm64/kvm/hyp/Makefile
index 14c4e3b..48b0354 100644
--- a/arch/arm64/kvm/hyp/Makefile
+++ b/arch/arm64/kvm/hyp/Makefile
@@ -2,7 +2,7 @@
# Makefile for Kernel-based Virtual Machine module, HYP part
#
-ccflags-y += -fno-stack-protector
+ccflags-y += -fno-stack-protector -DDISABLE_BRANCH_PROFILING
KVM=../../../../virt/kvm
diff --git a/arch/arm64/kvm/inject_fault.c b/arch/arm64/kvm/inject_fault.c
index da6a8cf..3556715 100644
--- a/arch/arm64/kvm/inject_fault.c
+++ b/arch/arm64/kvm/inject_fault.c
@@ -33,12 +33,26 @@
#define LOWER_EL_AArch64_VECTOR 0x400
#define LOWER_EL_AArch32_VECTOR 0x600
+/*
+ * Table taken from ARMv8 ARM DDI0487B-B, table G1-10.
+ */
+static const u8 return_offsets[8][2] = {
+ [0] = { 0, 0 }, /* Reset, unused */
+ [1] = { 4, 2 }, /* Undefined */
+ [2] = { 0, 0 }, /* SVC, unused */
+ [3] = { 4, 4 }, /* Prefetch abort */
+ [4] = { 8, 8 }, /* Data abort */
+ [5] = { 0, 0 }, /* HVC, unused */
+ [6] = { 4, 4 }, /* IRQ, unused */
+ [7] = { 4, 4 }, /* FIQ, unused */
+};
+
static void prepare_fault32(struct kvm_vcpu *vcpu, u32 mode, u32 vect_offset)
{
unsigned long cpsr;
unsigned long new_spsr_value = *vcpu_cpsr(vcpu);
bool is_thumb = (new_spsr_value & COMPAT_PSR_T_BIT);
- u32 return_offset = (is_thumb) ? 4 : 0;
+ u32 return_offset = return_offsets[vect_offset >> 2][is_thumb];
u32 sctlr = vcpu_cp15(vcpu, c1_SCTLR);
cpsr = mode | COMPAT_PSR_I_BIT;
diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index 0522f50..31d4684 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -971,14 +971,21 @@ static bool do_iommu_attach(struct device *dev, const struct iommu_ops *ops,
* then the IOMMU core will have already configured a group for this
* device, and allocated the default domain for that group.
*/
- if (!domain || iommu_dma_init_domain(domain, dma_base, size, dev)) {
- pr_debug("Failed to set up IOMMU for device %s; retaining platform DMA ops\n",
- dev_name(dev));
- return false;
+ if (!domain)
+ goto out_err;
+
+ if (domain->type == IOMMU_DOMAIN_DMA) {
+ if (iommu_dma_init_domain(domain, dma_base, size, dev))
+ goto out_err;
+
+ dev->archdata.dma_ops = &iommu_dma_ops;
}
- dev->archdata.dma_ops = &iommu_dma_ops;
return true;
+out_err:
+ pr_debug("Failed to set up IOMMU for device %s; retaining platform DMA ops\n",
+ dev_name(dev));
+ return false;
}
static void queue_iommu_attach(struct device *dev, const struct iommu_ops *ops,
diff --git a/arch/mips/ar7/platform.c b/arch/mips/ar7/platform.c
index 58fca9a..3446b6f 100644
--- a/arch/mips/ar7/platform.c
+++ b/arch/mips/ar7/platform.c
@@ -576,6 +576,7 @@ static int __init ar7_register_uarts(void)
uart_port.type = PORT_AR7;
uart_port.uartclk = clk_get_rate(bus_clk) / 2;
uart_port.iotype = UPIO_MEM32;
+ uart_port.flags = UPF_FIXED_TYPE;
uart_port.regshift = 2;
uart_port.line = 0;
@@ -654,6 +655,10 @@ static int __init ar7_register_devices(void)
u32 val;
int res;
+ res = ar7_gpio_init();
+ if (res)
+ pr_warn("unable to register gpios: %d\n", res);
+
res = ar7_register_uarts();
if (res)
pr_err("unable to setup uart(s): %d\n", res);
diff --git a/arch/mips/ar7/prom.c b/arch/mips/ar7/prom.c
index a23adc4..36aabee 100644
--- a/arch/mips/ar7/prom.c
+++ b/arch/mips/ar7/prom.c
@@ -246,8 +246,6 @@ void __init prom_init(void)
ar7_init_cmdline(fw_arg0, (char **)fw_arg1);
ar7_init_env((struct env_var *)fw_arg2);
console_config();
-
- ar7_gpio_init();
}
#define PORT(offset) (KSEG1ADDR(AR7_REGS_UART0 + (offset * 4)))
diff --git a/arch/mips/include/asm/asm.h b/arch/mips/include/asm/asm.h
index 7c26b28..859cf70 100644
--- a/arch/mips/include/asm/asm.h
+++ b/arch/mips/include/asm/asm.h
@@ -54,7 +54,8 @@
.align 2; \
.type symbol, @function; \
.ent symbol, 0; \
-symbol: .frame sp, 0, ra
+symbol: .frame sp, 0, ra; \
+ .insn
/*
* NESTED - declare nested routine entry point
@@ -63,8 +64,9 @@ symbol: .frame sp, 0, ra
.globl symbol; \
.align 2; \
.type symbol, @function; \
- .ent symbol, 0; \
-symbol: .frame sp, framesize, rpc
+ .ent symbol, 0; \
+symbol: .frame sp, framesize, rpc; \
+ .insn
/*
* END - mark end of function
@@ -86,7 +88,7 @@ symbol: .frame sp, framesize, rpc
#define FEXPORT(symbol) \
.globl symbol; \
.type symbol, @function; \
-symbol:
+symbol: .insn
/*
* ABS - export absolute symbol
diff --git a/arch/mips/include/asm/mips-cm.h b/arch/mips/include/asm/mips-cm.h
index 2e41807..163317f 100644
--- a/arch/mips/include/asm/mips-cm.h
+++ b/arch/mips/include/asm/mips-cm.h
@@ -187,6 +187,7 @@ BUILD_CM_R_(config, MIPS_CM_GCB_OFS + 0x00)
BUILD_CM_RW(base, MIPS_CM_GCB_OFS + 0x08)
BUILD_CM_RW(access, MIPS_CM_GCB_OFS + 0x20)
BUILD_CM_R_(rev, MIPS_CM_GCB_OFS + 0x30)
+BUILD_CM_RW(err_control, MIPS_CM_GCB_OFS + 0x38)
BUILD_CM_RW(error_mask, MIPS_CM_GCB_OFS + 0x40)
BUILD_CM_RW(error_cause, MIPS_CM_GCB_OFS + 0x48)
BUILD_CM_RW(error_addr, MIPS_CM_GCB_OFS + 0x50)
@@ -239,8 +240,8 @@ BUILD_CM_Cx_R_(tcid_8_priority, 0x80)
#define CM_GCR_BASE_GCRBASE_MSK (_ULCAST_(0x1ffff) << 15)
#define CM_GCR_BASE_CMDEFTGT_SHF 0
#define CM_GCR_BASE_CMDEFTGT_MSK (_ULCAST_(0x3) << 0)
-#define CM_GCR_BASE_CMDEFTGT_DISABLED 0
-#define CM_GCR_BASE_CMDEFTGT_MEM 1
+#define CM_GCR_BASE_CMDEFTGT_MEM 0
+#define CM_GCR_BASE_CMDEFTGT_RESERVED 1
#define CM_GCR_BASE_CMDEFTGT_IOCU0 2
#define CM_GCR_BASE_CMDEFTGT_IOCU1 3
@@ -266,6 +267,12 @@ BUILD_CM_Cx_R_(tcid_8_priority, 0x80)
#define CM_REV_CM2_5 CM_ENCODE_REV(7, 0)
#define CM_REV_CM3 CM_ENCODE_REV(8, 0)
+/* GCR_ERR_CONTROL register fields */
+#define CM_GCR_ERR_CONTROL_L2_ECC_EN_SHF 1
+#define CM_GCR_ERR_CONTROL_L2_ECC_EN_MSK (_ULCAST_(0x1) << 1)
+#define CM_GCR_ERR_CONTROL_L2_ECC_SUPPORT_SHF 0
+#define CM_GCR_ERR_CONTROL_L2_ECC_SUPPORT_MSK (_ULCAST_(0x1) << 0)
+
/* GCR_ERROR_CAUSE register fields */
#define CM_GCR_ERROR_CAUSE_ERRTYPE_SHF 27
#define CM_GCR_ERROR_CAUSE_ERRTYPE_MSK (_ULCAST_(0x1f) << 27)
diff --git a/arch/mips/kernel/process.c b/arch/mips/kernel/process.c
index 1b50958..c558bce 100644
--- a/arch/mips/kernel/process.c
+++ b/arch/mips/kernel/process.c
@@ -50,9 +50,7 @@
#ifdef CONFIG_HOTPLUG_CPU
void arch_cpu_idle_dead(void)
{
- /* What the heck is this check doing ? */
- if (!cpumask_test_cpu(smp_processor_id(), &cpu_callin_map))
- play_dead();
+ play_dead();
}
#endif
diff --git a/arch/mips/kernel/setup.c b/arch/mips/kernel/setup.c
index f66e5ce..6959503 100644
--- a/arch/mips/kernel/setup.c
+++ b/arch/mips/kernel/setup.c
@@ -153,6 +153,35 @@ void __init detect_memory_region(phys_addr_t start, phys_addr_t sz_min, phys_add
add_memory_region(start, size, BOOT_MEM_RAM);
}
+bool __init memory_region_available(phys_addr_t start, phys_addr_t size)
+{
+ int i;
+ bool in_ram = false, free = true;
+
+ for (i = 0; i < boot_mem_map.nr_map; i++) {
+ phys_addr_t start_, end_;
+
+ start_ = boot_mem_map.map[i].addr;
+ end_ = boot_mem_map.map[i].addr + boot_mem_map.map[i].size;
+
+ switch (boot_mem_map.map[i].type) {
+ case BOOT_MEM_RAM:
+ if (start >= start_ && start + size <= end_)
+ in_ram = true;
+ break;
+ case BOOT_MEM_RESERVED:
+ if ((start >= start_ && start < end_) ||
+ (start < start_ && start + size >= start_))
+ free = false;
+ break;
+ default:
+ continue;
+ }
+ }
+
+ return in_ram && free;
+}
+
static void __init print_memory_map(void)
{
int i;
@@ -332,11 +361,19 @@ static void __init bootmem_init(void)
#else /* !CONFIG_SGI_IP27 */
+static unsigned long __init bootmap_bytes(unsigned long pages)
+{
+ unsigned long bytes = DIV_ROUND_UP(pages, 8);
+
+ return ALIGN(bytes, sizeof(long));
+}
+
static void __init bootmem_init(void)
{
unsigned long reserved_end;
unsigned long mapstart = ~0UL;
unsigned long bootmap_size;
+ bool bootmap_valid = false;
int i;
/*
@@ -430,11 +467,42 @@ static void __init bootmem_init(void)
#endif
/*
+	 * Check that mapstart doesn't overlap with any of the
+	 * memory regions that have been reserved through e.g. the DTB.
+ */
+ bootmap_size = bootmap_bytes(max_low_pfn - min_low_pfn);
+
+ bootmap_valid = memory_region_available(PFN_PHYS(mapstart),
+ bootmap_size);
+ for (i = 0; i < boot_mem_map.nr_map && !bootmap_valid; i++) {
+ unsigned long mapstart_addr;
+
+ switch (boot_mem_map.map[i].type) {
+ case BOOT_MEM_RESERVED:
+ mapstart_addr = PFN_ALIGN(boot_mem_map.map[i].addr +
+ boot_mem_map.map[i].size);
+ if (PHYS_PFN(mapstart_addr) < mapstart)
+ break;
+
+ bootmap_valid = memory_region_available(mapstart_addr,
+ bootmap_size);
+ if (bootmap_valid)
+ mapstart = PHYS_PFN(mapstart_addr);
+ break;
+ default:
+ break;
+ }
+ }
+
+ if (!bootmap_valid)
+ panic("No memory area to place a bootmap bitmap");
+
+ /*
* Initialize the boot-time allocator with low memory only.
*/
- bootmap_size = init_bootmem_node(NODE_DATA(0), mapstart,
- min_low_pfn, max_low_pfn);
-
+ if (bootmap_size != init_bootmem_node(NODE_DATA(0), mapstart,
+ min_low_pfn, max_low_pfn))
+ panic("Unexpected memory size required for bootmap");
for (i = 0; i < boot_mem_map.nr_map; i++) {
unsigned long start, end;
@@ -483,6 +551,10 @@ static void __init bootmem_init(void)
continue;
default:
/* Not usable memory */
+ if (start > min_low_pfn && end < max_low_pfn)
+ reserve_bootmem(boot_mem_map.map[i].addr,
+ boot_mem_map.map[i].size,
+ BOOTMEM_DEFAULT);
continue;
}
diff --git a/arch/mips/kernel/smp-bmips.c b/arch/mips/kernel/smp-bmips.c
index 6d0f132..47c9646 100644
--- a/arch/mips/kernel/smp-bmips.c
+++ b/arch/mips/kernel/smp-bmips.c
@@ -587,11 +587,11 @@ void __init bmips_cpu_setup(void)
/* Flush and enable RAC */
cfg = __raw_readl(cbr + BMIPS_RAC_CONFIG);
- __raw_writel(cfg | 0x100, BMIPS_RAC_CONFIG);
+ __raw_writel(cfg | 0x100, cbr + BMIPS_RAC_CONFIG);
__raw_readl(cbr + BMIPS_RAC_CONFIG);
cfg = __raw_readl(cbr + BMIPS_RAC_CONFIG);
- __raw_writel(cfg | 0xf, BMIPS_RAC_CONFIG);
+ __raw_writel(cfg | 0xf, cbr + BMIPS_RAC_CONFIG);
__raw_readl(cbr + BMIPS_RAC_CONFIG);
cfg = __raw_readl(cbr + BMIPS_RAC_ADDRESS_RANGE);
diff --git a/arch/mips/kernel/smp.c b/arch/mips/kernel/smp.c
index 7ebb191..95ba427 100644
--- a/arch/mips/kernel/smp.c
+++ b/arch/mips/kernel/smp.c
@@ -68,6 +68,9 @@ EXPORT_SYMBOL(cpu_sibling_map);
cpumask_t cpu_core_map[NR_CPUS] __read_mostly;
EXPORT_SYMBOL(cpu_core_map);
+static DECLARE_COMPLETION(cpu_starting);
+static DECLARE_COMPLETION(cpu_running);
+
/*
 * A logical cpu mask containing only one VPE per core to
* reduce the number of IPIs on large MT systems.
@@ -369,9 +372,12 @@ asmlinkage void start_secondary(void)
cpumask_set_cpu(cpu, &cpu_coherent_mask);
notify_cpu_starting(cpu);
- cpumask_set_cpu(cpu, &cpu_callin_map);
+ /* Notify boot CPU that we're starting & ready to sync counters */
+ complete(&cpu_starting);
+
synchronise_count_slave(cpu);
+ /* The CPU is running and counters synchronised, now mark it online */
set_cpu_online(cpu, true);
set_cpu_sibling_map(cpu);
@@ -380,6 +386,12 @@ asmlinkage void start_secondary(void)
calculate_cpu_foreign_map();
/*
+ * Notify boot CPU that we're up & online and it can safely return
+ * from __cpu_up
+ */
+ complete(&cpu_running);
+
+ /*
* irq will be enabled in ->smp_finish(), enabling it too early
* is dangerous.
*/
@@ -430,22 +442,23 @@ void smp_prepare_boot_cpu(void)
{
set_cpu_possible(0, true);
set_cpu_online(0, true);
- cpumask_set_cpu(0, &cpu_callin_map);
}
int __cpu_up(unsigned int cpu, struct task_struct *tidle)
{
mp_ops->boot_secondary(cpu, tidle);
- /*
- * Trust is futile. We should really have timeouts ...
- */
- while (!cpumask_test_cpu(cpu, &cpu_callin_map)) {
- udelay(100);
- schedule();
+ /* Wait for CPU to start and be ready to sync counters */
+ if (!wait_for_completion_timeout(&cpu_starting,
+ msecs_to_jiffies(1000))) {
+ pr_crit("CPU%u: failed to start\n", cpu);
+ return -EIO;
}
synchronise_count_master(cpu);
+
+ /* Wait for CPU to finish startup & mark itself online before return */
+ wait_for_completion(&cpu_running);
return 0;
}
diff --git a/arch/mips/kernel/traps.c b/arch/mips/kernel/traps.c
index b0b29cb..bb1d9ff 100644
--- a/arch/mips/kernel/traps.c
+++ b/arch/mips/kernel/traps.c
@@ -51,6 +51,7 @@
#include <asm/idle.h>
#include <asm/mips-cm.h>
#include <asm/mips-r2-to-r6-emul.h>
+#include <asm/mips-cm.h>
#include <asm/mipsregs.h>
#include <asm/mipsmtregs.h>
#include <asm/module.h>
@@ -1646,6 +1647,65 @@ __setup("nol2par", nol2parity);
*/
static inline void parity_protection_init(void)
{
+#define ERRCTL_PE 0x80000000
+#define ERRCTL_L2P 0x00800000
+
+ if (mips_cm_revision() >= CM_REV_CM3) {
+ ulong gcr_ectl, cp0_ectl;
+
+ /*
+ * With CM3 systems we need to ensure that the L1 & L2
+ * parity enables are set to the same value, since this
+ * is presumed by the hardware engineers.
+ *
+ * If the user disabled either of L1 or L2 ECC checking,
+ * disable both.
+ */
+ l1parity &= l2parity;
+ l2parity &= l1parity;
+
+ /* Probe L1 ECC support */
+ cp0_ectl = read_c0_ecc();
+ write_c0_ecc(cp0_ectl | ERRCTL_PE);
+ back_to_back_c0_hazard();
+ cp0_ectl = read_c0_ecc();
+
+ /* Probe L2 ECC support */
+ gcr_ectl = read_gcr_err_control();
+
+ if (!(gcr_ectl & CM_GCR_ERR_CONTROL_L2_ECC_SUPPORT_MSK) ||
+ !(cp0_ectl & ERRCTL_PE)) {
+ /*
+ * One of L1 or L2 ECC checking isn't supported,
+ * so we cannot enable either.
+ */
+ l1parity = l2parity = 0;
+ }
+
+ /* Configure L1 ECC checking */
+ if (l1parity)
+ cp0_ectl |= ERRCTL_PE;
+ else
+ cp0_ectl &= ~ERRCTL_PE;
+ write_c0_ecc(cp0_ectl);
+ back_to_back_c0_hazard();
+ WARN_ON(!!(read_c0_ecc() & ERRCTL_PE) != l1parity);
+
+ /* Configure L2 ECC checking */
+ if (l2parity)
+ gcr_ectl |= CM_GCR_ERR_CONTROL_L2_ECC_EN_MSK;
+ else
+ gcr_ectl &= ~CM_GCR_ERR_CONTROL_L2_ECC_EN_MSK;
+ write_gcr_err_control(gcr_ectl);
+ gcr_ectl = read_gcr_err_control();
+ gcr_ectl &= CM_GCR_ERR_CONTROL_L2_ECC_EN_MSK;
+ WARN_ON(!!gcr_ectl != l2parity);
+
+ pr_info("Cache parity protection %sabled\n",
+ l1parity ? "en" : "dis");
+ return;
+ }
+
switch (current_cpu_type()) {
case CPU_24K:
case CPU_34K:
@@ -1656,11 +1716,8 @@ static inline void parity_protection_init(void)
case CPU_PROAPTIV:
case CPU_P5600:
case CPU_QEMU_GENERIC:
- case CPU_I6400:
case CPU_P6600:
{
-#define ERRCTL_PE 0x80000000
-#define ERRCTL_L2P 0x00800000
unsigned long errctl;
unsigned int l1parity_present, l2parity_present;
diff --git a/arch/mips/mm/uasm-micromips.c b/arch/mips/mm/uasm-micromips.c
index 277cf52..6c17cba 100644
--- a/arch/mips/mm/uasm-micromips.c
+++ b/arch/mips/mm/uasm-micromips.c
@@ -80,7 +80,7 @@ static struct insn insn_table_MM[] = {
{ insn_jr, M(mm_pool32a_op, 0, 0, 0, mm_jalr_op, mm_pool32axf_op), RS },
{ insn_lb, M(mm_lb32_op, 0, 0, 0, 0, 0), RT | RS | SIMM },
{ insn_ld, 0, 0 },
- { insn_lh, M(mm_lh32_op, 0, 0, 0, 0, 0), RS | RS | SIMM },
+ { insn_lh, M(mm_lh32_op, 0, 0, 0, 0, 0), RT | RS | SIMM },
{ insn_ll, M(mm_pool32c_op, 0, 0, (mm_ll_func << 1), 0, 0), RS | RT | SIMM },
{ insn_lld, 0, 0 },
{ insn_lui, M(mm_pool32i_op, mm_lui_op, 0, 0, 0, 0), RS | SIMM },
diff --git a/arch/mips/netlogic/common/irq.c b/arch/mips/netlogic/common/irq.c
index 3660dc6..f4961bc 100644
--- a/arch/mips/netlogic/common/irq.c
+++ b/arch/mips/netlogic/common/irq.c
@@ -275,7 +275,7 @@ asmlinkage void plat_irq_dispatch(void)
do_IRQ(nlm_irq_to_xirq(node, i));
}
-#ifdef CONFIG_OF
+#ifdef CONFIG_CPU_XLP
static const struct irq_domain_ops xlp_pic_irq_domain_ops = {
.xlate = irq_domain_xlate_onetwocell,
};
@@ -348,7 +348,7 @@ void __init arch_init_irq(void)
#if defined(CONFIG_CPU_XLR)
nlm_setup_fmn_irq();
#endif
-#if defined(CONFIG_OF)
+#ifdef CONFIG_CPU_XLP
of_irq_init(xlp_pic_irq_ids);
#endif
}
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index b4758f5..acb6026 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -1088,11 +1088,6 @@
source "security/Kconfig"
-config KEYS_COMPAT
- bool
- depends on COMPAT && KEYS
- default y
-
source "crypto/Kconfig"
config PPC_LIB_RHEAP
diff --git a/arch/powerpc/boot/dts/fsl/kmcoge4.dts b/arch/powerpc/boot/dts/fsl/kmcoge4.dts
index ae70a24..e103c0f 100644
--- a/arch/powerpc/boot/dts/fsl/kmcoge4.dts
+++ b/arch/powerpc/boot/dts/fsl/kmcoge4.dts
@@ -83,6 +83,10 @@
};
};
+ sdhc@114000 {
+ status = "disabled";
+ };
+
i2c@119000 {
status = "disabled";
};
diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
index bc3f7d0..f1d7e99 100644
--- a/arch/powerpc/kernel/time.c
+++ b/arch/powerpc/kernel/time.c
@@ -407,6 +407,7 @@ void arch_vtime_task_switch(struct task_struct *prev)
struct cpu_accounting_data *acct = get_accounting(current);
acct->starttime = get_accounting(prev)->starttime;
+ acct->startspurr = get_accounting(prev)->startspurr;
acct->system_time = 0;
acct->user_time = 0;
}
diff --git a/arch/powerpc/kvm/book3s_hv_rm_xics.c b/arch/powerpc/kvm/book3s_hv_rm_xics.c
index a0ea63a..a8e3498 100644
--- a/arch/powerpc/kvm/book3s_hv_rm_xics.c
+++ b/arch/powerpc/kvm/book3s_hv_rm_xics.c
@@ -376,6 +376,7 @@ static void icp_rm_deliver_irq(struct kvmppc_xics *xics, struct kvmppc_icp *icp,
*/
if (reject && reject != XICS_IPI) {
arch_spin_unlock(&ics->lock);
+ icp->n_reject++;
new_irq = reject;
goto again;
}
@@ -707,10 +708,8 @@ int kvmppc_rm_h_eoi(struct kvm_vcpu *vcpu, unsigned long xirr)
state = &ics->irq_state[src];
/* Still asserted, resend it */
- if (state->asserted) {
- icp->n_reject++;
+ if (state->asserted)
icp_rm_deliver_irq(xics, icp, irq);
- }
if (!hlist_empty(&vcpu->kvm->irq_ack_notifier_list)) {
icp->rm_action |= XICS_RM_NOTIFY_EOI;
diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
index d5ce34d..1e28747 100644
--- a/arch/powerpc/mm/init_64.c
+++ b/arch/powerpc/mm/init_64.c
@@ -42,6 +42,8 @@
#include <linux/memblock.h>
#include <linux/hugetlb.h>
#include <linux/slab.h>
+#include <linux/of_fdt.h>
+#include <linux/libfdt.h>
#include <asm/pgalloc.h>
#include <asm/page.h>
@@ -421,6 +423,28 @@ static int __init parse_disable_radix(char *p)
}
early_param("disable_radix", parse_disable_radix);
+/*
+ * If we're running under a hypervisor, we currently can't do radix
+ * since we don't have the code to do the H_REGISTER_PROC_TBL hcall.
+ * We can tell that we're running under a hypervisor by looking for the
+ * /chosen/ibm,architecture-vec-5 property.
+ */
+static void early_check_vec5(void)
+{
+ unsigned long root, chosen;
+ int size;
+ const u8 *vec5;
+
+ root = of_get_flat_dt_root();
+ chosen = of_get_flat_dt_subnode_by_name(root, "chosen");
+ if (chosen == -FDT_ERR_NOTFOUND)
+ return;
+ vec5 = of_get_flat_dt_prop(chosen, "ibm,architecture-vec-5", &size);
+ if (!vec5)
+ return;
+ cur_cpu_spec->mmu_features &= ~MMU_FTR_TYPE_RADIX;
+}
+
void __init mmu_early_init_devtree(void)
{
/* Disable radix mode based on kernel command line. */
@@ -428,6 +452,15 @@ void __init mmu_early_init_devtree(void)
if (disable_radix || !(mfmsr() & MSR_HV))
cur_cpu_spec->mmu_features &= ~MMU_FTR_TYPE_RADIX;
+ /*
+ * Check /chosen/ibm,architecture-vec-5 if running as a guest.
+ * When running bare-metal, we can use radix if we like
+ * even though the ibm,architecture-vec-5 property created by
+ * skiboot doesn't have the necessary bits set.
+ */
+ if (early_radix_enabled() && !(mfmsr() & MSR_HV))
+ early_check_vec5();
+
if (early_radix_enabled())
radix__early_init_devtree();
else
diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index 426481d..9aa0d04 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -359,9 +359,6 @@
config SYSVIPC_COMPAT
def_bool y if COMPAT && SYSVIPC
-config KEYS_COMPAT
- def_bool y if COMPAT && KEYS
-
config SMP
def_bool y
prompt "Symmetric multi-processing support"
diff --git a/arch/s390/crypto/aes_s390.c b/arch/s390/crypto/aes_s390.c
index 303d28e..591cbdf6 100644
--- a/arch/s390/crypto/aes_s390.c
+++ b/arch/s390/crypto/aes_s390.c
@@ -28,6 +28,7 @@
#include <linux/cpufeature.h>
#include <linux/init.h>
#include <linux/spinlock.h>
+#include <linux/fips.h>
#include <crypto/xts.h>
#include <asm/cpacf.h>
@@ -501,6 +502,12 @@ static int xts_aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
if (err)
return err;
+ /* In fips mode only 128 bit or 256 bit keys are valid */
+ if (fips_enabled && key_len != 32 && key_len != 64) {
+ tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
+ return -EINVAL;
+ }
+
/* Pick the correct function code based on the key length */
fc = (key_len == 32) ? CPACF_KM_XTS_128 :
(key_len == 64) ? CPACF_KM_XTS_256 : 0;
diff --git a/arch/s390/crypto/prng.c b/arch/s390/crypto/prng.c
index 1113389..fe7368a 100644
--- a/arch/s390/crypto/prng.c
+++ b/arch/s390/crypto/prng.c
@@ -110,22 +110,30 @@ static const u8 initial_parm_block[32] __initconst = {
/*** helper functions ***/
+/*
+ * generate_entropy:
+ * This algorithm produces 64 bytes of entropy data based on 1024
+ * individual stckf() invocations assuming that each stckf() value
+ * contributes 0.25 bits of entropy. So the caller gets 256 bits
+ * of entropy per 64 bytes, or 4 bits of entropy per byte.
+ */
static int generate_entropy(u8 *ebuf, size_t nbytes)
{
int n, ret = 0;
- u8 *pg, *h, hash[32];
+ u8 *pg, *h, hash[64];
- pg = (u8 *) __get_free_page(GFP_KERNEL);
+ /* allocate 2 pages */
+ pg = (u8 *) __get_free_pages(GFP_KERNEL, 1);
if (!pg) {
prng_errorflag = PRNG_GEN_ENTROPY_FAILED;
return -ENOMEM;
}
while (nbytes) {
- /* fill page with urandom bytes */
- get_random_bytes(pg, PAGE_SIZE);
- /* exor page with stckf values */
- for (n = 0; n < PAGE_SIZE / sizeof(u64); n++) {
+ /* fill pages with urandom bytes */
+ get_random_bytes(pg, 2*PAGE_SIZE);
+ /* exor pages with 1024 stckf values */
+ for (n = 0; n < 2 * PAGE_SIZE / sizeof(u64); n++) {
u64 *p = ((u64 *)pg) + n;
*p ^= get_tod_clock_fast();
}
@@ -134,8 +142,8 @@ static int generate_entropy(u8 *ebuf, size_t nbytes)
h = hash;
else
h = ebuf;
- /* generate sha256 from this page */
- cpacf_kimd(CPACF_KIMD_SHA_256, h, pg, PAGE_SIZE);
+ /* hash over the filled pages */
+ cpacf_kimd(CPACF_KIMD_SHA_512, h, pg, 2*PAGE_SIZE);
if (n < sizeof(hash))
memcpy(ebuf, hash, n);
ret += n;
@@ -143,7 +151,7 @@ static int generate_entropy(u8 *ebuf, size_t nbytes)
nbytes -= n;
}
- free_page((unsigned long)pg);
+ free_pages((unsigned long)pg, 1);
return ret;
}
@@ -334,7 +342,7 @@ static int __init prng_sha512_selftest(void)
static int __init prng_sha512_instantiate(void)
{
int ret, datalen;
- u8 seed[64];
+ u8 seed[64 + 32 + 16];
pr_debug("prng runs in SHA-512 mode "
"with chunksize=%d and reseed_limit=%u\n",
@@ -357,12 +365,12 @@ static int __init prng_sha512_instantiate(void)
if (ret)
goto outfree;
- /* generate initial seed bytestring, first 48 bytes of entropy */
- ret = generate_entropy(seed, 48);
- if (ret != 48)
+ /* generate initial seed bytestring, with 256 + 128 bits entropy */
+ ret = generate_entropy(seed, 64 + 32);
+ if (ret != 64 + 32)
goto outfree;
/* followed by 16 bytes of unique nonce */
- get_tod_clock_ext(seed + 48);
+ get_tod_clock_ext(seed + 64 + 32);
/* initial seed of the ppno drng */
cpacf_ppno(CPACF_PPNO_SHA512_DRNG_SEED,
@@ -395,9 +403,9 @@ static void prng_sha512_deinstantiate(void)
static int prng_sha512_reseed(void)
{
int ret;
- u8 seed[32];
+ u8 seed[64];
- /* generate 32 bytes of fresh entropy */
+ /* fetch 256 bits of fresh entropy */
ret = generate_entropy(seed, sizeof(seed));
if (ret != sizeof(seed))
return ret;
diff --git a/arch/s390/kernel/early.c b/arch/s390/kernel/early.c
index 2374c5b..0c19686 100644
--- a/arch/s390/kernel/early.c
+++ b/arch/s390/kernel/early.c
@@ -363,6 +363,18 @@ static inline void save_vector_registers(void)
#endif
}
+static int __init topology_setup(char *str)
+{
+ bool enabled;
+ int rc;
+
+ rc = kstrtobool(str, &enabled);
+ if (!rc && !enabled)
+ S390_lowcore.machine_flags &= ~MACHINE_HAS_TOPOLOGY;
+ return rc;
+}
+early_param("topology", topology_setup);
+
static int __init disable_vector_extension(char *str)
{
S390_lowcore.machine_flags &= ~MACHINE_FLAG_VX;
diff --git a/arch/s390/kernel/topology.c b/arch/s390/kernel/topology.c
index 8705ee6..239f295 100644
--- a/arch/s390/kernel/topology.c
+++ b/arch/s390/kernel/topology.c
@@ -37,7 +37,6 @@ static void set_topology_timer(void);
static void topology_work_fn(struct work_struct *work);
static struct sysinfo_15_1_x *tl_info;
-static bool topology_enabled = true;
static DECLARE_WORK(topology_work, topology_work_fn);
/*
@@ -56,7 +55,7 @@ static cpumask_t cpu_group_map(struct mask_info *info, unsigned int cpu)
cpumask_t mask;
cpumask_copy(&mask, cpumask_of(cpu));
- if (!topology_enabled || !MACHINE_HAS_TOPOLOGY)
+ if (!MACHINE_HAS_TOPOLOGY)
return mask;
for (; info; info = info->next) {
if (cpumask_test_cpu(cpu, &info->mask))
@@ -71,7 +70,7 @@ static cpumask_t cpu_thread_map(unsigned int cpu)
int i;
cpumask_copy(&mask, cpumask_of(cpu));
- if (!topology_enabled || !MACHINE_HAS_TOPOLOGY)
+ if (!MACHINE_HAS_TOPOLOGY)
return mask;
cpu -= cpu % (smp_cpu_mtid + 1);
for (i = 0; i <= smp_cpu_mtid; i++)
@@ -413,12 +412,6 @@ static const struct cpumask *cpu_drawer_mask(int cpu)
return &per_cpu(cpu_topology, cpu).drawer_mask;
}
-static int __init early_parse_topology(char *p)
-{
- return kstrtobool(p, &topology_enabled);
-}
-early_param("topology", early_parse_topology);
-
static struct sched_domain_topology_level s390_topology[] = {
{ cpu_thread_mask, cpu_smt_flags, SD_INIT_NAME(SMT) },
{ cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
diff --git a/arch/sh/kernel/cpu/sh3/setup-sh770x.c b/arch/sh/kernel/cpu/sh3/setup-sh770x.c
index 538c10d..8dc315b 100644
--- a/arch/sh/kernel/cpu/sh3/setup-sh770x.c
+++ b/arch/sh/kernel/cpu/sh3/setup-sh770x.c
@@ -165,7 +165,6 @@ static struct plat_sci_port scif2_platform_data = {
.scscr = SCSCR_TE | SCSCR_RE,
.type = PORT_IRDA,
.ops = &sh770x_sci_port_ops,
- .regshift = 1,
};
static struct resource scif2_resources[] = {
diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index b27e48e..8b4152f 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -568,9 +568,6 @@
depends on COMPAT && SYSVIPC
default y
-config KEYS_COMPAT
- def_bool y if COMPAT && KEYS
-
endmenu
source "net/Kconfig"
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 3735222..64e9609 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -2733,10 +2733,6 @@
config SYSVIPC_COMPAT
def_bool y
depends on SYSVIPC
-
-config KEYS_COMPAT
- def_bool y
- depends on KEYS
endif
endmenu
diff --git a/arch/x86/crypto/sha1-mb/sha1_mb_mgr_flush_avx2.S b/arch/x86/crypto/sha1-mb/sha1_mb_mgr_flush_avx2.S
index 96df6a3..a2ae689 100644
--- a/arch/x86/crypto/sha1-mb/sha1_mb_mgr_flush_avx2.S
+++ b/arch/x86/crypto/sha1-mb/sha1_mb_mgr_flush_avx2.S
@@ -157,8 +157,8 @@
.endr
# Find min length
- vmovdqa _lens+0*16(state), %xmm0
- vmovdqa _lens+1*16(state), %xmm1
+ vmovdqu _lens+0*16(state), %xmm0
+ vmovdqu _lens+1*16(state), %xmm1
vpminud %xmm1, %xmm0, %xmm2 # xmm2 has {D,C,B,A}
vpalignr $8, %xmm2, %xmm3, %xmm3 # xmm3 has {x,x,D,C}
@@ -178,8 +178,8 @@
vpsubd %xmm2, %xmm0, %xmm0
vpsubd %xmm2, %xmm1, %xmm1
- vmovdqa %xmm0, _lens+0*16(state)
- vmovdqa %xmm1, _lens+1*16(state)
+ vmovdqu %xmm0, _lens+0*16(state)
+ vmovdqu %xmm1, _lens+1*16(state)
# "state" and "args" are the same address, arg1
# len is arg2
@@ -235,8 +235,8 @@
jc .return_null
# Find min length
- vmovdqa _lens(state), %xmm0
- vmovdqa _lens+1*16(state), %xmm1
+ vmovdqu _lens(state), %xmm0
+ vmovdqu _lens+1*16(state), %xmm1
vpminud %xmm1, %xmm0, %xmm2 # xmm2 has {D,C,B,A}
vpalignr $8, %xmm2, %xmm3, %xmm3 # xmm3 has {x,x,D,C}
diff --git a/arch/x86/crypto/sha256-mb/sha256_mb_mgr_flush_avx2.S b/arch/x86/crypto/sha256-mb/sha256_mb_mgr_flush_avx2.S
index a78a069..ec9bee6 100644
--- a/arch/x86/crypto/sha256-mb/sha256_mb_mgr_flush_avx2.S
+++ b/arch/x86/crypto/sha256-mb/sha256_mb_mgr_flush_avx2.S
@@ -155,8 +155,8 @@
.endr
# Find min length
- vmovdqa _lens+0*16(state), %xmm0
- vmovdqa _lens+1*16(state), %xmm1
+ vmovdqu _lens+0*16(state), %xmm0
+ vmovdqu _lens+1*16(state), %xmm1
vpminud %xmm1, %xmm0, %xmm2 # xmm2 has {D,C,B,A}
vpalignr $8, %xmm2, %xmm3, %xmm3 # xmm3 has {x,x,D,C}
@@ -176,8 +176,8 @@
vpsubd %xmm2, %xmm0, %xmm0
vpsubd %xmm2, %xmm1, %xmm1
- vmovdqa %xmm0, _lens+0*16(state)
- vmovdqa %xmm1, _lens+1*16(state)
+ vmovdqu %xmm0, _lens+0*16(state)
+ vmovdqu %xmm1, _lens+1*16(state)
# "state" and "args" are the same address, arg1
# len is arg2
@@ -234,8 +234,8 @@
jc .return_null
# Find min length
- vmovdqa _lens(state), %xmm0
- vmovdqa _lens+1*16(state), %xmm1
+ vmovdqu _lens(state), %xmm0
+ vmovdqu _lens+1*16(state), %xmm1
vpminud %xmm1, %xmm0, %xmm2 # xmm2 has {D,C,B,A}
vpalignr $8, %xmm2, %xmm3, %xmm3 # xmm3 has {x,x,D,C}
diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index a300aa1..dead0f3 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -68,6 +68,12 @@ static inline bool __chk_range_not_ok(unsigned long addr, unsigned long size, un
__chk_range_not_ok((unsigned long __force)(addr), size, limit); \
})
+#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
+# define WARN_ON_IN_IRQ() WARN_ON_ONCE(!in_task())
+#else
+# define WARN_ON_IN_IRQ()
+#endif
+
/**
* access_ok: - Checks if a user space pointer is valid
* @type: Type of access: %VERIFY_READ or %VERIFY_WRITE. Note that
@@ -88,8 +94,11 @@ static inline bool __chk_range_not_ok(unsigned long addr, unsigned long size, un
* checks that the pointer is in the user space range - after calling
* this function, memory access functions may still return -EFAULT.
*/
-#define access_ok(type, addr, size) \
- likely(!__range_not_ok(addr, size, user_addr_max()))
+#define access_ok(type, addr, size) \
+({ \
+ WARN_ON_IN_IRQ(); \
+ likely(!__range_not_ok(addr, size, user_addr_max())); \
+})
/*
* These are the main single-value transfer routines. They automatically
diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
index e2ead34..c6583ef 100644
--- a/arch/x86/kernel/apic/apic.c
+++ b/arch/x86/kernel/apic/apic.c
@@ -1863,14 +1863,14 @@ static void __smp_spurious_interrupt(u8 vector)
"should never happen.\n", vector, smp_processor_id());
}
-__visible void smp_spurious_interrupt(struct pt_regs *regs)
+__visible void __irq_entry smp_spurious_interrupt(struct pt_regs *regs)
{
entering_irq();
__smp_spurious_interrupt(~regs->orig_ax);
exiting_irq();
}
-__visible void smp_trace_spurious_interrupt(struct pt_regs *regs)
+__visible void __irq_entry smp_trace_spurious_interrupt(struct pt_regs *regs)
{
u8 vector = ~regs->orig_ax;
@@ -1921,14 +1921,14 @@ static void __smp_error_interrupt(struct pt_regs *regs)
}
-__visible void smp_error_interrupt(struct pt_regs *regs)
+__visible void __irq_entry smp_error_interrupt(struct pt_regs *regs)
{
entering_irq();
__smp_error_interrupt(regs);
exiting_irq();
}
-__visible void smp_trace_error_interrupt(struct pt_regs *regs)
+__visible void __irq_entry smp_trace_error_interrupt(struct pt_regs *regs)
{
entering_irq();
trace_error_apic_entry(ERROR_APIC_VECTOR);
diff --git a/arch/x86/kernel/apic/vector.c b/arch/x86/kernel/apic/vector.c
index 5d30c5e..f3557a1 100644
--- a/arch/x86/kernel/apic/vector.c
+++ b/arch/x86/kernel/apic/vector.c
@@ -559,7 +559,7 @@ void send_cleanup_vector(struct irq_cfg *cfg)
__send_cleanup_vector(data);
}
-asmlinkage __visible void smp_irq_move_cleanup_interrupt(void)
+asmlinkage __visible void __irq_entry smp_irq_move_cleanup_interrupt(void)
{
unsigned vector, me;
diff --git a/arch/x86/kernel/cpu/mcheck/mce-severity.c b/arch/x86/kernel/cpu/mcheck/mce-severity.c
index 631356c..f46071c 100644
--- a/arch/x86/kernel/cpu/mcheck/mce-severity.c
+++ b/arch/x86/kernel/cpu/mcheck/mce-severity.c
@@ -245,6 +245,9 @@ static int mce_severity_amd(struct mce *m, int tolerant, char **msg, bool is_exc
if (m->status & MCI_STATUS_UC) {
+ if (ctx == IN_KERNEL)
+ return MCE_PANIC_SEVERITY;
+
/*
* On older systems where overflow_recov flag is not present, we
* should simply panic if an error overflow occurs. If
@@ -255,10 +258,6 @@ static int mce_severity_amd(struct mce *m, int tolerant, char **msg, bool is_exc
if (mce_flags.smca)
return mce_severity_amd_smca(m, ctx);
- /* software can try to contain */
- if (!(m->mcgstatus & MCG_STATUS_RIPV) && (ctx == IN_KERNEL))
- return MCE_PANIC_SEVERITY;
-
/* kill current process */
return MCE_AR_SEVERITY;
} else {
diff --git a/arch/x86/kernel/cpu/mcheck/mce_amd.c b/arch/x86/kernel/cpu/mcheck/mce_amd.c
index a5b47c1..39526e1 100644
--- a/arch/x86/kernel/cpu/mcheck/mce_amd.c
+++ b/arch/x86/kernel/cpu/mcheck/mce_amd.c
@@ -593,14 +593,14 @@ static inline void __smp_deferred_error_interrupt(void)
deferred_error_int_vector();
}
-asmlinkage __visible void smp_deferred_error_interrupt(void)
+asmlinkage __visible void __irq_entry smp_deferred_error_interrupt(void)
{
entering_irq();
__smp_deferred_error_interrupt();
exiting_ack_irq();
}
-asmlinkage __visible void smp_trace_deferred_error_interrupt(void)
+asmlinkage __visible void __irq_entry smp_trace_deferred_error_interrupt(void)
{
entering_irq();
trace_deferred_error_apic_entry(DEFERRED_ERROR_VECTOR);
diff --git a/arch/x86/kernel/cpu/mcheck/therm_throt.c b/arch/x86/kernel/cpu/mcheck/therm_throt.c
index 6b9dc4d..c460c91 100644
--- a/arch/x86/kernel/cpu/mcheck/therm_throt.c
+++ b/arch/x86/kernel/cpu/mcheck/therm_throt.c
@@ -431,14 +431,16 @@ static inline void __smp_thermal_interrupt(void)
smp_thermal_vector();
}
-asmlinkage __visible void smp_thermal_interrupt(struct pt_regs *regs)
+asmlinkage __visible void __irq_entry
+smp_thermal_interrupt(struct pt_regs *regs)
{
entering_irq();
__smp_thermal_interrupt();
exiting_ack_irq();
}
-asmlinkage __visible void smp_trace_thermal_interrupt(struct pt_regs *regs)
+asmlinkage __visible void __irq_entry
+smp_trace_thermal_interrupt(struct pt_regs *regs)
{
entering_irq();
trace_thermal_apic_entry(THERMAL_APIC_VECTOR);
diff --git a/arch/x86/kernel/cpu/mcheck/threshold.c b/arch/x86/kernel/cpu/mcheck/threshold.c
index fcf9ae9..9760423 100644
--- a/arch/x86/kernel/cpu/mcheck/threshold.c
+++ b/arch/x86/kernel/cpu/mcheck/threshold.c
@@ -24,14 +24,14 @@ static inline void __smp_threshold_interrupt(void)
mce_threshold_vector();
}
-asmlinkage __visible void smp_threshold_interrupt(void)
+asmlinkage __visible void __irq_entry smp_threshold_interrupt(void)
{
entering_irq();
__smp_threshold_interrupt();
exiting_ack_irq();
}
-asmlinkage __visible void smp_trace_threshold_interrupt(void)
+asmlinkage __visible void __irq_entry smp_trace_threshold_interrupt(void)
{
entering_irq();
trace_threshold_apic_entry(THRESHOLD_APIC_VECTOR);
diff --git a/arch/x86/kernel/irq.c b/arch/x86/kernel/irq.c
index 9f669fd..8a7ad9f 100644
--- a/arch/x86/kernel/irq.c
+++ b/arch/x86/kernel/irq.c
@@ -265,7 +265,7 @@ void __smp_x86_platform_ipi(void)
x86_platform_ipi_callback();
}
-__visible void smp_x86_platform_ipi(struct pt_regs *regs)
+__visible void __irq_entry smp_x86_platform_ipi(struct pt_regs *regs)
{
struct pt_regs *old_regs = set_irq_regs(regs);
@@ -316,7 +316,7 @@ __visible void smp_kvm_posted_intr_wakeup_ipi(struct pt_regs *regs)
}
#endif
-__visible void smp_trace_x86_platform_ipi(struct pt_regs *regs)
+__visible void __irq_entry smp_trace_x86_platform_ipi(struct pt_regs *regs)
{
struct pt_regs *old_regs = set_irq_regs(regs);
diff --git a/arch/x86/kernel/irq_work.c b/arch/x86/kernel/irq_work.c
index 3512ba6..2754878 100644
--- a/arch/x86/kernel/irq_work.c
+++ b/arch/x86/kernel/irq_work.c
@@ -9,6 +9,7 @@
#include <linux/hardirq.h>
#include <asm/apic.h>
#include <asm/trace/irq_vectors.h>
+#include <linux/interrupt.h>
static inline void __smp_irq_work_interrupt(void)
{
@@ -16,14 +17,14 @@ static inline void __smp_irq_work_interrupt(void)
irq_work_run();
}
-__visible void smp_irq_work_interrupt(struct pt_regs *regs)
+__visible void __irq_entry smp_irq_work_interrupt(struct pt_regs *regs)
{
ipi_entering_ack_irq();
__smp_irq_work_interrupt();
exiting_irq();
}
-__visible void smp_trace_irq_work_interrupt(struct pt_regs *regs)
+__visible void __irq_entry smp_trace_irq_work_interrupt(struct pt_regs *regs)
{
ipi_entering_ack_irq();
trace_irq_work_entry(IRQ_WORK_VECTOR);
diff --git a/arch/x86/kernel/smp.c b/arch/x86/kernel/smp.c
index c00cb64..ca69967 100644
--- a/arch/x86/kernel/smp.c
+++ b/arch/x86/kernel/smp.c
@@ -259,7 +259,7 @@ static inline void __smp_reschedule_interrupt(void)
scheduler_ipi();
}
-__visible void smp_reschedule_interrupt(struct pt_regs *regs)
+__visible void __irq_entry smp_reschedule_interrupt(struct pt_regs *regs)
{
irq_enter();
ack_APIC_irq();
@@ -270,7 +270,7 @@ __visible void smp_reschedule_interrupt(struct pt_regs *regs)
*/
}
-__visible void smp_trace_reschedule_interrupt(struct pt_regs *regs)
+__visible void __irq_entry smp_trace_reschedule_interrupt(struct pt_regs *regs)
{
/*
* Need to call irq_enter() before calling the trace point.
@@ -294,14 +294,15 @@ static inline void __smp_call_function_interrupt(void)
inc_irq_stat(irq_call_count);
}
-__visible void smp_call_function_interrupt(struct pt_regs *regs)
+__visible void __irq_entry smp_call_function_interrupt(struct pt_regs *regs)
{
ipi_entering_ack_irq();
__smp_call_function_interrupt();
exiting_irq();
}
-__visible void smp_trace_call_function_interrupt(struct pt_regs *regs)
+__visible void __irq_entry
+smp_trace_call_function_interrupt(struct pt_regs *regs)
{
ipi_entering_ack_irq();
trace_call_function_entry(CALL_FUNCTION_VECTOR);
@@ -316,14 +317,16 @@ static inline void __smp_call_function_single_interrupt(void)
inc_irq_stat(irq_call_count);
}
-__visible void smp_call_function_single_interrupt(struct pt_regs *regs)
+__visible void __irq_entry
+smp_call_function_single_interrupt(struct pt_regs *regs)
{
ipi_entering_ack_irq();
__smp_call_function_single_interrupt();
exiting_irq();
}
-__visible void smp_trace_call_function_single_interrupt(struct pt_regs *regs)
+__visible void __irq_entry
+smp_trace_call_function_single_interrupt(struct pt_regs *regs)
{
ipi_entering_ack_irq();
trace_call_function_single_entry(CALL_FUNCTION_SINGLE_VECTOR);
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 36171bc..9fe7b9e 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -181,6 +181,12 @@ static void smp_callin(void)
smp_store_cpu_info(cpuid);
/*
+ * The topology information must be up to date before
+ * calibrate_delay() and notify_cpu_starting().
+ */
+ set_cpu_sibling_map(raw_smp_processor_id());
+
+ /*
* Get our bogomips.
* Update loops_per_jiffy in cpu_data. Previous call to
* smp_store_cpu_info() stored a value that is close but not as
@@ -190,11 +196,6 @@ static void smp_callin(void)
cpu_data(cpuid).loops_per_jiffy = loops_per_jiffy;
pr_debug("Stack at about %p\n", &cpuid);
- /*
- * This must be done before setting cpu_online_mask
- * or calling notify_cpu_starting.
- */
- set_cpu_sibling_map(raw_smp_processor_id());
wmb();
notify_cpu_starting(cpuid);
diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
index 6e57edf..44bf5cf 100644
--- a/arch/x86/kernel/tsc.c
+++ b/arch/x86/kernel/tsc.c
@@ -1382,12 +1382,10 @@ void __init tsc_init(void)
unsigned long calibrate_delay_is_known(void)
{
int sibling, cpu = smp_processor_id();
- struct cpumask *mask = topology_core_cpumask(cpu);
+ int constant_tsc = cpu_has(&cpu_data(cpu), X86_FEATURE_CONSTANT_TSC);
+ const struct cpumask *mask = topology_core_cpumask(cpu);
- if (!tsc_disabled && !cpu_has(&cpu_data(cpu), X86_FEATURE_CONSTANT_TSC))
- return 0;
-
- if (!mask)
+ if (tsc_disabled || !constant_tsc || !mask)
return 0;
sibling = cpumask_any_but(mask, cpu);
diff --git a/arch/x86/oprofile/op_model_ppro.c b/arch/x86/oprofile/op_model_ppro.c
index 350f709..7913b69 100644
--- a/arch/x86/oprofile/op_model_ppro.c
+++ b/arch/x86/oprofile/op_model_ppro.c
@@ -212,8 +212,8 @@ static void arch_perfmon_setup_counters(void)
eax.full = cpuid_eax(0xa);
/* Workaround for BIOS bugs in 6/15. Taken from perfmon2 */
- if (eax.split.version_id == 0 && __this_cpu_read(cpu_info.x86) == 6 &&
- __this_cpu_read(cpu_info.x86_model) == 15) {
+ if (eax.split.version_id == 0 && boot_cpu_data.x86 == 6 &&
+ boot_cpu_data.x86_model == 15) {
eax.split.version_id = 2;
eax.split.num_counters = 2;
eax.split.bit_width = 40;
diff --git a/block/bio.c b/block/bio.c
index 07f287b..e14a897 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -589,7 +589,7 @@ void __bio_clone_fast(struct bio *bio, struct bio *bio_src)
bio->bi_opf = bio_src->bi_opf;
bio->bi_iter = bio_src->bi_iter;
bio->bi_io_vec = bio_src->bi_io_vec;
-
+ bio->bi_dio_inode = bio_src->bi_dio_inode;
bio_clone_blkcg_association(bio, bio_src);
}
EXPORT_SYMBOL(__bio_clone_fast);
diff --git a/block/blk-merge.c b/block/blk-merge.c
index abde370..0272fac 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -6,7 +6,7 @@
#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/scatterlist.h>
-
+#include <linux/pfk.h>
#include <trace/events/block.h>
#include "blk.h"
@@ -725,6 +725,11 @@ static void blk_account_io_merge(struct request *req)
}
}
+static bool crypto_not_mergeable(const struct bio *bio, const struct bio *nxt)
+{
+ return (!pfk_allow_merge_bio(bio, nxt));
+}
+
/*
* Has to be called with the request spinlock acquired
*/
@@ -752,6 +757,8 @@ static int attempt_merge(struct request_queue *q, struct request *req,
!blk_write_same_mergeable(req->bio, next->bio))
return 0;
+ if (crypto_not_mergeable(req->bio, next->bio))
+ return 0;
/*
* If we are allowed to merge, then append bio list
* from next to rq and release next. merge_requests_fn
@@ -862,6 +869,8 @@ bool blk_rq_merge_ok(struct request *rq, struct bio *bio)
!blk_write_same_mergeable(rq->bio, bio))
return false;
+ if (crypto_not_mergeable(rq->bio, bio))
+ return false;
return true;
}
diff --git a/crypto/Kconfig b/crypto/Kconfig
index fa98ad7..84d7148 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -360,7 +360,6 @@
select CRYPTO_BLKCIPHER
select CRYPTO_MANAGER
select CRYPTO_GF128MUL
- select CRYPTO_ECB
help
XTS: IEEE1619/D16 narrow block cipher use with aes-xts-plain,
key size 256, 384 or 512 bits. This implementation currently
diff --git a/crypto/ccm.c b/crypto/ccm.c
index 006d857..b3ace63 100644
--- a/crypto/ccm.c
+++ b/crypto/ccm.c
@@ -413,7 +413,7 @@ static int crypto_ccm_decrypt(struct aead_request *req)
unsigned int cryptlen = req->cryptlen;
u8 *authtag = pctx->auth_tag;
u8 *odata = pctx->odata;
- u8 *iv = req->iv;
+ u8 *iv = pctx->idata;
int err;
cryptlen -= authsize;
@@ -429,6 +429,8 @@ static int crypto_ccm_decrypt(struct aead_request *req)
if (req->src != req->dst)
dst = pctx->dst;
+ memcpy(iv, req->iv, 16);
+
skcipher_request_set_tfm(skreq, ctx->ctr);
skcipher_request_set_callback(skreq, pctx->flags,
crypto_ccm_decrypt_done, req);
diff --git a/crypto/dh.c b/crypto/dh.c
index 9d19360..99e20fc 100644
--- a/crypto/dh.c
+++ b/crypto/dh.c
@@ -21,19 +21,12 @@ struct dh_ctx {
MPI xa;
};
-static inline void dh_clear_params(struct dh_ctx *ctx)
+static void dh_clear_ctx(struct dh_ctx *ctx)
{
mpi_free(ctx->p);
mpi_free(ctx->g);
- ctx->p = NULL;
- ctx->g = NULL;
-}
-
-static void dh_free_ctx(struct dh_ctx *ctx)
-{
- dh_clear_params(ctx);
mpi_free(ctx->xa);
- ctx->xa = NULL;
+ memset(ctx, 0, sizeof(*ctx));
}
/*
@@ -71,10 +64,8 @@ static int dh_set_params(struct dh_ctx *ctx, struct dh *params)
return -EINVAL;
ctx->g = mpi_read_raw_data(params->g, params->g_size);
- if (!ctx->g) {
- mpi_free(ctx->p);
+ if (!ctx->g)
return -EINVAL;
- }
return 0;
}
@@ -84,19 +75,24 @@ static int dh_set_secret(struct crypto_kpp *tfm, void *buf, unsigned int len)
struct dh_ctx *ctx = dh_get_ctx(tfm);
struct dh params;
+ /* Free the old MPI key if any */
+ dh_clear_ctx(ctx);
+
	if (crypto_dh_decode_key(buf, len, &params) < 0)
- return -EINVAL;
+ goto err_clear_ctx;
	if (dh_set_params(ctx, &params) < 0)
- return -EINVAL;
+ goto err_clear_ctx;
ctx->xa = mpi_read_raw_data(params.key, params.key_size);
- if (!ctx->xa) {
- dh_clear_params(ctx);
- return -EINVAL;
- }
+ if (!ctx->xa)
+ goto err_clear_ctx;
return 0;
+
+err_clear_ctx:
+ dh_clear_ctx(ctx);
+ return -EINVAL;
}
static int dh_compute_value(struct kpp_request *req)
@@ -154,7 +150,7 @@ static void dh_exit_tfm(struct crypto_kpp *tfm)
{
struct dh_ctx *ctx = dh_get_ctx(tfm);
- dh_free_ctx(ctx);
+ dh_clear_ctx(ctx);
}
static struct kpp_alg dh = {
diff --git a/crypto/dh_helper.c b/crypto/dh_helper.c
index 02db76b..1453990 100644
--- a/crypto/dh_helper.c
+++ b/crypto/dh_helper.c
@@ -83,6 +83,14 @@ int crypto_dh_decode_key(const char *buf, unsigned int len, struct dh *params)
if (secret.len != crypto_dh_key_len(params))
return -EINVAL;
+ /*
+ * Don't permit the buffer for 'key' or 'g' to be larger than 'p', since
+ * some drivers assume otherwise.
+ */
+ if (params->key_size > params->p_size ||
+ params->g_size > params->p_size)
+ return -EINVAL;
+
/* Don't allocate memory. Set pointers to data within
* the given buffer
*/
@@ -90,6 +98,14 @@ int crypto_dh_decode_key(const char *buf, unsigned int len, struct dh *params)
params->p = (void *)(ptr + params->key_size);
params->g = (void *)(ptr + params->key_size + params->p_size);
+ /*
+ * Don't permit 'p' to be 0. It's not a prime number, and it's subject
+ * to corner cases such as 'mod 0' being undefined or
+ * crypto_kpp_maxsize() returning 0.
+ */
+ if (memchr_inv(params->p, 0, params->p_size) == NULL)
+ return -EINVAL;
+
return 0;
}
EXPORT_SYMBOL_GPL(crypto_dh_decode_key);
diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index 256a1d5..1ef2f68 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -465,9 +465,8 @@ struct binder_ref {
};
enum binder_deferred_state {
- BINDER_DEFERRED_PUT_FILES = 0x01,
- BINDER_DEFERRED_FLUSH = 0x02,
- BINDER_DEFERRED_RELEASE = 0x04,
+ BINDER_DEFERRED_FLUSH = 0x01,
+ BINDER_DEFERRED_RELEASE = 0x02,
};
/**
@@ -504,8 +503,6 @@ struct binder_priority {
* (invariant after initialized)
* @tsk task_struct for group_leader of process
* (invariant after initialized)
- * @files files_struct for process
- * (invariant after initialized)
* @deferred_work_node: element for binder_deferred_list
* (protected by binder_deferred_lock)
* @deferred_work: bitmap of deferred work to perform
@@ -552,7 +549,6 @@ struct binder_proc {
struct list_head waiting_threads;
int pid;
struct task_struct *tsk;
- struct files_struct *files;
struct hlist_node deferred_work_node;
int deferred_work;
bool is_dead;
@@ -600,6 +596,8 @@ enum {
* (protected by @proc->inner_lock)
* @todo: list of work to do for this thread
* (protected by @proc->inner_lock)
+ * @process_todo: whether work in @todo should be processed
+ * (protected by @proc->inner_lock)
* @return_error: transaction errors reported by this thread
* (only accessed by this thread)
* @reply_error: transaction errors reported by target thread
@@ -626,6 +624,7 @@ struct binder_thread {
bool looper_need_return; /* can be written by other thread */
struct binder_transaction *transaction_stack;
struct list_head todo;
+ bool process_todo;
struct binder_error return_error;
struct binder_error reply_error;
wait_queue_head_t wait;
@@ -813,6 +812,16 @@ static bool binder_worklist_empty(struct binder_proc *proc,
return ret;
}
+/**
+ * binder_enqueue_work_ilocked() - Add an item to the work list
+ * @work: struct binder_work to add to list
+ * @target_list: list to add work to
+ *
+ * Adds the work to the specified list. Asserts that work
+ * is not already on a list.
+ *
+ * Requires the proc->inner_lock to be held.
+ */
static void
binder_enqueue_work_ilocked(struct binder_work *work,
struct list_head *target_list)
@@ -823,22 +832,56 @@ binder_enqueue_work_ilocked(struct binder_work *work,
}
/**
- * binder_enqueue_work() - Add an item to the work list
- * @proc: binder_proc associated with list
+ * binder_enqueue_deferred_thread_work_ilocked() - Add deferred thread work
+ * @thread: thread to queue work to
* @work: struct binder_work to add to list
- * @target_list: list to add work to
*
- * Adds the work to the specified list. Asserts that work
- * is not already on a list.
+ * Adds the work to the todo list of the thread. Doesn't set the process_todo
+ * flag, which means that (if it wasn't already set) the thread will go to
+ * sleep without handling this work when it calls read.
+ *
+ * Requires the proc->inner_lock to be held.
*/
static void
-binder_enqueue_work(struct binder_proc *proc,
- struct binder_work *work,
- struct list_head *target_list)
+binder_enqueue_deferred_thread_work_ilocked(struct binder_thread *thread,
+ struct binder_work *work)
{
- binder_inner_proc_lock(proc);
- binder_enqueue_work_ilocked(work, target_list);
- binder_inner_proc_unlock(proc);
+ binder_enqueue_work_ilocked(work, &thread->todo);
+}
+
+/**
+ * binder_enqueue_thread_work_ilocked() - Add an item to the thread work list
+ * @thread: thread to queue work to
+ * @work: struct binder_work to add to list
+ *
+ * Adds the work to the todo list of the thread, and enables processing
+ * of the todo queue.
+ *
+ * Requires the proc->inner_lock to be held.
+ */
+static void
+binder_enqueue_thread_work_ilocked(struct binder_thread *thread,
+ struct binder_work *work)
+{
+ binder_enqueue_work_ilocked(work, &thread->todo);
+ thread->process_todo = true;
+}
+
+/**
+ * binder_enqueue_thread_work() - Add an item to the thread work list
+ * @thread: thread to queue work to
+ * @work: struct binder_work to add to list
+ *
+ * Adds the work to the todo list of the thread, and enables processing
+ * of the todo queue.
+ */
+static void
+binder_enqueue_thread_work(struct binder_thread *thread,
+ struct binder_work *work)
+{
+ binder_inner_proc_lock(thread->proc);
+ binder_enqueue_thread_work_ilocked(thread, work);
+ binder_inner_proc_unlock(thread->proc);
}
static void
@@ -901,22 +944,34 @@ static void binder_free_thread(struct binder_thread *thread);
static void binder_free_proc(struct binder_proc *proc);
static void binder_inc_node_tmpref_ilocked(struct binder_node *node);
+struct files_struct *binder_get_files_struct(struct binder_proc *proc)
+{
+ return get_files_struct(proc->tsk);
+}
+
static int task_get_unused_fd_flags(struct binder_proc *proc, int flags)
{
- struct files_struct *files = proc->files;
+ struct files_struct *files;
unsigned long rlim_cur;
unsigned long irqs;
+ int ret;
+ files = binder_get_files_struct(proc);
if (files == NULL)
return -ESRCH;
- if (!lock_task_sighand(proc->tsk, &irqs))
- return -EMFILE;
+ if (!lock_task_sighand(proc->tsk, &irqs)) {
+ ret = -EMFILE;
+ goto err;
+ }
rlim_cur = task_rlimit(proc->tsk, RLIMIT_NOFILE);
unlock_task_sighand(proc->tsk, &irqs);
- return __alloc_fd(files, 0, rlim_cur, flags);
+ ret = __alloc_fd(files, 0, rlim_cur, flags);
+err:
+ put_files_struct(files);
+ return ret;
}
/*
@@ -925,8 +980,12 @@ static int task_get_unused_fd_flags(struct binder_proc *proc, int flags)
static void task_fd_install(
struct binder_proc *proc, unsigned int fd, struct file *file)
{
- if (proc->files)
- __fd_install(proc->files, fd, file);
+ struct files_struct *files = binder_get_files_struct(proc);
+
+ if (files) {
+ __fd_install(files, fd, file);
+ put_files_struct(files);
+ }
}
/*
@@ -934,18 +993,20 @@ static void task_fd_install(
*/
static long task_close_fd(struct binder_proc *proc, unsigned int fd)
{
+ struct files_struct *files = binder_get_files_struct(proc);
int retval;
- if (proc->files == NULL)
+ if (files == NULL)
return -ESRCH;
- retval = __close_fd(proc->files, fd);
+ retval = __close_fd(files, fd);
/* can't restart close syscall because file table entry was cleared */
if (unlikely(retval == -ERESTARTSYS ||
retval == -ERESTARTNOINTR ||
retval == -ERESTARTNOHAND ||
retval == -ERESTART_RESTARTBLOCK))
retval = -EINTR;
+ put_files_struct(files);
return retval;
}
@@ -953,7 +1014,7 @@ static long task_close_fd(struct binder_proc *proc, unsigned int fd)
static bool binder_has_work_ilocked(struct binder_thread *thread,
bool do_proc_work)
{
- return !binder_worklist_empty_ilocked(&thread->todo) ||
+ return thread->process_todo ||
thread->looper_need_return ||
(do_proc_work &&
!binder_worklist_empty_ilocked(&thread->proc->todo));
@@ -1188,7 +1249,7 @@ static void binder_transaction_priority(struct task_struct *task,
struct binder_priority node_prio,
bool inherit_rt)
{
- struct binder_priority desired_prio;
+ struct binder_priority desired_prio = t->priority;
if (t->set_priority_called)
return;
@@ -1200,9 +1261,6 @@ static void binder_transaction_priority(struct task_struct *task,
if (!inherit_rt && is_rt_policy(desired_prio.sched_policy)) {
desired_prio.prio = NICE_TO_PRIO(0);
desired_prio.sched_policy = SCHED_NORMAL;
- } else {
- desired_prio.prio = t->priority.prio;
- desired_prio.sched_policy = t->priority.sched_policy;
}
if (node_prio.prio < t->priority.prio ||
@@ -1305,7 +1363,7 @@ static struct binder_node *binder_init_node_ilocked(
node->cookie = cookie;
node->work.type = BINDER_WORK_NODE;
priority = flags & FLAT_BINDER_FLAG_PRIORITY_MASK;
- node->sched_policy = (flags & FLAT_BINDER_FLAG_PRIORITY_MASK) >>
+ node->sched_policy = (flags & FLAT_BINDER_FLAG_SCHED_POLICY_MASK) >>
FLAT_BINDER_FLAG_SCHED_POLICY_SHIFT;
node->min_priority = to_kernel_prio(node->sched_policy, priority);
node->accept_fds = !!(flags & FLAT_BINDER_FLAG_ACCEPTS_FDS);
@@ -1373,6 +1431,17 @@ static int binder_inc_node_nilocked(struct binder_node *node, int strong,
node->local_strong_refs++;
if (!node->has_strong_ref && target_list) {
binder_dequeue_work_ilocked(&node->work);
+ /*
+ * Note: this function is the only place where we queue
+ * directly to a thread->todo without using the
+ * corresponding binder_enqueue_thread_work() helper
+ * functions; in this case it's ok to not set the
+ * process_todo flag, since we know this node work will
+ * always be followed by other work that starts queue
+ * processing: in case of synchronous transactions, a
+ * BR_REPLY or BR_ERROR; in case of oneway
+ * transactions, a BR_TRANSACTION_COMPLETE.
+ */
binder_enqueue_work_ilocked(&node->work, target_list);
}
} else {
@@ -1384,6 +1453,9 @@ static int binder_inc_node_nilocked(struct binder_node *node, int strong,
node->debug_id);
return -EINVAL;
}
+ /*
+ * See comment above
+ */
binder_enqueue_work_ilocked(&node->work, target_list);
}
}
@@ -2073,9 +2145,9 @@ static void binder_send_failed_reply(struct binder_transaction *t,
binder_pop_transaction_ilocked(target_thread, t);
if (target_thread->reply_error.cmd == BR_OK) {
target_thread->reply_error.cmd = error_code;
- binder_enqueue_work_ilocked(
- &target_thread->reply_error.work,
- &target_thread->todo);
+ binder_enqueue_thread_work_ilocked(
+ target_thread,
+ &target_thread->reply_error.work);
wake_up_interruptible(&target_thread->wait);
} else {
WARN(1, "Unexpected reply error: %u\n",
@@ -2395,7 +2467,7 @@ static void binder_transaction_buffer_release(struct binder_proc *proc,
debug_id, (u64)fda->num_fds);
continue;
}
- fd_array = (u32 *)(parent_buffer + fda->parent_offset);
+ fd_array = (u32 *)(parent_buffer + (uintptr_t)fda->parent_offset);
for (fd_index = 0; fd_index < fda->num_fds; fd_index++)
task_close_fd(proc, fd_array[fd_index]);
} break;
@@ -2619,7 +2691,7 @@ static int binder_translate_fd_array(struct binder_fd_array_object *fda,
*/
parent_buffer = parent->buffer -
binder_alloc_get_user_buffer_offset(&target_proc->alloc);
- fd_array = (u32 *)(parent_buffer + fda->parent_offset);
+ fd_array = (u32 *)(parent_buffer + (uintptr_t)fda->parent_offset);
if (!IS_ALIGNED((unsigned long)fd_array, sizeof(u32))) {
binder_user_error("%d:%d parent offset not aligned correctly.\n",
proc->pid, thread->pid);
@@ -2685,7 +2757,7 @@ static int binder_fixup_parent(struct binder_transaction *t,
proc->pid, thread->pid);
return -EINVAL;
}
- parent_buffer = (u8 *)(parent->buffer -
+ parent_buffer = (u8 *)((uintptr_t)parent->buffer -
binder_alloc_get_user_buffer_offset(
&target_proc->alloc));
*(binder_uintptr_t *)(parent_buffer + bp->parent_offset) = bp->buffer;
@@ -2714,11 +2786,10 @@ static bool binder_proc_transaction(struct binder_transaction *t,
struct binder_proc *proc,
struct binder_thread *thread)
{
- struct list_head *target_list = NULL;
struct binder_node *node = t->buffer->target_node;
struct binder_priority node_prio;
bool oneway = !!(t->flags & TF_ONE_WAY);
- bool wakeup = true;
+ bool pending_async = false;
BUG_ON(!node);
binder_node_lock(node);
@@ -2728,8 +2799,7 @@ static bool binder_proc_transaction(struct binder_transaction *t,
if (oneway) {
BUG_ON(thread);
if (node->has_async_transaction) {
- target_list = &node->async_todo;
- wakeup = false;
+ pending_async = true;
} else {
node->has_async_transaction = 1;
}
@@ -2743,22 +2813,20 @@ static bool binder_proc_transaction(struct binder_transaction *t,
return false;
}
- if (!thread && !target_list)
+ if (!thread && !pending_async)
thread = binder_select_thread_ilocked(proc);
if (thread) {
- target_list = &thread->todo;
binder_transaction_priority(thread->task, t, node_prio,
node->inherit_rt);
- } else if (!target_list) {
- target_list = &proc->todo;
+ binder_enqueue_thread_work_ilocked(thread, &t->work);
+ } else if (!pending_async) {
+ binder_enqueue_work_ilocked(&t->work, &proc->todo);
} else {
- BUG_ON(target_list != &node->async_todo);
+ binder_enqueue_work_ilocked(&t->work, &node->async_todo);
}
- binder_enqueue_work_ilocked(&t->work, target_list);
-
- if (wakeup)
+ if (!pending_async)
binder_wakeup_thread_ilocked(proc, thread, !oneway /* sync */);
binder_inner_proc_unlock(proc);
@@ -3260,10 +3328,10 @@ static void binder_transaction(struct binder_proc *proc,
}
}
tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
- binder_enqueue_work(proc, tcomplete, &thread->todo);
t->work.type = BINDER_WORK_TRANSACTION;
if (reply) {
+ binder_enqueue_thread_work(thread, tcomplete);
binder_inner_proc_lock(target_proc);
if (target_thread->is_dead) {
binder_inner_proc_unlock(target_proc);
@@ -3271,7 +3339,7 @@ static void binder_transaction(struct binder_proc *proc,
}
BUG_ON(t->buffer->async_transaction != 0);
binder_pop_transaction_ilocked(target_thread, in_reply_to);
- binder_enqueue_work_ilocked(&t->work, &target_thread->todo);
+ binder_enqueue_thread_work_ilocked(target_thread, &t->work);
binder_inner_proc_unlock(target_proc);
wake_up_interruptible_sync(&target_thread->wait);
binder_restore_priority(current, in_reply_to->saved_priority);
@@ -3279,6 +3347,14 @@ static void binder_transaction(struct binder_proc *proc,
} else if (!(t->flags & TF_ONE_WAY)) {
BUG_ON(t->buffer->async_transaction != 0);
binder_inner_proc_lock(proc);
+ /*
+ * Defer the TRANSACTION_COMPLETE, so we don't return to
+ * userspace immediately; this allows the target process to
+ * immediately start processing this transaction, reducing
+ * latency. We will then return the TRANSACTION_COMPLETE when
+ * the target replies (or there is an error).
+ */
+ binder_enqueue_deferred_thread_work_ilocked(thread, tcomplete);
t->need_reply = 1;
t->from_parent = thread->transaction_stack;
thread->transaction_stack = t;
@@ -3292,6 +3368,7 @@ static void binder_transaction(struct binder_proc *proc,
} else {
BUG_ON(target_node == NULL);
BUG_ON(t->buffer->async_transaction != 1);
+ binder_enqueue_thread_work(thread, tcomplete);
if (!binder_proc_transaction(t, target_proc, NULL))
goto err_dead_proc_or_thread;
}
@@ -3371,15 +3448,11 @@ static void binder_transaction(struct binder_proc *proc,
if (in_reply_to) {
binder_restore_priority(current, in_reply_to->saved_priority);
thread->return_error.cmd = BR_TRANSACTION_COMPLETE;
- binder_enqueue_work(thread->proc,
- &thread->return_error.work,
- &thread->todo);
+ binder_enqueue_thread_work(thread, &thread->return_error.work);
binder_send_failed_reply(in_reply_to, return_error);
} else {
thread->return_error.cmd = return_error;
- binder_enqueue_work(thread->proc,
- &thread->return_error.work,
- &thread->todo);
+ binder_enqueue_thread_work(thread, &thread->return_error.work);
}
}
@@ -3683,10 +3756,9 @@ static int binder_thread_write(struct binder_proc *proc,
WARN_ON(thread->return_error.cmd !=
BR_OK);
thread->return_error.cmd = BR_ERROR;
- binder_enqueue_work(
- thread->proc,
- &thread->return_error.work,
- &thread->todo);
+ binder_enqueue_thread_work(
+ thread,
+ &thread->return_error.work);
binder_debug(
BINDER_DEBUG_FAILED_TRANSACTION,
"%d:%d BC_REQUEST_DEATH_NOTIFICATION failed\n",
@@ -3766,9 +3838,9 @@ static int binder_thread_write(struct binder_proc *proc,
if (thread->looper &
(BINDER_LOOPER_STATE_REGISTERED |
BINDER_LOOPER_STATE_ENTERED))
- binder_enqueue_work_ilocked(
- &death->work,
- &thread->todo);
+ binder_enqueue_thread_work_ilocked(
+ thread,
+ &death->work);
else {
binder_enqueue_work_ilocked(
&death->work,
@@ -3823,8 +3895,8 @@ static int binder_thread_write(struct binder_proc *proc,
if (thread->looper &
(BINDER_LOOPER_STATE_REGISTERED |
BINDER_LOOPER_STATE_ENTERED))
- binder_enqueue_work_ilocked(
- &death->work, &thread->todo);
+ binder_enqueue_thread_work_ilocked(
+ thread, &death->work);
else {
binder_enqueue_work_ilocked(
&death->work,
@@ -3998,6 +4070,8 @@ static int binder_thread_read(struct binder_proc *proc,
break;
}
w = binder_dequeue_work_head_ilocked(list);
+ if (binder_worklist_empty_ilocked(&thread->todo))
+ thread->process_todo = false;
switch (w->type) {
case BINDER_WORK_TRANSACTION: {
@@ -4757,7 +4831,6 @@ static void binder_vma_close(struct vm_area_struct *vma)
(vma->vm_end - vma->vm_start) / SZ_1K, vma->vm_flags,
(unsigned long)pgprot_val(vma->vm_page_prot));
binder_alloc_vma_close(&proc->alloc);
- binder_defer_work(proc, BINDER_DEFERRED_PUT_FILES);
}
static int binder_vm_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
@@ -4799,10 +4872,8 @@ static int binder_mmap(struct file *filp, struct vm_area_struct *vma)
vma->vm_private_data = proc;
ret = binder_alloc_mmap_handler(&proc->alloc, vma);
- if (ret)
- return ret;
- proc->files = get_files_struct(current);
- return 0;
+
+ return ret;
err_bad_arg:
pr_err("binder_mmap: %d %lx-%lx %s failed %d\n",
@@ -4981,8 +5052,6 @@ static void binder_deferred_release(struct binder_proc *proc)
struct rb_node *n;
int threads, nodes, incoming_refs, outgoing_refs, active_transactions;
- BUG_ON(proc->files);
-
mutex_lock(&binder_procs_lock);
hlist_del(&proc->proc_node);
mutex_unlock(&binder_procs_lock);
@@ -5064,8 +5133,6 @@ static void binder_deferred_release(struct binder_proc *proc)
static void binder_deferred_func(struct work_struct *work)
{
struct binder_proc *proc;
- struct files_struct *files;
-
int defer;
do {
@@ -5082,21 +5149,11 @@ static void binder_deferred_func(struct work_struct *work)
}
mutex_unlock(&binder_deferred_lock);
- files = NULL;
- if (defer & BINDER_DEFERRED_PUT_FILES) {
- files = proc->files;
- if (files)
- proc->files = NULL;
- }
-
if (defer & BINDER_DEFERRED_FLUSH)
binder_deferred_flush(proc);
if (defer & BINDER_DEFERRED_RELEASE)
binder_deferred_release(proc); /* frees proc */
-
- if (files)
- put_files_struct(files);
} while (proc);
}
static DECLARE_WORK(binder_deferred_work, binder_deferred_func);
diff --git a/drivers/ata/Kconfig b/drivers/ata/Kconfig
index 2c8be74..5d16fc4 100644
--- a/drivers/ata/Kconfig
+++ b/drivers/ata/Kconfig
@@ -289,6 +289,7 @@
config ATA_BMDMA
bool "ATA BMDMA support"
+ depends on HAS_DMA
default y
help
This option adds support for SFF ATA controllers with BMDMA
@@ -344,6 +345,7 @@
config SATA_HIGHBANK
tristate "Calxeda Highbank SATA support"
+ depends on HAS_DMA
depends on ARCH_HIGHBANK || COMPILE_TEST
help
This option enables support for the Calxeda Highbank SoC's
@@ -353,6 +355,7 @@
config SATA_MV
tristate "Marvell SATA support"
+ depends on HAS_DMA
depends on PCI || ARCH_DOVE || ARCH_MV78XX0 || \
ARCH_MVEBU || ARCH_ORION5X || COMPILE_TEST
select GENERIC_PHY
diff --git a/drivers/base/power/opp/of.c b/drivers/base/power/opp/of.c
index 5552211..b52c617 100644
--- a/drivers/base/power/opp/of.c
+++ b/drivers/base/power/opp/of.c
@@ -386,7 +386,7 @@ static int _of_add_opp_table_v1(struct device *dev)
{
const struct property *prop;
const __be32 *val;
- int nr;
+ int nr, ret;
prop = of_find_property(dev->of_node, "operating-points", NULL);
if (!prop)
@@ -409,9 +409,13 @@ static int _of_add_opp_table_v1(struct device *dev)
unsigned long freq = be32_to_cpup(val++) * 1000;
unsigned long volt = be32_to_cpup(val++);
- if (_opp_add_v1(dev, freq, volt, false))
- dev_warn(dev, "%s: Failed to add OPP %ld\n",
- __func__, freq);
+ ret = _opp_add_v1(dev, freq, volt, false);
+ if (ret) {
+ dev_err(dev, "%s: Failed to add OPP %ld (%d)\n",
+ __func__, freq, ret);
+ dev_pm_opp_of_remove_table(dev);
+ return ret;
+ }
nr -= 2;
}
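The v1 table walk above consumes big-endian cells in `<kHz, uV>` pairs and scales the frequency to Hz before adding each OPP. A minimal userspace sketch of that decoding (names like `parse_opp_v1` and `be32_cell` are hypothetical, for illustration only; the kernel uses `be32_to_cpup` on the raw property):

```c
#include <stdint.h>
#include <stddef.h>

/* One big-endian 32-bit device-tree cell. */
static uint32_t be32_cell(const uint8_t *p)
{
	return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
	       ((uint32_t)p[2] << 8) | (uint32_t)p[3];
}

struct opp_v1 {
	unsigned long freq_hz;
	unsigned long volt_uv;
};

/* Decode whole <kHz, uV> pairs from a raw "operating-points" property;
 * returns the number of OPPs written to out[]. */
static size_t parse_opp_v1(const uint8_t *prop, size_t nbytes,
			   struct opp_v1 *out, size_t max)
{
	size_t n = 0;

	while (nbytes >= 8 && n < max) {
		out[n].freq_hz = (unsigned long)be32_cell(prop) * 1000;
		out[n].volt_uv = be32_cell(prop + 4);
		prop += 8;
		nbytes -= 8;
		n++;
	}
	return n;
}
```

With the patch, a failure in the middle of this walk now tears down the partially built table and propagates the error instead of warning and continuing.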
diff --git a/drivers/base/power/wakeirq.c b/drivers/base/power/wakeirq.c
index 404d94c..feba1b2 100644
--- a/drivers/base/power/wakeirq.c
+++ b/drivers/base/power/wakeirq.c
@@ -141,6 +141,13 @@ static irqreturn_t handle_threaded_wake_irq(int irq, void *_wirq)
struct wake_irq *wirq = _wirq;
int res;
+ /* Maybe abort suspend? */
+ if (irqd_is_wakeup_set(irq_get_irq_data(irq))) {
+ pm_wakeup_event(wirq->dev, 0);
+
+ return IRQ_HANDLED;
+ }
+
/* We don't want RPM_ASYNC or RPM_NOWAIT here */
res = pm_runtime_resume(wirq->dev);
if (res < 0)
diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index 7b274ff..24f4b54 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -2788,7 +2788,7 @@ static int rbd_img_obj_parent_read_full(struct rbd_obj_request *obj_request)
* from the parent.
*/
page_count = (u32)calc_pages_for(0, length);
- pages = ceph_alloc_page_vector(page_count, GFP_KERNEL);
+ pages = ceph_alloc_page_vector(page_count, GFP_NOIO);
if (IS_ERR(pages)) {
result = PTR_ERR(pages);
pages = NULL;
@@ -2922,7 +2922,7 @@ static int rbd_img_obj_exists_submit(struct rbd_obj_request *obj_request)
*/
size = sizeof (__le64) + sizeof (__le32) + sizeof (__le32);
page_count = (u32)calc_pages_for(0, size);
- pages = ceph_alloc_page_vector(page_count, GFP_KERNEL);
+ pages = ceph_alloc_page_vector(page_count, GFP_NOIO);
if (IS_ERR(pages)) {
ret = PTR_ERR(pages);
goto fail_stat_request;
diff --git a/drivers/bluetooth/ath3k.c b/drivers/bluetooth/ath3k.c
index b793853..3880c90 100644
--- a/drivers/bluetooth/ath3k.c
+++ b/drivers/bluetooth/ath3k.c
@@ -212,15 +212,28 @@ static int ath3k_load_firmware(struct usb_device *udev,
const struct firmware *firmware)
{
u8 *send_buf;
- int len = 0;
- int err, pipe, size, sent = 0;
- int count = firmware->size;
+ int err, pipe, len, size, sent = 0;
+ int count;
BT_DBG("udev %p", udev);
+ if (!firmware || !firmware->data || firmware->size <= 0) {
+ err = -EINVAL;
+ BT_ERR("Not a valid FW file");
+ return err;
+ }
+
+ count = firmware->size;
+
+ if (count < FW_HDR_SIZE) {
+ err = -EINVAL;
+ BT_ERR("ath3k loading invalid size of file");
+ return err;
+ }
+
pipe = usb_sndctrlpipe(udev, 0);
- send_buf = kmalloc(BULK_SIZE, GFP_KERNEL);
+ send_buf = kzalloc(BULK_SIZE, GFP_KERNEL);
if (!send_buf) {
BT_ERR("Can't allocate memory chunk for firmware");
return -ENOMEM;
diff --git a/drivers/bluetooth/btfm_slim.c b/drivers/bluetooth/btfm_slim.c
index 8f0e632..64d7ac7 100644
--- a/drivers/bluetooth/btfm_slim.c
+++ b/drivers/bluetooth/btfm_slim.c
@@ -130,17 +130,25 @@ int btfm_slim_enable_ch(struct btfmslim *btfmslim, struct btfmslim_ch *ch,
BTFMSLIM_DBG("port: %d ch: %d", ch->port, ch->ch);
/* Define the channel with below parameters */
- prop.prot = SLIM_AUTO_ISO;
- prop.baser = SLIM_RATE_4000HZ;
- prop.dataf = (rates == 48000) ? SLIM_CH_DATAF_NOT_DEFINED
- : SLIM_CH_DATAF_LPCM_AUDIO;
+ prop.prot = SLIM_AUTO_ISO;
+ prop.baser = ((rates == 44100) || (rates == 88200)) ?
+ SLIM_RATE_11025HZ : SLIM_RATE_4000HZ;
+ prop.dataf = ((rates == 48000) || (rates == 44100) ||
+ (rates == 88200) || (rates == 96000)) ?
+ SLIM_CH_DATAF_NOT_DEFINED : SLIM_CH_DATAF_LPCM_AUDIO;
prop.auxf = SLIM_CH_AUXF_NOT_APPLICABLE;
- prop.ratem = (rates/4000);
+ prop.ratem = ((rates == 44100) || (rates == 88200)) ?
+ (rates/11025) : (rates/4000);
prop.sampleszbits = 16;
ch_h[0] = ch->ch_hdl;
ch_h[1] = (grp) ? (ch+1)->ch_hdl : 0;
+ BTFMSLIM_INFO("channel define - prot:%d, dataf:%d, auxf:%d",
+ prop.prot, prop.dataf, prop.auxf);
+ BTFMSLIM_INFO("channel define - rates:%d, baser:%d, ratem:%d",
+ rates, prop.baser, prop.ratem);
+
ret = slim_define_ch(btfmslim->slim_pgd, &prop, ch_h, nchan, grp,
&ch->grph);
if (ret < 0) {
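The channel-property selection added above splits rates into two families: 44.1 kHz multiples use the 11025 Hz base rate, everything else stays on the 4000 Hz base, and the rate multiplier follows the chosen base. A sketch of that rule (the helper names `slim_base_rate` and `slim_rate_multiplier` are hypothetical, mirroring the new `prop.baser`/`prop.ratem` assignments):

```c
#include <stdbool.h>

/* 44.1 kHz family rates need the 11025 Hz SLIMbus base rate. */
static bool is_44k1_family(unsigned int rate)
{
	return rate == 44100 || rate == 88200;
}

static unsigned int slim_base_rate(unsigned int rate)
{
	return is_44k1_family(rate) ? 11025 : 4000;
}

/* Rate multiplier relative to the selected base rate. */
static unsigned int slim_rate_multiplier(unsigned int rate)
{
	return rate / slim_base_rate(rate);
}
```

Under the old code every rate was divided by 4000, so 44100 and 88200 produced truncated, incorrect multipliers.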
diff --git a/drivers/bluetooth/btfm_slim_codec.c b/drivers/bluetooth/btfm_slim_codec.c
index 309648f..53388ed 100644
--- a/drivers/bluetooth/btfm_slim_codec.c
+++ b/drivers/bluetooth/btfm_slim_codec.c
@@ -385,10 +385,12 @@ static struct snd_soc_dai_driver btfmslim_dai[] = {
.id = BTFM_BT_SCO_A2DP_SLIM_RX,
.playback = {
.stream_name = "SCO A2DP RX Playback",
+ /* 8/16/44.1/48/88.2/96 Khz */
.rates = SNDRV_PCM_RATE_8000 | SNDRV_PCM_RATE_16000
- | SNDRV_PCM_RATE_48000, /* 8 or 16 or 48 Khz*/
+ | SNDRV_PCM_RATE_44100 | SNDRV_PCM_RATE_48000
+ | SNDRV_PCM_RATE_88200 | SNDRV_PCM_RATE_96000,
.formats = SNDRV_PCM_FMTBIT_S16_LE, /* 16 bits */
- .rate_max = 48000,
+ .rate_max = 96000,
.rate_min = 8000,
.channels_min = 1,
.channels_max = 1,
diff --git a/drivers/bluetooth/btfm_slim_wcn3990.c b/drivers/bluetooth/btfm_slim_wcn3990.c
index f0a6d9e..2dbba83 100644
--- a/drivers/bluetooth/btfm_slim_wcn3990.c
+++ b/drivers/bluetooth/btfm_slim_wcn3990.c
@@ -83,22 +83,31 @@ int btfm_slim_chrk_enable_port(struct btfmslim *btfmslim, uint8_t port_num,
{
int ret = 0;
uint8_t reg_val = 0, en;
- uint8_t port_bit = 0;
+ uint8_t rxport_num = 0;
uint16_t reg;
BTFMSLIM_DBG("port(%d) enable(%d)", port_num, enable);
if (rxport) {
if (enable) {
/* For SCO Rx, A2DP Rx */
- reg_val = 0x1;
- port_bit = port_num - 0x10;
- reg = CHRK_SB_PGD_RX_PORTn_MULTI_CHNL_0(port_bit);
+ if (port_num < 24) {
+ rxport_num = port_num - 16;
+ reg_val = 0x01 << rxport_num;
+ reg = CHRK_SB_PGD_RX_PORTn_MULTI_CHNL_0(
+ rxport_num);
+ } else {
+ rxport_num = port_num - 24;
+ reg_val = 0x01 << rxport_num;
+ reg = CHRK_SB_PGD_RX_PORTn_MULTI_CHNL_1(
+ rxport_num);
+ }
+
BTFMSLIM_DBG("writing reg_val (%d) to reg(%x)",
- reg_val, reg);
+ reg_val, reg);
+ ret = btfm_slim_write(btfmslim, reg, 1, &reg_val, IFD);
if (ret) {
BTFMSLIM_ERR("failed to write (%d) reg 0x%x",
- ret, reg);
+ ret, reg);
goto error;
}
}
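The RX-port change above maps ports below 24 into the `MULTI_CHNL_0` register bank (bit index `port_num - 16`) and ports 24 and up into `MULTI_CHNL_1` (bit index `port_num - 24`), writing a per-port bit instead of the old constant `0x1`. The bit computation can be sketched as (`rx_chnl_bit` is a hypothetical stand-in for the in-driver logic):

```c
/* Per-port enable bit within the selected MULTI_CHNL register:
 * ports 16..23 index into bank 0, ports 24..31 into bank 1. */
static unsigned int rx_chnl_bit(unsigned int port_num)
{
	unsigned int rxport_num = (port_num < 24) ? port_num - 16
						  : port_num - 24;
	return 1u << rxport_num;
}
```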
diff --git a/drivers/bluetooth/btqca.c b/drivers/bluetooth/btqca.c
index 28afd5d..f64e86f 100644
--- a/drivers/bluetooth/btqca.c
+++ b/drivers/bluetooth/btqca.c
@@ -1,7 +1,7 @@
/*
* Bluetooth supports for Qualcomm Atheros chips
*
- * Copyright (c) 2015 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2017 The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2
@@ -27,6 +27,9 @@
#define VERSION "0.1"
+#define MAX_PATCH_FILE_SIZE (100*1024)
+#define MAX_NVM_FILE_SIZE (10*1024)
+
static int rome_patch_ver_req(struct hci_dev *hdev, u32 *rome_version)
{
struct sk_buff *skb;
@@ -285,27 +288,63 @@ static int rome_download_firmware(struct hci_dev *hdev,
struct rome_config *config)
{
const struct firmware *fw;
+ u32 type_len, length;
+ struct tlv_type_hdr *tlv;
int ret;
- BT_INFO("%s: ROME Downloading %s", hdev->name, config->fwname);
-
+ BT_INFO("%s: ROME Downloading file: %s", hdev->name, config->fwname);
ret = request_firmware(&fw, config->fwname, &hdev->dev);
- if (ret) {
- BT_ERR("%s: Failed to request file: %s (%d)", hdev->name,
- config->fwname, ret);
+
+ if (ret || !fw || !fw->data || fw->size <= 0) {
+ BT_ERR("Failed to request file: err = (%d)", ret);
+ ret = ret ? ret : -EINVAL;
return ret;
}
- rome_tlv_check_data(config, fw);
-
- ret = rome_tlv_download_request(hdev, fw);
- if (ret) {
- BT_ERR("%s: Failed to download file: %s (%d)", hdev->name,
- config->fwname, ret);
+ if (config->type != TLV_TYPE_NVM &&
+ config->type != TLV_TYPE_PATCH) {
+ ret = -EINVAL;
+ BT_ERR("TLV_NVM dload: wrong config type selected");
+ goto exit;
}
- release_firmware(fw);
+ if (config->type == TLV_TYPE_PATCH &&
+ (fw->size > MAX_PATCH_FILE_SIZE)) {
+ ret = -EINVAL;
+ BT_ERR("TLV_PATCH dload: wrong patch file size");
+ goto exit;
+ } else if (config->type == TLV_TYPE_NVM &&
+ (fw->size > MAX_NVM_FILE_SIZE)) {
+ ret = -EINVAL;
+ BT_ERR("TLV_NVM dload: wrong NVM file size");
+ goto exit;
+ }
+ if (fw->size < sizeof(struct tlv_type_hdr)) {
+ ret = -EINVAL;
+ BT_ERR("Firmware size too small to fit TLV header");
+ goto exit;
+ }
+
+ tlv = (struct tlv_type_hdr *)fw->data;
+ type_len = le32_to_cpu(tlv->type_len);
+ length = (type_len >> 8) & 0x00ffffff;
+
+ if (fw->size - 4 != length) {
+ ret = -EINVAL;
+ BT_ERR("Requested size not matching size in header");
+ goto exit;
+ }
+
+ rome_tlv_check_data(config, fw);
+ ret = rome_tlv_download_request(hdev, fw);
+
+ if (ret) {
+ BT_ERR("Failed to download FW: error = (%d)", ret);
+ }
+
+exit:
+ release_firmware(fw);
return ret;
}
@@ -316,8 +355,9 @@ int qca_set_bdaddr_rome(struct hci_dev *hdev, const bdaddr_t *bdaddr)
int err;
cmd[0] = EDL_NVM_ACCESS_SET_REQ_CMD;
- cmd[1] = 0x02; /* TAG ID */
- cmd[2] = sizeof(bdaddr_t); /* size */
+ /* Set the TAG ID of 0x02 for NVM set and size of tag */
+ cmd[1] = 0x02;
+ cmd[2] = sizeof(bdaddr_t);
memcpy(cmd + 3, bdaddr, sizeof(bdaddr_t));
skb = __hci_cmd_sync_ev(hdev, EDL_NVM_ACCESS_OPCODE, sizeof(cmd), cmd,
HCI_VENDOR_PKT, HCI_INIT_TIMEOUT);
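The new sanity check in `rome_download_firmware()` treats the first 4 bytes of the file as a little-endian word whose upper 24 bits carry the TLV payload length, which must equal the file size minus the header. A standalone sketch of that check (`tlv_size_ok` and `get_le32` are hypothetical helpers; the driver uses `le32_to_cpu` on `struct tlv_type_hdr`):

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* Portable little-endian 32-bit load. */
static uint32_t get_le32(const uint8_t *p)
{
	return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
	       ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}

/* Byte 0 is the TLV type; the next 3 bytes are the payload length,
 * which must match the file size minus the 4-byte header. */
static bool tlv_size_ok(const uint8_t *data, size_t size)
{
	uint32_t length;

	if (size < 4)
		return false;
	length = (get_le32(data) >> 8) & 0x00ffffff;
	return size - 4 == length;
}
```

Together with the `MAX_PATCH_FILE_SIZE`/`MAX_NVM_FILE_SIZE` caps, this rejects truncated or oversized firmware images before any data is pushed to the controller.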
diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
index 74e677a..6930286 100644
--- a/drivers/bluetooth/btusb.c
+++ b/drivers/bluetooth/btusb.c
@@ -2925,6 +2925,12 @@ static int btusb_probe(struct usb_interface *intf,
if (id->driver_info & BTUSB_QCA_ROME) {
data->setup_on_usb = btusb_setup_qca;
hdev->set_bdaddr = btusb_set_bdaddr_ath3012;
+
+ /* QCA Rome devices lose their updated firmware over suspend,
+ * but the USB hub doesn't notice any status change.
+ * Explicitly request a device reset on resume.
+ */
+ set_bit(BTUSB_RESET_RESUME, &data->flags);
}
#ifdef CONFIG_BT_HCIBTUSB_RTL
diff --git a/drivers/char/Kconfig b/drivers/char/Kconfig
index 49fb8e5..1ea2053 100644
--- a/drivers/char/Kconfig
+++ b/drivers/char/Kconfig
@@ -582,6 +582,16 @@
source "drivers/s390/char/Kconfig"
+config MSM_SMD_PKT
+ bool "Enable device interface for some SMD packet ports"
+ default n
+ depends on MSM_SMD
+ help
+ smd_pkt driver provides the interface for the userspace clients
+ to communicate over smd via device nodes. This enables the
+ userspace clients to read and write to some smd packet channels
+ for MSM chipsets.
+
config TILE_SROM
bool "Character-device access via hypervisor to the Tilera SPI ROM"
depends on TILE
diff --git a/drivers/char/Makefile b/drivers/char/Makefile
index 19c3c98..81283c4 100644
--- a/drivers/char/Makefile
+++ b/drivers/char/Makefile
@@ -9,6 +9,7 @@
obj-$(CONFIG_VIRTIO_CONSOLE) += virtio_console.o
obj-$(CONFIG_RAW_DRIVER) += raw.o
obj-$(CONFIG_SGI_SNSC) += snsc.o snsc_event.o
+obj-$(CONFIG_MSM_SMD_PKT) += msm_smd_pkt.o
obj-$(CONFIG_MSPEC) += mspec.o
obj-$(CONFIG_MMTIMER) += mmtimer.o
obj-$(CONFIG_UV_MMTIMER) += uv_mmtimer.o
diff --git a/drivers/char/adsprpc.c b/drivers/char/adsprpc.c
index 122ebd2..35eea02 100644
--- a/drivers/char/adsprpc.c
+++ b/drivers/char/adsprpc.c
@@ -75,6 +75,8 @@
#define FASTRPC_LINK_CONNECTING (0x1)
#define FASTRPC_LINK_CONNECTED (0x3)
#define FASTRPC_LINK_DISCONNECTING (0x7)
+#define FASTRPC_LINK_REMOTE_DISCONNECTING (0x8)
+#define FASTRPC_GLINK_INTENT_LEN (64)
#define PERF_KEYS "count:flush:map:copy:glink:getargs:putargs:invalidate:invoke"
#define FASTRPC_STATIC_HANDLE_LISTENER (3)
@@ -232,16 +234,17 @@ struct fastrpc_channel_ctx {
int prevssrcount;
int issubsystemup;
int vmid;
+ int rhvmid;
int ramdumpenabled;
void *remoteheap_ramdump_dev;
struct fastrpc_glink_info link;
+ struct mutex mut;
};
struct fastrpc_apps {
struct fastrpc_channel_ctx *channel;
struct cdev cdev;
struct class *class;
- struct mutex smd_mutex;
struct smq_phy_page range;
struct hlist_head maps;
uint32_t staticpd_flags;
@@ -520,7 +523,7 @@ static int fastrpc_mmap_remove(struct fastrpc_file *fl, uintptr_t va,
return -ENOTTY;
}
-static void fastrpc_mmap_free(struct fastrpc_mmap *map)
+static void fastrpc_mmap_free(struct fastrpc_mmap *map, uint32_t flags)
{
struct fastrpc_apps *me = &gfa;
struct fastrpc_file *fl;
@@ -537,15 +540,17 @@ static void fastrpc_mmap_free(struct fastrpc_mmap *map)
if (!map->refs)
hlist_del_init(&map->hn);
spin_unlock(&me->hlock);
+ if (map->refs > 0)
+ return;
} else {
spin_lock(&fl->hlock);
map->refs--;
if (!map->refs)
hlist_del_init(&map->hn);
spin_unlock(&fl->hlock);
+ if (map->refs > 0 && !flags)
+ return;
}
- if (map->refs > 0)
- return;
if (map->flags == ADSP_MMAP_HEAP_ADDR ||
map->flags == ADSP_MMAP_REMOTE_HEAP_ADDR) {
@@ -635,6 +640,11 @@ static int fastrpc_mmap_create(struct fastrpc_file *fl, int fd,
map->size = len;
map->va = (uintptr_t __user)map->phys;
} else {
+ if (map->attr && (map->attr & FASTRPC_ATTR_KEEP_MAP)) {
+ pr_info("adsprpc: buffer mapped with persist attr %x\n",
+ (unsigned int)map->attr);
+ map->refs = 2;
+ }
VERIFY(err, !IS_ERR_OR_NULL(map->handle =
ion_import_dma_buf_fd(fl->apps->client, fd)));
if (err)
@@ -724,7 +734,7 @@ static int fastrpc_mmap_create(struct fastrpc_file *fl, int fd,
bail:
if (err && map)
- fastrpc_mmap_free(map);
+ fastrpc_mmap_free(map, 0);
return err;
}
@@ -995,7 +1005,7 @@ static void context_free(struct smq_invoke_ctx *ctx)
hlist_del_init(&ctx->hn);
spin_unlock(&ctx->fl->hlock);
for (i = 0; i < nbufs; ++i)
- fastrpc_mmap_free(ctx->maps[i]);
+ fastrpc_mmap_free(ctx->maps[i], 0);
fastrpc_buf_free(ctx->buf, 1);
ctx->magic = 0;
kfree(ctx);
@@ -1345,7 +1355,7 @@ static int put_args(uint32_t kernel, struct smq_invoke_ctx *ctx,
if (err)
goto bail;
} else {
- fastrpc_mmap_free(ctx->maps[i]);
+ fastrpc_mmap_free(ctx->maps[i], 0);
ctx->maps[i] = NULL;
}
}
@@ -1355,7 +1365,7 @@ static int put_args(uint32_t kernel, struct smq_invoke_ctx *ctx,
break;
if (!fastrpc_mmap_find(ctx->fl, (int)fdlist[i], 0, 0,
0, 0, &mmap))
- fastrpc_mmap_free(mmap);
+ fastrpc_mmap_free(mmap, 0);
}
}
if (ctx->crc && crclist && rpra)
@@ -1486,12 +1496,12 @@ static void fastrpc_init(struct fastrpc_apps *me)
INIT_HLIST_HEAD(&me->drivers);
spin_lock_init(&me->hlock);
- mutex_init(&me->smd_mutex);
me->channel = &gcinfo[0];
for (i = 0; i < NUM_CHANNELS; i++) {
init_completion(&me->channel[i].work);
init_completion(&me->channel[i].workport);
me->channel[i].sesscount = 0;
+ mutex_init(&me->channel[i].mut);
}
}
@@ -1605,7 +1615,7 @@ static int fastrpc_init_process(struct fastrpc_file *fl,
struct fastrpc_mmap *file = NULL, *mem = NULL;
char *proc_name = NULL;
int srcVM[1] = {VMID_HLOS};
- int destVM[1] = {VMID_ADSP_Q6};
+ int destVM[1] = {me->channel[fl->cid].rhvmid};
int destVMperm[1] = {PERM_READ | PERM_WRITE | PERM_EXEC};
int hlosVMperm[1] = {PERM_READ | PERM_WRITE | PERM_EXEC};
@@ -1782,10 +1792,10 @@ static int fastrpc_init_process(struct fastrpc_file *fl,
if (mem->flags == ADSP_MMAP_REMOTE_HEAP_ADDR)
hyp_assign_phys(mem->phys, (uint64_t)mem->size,
destVM, 1, srcVM, hlosVMperm, 1);
- fastrpc_mmap_free(mem);
+ fastrpc_mmap_free(mem, 0);
}
if (file)
- fastrpc_mmap_free(file);
+ fastrpc_mmap_free(file, 0);
return err;
}
@@ -1821,6 +1831,7 @@ static int fastrpc_mmap_on_dsp(struct fastrpc_file *fl, uint32_t flags,
struct fastrpc_mmap *map)
{
struct fastrpc_ioctl_invoke_crc ioctl;
+ struct fastrpc_apps *me = &gfa;
struct smq_phy_page page;
int num = 1;
remote_arg_t ra[3];
@@ -1875,7 +1886,7 @@ static int fastrpc_mmap_on_dsp(struct fastrpc_file *fl, uint32_t flags,
} else if (flags == ADSP_MMAP_REMOTE_HEAP_ADDR) {
int srcVM[1] = {VMID_HLOS};
- int destVM[1] = {VMID_ADSP_Q6};
+ int destVM[1] = {me->channel[fl->cid].rhvmid};
int destVMperm[1] = {PERM_READ | PERM_WRITE | PERM_EXEC};
VERIFY(err, !hyp_assign_phys(map->phys, (uint64_t)map->size,
@@ -1891,7 +1902,8 @@ static int fastrpc_munmap_on_dsp_rh(struct fastrpc_file *fl,
struct fastrpc_mmap *map)
{
int err = 0;
- int srcVM[1] = {VMID_ADSP_Q6};
+ struct fastrpc_apps *me = &gfa;
+ int srcVM[1] = {me->channel[fl->cid].rhvmid};
int destVM[1] = {VMID_HLOS};
int destVMperm[1] = {PERM_READ | PERM_WRITE | PERM_EXEC};
@@ -2016,7 +2028,7 @@ static int fastrpc_mmap_remove_ssr(struct fastrpc_file *fl)
kfree(ramdump_segments_rh);
}
}
- fastrpc_mmap_free(match);
+ fastrpc_mmap_free(match, 0);
}
} while (match);
bail:
@@ -2042,13 +2054,36 @@ static int fastrpc_internal_munmap(struct fastrpc_file *fl,
VERIFY(err, !fastrpc_munmap_on_dsp(fl, map));
if (err)
goto bail;
- fastrpc_mmap_free(map);
+ fastrpc_mmap_free(map, 0);
bail:
if (err && map)
fastrpc_mmap_add(map);
return err;
}
+static int fastrpc_internal_munmap_fd(struct fastrpc_file *fl,
+ struct fastrpc_ioctl_munmap_fd *ud) {
+ int err = 0;
+ struct fastrpc_mmap *map = NULL;
+
+ VERIFY(err, (fl && ud));
+ if (err)
+ goto bail;
+
+ if (!fastrpc_mmap_find(fl, ud->fd, ud->va, ud->len, 0, 0, &map)) {
+ pr_err("mapping not found to unmap %x va %llx %x\n",
+ ud->fd, (unsigned long long)ud->va,
+ (unsigned int)ud->len);
+ err = -1;
+ goto bail;
+ }
+ if (map)
+ fastrpc_mmap_free(map, 0);
+bail:
+ return err;
+}
+
+
static int fastrpc_internal_mmap(struct fastrpc_file *fl,
struct fastrpc_ioctl_mmap *ud)
{
@@ -2071,7 +2106,7 @@ static int fastrpc_internal_mmap(struct fastrpc_file *fl,
ud->vaddrout = map->raddr;
bail:
if (err && map)
- fastrpc_mmap_free(map);
+ fastrpc_mmap_free(map, 0);
return err;
}
@@ -2087,7 +2122,7 @@ static void fastrpc_channel_close(struct kref *kref)
ctx->chan = NULL;
glink_unregister_link_state_cb(ctx->link.link_notify_handle);
ctx->link.link_notify_handle = NULL;
- mutex_unlock(&me->smd_mutex);
+ mutex_unlock(&ctx->mut);
pr_info("'closed /dev/%s c %d %d'\n", gcinfo[cid].name,
MAJOR(me->dev_no), cid);
}
@@ -2180,10 +2215,15 @@ static void fastrpc_glink_notify_state(void *handle, const void *priv,
link->port_state = FASTRPC_LINK_DISCONNECTED;
break;
case GLINK_REMOTE_DISCONNECTED:
+ mutex_lock(&me->channel[cid].mut);
if (me->channel[cid].chan) {
+ link->port_state = FASTRPC_LINK_REMOTE_DISCONNECTING;
fastrpc_glink_close(me->channel[cid].chan, cid);
me->channel[cid].chan = NULL;
+ } else {
+ link->port_state = FASTRPC_LINK_DISCONNECTED;
}
+ mutex_unlock(&me->channel[cid].mut);
break;
default:
break;
@@ -2194,23 +2234,20 @@ static int fastrpc_session_alloc(struct fastrpc_channel_ctx *chan, int secure,
struct fastrpc_session_ctx **session)
{
int err = 0;
- struct fastrpc_apps *me = &gfa;
- mutex_lock(&me->smd_mutex);
+ mutex_lock(&chan->mut);
if (!*session)
err = fastrpc_session_alloc_locked(chan, secure, session);
- mutex_unlock(&me->smd_mutex);
+ mutex_unlock(&chan->mut);
return err;
}
static void fastrpc_session_free(struct fastrpc_channel_ctx *chan,
struct fastrpc_session_ctx *session)
{
- struct fastrpc_apps *me = &gfa;
-
- mutex_lock(&me->smd_mutex);
+ mutex_lock(&chan->mut);
session->used = 0;
- mutex_unlock(&me->smd_mutex);
+ mutex_unlock(&chan->mut);
}
static int fastrpc_file_free(struct fastrpc_file *fl)
@@ -2239,11 +2276,11 @@ static int fastrpc_file_free(struct fastrpc_file *fl)
fastrpc_context_list_dtor(fl);
fastrpc_buf_list_free(fl);
hlist_for_each_entry_safe(map, n, &fl->maps, hn) {
- fastrpc_mmap_free(map);
+ fastrpc_mmap_free(map, 1);
}
if (fl->ssrcount == fl->apps->channel[cid].ssrcount)
kref_put_mutex(&fl->apps->channel[cid].kref,
- fastrpc_channel_close, &fl->apps->smd_mutex);
+ fastrpc_channel_close, &fl->apps->channel[cid].mut);
if (fl->sctx)
fastrpc_session_free(&fl->apps->channel[cid], fl->sctx);
if (fl->secsctx)
@@ -2320,6 +2357,20 @@ static int fastrpc_glink_register(int cid, struct fastrpc_apps *me)
return err;
}
+static void fastrpc_glink_stop(int cid)
+{
+ int err = 0;
+ struct fastrpc_glink_info *link;
+
+ VERIFY(err, (cid >= 0 && cid < NUM_CHANNELS));
+ if (err)
+ return;
+ link = &gfa.channel[cid].link;
+
+ if (link->port_state == FASTRPC_LINK_CONNECTED)
+ link->port_state = FASTRPC_LINK_REMOTE_DISCONNECTING;
+}
+
static void fastrpc_glink_close(void *chan, int cid)
{
int err = 0;
@@ -2330,7 +2381,8 @@ static void fastrpc_glink_close(void *chan, int cid)
return;
link = &gfa.channel[cid].link;
- if (link->port_state == FASTRPC_LINK_CONNECTED) {
+ if (link->port_state == FASTRPC_LINK_CONNECTED ||
+ link->port_state == FASTRPC_LINK_REMOTE_DISCONNECTING) {
link->port_state = FASTRPC_LINK_DISCONNECTING;
glink_close(chan);
}
@@ -2496,12 +2548,14 @@ static int fastrpc_channel_open(struct fastrpc_file *fl)
struct fastrpc_apps *me = &gfa;
int cid, err = 0;
- mutex_lock(&me->smd_mutex);
-
VERIFY(err, fl && fl->sctx);
if (err)
- goto bail;
+ return err;
cid = fl->cid;
+ VERIFY(err, cid >= 0 && cid < NUM_CHANNELS);
+ if (err)
+ goto bail;
+ mutex_lock(&me->channel[cid].mut);
if (me->channel[cid].ssrcount !=
me->channel[cid].prevssrcount) {
if (!me->channel[cid].issubsystemup) {
@@ -2510,9 +2564,6 @@ static int fastrpc_channel_open(struct fastrpc_file *fl)
goto bail;
}
}
- VERIFY(err, cid >= 0 && cid < NUM_CHANNELS);
- if (err)
- goto bail;
fl->ssrcount = me->channel[cid].ssrcount;
if ((kref_get_unless_zero(&me->channel[cid].kref) == 0) ||
(me->channel[cid].chan == NULL)) {
@@ -2523,9 +2574,11 @@ static int fastrpc_channel_open(struct fastrpc_file *fl)
if (err)
goto bail;
+ mutex_unlock(&me->channel[cid].mut);
VERIFY(err,
wait_for_completion_timeout(&me->channel[cid].workport,
RPC_TIMEOUT));
+ mutex_lock(&me->channel[cid].mut);
if (err) {
me->channel[cid].chan = NULL;
goto bail;
@@ -2533,8 +2586,10 @@ static int fastrpc_channel_open(struct fastrpc_file *fl)
kref_init(&me->channel[cid].kref);
pr_info("'opened /dev/%s c %d %d'\n", gcinfo[cid].name,
MAJOR(me->dev_no), cid);
- err = glink_queue_rx_intent(me->channel[cid].chan, NULL, 16);
- err |= glink_queue_rx_intent(me->channel[cid].chan, NULL, 64);
+ err = glink_queue_rx_intent(me->channel[cid].chan, NULL,
+ FASTRPC_GLINK_INTENT_LEN);
+ err |= glink_queue_rx_intent(me->channel[cid].chan, NULL,
+ FASTRPC_GLINK_INTENT_LEN);
if (err)
pr_warn("adsprpc: initial intent fail for %d err %d\n",
cid, err);
@@ -2548,7 +2603,7 @@ static int fastrpc_channel_open(struct fastrpc_file *fl)
}
bail:
- mutex_unlock(&me->smd_mutex);
+ mutex_unlock(&me->channel[cid].mut);
return err;
}
@@ -2655,6 +2710,7 @@ static long fastrpc_device_ioctl(struct file *file, unsigned int ioctl_num,
struct fastrpc_ioctl_invoke_crc inv;
struct fastrpc_ioctl_mmap mmap;
struct fastrpc_ioctl_munmap munmap;
+ struct fastrpc_ioctl_munmap_fd munmap_fd;
struct fastrpc_ioctl_init_attrs init;
struct fastrpc_ioctl_perf perf;
struct fastrpc_ioctl_control cp;
@@ -2721,6 +2777,16 @@ static long fastrpc_device_ioctl(struct file *file, unsigned int ioctl_num,
if (err)
goto bail;
break;
+ case FASTRPC_IOCTL_MUNMAP_FD:
+ K_COPY_FROM_USER(err, 0, &p.munmap_fd, param,
+ sizeof(p.munmap_fd));
+ if (err)
+ goto bail;
+ VERIFY(err, 0 == (err = fastrpc_internal_munmap_fd(fl,
+ &p.munmap_fd)));
+ if (err)
+ goto bail;
+ break;
case FASTRPC_IOCTL_SETMODE:
switch ((uint32_t)ioctl_param) {
case FASTRPC_MODE_PARALLEL:
@@ -2826,16 +2892,14 @@ static int fastrpc_restart_notifier_cb(struct notifier_block *nb,
ctx = container_of(nb, struct fastrpc_channel_ctx, nb);
cid = ctx - &me->channel[0];
if (code == SUBSYS_BEFORE_SHUTDOWN) {
- mutex_lock(&me->smd_mutex);
+ mutex_lock(&ctx->mut);
ctx->ssrcount++;
ctx->issubsystemup = 0;
- if (ctx->chan) {
- fastrpc_glink_close(ctx->chan, cid);
- ctx->chan = NULL;
- pr_info("'restart notifier: closed /dev/%s c %d %d'\n",
- gcinfo[cid].name, MAJOR(me->dev_no), cid);
- }
- mutex_unlock(&me->smd_mutex);
+ pr_info("'restart notifier: /dev/%s c %d %d'\n",
+ gcinfo[cid].name, MAJOR(me->dev_no), cid);
+ if (ctx->chan)
+ fastrpc_glink_stop(cid);
+ mutex_unlock(&ctx->mut);
if (cid == 0)
me->staticpd_flags = 0;
fastrpc_notify_drivers(me, cid);
@@ -2941,6 +3005,17 @@ static int fastrpc_probe(struct platform_device *pdev)
struct cma *cma;
uint32_t val;
+
+ if (of_device_is_compatible(dev->of_node,
+ "qcom,msm-fastrpc-compute")) {
+ of_property_read_u32(dev->of_node, "qcom,adsp-remoteheap-vmid",
+ &gcinfo[0].rhvmid);
+
+ pr_info("ADSPRPC : vmids adsp=%d\n", gcinfo[0].rhvmid);
+
+ of_property_read_u32(dev->of_node, "qcom,rpc-latency-us",
+ &me->latency);
+ }
if (of_device_is_compatible(dev->of_node,
"qcom,msm-fastrpc-compute-cb"))
return fastrpc_cb_probe(dev);
@@ -2985,10 +3060,6 @@ static int fastrpc_probe(struct platform_device *pdev)
return 0;
}
- err = of_property_read_u32(dev->of_node, "qcom,rpc-latency-us",
- &me->latency);
- if (err)
- me->latency = 0;
VERIFY(err, !of_platform_populate(pdev->dev.of_node,
fastrpc_match_table,
NULL, &pdev->dev));
@@ -3000,15 +3071,15 @@ static int fastrpc_probe(struct platform_device *pdev)
static void fastrpc_deinit(void)
{
- struct fastrpc_apps *me = &gfa;
struct fastrpc_channel_ctx *chan = gcinfo;
int i, j;
for (i = 0; i < NUM_CHANNELS; i++, chan++) {
if (chan->chan) {
kref_put_mutex(&chan->kref,
- fastrpc_channel_close, &me->smd_mutex);
+ fastrpc_channel_close, &chan->mut);
chan->chan = NULL;
+ mutex_destroy(&chan->mut);
}
for (j = 0; j < NUM_SESSIONS; j++) {
struct fastrpc_session_ctx *sess = &chan->session[j];
diff --git a/drivers/char/adsprpc_shared.h b/drivers/char/adsprpc_shared.h
index 43edf71..e2f8983 100644
--- a/drivers/char/adsprpc_shared.h
+++ b/drivers/char/adsprpc_shared.h
@@ -29,6 +29,7 @@
#define FASTRPC_IOCTL_INIT_ATTRS _IOWR('R', 10, struct fastrpc_ioctl_init_attrs)
#define FASTRPC_IOCTL_INVOKE_CRC _IOWR('R', 11, struct fastrpc_ioctl_invoke_crc)
#define FASTRPC_IOCTL_CONTROL _IOWR('R', 12, struct fastrpc_ioctl_control)
+#define FASTRPC_IOCTL_MUNMAP_FD _IOWR('R', 13, struct fastrpc_ioctl_munmap_fd)
#define FASTRPC_GLINK_GUID "fastrpcglink-apps-dsp"
#define FASTRPC_SMD_GUID "fastrpcsmd-apps-dsp"
@@ -43,6 +44,9 @@
/* Set for buffers that are dma coherent */
#define FASTRPC_ATTR_COHERENT 0x4
+/* Fastrpc attribute for keeping the map persistent */
+#define FASTRPC_ATTR_KEEP_MAP 0x8
+
/* Driver should operate in parallel with the co-processor */
#define FASTRPC_MODE_PARALLEL 0
@@ -204,6 +208,13 @@ struct fastrpc_ioctl_mmap {
uintptr_t vaddrout; /* dsps virtual address */
};
+struct fastrpc_ioctl_munmap_fd {
+ int fd; /* fd */
+ uint32_t flags; /* control flags */
+ uintptr_t va; /* va */
+ ssize_t len; /* length */
+};
+
struct fastrpc_ioctl_perf { /* kernel performance data */
uintptr_t __user data;
uint32_t numkeys;
diff --git a/drivers/char/diag/diag_debugfs.c b/drivers/char/diag/diag_debugfs.c
index 40bfd74..0a3faba 100644
--- a/drivers/char/diag/diag_debugfs.c
+++ b/drivers/char/diag/diag_debugfs.c
@@ -77,7 +77,8 @@ static ssize_t diag_dbgfs_read_status(struct file *file, char __user *ubuf,
"Time Sync Enabled: %d\n"
"MD session mode: %d\n"
"MD session mask: %d\n"
- "Uses Time API: %d\n",
+ "Uses Time API: %d\n"
+ "Supports PD buffering: %d\n",
chk_config_get_id(),
chk_polling_response(),
driver->polling_reg_flag,
@@ -92,11 +93,12 @@ static ssize_t diag_dbgfs_read_status(struct file *file, char __user *ubuf,
driver->time_sync_enabled,
driver->md_session_mode,
driver->md_session_mask,
- driver->uses_time_api);
+ driver->uses_time_api,
+ driver->supports_pd_buffering);
for (i = 0; i < NUM_PERIPHERALS; i++) {
ret += scnprintf(buf+ret, buf_size-ret,
- "p: %s Feature: %02x %02x |%c%c%c%c%c%c%c%c%c|\n",
+ "p: %s Feature: %02x %02x |%c%c%c%c%c%c%c%c%c%c|\n",
PERIPHERAL_STRING(i),
driver->feature[i].feature_mask[0],
driver->feature[i].feature_mask[1],
@@ -105,6 +107,7 @@ static ssize_t diag_dbgfs_read_status(struct file *file, char __user *ubuf,
driver->feature[i].encode_hdlc ? 'H':'h',
driver->feature[i].peripheral_buffering ? 'B':'b',
driver->feature[i].mask_centralization ? 'M':'m',
+ driver->feature[i].pd_buffering ? 'P':'p',
driver->feature[i].stm_support ? 'Q':'q',
driver->feature[i].sockets_enabled ? 'S':'s',
driver->feature[i].sent_feature_mask ? 'T':'t',
diff --git a/drivers/char/diag/diag_masks.c b/drivers/char/diag/diag_masks.c
index b30bfad..f510c14 100644
--- a/drivers/char/diag/diag_masks.c
+++ b/drivers/char/diag/diag_masks.c
@@ -554,6 +554,11 @@ static int diag_cmd_get_ssid_range(unsigned char *src_buf, int src_len,
mask_info);
return -EINVAL;
}
+ if (!mask_info->ptr) {
+ pr_err("diag: In %s, invalid input mask_info->ptr: %pK\n",
+ __func__, mask_info->ptr);
+ return -EINVAL;
+ }
if (!diag_apps_responds())
return 0;
@@ -655,7 +660,11 @@ static int diag_cmd_get_msg_mask(unsigned char *src_buf, int src_len,
mask_info);
return -EINVAL;
}
-
+ if (!mask_info->ptr) {
+ pr_err("diag: In %s, invalid input mask_info->ptr: %pK\n",
+ __func__, mask_info->ptr);
+ return -EINVAL;
+ }
if (!diag_apps_responds())
return 0;
@@ -668,6 +677,12 @@ static int diag_cmd_get_msg_mask(unsigned char *src_buf, int src_len,
rsp.status = MSG_STATUS_FAIL;
rsp.padding = 0;
mask = (struct diag_msg_mask_t *)mask_info->ptr;
+ if (!mask->ptr) {
+ pr_err("diag: Invalid input in %s, mask->ptr: %pK\n",
+ __func__, mask->ptr);
+ mutex_unlock(&driver->msg_mask_lock);
+ return -EINVAL;
+ }
for (i = 0; i < driver->msg_mask_tbl_count; i++, mask++) {
if ((req->ssid_first < mask->ssid_first) ||
(req->ssid_first > mask->ssid_last_tools)) {
@@ -710,11 +725,23 @@ static int diag_cmd_set_msg_mask(unsigned char *src_buf, int src_len,
mask_info);
return -EINVAL;
}
+ if (!mask_info->ptr) {
+ pr_err("diag: In %s, invalid input mask_info->ptr: %pK\n",
+ __func__, mask_info->ptr);
+ return -EINVAL;
+ }
req = (struct diag_msg_build_mask_t *)src_buf;
mutex_lock(&mask_info->lock);
mutex_lock(&driver->msg_mask_lock);
mask = (struct diag_msg_mask_t *)mask_info->ptr;
+ if (!mask->ptr) {
+ pr_err("diag: Invalid input in %s, mask->ptr: %pK\n",
+ __func__, mask->ptr);
+ mutex_unlock(&driver->msg_mask_lock);
+ mutex_unlock(&mask_info->lock);
+ return -EINVAL;
+ }
for (i = 0; i < driver->msg_mask_tbl_count; i++, mask++) {
if (i < (driver->msg_mask_tbl_count - 1)) {
mask_next = mask;
@@ -833,6 +860,11 @@ static int diag_cmd_set_all_msg_mask(unsigned char *src_buf, int src_len,
mask_info);
return -EINVAL;
}
+ if (!mask_info->ptr) {
+ pr_err("diag: In %s, invalid input mask_info->ptr: %pK\n",
+ __func__, mask_info->ptr);
+ return -EINVAL;
+ }
req = (struct diag_msg_config_rsp_t *)src_buf;
@@ -840,6 +872,13 @@ static int diag_cmd_set_all_msg_mask(unsigned char *src_buf, int src_len,
mutex_lock(&driver->msg_mask_lock);
mask = (struct diag_msg_mask_t *)mask_info->ptr;
+ if (!mask->ptr) {
+ pr_err("diag: Invalid input in %s, mask->ptr: %pK\n",
+ __func__, mask->ptr);
+ mutex_unlock(&driver->msg_mask_lock);
+ mutex_unlock(&mask_info->lock);
+ return -EINVAL;
+ }
mask_info->status = (req->rt_mask) ? DIAG_CTRL_MASK_ALL_ENABLED :
DIAG_CTRL_MASK_ALL_DISABLED;
for (i = 0; i < driver->msg_mask_tbl_count; i++, mask++) {
@@ -937,7 +976,11 @@ static int diag_cmd_update_event_mask(unsigned char *src_buf, int src_len,
mask_info);
return -EINVAL;
}
-
+ if (!mask_info->ptr) {
+ pr_err("diag: In %s, invalid input mask_info->ptr: %pK\n",
+ __func__, mask_info->ptr);
+ return -EINVAL;
+ }
req = (struct diag_event_mask_config_t *)src_buf;
mask_len = EVENT_COUNT_TO_BYTES(req->num_bits);
if (mask_len <= 0 || mask_len > event_mask.mask_len) {
@@ -1000,6 +1043,11 @@ static int diag_cmd_toggle_events(unsigned char *src_buf, int src_len,
mask_info);
return -EINVAL;
}
+ if (!mask_info->ptr) {
+ pr_err("diag: In %s, invalid input mask_info->ptr: %pK\n",
+ __func__, mask_info->ptr);
+ return -EINVAL;
+ }
toggle = *(src_buf + 1);
mutex_lock(&mask_info->lock);
@@ -1063,6 +1111,11 @@ static int diag_cmd_get_log_mask(unsigned char *src_buf, int src_len,
mask_info);
return -EINVAL;
}
+ if (!mask_info->ptr) {
+ pr_err("diag: In %s, invalid input mask_info->ptr: %pK\n",
+ __func__, mask_info->ptr);
+ return -EINVAL;
+ }
if (!diag_apps_responds())
return 0;
@@ -1082,6 +1135,11 @@ static int diag_cmd_get_log_mask(unsigned char *src_buf, int src_len,
write_len += rsp_header_len;
log_item = (struct diag_log_mask_t *)mask_info->ptr;
+ if (!log_item->ptr) {
+ pr_err("diag: Invalid input in %s, mask: %pK\n",
+ __func__, log_item);
+ return -EINVAL;
+ }
for (i = 0; i < MAX_EQUIP_ID; i++, log_item++) {
if (log_item->equip_id != req->equip_id)
continue;
@@ -1187,11 +1245,20 @@ static int diag_cmd_set_log_mask(unsigned char *src_buf, int src_len,
mask_info);
return -EINVAL;
}
+ if (!mask_info->ptr) {
+ pr_err("diag: In %s, invalid input mask_info->ptr: %pK\n",
+ __func__, mask_info->ptr);
+ return -EINVAL;
+ }
req = (struct diag_log_config_req_t *)src_buf;
read_len += req_header_len;
mask = (struct diag_log_mask_t *)mask_info->ptr;
-
+ if (!mask->ptr) {
+ pr_err("diag: Invalid input in %s, mask->ptr: %pK\n",
+ __func__, mask->ptr);
+ return -EINVAL;
+ }
if (req->equip_id >= MAX_EQUIP_ID) {
pr_err("diag: In %s, Invalid logging mask request, equip_id: %d\n",
__func__, req->equip_id);
@@ -1314,9 +1381,17 @@ static int diag_cmd_disable_log_mask(unsigned char *src_buf, int src_len,
mask_info);
return -EINVAL;
}
-
+ if (!mask_info->ptr) {
+ pr_err("diag: In %s, invalid input mask_info->ptr: %pK\n",
+ __func__, mask_info->ptr);
+ return -EINVAL;
+ }
mask = (struct diag_log_mask_t *)mask_info->ptr;
-
+ if (!mask->ptr) {
+ pr_err("diag: Invalid input in %s, mask->ptr: %pK\n",
+ __func__, mask->ptr);
+ return -EINVAL;
+ }
for (i = 0; i < MAX_EQUIP_ID; i++, mask++) {
mutex_lock(&mask->lock);
memset(mask->ptr, 0, mask->range);
@@ -1586,7 +1661,7 @@ static int __diag_mask_init(struct diag_mask_info *mask_info, int mask_len,
static void __diag_mask_exit(struct diag_mask_info *mask_info)
{
- if (!mask_info)
+ if (!mask_info || !mask_info->ptr)
return;
mutex_lock(&mask_info->lock);
@@ -1642,11 +1717,17 @@ void diag_log_mask_free(struct diag_mask_info *mask_info)
int i;
struct diag_log_mask_t *mask = NULL;
- if (!mask_info)
+ if (!mask_info || !mask_info->ptr)
return;
mutex_lock(&mask_info->lock);
mask = (struct diag_log_mask_t *)mask_info->ptr;
+ if (!mask->ptr) {
+ pr_err("diag: Invalid input in %s, mask->ptr: %pK\n",
+ __func__, mask->ptr);
+ mutex_unlock(&mask_info->lock);
+ return;
+ }
for (i = 0; i < MAX_EQUIP_ID; i++, mask++) {
kfree(mask->ptr);
mask->ptr = NULL;
@@ -1722,11 +1803,18 @@ void diag_msg_mask_free(struct diag_mask_info *mask_info)
int i;
struct diag_msg_mask_t *mask = NULL;
- if (!mask_info)
+ if (!mask_info || !mask_info->ptr)
return;
mutex_lock(&mask_info->lock);
mutex_lock(&driver->msg_mask_lock);
mask = (struct diag_msg_mask_t *)mask_info->ptr;
+ if (!mask->ptr) {
+ pr_err("diag: Invalid input in %s, mask->ptr: %pK\n",
+ __func__, mask->ptr);
+ mutex_unlock(&driver->msg_mask_lock);
+ mutex_unlock(&mask_info->lock);
+ return;
+ }
for (i = 0; i < driver->msg_mask_tbl_count; i++, mask++) {
kfree(mask->ptr);
mask->ptr = NULL;
@@ -1888,6 +1976,11 @@ int diag_copy_to_user_msg_mask(char __user *buf, size_t count,
if (!mask_info)
return -EIO;
+ if (!mask_info->ptr || !mask_info->update_buf) {
+ pr_err("diag: In %s, invalid input mask_info->ptr: %pK, mask_info->update_buf: %pK\n",
+ __func__, mask_info->ptr, mask_info->update_buf);
+ return -EINVAL;
+ }
mutex_lock(&driver->diag_maskclear_mutex);
if (driver->mask_clear) {
DIAG_LOG(DIAG_DEBUG_PERIPHERALS,
@@ -1900,6 +1993,13 @@ int diag_copy_to_user_msg_mask(char __user *buf, size_t count,
mutex_lock(&driver->msg_mask_lock);
mask = (struct diag_msg_mask_t *)(mask_info->ptr);
+ if (!mask->ptr) {
+ pr_err("diag: Invalid input in %s, mask->ptr: %pK\n",
+ __func__, mask->ptr);
+ mutex_unlock(&driver->msg_mask_lock);
+ mutex_unlock(&mask_info->lock);
+ return -EINVAL;
+ }
for (i = 0; i < driver->msg_mask_tbl_count; i++, mask++) {
ptr = mask_info->update_buf;
len = 0;
@@ -1957,8 +2057,20 @@ int diag_copy_to_user_log_mask(char __user *buf, size_t count,
if (!mask_info)
return -EIO;
+ if (!mask_info->ptr || !mask_info->update_buf) {
+ pr_err("diag: In %s, invalid input mask_info->ptr: %pK, mask_info->update_buf: %pK\n",
+ __func__, mask_info->ptr, mask_info->update_buf);
+ return -EINVAL;
+ }
+
mutex_lock(&mask_info->lock);
mask = (struct diag_log_mask_t *)(mask_info->ptr);
+ if (!mask->ptr) {
+ pr_err("diag: Invalid input in %s, mask->ptr: %pK\n",
+ __func__, mask->ptr);
+ mutex_unlock(&mask_info->lock);
+ return -EINVAL;
+ }
for (i = 0; i < MAX_EQUIP_ID; i++, mask++) {
ptr = mask_info->update_buf;
len = 0;
@@ -2021,6 +2133,14 @@ void diag_send_updates_peripheral(uint8_t peripheral)
driver->real_time_mode[DIAG_LOCAL_PROC]);
diag_send_peripheral_buffering_mode(
&driver->buffering_mode[peripheral]);
+
+ /*
+ * Clear mask_update variable after updating
+ * logging masks to peripheral.
+ */
+ mutex_lock(&driver->cntl_lock);
+ driver->mask_update ^= PERIPHERAL_MASK(peripheral);
+ mutex_unlock(&driver->cntl_lock);
}
}
diff --git a/drivers/char/diag/diag_memorydevice.c b/drivers/char/diag/diag_memorydevice.c
index 6377677..9cecb03 100644
--- a/drivers/char/diag/diag_memorydevice.c
+++ b/drivers/char/diag/diag_memorydevice.c
@@ -198,8 +198,10 @@ int diag_md_write(int id, unsigned char *buf, int len, int ctx)
continue;
found = 1;
- driver->data_ready[i] |= USER_SPACE_DATA_TYPE;
- atomic_inc(&driver->data_ready_notif[i]);
+ if (!(driver->data_ready[i] & USER_SPACE_DATA_TYPE)) {
+ driver->data_ready[i] |= USER_SPACE_DATA_TYPE;
+ atomic_inc(&driver->data_ready_notif[i]);
+ }
pr_debug("diag: wake up logging process\n");
wake_up_interruptible(&driver->wait_q);
}
diff --git a/drivers/char/diag/diagchar.h b/drivers/char/diag/diagchar.h
index 74180e5..9de40b0 100644
--- a/drivers/char/diag/diagchar.h
+++ b/drivers/char/diag/diagchar.h
@@ -519,6 +519,7 @@ struct diag_feature_t {
uint8_t encode_hdlc;
uint8_t untag_header;
uint8_t peripheral_buffering;
+ uint8_t pd_buffering;
uint8_t mask_centralization;
uint8_t stm_support;
uint8_t sockets_enabled;
@@ -552,6 +553,7 @@ struct diagchar_dev {
int supports_separate_cmdrsp;
int supports_apps_hdlc_encoding;
int supports_apps_header_untagging;
+ int supports_pd_buffering;
int peripheral_untag[NUM_PERIPHERALS];
int supports_sockets;
/* The state requested in the STM command */
@@ -605,8 +607,8 @@ struct diagchar_dev {
struct diagfwd_info *diagfwd_cmd[NUM_PERIPHERALS];
struct diagfwd_info *diagfwd_dci_cmd[NUM_PERIPHERALS];
struct diag_feature_t feature[NUM_PERIPHERALS];
- struct diag_buffering_mode_t buffering_mode[NUM_PERIPHERALS];
- uint8_t buffering_flag[NUM_PERIPHERALS];
+ struct diag_buffering_mode_t buffering_mode[NUM_MD_SESSIONS];
+ uint8_t buffering_flag[NUM_MD_SESSIONS];
struct mutex mode_lock;
unsigned char *user_space_data_buf;
uint8_t user_space_data_busy;
diff --git a/drivers/char/diag/diagchar_core.c b/drivers/char/diag/diagchar_core.c
index 354e676..a1c9d68 100644
--- a/drivers/char/diag/diagchar_core.c
+++ b/drivers/char/diag/diagchar_core.c
@@ -1848,9 +1848,10 @@ static int diag_ioctl_lsm_deinit(void)
mutex_unlock(&driver->diagchar_mutex);
return -EINVAL;
}
-
- driver->data_ready[i] |= DEINIT_TYPE;
- atomic_inc(&driver->data_ready_notif[i]);
+ if (!(driver->data_ready[i] & DEINIT_TYPE)) {
+ driver->data_ready[i] |= DEINIT_TYPE;
+ atomic_inc(&driver->data_ready_notif[i]);
+ }
mutex_unlock(&driver->diagchar_mutex);
wake_up_interruptible(&driver->wait_q);
@@ -1960,12 +1961,33 @@ static int diag_ioctl_get_real_time(unsigned long ioarg)
static int diag_ioctl_set_buffering_mode(unsigned long ioarg)
{
struct diag_buffering_mode_t params;
+ int peripheral = 0;
+ uint8_t diag_id = 0;
if (copy_from_user(&params, (void __user *)ioarg, sizeof(params)))
return -EFAULT;
- if (params.peripheral >= NUM_PERIPHERALS)
- return -EINVAL;
+ diag_map_pd_to_diagid(params.peripheral, &diag_id, &peripheral);
+
+ if ((peripheral < 0) ||
+ peripheral >= NUM_PERIPHERALS) {
+ pr_err("diag: In %s, invalid peripheral = %d\n", __func__,
+ peripheral);
+ return -EIO;
+ }
+
+ if (params.peripheral > NUM_PERIPHERALS &&
+ !driver->feature[peripheral].pd_buffering) {
+ pr_err("diag: In %s, pd buffering not supported for peripheral:%d\n",
+ __func__, peripheral);
+ return -EIO;
+ }
+
+ if (!driver->feature[peripheral].peripheral_buffering) {
+ pr_err("diag: In %s, peripheral %d doesn't support buffering\n",
+ __func__, peripheral);
+ return -EIO;
+ }
mutex_lock(&driver->mode_lock);
driver->buffering_flag[params.peripheral] = 1;
@@ -1976,24 +1998,29 @@ static int diag_ioctl_set_buffering_mode(unsigned long ioarg)
static int diag_ioctl_peripheral_drain_immediate(unsigned long ioarg)
{
- uint8_t peripheral;
+ uint8_t pd, diag_id = 0;
+ int peripheral = 0;
- if (copy_from_user(&peripheral, (void __user *)ioarg, sizeof(uint8_t)))
+ if (copy_from_user(&pd, (void __user *)ioarg, sizeof(uint8_t)))
return -EFAULT;
- if (peripheral >= NUM_PERIPHERALS) {
+ diag_map_pd_to_diagid(pd, &diag_id, &peripheral);
+
+ if ((peripheral < 0) ||
+ peripheral >= NUM_PERIPHERALS) {
pr_err("diag: In %s, invalid peripheral %d\n", __func__,
peripheral);
return -EINVAL;
}
- if (!driver->feature[peripheral].peripheral_buffering) {
- pr_err("diag: In %s, peripheral %d doesn't support buffering\n",
- __func__, peripheral);
+ if (pd > NUM_PERIPHERALS &&
+ !driver->feature[peripheral].pd_buffering) {
+ pr_err("diag: In %s, pd buffering not supported for peripheral:%d\n",
+ __func__, peripheral);
return -EIO;
}
- return diag_send_peripheral_drain_immediate(peripheral);
+ return diag_send_peripheral_drain_immediate(pd, diag_id, peripheral);
}
static int diag_ioctl_dci_support(unsigned long ioarg)
diff --git a/drivers/char/diag/diagfwd.c b/drivers/char/diag/diagfwd.c
index 34624ad..da13912 100644
--- a/drivers/char/diag/diagfwd.c
+++ b/drivers/char/diag/diagfwd.c
@@ -492,7 +492,8 @@ void diag_update_userspace_clients(unsigned int type)
mutex_lock(&driver->diagchar_mutex);
for (i = 0; i < driver->num_clients; i++)
- if (driver->client_map[i].pid != 0) {
+ if (driver->client_map[i].pid != 0 &&
+ !(driver->data_ready[i] & type)) {
driver->data_ready[i] |= type;
atomic_inc(&driver->data_ready_notif[i]);
}
@@ -511,9 +512,11 @@ void diag_update_md_clients(unsigned int type)
if (driver->client_map[j].pid != 0 &&
driver->client_map[j].pid ==
driver->md_session_map[i]->pid) {
- driver->data_ready[j] |= type;
- atomic_inc(
+ if (!(driver->data_ready[j] & type)) {
+ driver->data_ready[j] |= type;
+ atomic_inc(
&driver->data_ready_notif[j]);
+ }
break;
}
}
@@ -528,8 +531,10 @@ void diag_update_sleeping_process(int process_id, int data_type)
mutex_lock(&driver->diagchar_mutex);
for (i = 0; i < driver->num_clients; i++)
if (driver->client_map[i].pid == process_id) {
- driver->data_ready[i] |= data_type;
- atomic_inc(&driver->data_ready_notif[i]);
+ if (!(driver->data_ready[i] & data_type)) {
+ driver->data_ready[i] |= data_type;
+ atomic_inc(&driver->data_ready_notif[i]);
+ }
break;
}
wake_up_interruptible(&driver->wait_q);
@@ -1725,6 +1730,7 @@ int diagfwd_init(void)
driver->supports_separate_cmdrsp = 1;
driver->supports_apps_hdlc_encoding = 1;
driver->supports_apps_header_untagging = 1;
+ driver->supports_pd_buffering = 1;
for (i = 0; i < NUM_PERIPHERALS; i++)
driver->peripheral_untag[i] = 0;
mutex_init(&driver->diag_hdlc_mutex);
@@ -1755,6 +1761,7 @@ int diagfwd_init(void)
driver->feature[i].stm_support = DISABLE_STM;
driver->feature[i].rcvd_feature_mask = 0;
driver->feature[i].peripheral_buffering = 0;
+ driver->feature[i].pd_buffering = 0;
driver->feature[i].encode_hdlc = 0;
driver->feature[i].untag_header =
DISABLE_PKT_HEADER_UNTAGGING;
@@ -1762,6 +1769,9 @@ int diagfwd_init(void)
driver->feature[i].log_on_demand = 0;
driver->feature[i].sent_feature_mask = 0;
driver->feature[i].diag_id_support = 0;
+ }
+
+ for (i = 0; i < NUM_MD_SESSIONS; i++) {
driver->buffering_mode[i].peripheral = i;
driver->buffering_mode[i].mode = DIAG_BUFFERING_MODE_STREAMING;
driver->buffering_mode[i].high_wm_val = DEFAULT_HIGH_WM_VAL;
diff --git a/drivers/char/diag/diagfwd_cntl.c b/drivers/char/diag/diagfwd_cntl.c
index 26661e6..162d53f 100644
--- a/drivers/char/diag/diagfwd_cntl.c
+++ b/drivers/char/diag/diagfwd_cntl.c
@@ -39,9 +39,6 @@ static void diag_mask_update_work_fn(struct work_struct *work)
for (peripheral = 0; peripheral <= NUM_PERIPHERALS; peripheral++) {
if (!(driver->mask_update & PERIPHERAL_MASK(peripheral)))
continue;
- mutex_lock(&driver->cntl_lock);
- driver->mask_update ^= PERIPHERAL_MASK(peripheral);
- mutex_unlock(&driver->cntl_lock);
diag_send_updates_peripheral(peripheral);
}
}
@@ -423,6 +420,8 @@ static void process_incoming_feature_mask(uint8_t *buf, uint32_t len,
enable_socket_feature(peripheral);
if (FEATURE_SUPPORTED(F_DIAG_DIAGID_SUPPORT))
driver->feature[peripheral].diag_id_support = 1;
+ if (FEATURE_SUPPORTED(F_DIAG_PD_BUFFERING))
+ driver->feature[peripheral].pd_buffering = 1;
}
process_socket_feature(peripheral);
@@ -832,7 +831,7 @@ static void process_diagid(uint8_t *buf, uint32_t len,
*/
if (root_str) {
driver->diag_id_sent[peripheral] = 1;
- diag_send_updates_peripheral(peripheral);
+ queue_work(driver->cntl_wq, &driver->mask_update_work);
}
fwd_info = &peripheral_info[TYPE_DATA][peripheral];
diagfwd_buffers_init(fwd_info);
@@ -947,32 +946,54 @@ static int diag_compute_real_time(int idx)
}
static void diag_create_diag_mode_ctrl_pkt(unsigned char *dest_buf,
- int real_time)
+ uint8_t diag_id, int real_time)
{
struct diag_ctrl_msg_diagmode diagmode;
+ struct diag_ctrl_msg_diagmode_v2 diagmode_v2;
int msg_size = sizeof(struct diag_ctrl_msg_diagmode);
+ int msg_size_2 = sizeof(struct diag_ctrl_msg_diagmode_v2);
if (!dest_buf)
return;
- diagmode.ctrl_pkt_id = DIAG_CTRL_MSG_DIAGMODE;
- diagmode.ctrl_pkt_data_len = DIAG_MODE_PKT_LEN;
- diagmode.version = 1;
- diagmode.sleep_vote = real_time ? 1 : 0;
- /*
- * 0 - Disables real-time logging (to prevent
- * frequent APPS wake-ups, etc.).
- * 1 - Enable real-time logging
- */
- diagmode.real_time = real_time;
- diagmode.use_nrt_values = 0;
- diagmode.commit_threshold = 0;
- diagmode.sleep_threshold = 0;
- diagmode.sleep_time = 0;
- diagmode.drain_timer_val = 0;
- diagmode.event_stale_timer_val = 0;
-
- memcpy(dest_buf, &diagmode, msg_size);
+ if (diag_id) {
+ diagmode_v2.ctrl_pkt_id = DIAG_CTRL_MSG_DIAGMODE;
+ diagmode_v2.ctrl_pkt_data_len = DIAG_MODE_PKT_LEN_V2;
+ diagmode_v2.version = 2;
+ diagmode_v2.sleep_vote = real_time ? 1 : 0;
+ /*
+ * 0 - Disables real-time logging (to prevent
+ * frequent APPS wake-ups, etc.).
+ * 1 - Enable real-time logging
+ */
+ diagmode_v2.real_time = real_time;
+ diagmode_v2.use_nrt_values = 0;
+ diagmode_v2.commit_threshold = 0;
+ diagmode_v2.sleep_threshold = 0;
+ diagmode_v2.sleep_time = 0;
+ diagmode_v2.drain_timer_val = 0;
+ diagmode_v2.event_stale_timer_val = 0;
+ diagmode_v2.diag_id = diag_id;
+ memcpy(dest_buf, &diagmode_v2, msg_size_2);
+ } else {
+ diagmode.ctrl_pkt_id = DIAG_CTRL_MSG_DIAGMODE;
+ diagmode.ctrl_pkt_data_len = DIAG_MODE_PKT_LEN;
+ diagmode.version = 1;
+ diagmode.sleep_vote = real_time ? 1 : 0;
+ /*
+ * 0 - Disables real-time logging (to prevent
+ * frequent APPS wake-ups, etc.).
+ * 1 - Enable real-time logging
+ */
+ diagmode.real_time = real_time;
+ diagmode.use_nrt_values = 0;
+ diagmode.commit_threshold = 0;
+ diagmode.sleep_threshold = 0;
+ diagmode.sleep_time = 0;
+ diagmode.drain_timer_val = 0;
+ diagmode.event_stale_timer_val = 0;
+ memcpy(dest_buf, &diagmode, msg_size);
+ }
}
void diag_update_proc_vote(uint16_t proc, uint8_t vote, int index)
@@ -1057,7 +1078,7 @@ static void diag_send_diag_mode_update_remote(int token, int real_time)
memcpy(buf + write_len, &dci_header, dci_header_size);
write_len += dci_header_size;
- diag_create_diag_mode_ctrl_pkt(buf + write_len, real_time);
+ diag_create_diag_mode_ctrl_pkt(buf + write_len, 0, real_time);
write_len += msg_size;
*(buf + write_len) = CONTROL_CHAR; /* End Terminator */
write_len += sizeof(uint8_t);
@@ -1163,14 +1184,18 @@ void diag_real_time_work_fn(struct work_struct *work)
}
#endif
-static int __diag_send_real_time_update(uint8_t peripheral, int real_time)
+static int __diag_send_real_time_update(uint8_t peripheral, int real_time,
+ uint8_t diag_id)
{
- char buf[sizeof(struct diag_ctrl_msg_diagmode)];
- int msg_size = sizeof(struct diag_ctrl_msg_diagmode);
+ char buf[sizeof(struct diag_ctrl_msg_diagmode_v2)];
+ int msg_size = 0;
int err = 0;
- if (peripheral >= NUM_PERIPHERALS)
+ if (peripheral >= NUM_PERIPHERALS) {
+ pr_err("diag: In %s, invalid peripheral %d\n", __func__,
+ peripheral);
return -EINVAL;
+ }
if (!driver->diagfwd_cntl[peripheral] ||
!driver->diagfwd_cntl[peripheral]->ch_open) {
@@ -1185,12 +1210,17 @@ static int __diag_send_real_time_update(uint8_t peripheral, int real_time)
return -EINVAL;
}
- diag_create_diag_mode_ctrl_pkt(buf, real_time);
+ msg_size = (diag_id ? sizeof(struct diag_ctrl_msg_diagmode_v2) :
+ sizeof(struct diag_ctrl_msg_diagmode));
+
+ diag_create_diag_mode_ctrl_pkt(buf, diag_id, real_time);
mutex_lock(&driver->diag_cntl_mutex);
+
err = diagfwd_write(peripheral, TYPE_CNTL, buf, msg_size);
+
if (err && err != -ENODEV) {
- pr_err("diag: In %s, unable to write to socket, peripheral: %d, type: %d, len: %d, err: %d\n",
+ pr_err("diag: In %s, unable to write, peripheral: %d, type: %d, len: %d, err: %d\n",
__func__, peripheral, TYPE_CNTL,
msg_size, err);
} else {
@@ -1216,27 +1246,56 @@ int diag_send_real_time_update(uint8_t peripheral, int real_time)
return -EINVAL;
}
- return __diag_send_real_time_update(peripheral, real_time);
+ return __diag_send_real_time_update(peripheral, real_time, 0);
+}
+
+void diag_map_pd_to_diagid(uint8_t pd, uint8_t *diag_id, int *peripheral)
+{
+ if (!diag_search_diagid_by_pd(pd, (void *)diag_id,
+ (void *)peripheral)) {
+ *diag_id = 0;
+ if ((pd >= 0) && pd < NUM_PERIPHERALS)
+ *peripheral = pd;
+ else
+ *peripheral = -EINVAL;
+ }
+
+ if (*peripheral >= 0)
+ if (!driver->feature[*peripheral].pd_buffering)
+ *diag_id = 0;
}
int diag_send_peripheral_buffering_mode(struct diag_buffering_mode_t *params)
{
int err = 0;
int mode = MODE_REALTIME;
- uint8_t peripheral = 0;
+ int peripheral = 0;
+ uint8_t diag_id = 0;
if (!params)
return -EIO;
- peripheral = params->peripheral;
- if (peripheral >= NUM_PERIPHERALS) {
+ diag_map_pd_to_diagid(params->peripheral,
+ &diag_id, &peripheral);
+
+ if ((peripheral < 0) ||
+ peripheral >= NUM_PERIPHERALS) {
pr_err("diag: In %s, invalid peripheral %d\n", __func__,
peripheral);
return -EINVAL;
}
- if (!driver->buffering_flag[peripheral])
+ if (!driver->buffering_flag[params->peripheral]) {
+ pr_err("diag: In %s, buffering flag not set for %d\n", __func__,
+ params->peripheral);
return -EINVAL;
+ }
+
+ if (!driver->feature[peripheral].peripheral_buffering) {
+ pr_err("diag: In %s, peripheral %d doesn't support buffering\n",
+ __func__, peripheral);
+ return -EIO;
+ }
switch (params->mode) {
case DIAG_BUFFERING_MODE_STREAMING:
@@ -1255,7 +1314,7 @@ int diag_send_peripheral_buffering_mode(struct diag_buffering_mode_t *params)
if (!driver->feature[peripheral].peripheral_buffering) {
pr_debug("diag: In %s, peripheral %d doesn't support buffering\n",
__func__, peripheral);
- driver->buffering_flag[peripheral] = 0;
+ driver->buffering_flag[params->peripheral] = 0;
return -EIO;
}
@@ -1270,35 +1329,39 @@ int diag_send_peripheral_buffering_mode(struct diag_buffering_mode_t *params)
(params->low_wm_val != DIAG_MIN_WM_VAL))) {
pr_err("diag: In %s, invalid watermark values, high: %d, low: %d, peripheral: %d\n",
__func__, params->high_wm_val, params->low_wm_val,
- peripheral);
+ params->peripheral);
return -EINVAL;
}
mutex_lock(&driver->mode_lock);
- err = diag_send_buffering_tx_mode_pkt(peripheral, params);
+ err = diag_send_buffering_tx_mode_pkt(peripheral, diag_id, params);
if (err) {
pr_err("diag: In %s, unable to send buffering mode packet to peripheral %d, err: %d\n",
__func__, peripheral, err);
goto fail;
}
- err = diag_send_buffering_wm_values(peripheral, params);
+ err = diag_send_buffering_wm_values(peripheral, diag_id, params);
if (err) {
pr_err("diag: In %s, unable to send buffering wm value packet to peripheral %d, err: %d\n",
__func__, peripheral, err);
goto fail;
}
- err = __diag_send_real_time_update(peripheral, mode);
+ err = __diag_send_real_time_update(peripheral, mode, diag_id);
if (err) {
pr_err("diag: In %s, unable to send mode update to peripheral %d, mode: %d, err: %d\n",
__func__, peripheral, mode, err);
goto fail;
}
- driver->buffering_mode[peripheral].peripheral = peripheral;
- driver->buffering_mode[peripheral].mode = params->mode;
- driver->buffering_mode[peripheral].low_wm_val = params->low_wm_val;
- driver->buffering_mode[peripheral].high_wm_val = params->high_wm_val;
+ driver->buffering_mode[params->peripheral].peripheral =
+ params->peripheral;
+ driver->buffering_mode[params->peripheral].mode =
+ params->mode;
+ driver->buffering_mode[params->peripheral].low_wm_val =
+ params->low_wm_val;
+ driver->buffering_mode[params->peripheral].high_wm_val =
+ params->high_wm_val;
if (params->mode == DIAG_BUFFERING_MODE_STREAMING)
- driver->buffering_flag[peripheral] = 0;
+ driver->buffering_flag[params->peripheral] = 0;
fail:
mutex_unlock(&driver->mode_lock);
return err;
@@ -1337,10 +1400,12 @@ int diag_send_stm_state(uint8_t peripheral, uint8_t stm_control_data)
return err;
}
-int diag_send_peripheral_drain_immediate(uint8_t peripheral)
+int diag_send_peripheral_drain_immediate(uint8_t pd,
+ uint8_t diag_id, int peripheral)
{
int err = 0;
struct diag_ctrl_drain_immediate ctrl_pkt;
+ struct diag_ctrl_drain_immediate_v2 ctrl_pkt_v2;
if (!driver->feature[peripheral].peripheral_buffering) {
pr_debug("diag: In %s, peripheral %d doesn't support buffering\n",
@@ -1355,32 +1420,57 @@ int diag_send_peripheral_drain_immediate(uint8_t peripheral)
return -ENODEV;
}
- ctrl_pkt.pkt_id = DIAG_CTRL_MSG_PERIPHERAL_BUF_DRAIN_IMM;
- /* The length of the ctrl pkt is size of version and stream id */
- ctrl_pkt.len = sizeof(uint32_t) + sizeof(uint8_t);
- ctrl_pkt.version = 1;
- ctrl_pkt.stream_id = 1;
-
- err = diagfwd_write(peripheral, TYPE_CNTL, &ctrl_pkt, sizeof(ctrl_pkt));
- if (err && err != -ENODEV) {
- pr_err("diag: Unable to send drain immediate ctrl packet to peripheral %d, err: %d\n",
- peripheral, err);
+ if (diag_id && driver->feature[peripheral].pd_buffering) {
+ ctrl_pkt_v2.pkt_id = DIAG_CTRL_MSG_PERIPHERAL_BUF_DRAIN_IMM;
+ /*
+ * The length of the ctrl pkt is size of version,
+ * diag_id and stream id
+ */
+ ctrl_pkt_v2.len = sizeof(uint32_t) + (2 * sizeof(uint8_t));
+ ctrl_pkt_v2.version = 2;
+ ctrl_pkt_v2.diag_id = diag_id;
+ ctrl_pkt_v2.stream_id = 1;
+ err = diagfwd_write(peripheral, TYPE_CNTL, &ctrl_pkt_v2,
+ sizeof(ctrl_pkt_v2));
+ if (err && err != -ENODEV) {
+ pr_err("diag: Unable to send drain immediate ctrl packet to peripheral %d, err: %d\n",
+ peripheral, err);
+ }
+ } else {
+ ctrl_pkt.pkt_id = DIAG_CTRL_MSG_PERIPHERAL_BUF_DRAIN_IMM;
+ /*
+ * The length of the ctrl pkt is
+ * size of version and stream id
+ */
+ ctrl_pkt.len = sizeof(uint32_t) + sizeof(uint8_t);
+ ctrl_pkt.version = 1;
+ ctrl_pkt.stream_id = 1;
+ err = diagfwd_write(peripheral, TYPE_CNTL, &ctrl_pkt,
+ sizeof(ctrl_pkt));
+ if (err && err != -ENODEV) {
+ pr_err("diag: Unable to send drain immediate ctrl packet to peripheral %d, err: %d\n",
+ peripheral, err);
+ }
}
return err;
}
int diag_send_buffering_tx_mode_pkt(uint8_t peripheral,
- struct diag_buffering_mode_t *params)
+ uint8_t diag_id, struct diag_buffering_mode_t *params)
{
int err = 0;
struct diag_ctrl_peripheral_tx_mode ctrl_pkt;
+ struct diag_ctrl_peripheral_tx_mode_v2 ctrl_pkt_v2;
if (!params)
return -EIO;
- if (peripheral >= NUM_PERIPHERALS)
+ if (peripheral >= NUM_PERIPHERALS) {
+ pr_err("diag: In %s, invalid peripheral %d\n", __func__,
+ peripheral);
return -EINVAL;
+ }
if (!driver->feature[peripheral].peripheral_buffering) {
pr_debug("diag: In %s, peripheral %d doesn't support buffering\n",
@@ -1388,9 +1478,6 @@ int diag_send_buffering_tx_mode_pkt(uint8_t peripheral,
return -EINVAL;
}
- if (params->peripheral != peripheral)
- return -EINVAL;
-
switch (params->mode) {
case DIAG_BUFFERING_MODE_STREAMING:
case DIAG_BUFFERING_MODE_THRESHOLD:
@@ -1402,36 +1489,67 @@ int diag_send_buffering_tx_mode_pkt(uint8_t peripheral,
return -EINVAL;
}
- ctrl_pkt.pkt_id = DIAG_CTRL_MSG_CONFIG_PERIPHERAL_TX_MODE;
- /* Control packet length is size of version, stream_id and tx_mode */
- ctrl_pkt.len = sizeof(uint32_t) + (2 * sizeof(uint8_t));
- ctrl_pkt.version = 1;
- ctrl_pkt.stream_id = 1;
- ctrl_pkt.tx_mode = params->mode;
+ if (diag_id &&
+ driver->feature[peripheral].pd_buffering) {
- err = diagfwd_write(peripheral, TYPE_CNTL, &ctrl_pkt, sizeof(ctrl_pkt));
- if (err && err != -ENODEV) {
- pr_err("diag: Unable to send tx_mode ctrl packet to peripheral %d, err: %d\n",
- peripheral, err);
- goto fail;
+ ctrl_pkt_v2.pkt_id = DIAG_CTRL_MSG_CONFIG_PERIPHERAL_TX_MODE;
+ /*
+ * Control packet length is size of version, diag_id,
+ * stream_id and tx_mode
+ */
+ ctrl_pkt_v2.len = sizeof(uint32_t) + (3 * sizeof(uint8_t));
+ ctrl_pkt_v2.version = 2;
+ ctrl_pkt_v2.diag_id = diag_id;
+ ctrl_pkt_v2.stream_id = 1;
+ ctrl_pkt_v2.tx_mode = params->mode;
+
+ err = diagfwd_write(peripheral, TYPE_CNTL, &ctrl_pkt_v2,
+ sizeof(ctrl_pkt_v2));
+ if (err && err != -ENODEV) {
+ pr_err("diag: Unable to send tx_mode ctrl packet to peripheral %d, err: %d\n",
+ peripheral, err);
+ goto fail;
+ }
+ } else {
+ ctrl_pkt.pkt_id = DIAG_CTRL_MSG_CONFIG_PERIPHERAL_TX_MODE;
+ /*
+ * Control packet length is size of version,
+ * stream_id and tx_mode
+ */
+ ctrl_pkt.len = sizeof(uint32_t) + (2 * sizeof(uint8_t));
+ ctrl_pkt.version = 1;
+ ctrl_pkt.stream_id = 1;
+ ctrl_pkt.tx_mode = params->mode;
+
+ err = diagfwd_write(peripheral, TYPE_CNTL, &ctrl_pkt,
+ sizeof(ctrl_pkt));
+ if (err && err != -ENODEV) {
+ pr_err("diag: Unable to send tx_mode ctrl packet to peripheral %d, err: %d\n",
+ peripheral, err);
+ goto fail;
+ }
}
- driver->buffering_mode[peripheral].mode = params->mode;
+ driver->buffering_mode[params->peripheral].mode = params->mode;
fail:
return err;
}
int diag_send_buffering_wm_values(uint8_t peripheral,
- struct diag_buffering_mode_t *params)
+ uint8_t diag_id, struct diag_buffering_mode_t *params)
{
int err = 0;
struct diag_ctrl_set_wq_val ctrl_pkt;
+ struct diag_ctrl_set_wq_val_v2 ctrl_pkt_v2;
if (!params)
return -EIO;
- if (peripheral >= NUM_PERIPHERALS)
+ if (peripheral >= NUM_PERIPHERALS) {
+ pr_err("diag: In %s, invalid peripheral %d\n", __func__,
+ peripheral);
return -EINVAL;
+ }
if (!driver->feature[peripheral].peripheral_buffering) {
pr_debug("diag: In %s, peripheral %d doesn't support buffering\n",
@@ -1446,9 +1564,6 @@ int diag_send_buffering_wm_values(uint8_t peripheral,
return -ENODEV;
}
- if (params->peripheral != peripheral)
- return -EINVAL;
-
switch (params->mode) {
case DIAG_BUFFERING_MODE_STREAMING:
case DIAG_BUFFERING_MODE_THRESHOLD:
@@ -1460,21 +1575,45 @@ int diag_send_buffering_wm_values(uint8_t peripheral,
return -EINVAL;
}
- ctrl_pkt.pkt_id = DIAG_CTRL_MSG_CONFIG_PERIPHERAL_WMQ_VAL;
- /* Control packet length is size of version, stream_id and wmq values */
- ctrl_pkt.len = sizeof(uint32_t) + (3 * sizeof(uint8_t));
- ctrl_pkt.version = 1;
- ctrl_pkt.stream_id = 1;
- ctrl_pkt.high_wm_val = params->high_wm_val;
- ctrl_pkt.low_wm_val = params->low_wm_val;
+ if (diag_id &&
+ driver->feature[peripheral].pd_buffering) {
+ ctrl_pkt_v2.pkt_id = DIAG_CTRL_MSG_CONFIG_PERIPHERAL_WMQ_VAL;
+ /*
+ * Control packet length is size of version, diag_id,
+ * stream_id and wmq values
+ */
+ ctrl_pkt_v2.len = sizeof(uint32_t) + (4 * sizeof(uint8_t));
+ ctrl_pkt_v2.version = 2;
+ ctrl_pkt_v2.diag_id = diag_id;
+ ctrl_pkt_v2.stream_id = 1;
+ ctrl_pkt_v2.high_wm_val = params->high_wm_val;
+ ctrl_pkt_v2.low_wm_val = params->low_wm_val;
- err = diagfwd_write(peripheral, TYPE_CNTL, &ctrl_pkt,
- sizeof(ctrl_pkt));
- if (err && err != -ENODEV) {
- pr_err("diag: Unable to send watermark values to peripheral %d, err: %d\n",
- peripheral, err);
+ err = diagfwd_write(peripheral, TYPE_CNTL, &ctrl_pkt_v2,
+ sizeof(ctrl_pkt_v2));
+ if (err && err != -ENODEV) {
+ pr_err("diag: Unable to send watermark values to peripheral %d, err: %d\n",
+ peripheral, err);
+ }
+ } else {
+ ctrl_pkt.pkt_id = DIAG_CTRL_MSG_CONFIG_PERIPHERAL_WMQ_VAL;
+ /*
+ * Control packet length is size of version,
+ * stream_id and wmq values
+ */
+ ctrl_pkt.len = sizeof(uint32_t) + (3 * sizeof(uint8_t));
+ ctrl_pkt.version = 1;
+ ctrl_pkt.stream_id = 1;
+ ctrl_pkt.high_wm_val = params->high_wm_val;
+ ctrl_pkt.low_wm_val = params->low_wm_val;
+
+ err = diagfwd_write(peripheral, TYPE_CNTL, &ctrl_pkt,
+ sizeof(ctrl_pkt));
+ if (err && err != -ENODEV) {
+ pr_err("diag: Unable to send watermark values to peripheral %d, err: %d\n",
+ peripheral, err);
+ }
}
-
return err;
}
diff --git a/drivers/char/diag/diagfwd_cntl.h b/drivers/char/diag/diagfwd_cntl.h
index 1d8d167..848ad87 100644
--- a/drivers/char/diag/diagfwd_cntl.h
+++ b/drivers/char/diag/diagfwd_cntl.h
@@ -69,6 +69,7 @@
#define F_DIAG_DCI_EXTENDED_HEADER_SUPPORT 14
#define F_DIAG_DIAGID_SUPPORT 15
#define F_DIAG_PKT_HEADER_UNTAG 16
+#define F_DIAG_PD_BUFFERING 17
#define ENABLE_SEPARATE_CMDRSP 1
#define DISABLE_SEPARATE_CMDRSP 0
@@ -86,7 +87,8 @@
#define ENABLE_PKT_HEADER_UNTAGGING 1
#define DISABLE_PKT_HEADER_UNTAGGING 0
-#define DIAG_MODE_PKT_LEN 36
+#define DIAG_MODE_PKT_LEN 36
+#define DIAG_MODE_PKT_LEN_V2 37
struct diag_ctrl_pkt_header_t {
uint32_t pkt_id;
@@ -172,6 +174,21 @@ struct diag_ctrl_msg_diagmode {
uint32_t event_stale_timer_val;
} __packed;
+struct diag_ctrl_msg_diagmode_v2 {
+ uint32_t ctrl_pkt_id;
+ uint32_t ctrl_pkt_data_len;
+ uint32_t version;
+ uint32_t sleep_vote;
+ uint32_t real_time;
+ uint32_t use_nrt_values;
+ uint32_t commit_threshold;
+ uint32_t sleep_threshold;
+ uint32_t sleep_time;
+ uint32_t drain_timer_val;
+ uint32_t event_stale_timer_val;
+ uint8_t diag_id;
+} __packed;
+
struct diag_ctrl_msg_stm {
uint32_t ctrl_pkt_id;
uint32_t ctrl_pkt_data_len;
@@ -250,6 +267,15 @@ struct diag_ctrl_peripheral_tx_mode {
uint8_t tx_mode;
} __packed;
+struct diag_ctrl_peripheral_tx_mode_v2 {
+ uint32_t pkt_id;
+ uint32_t len;
+ uint32_t version;
+ uint8_t diag_id;
+ uint8_t stream_id;
+ uint8_t tx_mode;
+} __packed;
+
struct diag_ctrl_drain_immediate {
uint32_t pkt_id;
uint32_t len;
@@ -257,6 +283,14 @@ struct diag_ctrl_drain_immediate {
uint8_t stream_id;
} __packed;
+struct diag_ctrl_drain_immediate_v2 {
+ uint32_t pkt_id;
+ uint32_t len;
+ uint32_t version;
+ uint8_t diag_id;
+ uint8_t stream_id;
+} __packed;
+
struct diag_ctrl_set_wq_val {
uint32_t pkt_id;
uint32_t len;
@@ -266,6 +300,16 @@ struct diag_ctrl_set_wq_val {
uint8_t low_wm_val;
} __packed;
+struct diag_ctrl_set_wq_val_v2 {
+ uint32_t pkt_id;
+ uint32_t len;
+ uint32_t version;
+ uint8_t diag_id;
+ uint8_t stream_id;
+ uint8_t high_wm_val;
+ uint8_t low_wm_val;
+} __packed;
+
struct diag_ctrl_diagid {
uint32_t pkt_id;
uint32_t len;
@@ -290,9 +334,10 @@ void diag_update_proc_vote(uint16_t proc, uint8_t vote, int index);
void diag_update_real_time_vote(uint16_t proc, uint8_t real_time, int index);
void diag_real_time_work_fn(struct work_struct *work);
int diag_send_stm_state(uint8_t peripheral, uint8_t stm_control_data);
-int diag_send_peripheral_drain_immediate(uint8_t peripheral);
+int diag_send_peripheral_drain_immediate(uint8_t pd,
+ uint8_t diag_id, int peripheral);
int diag_send_buffering_tx_mode_pkt(uint8_t peripheral,
- struct diag_buffering_mode_t *params);
+ uint8_t diag_id, struct diag_buffering_mode_t *params);
int diag_send_buffering_wm_values(uint8_t peripheral,
- struct diag_buffering_mode_t *params);
+ uint8_t diag_id, struct diag_buffering_mode_t *params);
#endif
diff --git a/drivers/char/hw_random/msm_rng.c b/drivers/char/hw_random/msm_rng.c
index d5dd8ae..fdcef1d 100644
--- a/drivers/char/hw_random/msm_rng.c
+++ b/drivers/char/hw_random/msm_rng.c
@@ -53,6 +53,9 @@
#define MAX_HW_FIFO_DEPTH 16 /* FIFO is 16 words deep */
#define MAX_HW_FIFO_SIZE (MAX_HW_FIFO_DEPTH * 4) /* FIFO is 32 bits wide */
+#define RETRY_MAX_CNT 5 /* max retry times to read register */
+#define RETRY_DELAY_INTERVAL 440 /* retry delay interval in us */
+
struct msm_rng_device {
struct platform_device *pdev;
void __iomem *base;
@@ -96,7 +99,7 @@ static int msm_rng_direct_read(struct msm_rng_device *msm_rng_dev,
struct platform_device *pdev;
void __iomem *base;
size_t currsize = 0;
- u32 val;
+ u32 val = 0;
u32 *retdata = data;
int ret;
int failed = 0;
@@ -113,39 +116,41 @@ static int msm_rng_direct_read(struct msm_rng_device *msm_rng_dev,
if (msm_rng_dev->qrng_perf_client) {
ret = msm_bus_scale_client_update_request(
msm_rng_dev->qrng_perf_client, 1);
- if (ret)
+ if (ret) {
pr_err("bus_scale_client_update_req failed!\n");
+ goto bus_err;
+ }
}
/* enable PRNG clock */
ret = clk_prepare_enable(msm_rng_dev->prng_clk);
if (ret) {
- dev_err(&pdev->dev, "failed to enable clock in callback\n");
+ pr_err("failed to enable prng clock\n");
goto err;
}
/* read random data from h/w */
do {
/* check status bit if data is available */
- while (!(readl_relaxed(base + PRNG_STATUS_OFFSET)
+ if (!(readl_relaxed(base + PRNG_STATUS_OFFSET)
& 0x00000001)) {
- if (failed == 10) {
- pr_err("Data not available after retry\n");
+ if (failed++ == RETRY_MAX_CNT) {
+ if (currsize == 0)
+ pr_err("Data not available\n");
break;
}
- pr_err("msm_rng:Data not available!\n");
- msleep_interruptible(10);
- failed++;
+ udelay(RETRY_DELAY_INTERVAL);
+ } else {
+
+ /* read FIFO */
+ val = readl_relaxed(base + PRNG_DATA_OUT_OFFSET);
+
+ /* write data back to the caller's pointer */
+ *(retdata++) = val;
+ currsize += 4;
+ /* make sure we stay on 32bit boundary */
+ if ((max - currsize) < 4)
+ break;
}
- /* read FIFO */
- val = readl_relaxed(base + PRNG_DATA_OUT_OFFSET);
-
- /* write data back to callers pointer */
- *(retdata++) = val;
- currsize += 4;
- /* make sure we stay on 32bit boundary */
- if ((max - currsize) < 4)
- break;
-
} while (currsize < max);
/* vote to turn off clock */
@@ -157,6 +162,7 @@ static int msm_rng_direct_read(struct msm_rng_device *msm_rng_dev,
if (ret)
pr_err("bus_scale_client_update_req failed!\n");
}
+bus_err:
mutex_unlock(&msm_rng_dev->rng_lock);
val = 0L;
diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c
index 172a9dc..5d509cc 100644
--- a/drivers/char/ipmi/ipmi_msghandler.c
+++ b/drivers/char/ipmi/ipmi_msghandler.c
@@ -4029,7 +4029,8 @@ smi_from_recv_msg(ipmi_smi_t intf, struct ipmi_recv_msg *recv_msg,
}
static void check_msg_timeout(ipmi_smi_t intf, struct seq_table *ent,
- struct list_head *timeouts, long timeout_period,
+ struct list_head *timeouts,
+ unsigned long timeout_period,
int slot, unsigned long *flags,
unsigned int *waiting_msgs)
{
@@ -4042,8 +4043,8 @@ static void check_msg_timeout(ipmi_smi_t intf, struct seq_table *ent,
if (!ent->inuse)
return;
- ent->timeout -= timeout_period;
- if (ent->timeout > 0) {
+ if (timeout_period < ent->timeout) {
+ ent->timeout -= timeout_period;
(*waiting_msgs)++;
return;
}
@@ -4109,7 +4110,8 @@ static void check_msg_timeout(ipmi_smi_t intf, struct seq_table *ent,
}
}
-static unsigned int ipmi_timeout_handler(ipmi_smi_t intf, long timeout_period)
+static unsigned int ipmi_timeout_handler(ipmi_smi_t intf,
+ unsigned long timeout_period)
{
struct list_head timeouts;
struct ipmi_recv_msg *msg, *msg2;
diff --git a/drivers/char/msm_smd_pkt.c b/drivers/char/msm_smd_pkt.c
new file mode 100644
index 0000000..ff77cb2
--- /dev/null
+++ b/drivers/char/msm_smd_pkt.c
@@ -0,0 +1,1397 @@
+/* Copyright (c) 2008-2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+/*
+ * SMD Packet Driver -- Provides a binary SMD non-muxed packet port
+ * interface.
+ */
+
+#include <linux/slab.h>
+#include <linux/cdev.h>
+#include <linux/module.h>
+#include <linux/fs.h>
+#include <linux/device.h>
+#include <linux/sched.h>
+#include <linux/spinlock.h>
+#include <linux/mutex.h>
+#include <linux/delay.h>
+#include <linux/uaccess.h>
+#include <linux/workqueue.h>
+#include <linux/platform_device.h>
+#include <linux/completion.h>
+#include <linux/msm_smd_pkt.h>
+#include <linux/poll.h>
+#include <soc/qcom/smd.h>
+#include <soc/qcom/smsm.h>
+#include <soc/qcom/subsystem_restart.h>
+#include <asm/ioctls.h>
+#include <linux/pm.h>
+#include <linux/of.h>
+#include <linux/ipc_logging.h>
+
+#define MODULE_NAME "msm_smdpkt"
+#define DEVICE_NAME "smdpkt"
+#define WAKEUPSOURCE_TIMEOUT (2000) /* two seconds */
+
+struct smd_pkt_dev {
+ struct list_head dev_list;
+ char dev_name[SMD_MAX_CH_NAME_LEN];
+ char ch_name[SMD_MAX_CH_NAME_LEN];
+ uint32_t edge;
+
+ struct cdev cdev;
+ struct device *devicep;
+ void *pil;
+
+ struct smd_channel *ch;
+ struct mutex ch_lock;
+ struct mutex rx_lock;
+ struct mutex tx_lock;
+ wait_queue_head_t ch_read_wait_queue;
+ wait_queue_head_t ch_write_wait_queue;
+ wait_queue_head_t ch_opened_wait_queue;
+
+ int i;
+ int ref_cnt;
+
+ int blocking_write;
+ int is_open;
+ int poll_mode;
+ unsigned int ch_size;
+ uint open_modem_wait;
+
+ int has_reset;
+ int do_reset_notification;
+ struct completion ch_allocated;
+ struct wakeup_source pa_ws; /* Packet Arrival Wakeup Source */
+ struct work_struct packet_arrival_work;
+ spinlock_t pa_spinlock;
+ int ws_locked;
+};
+
+
+struct smd_pkt_driver {
+ struct list_head list;
+ int ref_cnt;
+ char pdriver_name[SMD_MAX_CH_NAME_LEN];
+ struct platform_driver driver;
+};
+
+static DEFINE_MUTEX(smd_pkt_driver_lock_lha1);
+static LIST_HEAD(smd_pkt_driver_list);
+
+struct class *smd_pkt_classp;
+static dev_t smd_pkt_number;
+static struct delayed_work loopback_work;
+static void check_and_wakeup_reader(struct smd_pkt_dev *smd_pkt_devp);
+static void check_and_wakeup_writer(struct smd_pkt_dev *smd_pkt_devp);
+static uint32_t is_modem_smsm_inited(void);
+
+static DEFINE_MUTEX(smd_pkt_dev_lock_lha1);
+static LIST_HEAD(smd_pkt_dev_list);
+static int num_smd_pkt_ports;
+
+#define SMD_PKT_IPC_LOG_PAGE_CNT 2
+static void *smd_pkt_ilctxt;
+
+static int msm_smd_pkt_debug_mask;
+module_param_named(debug_mask, msm_smd_pkt_debug_mask, int, 0664);
+
+enum {
+ SMD_PKT_STATUS = 1U << 0,
+ SMD_PKT_READ = 1U << 1,
+ SMD_PKT_WRITE = 1U << 2,
+ SMD_PKT_POLL = 1U << 5,
+};
+
+#define DEBUG
+
+#ifdef DEBUG
+
+#define SMD_PKT_LOG_STRING(x...) \
+do { \
+ if (smd_pkt_ilctxt) \
+ ipc_log_string(smd_pkt_ilctxt, "<SMD_PKT>: "x); \
+} while (0)
+
+#define D_STATUS(x...) \
+do { \
+ if (msm_smd_pkt_debug_mask & SMD_PKT_STATUS) \
+ pr_info("Status: "x); \
+ SMD_PKT_LOG_STRING(x); \
+} while (0)
+
+#define D_READ(x...) \
+do { \
+ if (msm_smd_pkt_debug_mask & SMD_PKT_READ) \
+ pr_info("Read: "x); \
+ SMD_PKT_LOG_STRING(x); \
+} while (0)
+
+#define D_WRITE(x...) \
+do { \
+ if (msm_smd_pkt_debug_mask & SMD_PKT_WRITE) \
+ pr_info("Write: "x); \
+ SMD_PKT_LOG_STRING(x); \
+} while (0)
+
+#define D_POLL(x...) \
+do { \
+ if (msm_smd_pkt_debug_mask & SMD_PKT_POLL) \
+ pr_info("Poll: "x); \
+ SMD_PKT_LOG_STRING(x); \
+} while (0)
+
+#define E_SMD_PKT_SSR(x) \
+do { \
+ if (x->do_reset_notification) \
+ pr_err("%s notifying reset for smd_pkt_dev id:%d\n", \
+ __func__, x->i); \
+} while (0)
+#else
+#define D_STATUS(x...) do {} while (0)
+#define D_READ(x...) do {} while (0)
+#define D_WRITE(x...) do {} while (0)
+#define D_POLL(x...) do {} while (0)
+#define E_SMD_PKT_SSR(x) do {} while (0)
+#endif
+
+static ssize_t open_timeout_store(struct device *d,
+ struct device_attribute *attr,
+ const char *buf,
+ size_t n)
+{
+ struct smd_pkt_dev *smd_pkt_devp;
+ unsigned long tmp;
+
+ mutex_lock(&smd_pkt_dev_lock_lha1);
+ list_for_each_entry(smd_pkt_devp, &smd_pkt_dev_list, dev_list) {
+ if (smd_pkt_devp->devicep == d) {
+ if (!kstrtoul(buf, 10, &tmp)) {
+ smd_pkt_devp->open_modem_wait = tmp;
+ mutex_unlock(&smd_pkt_dev_lock_lha1);
+ return n;
+ }
+ mutex_unlock(&smd_pkt_dev_lock_lha1);
+ pr_err("%s: unable to convert: %s to an int\n",
+ __func__, buf);
+ return -EINVAL;
+ }
+ }
+ mutex_unlock(&smd_pkt_dev_lock_lha1);
+
+ pr_err("%s: unable to match device to valid smd_pkt port\n", __func__);
+ return -EINVAL;
+}
+
+static ssize_t open_timeout_show(struct device *d,
+ struct device_attribute *attr,
+ char *buf)
+{
+ struct smd_pkt_dev *smd_pkt_devp;
+
+ mutex_lock(&smd_pkt_dev_lock_lha1);
+ list_for_each_entry(smd_pkt_devp, &smd_pkt_dev_list, dev_list) {
+ if (smd_pkt_devp->devicep == d) {
+ mutex_unlock(&smd_pkt_dev_lock_lha1);
+ return snprintf(buf, PAGE_SIZE, "%d\n",
+ smd_pkt_devp->open_modem_wait);
+ }
+ }
+ mutex_unlock(&smd_pkt_dev_lock_lha1);
+ pr_err("%s: unable to match device to valid smd_pkt port\n", __func__);
+ return -EINVAL;
+
+}
+
+static DEVICE_ATTR(open_timeout, 0664, open_timeout_show, open_timeout_store);
+
+/**
+ * loopback_edge_store() - Set the edge type for loopback device
+ * @d: Linux device structure
+ * @attr: Device attribute structure
+ * @buf: Input string
+ * @n: Length of the input string
+ *
+ * This function is used to set the loopback device edge at runtime
+ * by writing to the loopback_edge node.
+ */
+static ssize_t loopback_edge_store(struct device *d,
+ struct device_attribute *attr,
+ const char *buf,
+ size_t n)
+{
+ struct smd_pkt_dev *smd_pkt_devp;
+ unsigned long tmp;
+
+ mutex_lock(&smd_pkt_dev_lock_lha1);
+ list_for_each_entry(smd_pkt_devp, &smd_pkt_dev_list, dev_list) {
+ if (smd_pkt_devp->devicep == d) {
+ if (!kstrtoul(buf, 10, &tmp)) {
+ smd_pkt_devp->edge = tmp;
+ mutex_unlock(&smd_pkt_dev_lock_lha1);
+ return n;
+ }
+ mutex_unlock(&smd_pkt_dev_lock_lha1);
+ pr_err("%s: unable to convert: %s to an int\n",
+ __func__, buf);
+ return -EINVAL;
+ }
+ }
+ mutex_unlock(&smd_pkt_dev_lock_lha1);
+ pr_err("%s: unable to match device to valid smd_pkt port\n", __func__);
+ return -EINVAL;
+}
+
+/**
+ * loopback_edge_show() - Get the edge type for loopback device
+ * @d: Linux device structure
+ * @attr: Device attribute structure
+ * @buf: Output buffer
+ *
+ * This function is used to get the loopback device edge at runtime
+ * by reading the loopback_edge node.
+ */
+static ssize_t loopback_edge_show(struct device *d,
+ struct device_attribute *attr,
+ char *buf)
+{
+ struct smd_pkt_dev *smd_pkt_devp;
+
+ mutex_lock(&smd_pkt_dev_lock_lha1);
+ list_for_each_entry(smd_pkt_devp, &smd_pkt_dev_list, dev_list) {
+ if (smd_pkt_devp->devicep == d) {
+ mutex_unlock(&smd_pkt_dev_lock_lha1);
+ return snprintf(buf, PAGE_SIZE, "%d\n",
+ smd_pkt_devp->edge);
+ }
+ }
+ mutex_unlock(&smd_pkt_dev_lock_lha1);
+ pr_err("%s: unable to match device to valid smd_pkt port\n", __func__);
+ return -EINVAL;
+
+}
+
+static DEVICE_ATTR(loopback_edge, 0664, loopback_edge_show,
+ loopback_edge_store);
+
+static int notify_reset(struct smd_pkt_dev *smd_pkt_devp)
+{
+ smd_pkt_devp->do_reset_notification = 0;
+
+ return -ENETRESET;
+}
+
+static void clean_and_signal(struct smd_pkt_dev *smd_pkt_devp)
+{
+ smd_pkt_devp->do_reset_notification = 1;
+ smd_pkt_devp->has_reset = 1;
+
+ smd_pkt_devp->is_open = 0;
+
+ wake_up(&smd_pkt_devp->ch_read_wait_queue);
+ wake_up(&smd_pkt_devp->ch_write_wait_queue);
+ wake_up_interruptible(&smd_pkt_devp->ch_opened_wait_queue);
+ D_STATUS("%s smd_pkt_dev id:%d\n", __func__, smd_pkt_devp->i);
+}
+
+static void loopback_probe_worker(struct work_struct *work)
+{
+
+ /* Wait for the modem SMSM to be inited for the SMD
+ ** Loopback channel to be allocated at the modem. Since
+ ** the wait needs to be done at most once, using msleep
+ ** doesn't degrade the performance.
+ */
+ if (!is_modem_smsm_inited())
+ schedule_delayed_work(&loopback_work, msecs_to_jiffies(1000));
+ else
+ smsm_change_state(SMSM_APPS_STATE,
+ 0, SMSM_SMD_LOOPBACK);
+
+}
+
+static void packet_arrival_worker(struct work_struct *work)
+{
+ struct smd_pkt_dev *smd_pkt_devp;
+ unsigned long flags;
+
+ smd_pkt_devp = container_of(work, struct smd_pkt_dev,
+ packet_arrival_work);
+ mutex_lock(&smd_pkt_devp->ch_lock);
+ spin_lock_irqsave(&smd_pkt_devp->pa_spinlock, flags);
+ if (smd_pkt_devp->ch && smd_pkt_devp->ws_locked) {
+ D_READ("%s locking smd_pkt_dev id:%d wakeup source\n",
+ __func__, smd_pkt_devp->i);
+ /*
+ * Keep system awake long enough to allow userspace client
+ * to process the packet.
+ */
+ __pm_wakeup_event(&smd_pkt_devp->pa_ws, WAKEUPSOURCE_TIMEOUT);
+ }
+ spin_unlock_irqrestore(&smd_pkt_devp->pa_spinlock, flags);
+ mutex_unlock(&smd_pkt_devp->ch_lock);
+}
+
+static long smd_pkt_ioctl(struct file *file, unsigned int cmd,
+ unsigned long arg)
+{
+ int ret;
+ struct smd_pkt_dev *smd_pkt_devp;
+ uint32_t val;
+
+ smd_pkt_devp = file->private_data;
+ if (!smd_pkt_devp)
+ return -EINVAL;
+
+ mutex_lock(&smd_pkt_devp->ch_lock);
+ switch (cmd) {
+ case TIOCMGET:
+ D_STATUS("%s TIOCMGET command on smd_pkt_dev id:%d\n",
+ __func__, smd_pkt_devp->i);
+ ret = smd_tiocmget(smd_pkt_devp->ch);
+ break;
+ case TIOCMSET:
+ ret = get_user(val, (uint32_t *)arg);
+ if (ret) {
+ pr_err("Error getting TIOCMSET value\n");
+ mutex_unlock(&smd_pkt_devp->ch_lock);
+ return ret;
+ }
+ D_STATUS("%s TIOCMSET command on smd_pkt_dev id:%d arg[0x%x]\n",
+ __func__, smd_pkt_devp->i, val);
+ ret = smd_tiocmset(smd_pkt_devp->ch, val, ~val);
+ break;
+ case SMD_PKT_IOCTL_BLOCKING_WRITE:
+ ret = get_user(smd_pkt_devp->blocking_write, (int *)arg);
+ break;
+ default:
+ pr_err_ratelimited("%s: Unrecognized ioctl command %d\n",
+ __func__, cmd);
+ ret = -ENOIOCTLCMD;
+ }
+ mutex_unlock(&smd_pkt_devp->ch_lock);
+
+ return ret;
+}
+
+ssize_t smd_pkt_read(struct file *file,
+ char __user *_buf,
+ size_t count,
+ loff_t *ppos)
+{
+ int r;
+ int bytes_read;
+ int pkt_size;
+ struct smd_pkt_dev *smd_pkt_devp;
+ unsigned long flags;
+ void *buf;
+
+ smd_pkt_devp = file->private_data;
+
+ if (!smd_pkt_devp) {
+ pr_err_ratelimited("%s on NULL smd_pkt_dev\n", __func__);
+ return -EINVAL;
+ }
+
+ if (!smd_pkt_devp->ch) {
+ pr_err_ratelimited("%s on a closed smd_pkt_dev id:%d\n",
+ __func__, smd_pkt_devp->i);
+ return -EINVAL;
+ }
+
+ if (smd_pkt_devp->do_reset_notification) {
+ /* notify client that a reset occurred */
+ E_SMD_PKT_SSR(smd_pkt_devp);
+ return notify_reset(smd_pkt_devp);
+ }
+ D_READ("Begin %s on smd_pkt_dev id:%d buffer_size %zu\n",
+ __func__, smd_pkt_devp->i, count);
+
+ buf = kmalloc(count, GFP_KERNEL);
+ if (!buf)
+ return -ENOMEM;
+
+wait_for_packet:
+ r = wait_event_interruptible(smd_pkt_devp->ch_read_wait_queue,
+ !smd_pkt_devp->ch ||
+ (smd_cur_packet_size(smd_pkt_devp->ch) > 0
+ && smd_read_avail(smd_pkt_devp->ch)) ||
+ smd_pkt_devp->has_reset);
+
+ mutex_lock(&smd_pkt_devp->rx_lock);
+ if (smd_pkt_devp->has_reset) {
+ mutex_unlock(&smd_pkt_devp->rx_lock);
+ E_SMD_PKT_SSR(smd_pkt_devp);
+ kfree(buf);
+ return notify_reset(smd_pkt_devp);
+ }
+
+ if (!smd_pkt_devp->ch) {
+ mutex_unlock(&smd_pkt_devp->rx_lock);
+ pr_err_ratelimited("%s on a closed smd_pkt_dev id:%d\n",
+ __func__, smd_pkt_devp->i);
+ kfree(buf);
+ return -EINVAL;
+ }
+
+ if (r < 0) {
+ mutex_unlock(&smd_pkt_devp->rx_lock);
+ /* qualify error message */
+ if (r != -ERESTARTSYS) {
+ /* we get this anytime a signal comes in */
+ pr_err_ratelimited("%s: wait_event_interruptible on smd_pkt_dev id:%d ret %i\n",
+ __func__, smd_pkt_devp->i, r);
+ }
+ kfree(buf);
+ return r;
+ }
+
+ /* Here we have a whole packet waiting for us */
+ pkt_size = smd_cur_packet_size(smd_pkt_devp->ch);
+
+ if (!pkt_size) {
+ pr_err_ratelimited("%s: No data on smd_pkt_dev id:%d, False wakeup\n",
+ __func__, smd_pkt_devp->i);
+ mutex_unlock(&smd_pkt_devp->rx_lock);
+ goto wait_for_packet;
+ }
+
+ if (pkt_size < 0) {
+ pr_err_ratelimited("%s: Error %d obtaining packet size for Channel %s",
+ __func__, pkt_size, smd_pkt_devp->ch_name);
+ kfree(buf);
+ return pkt_size;
+ }
+
+ if ((uint32_t)pkt_size > count) {
+ pr_err_ratelimited("%s: failure on smd_pkt_dev id: %d - packet size %d > buffer size %zu,",
+ __func__, smd_pkt_devp->i,
+ pkt_size, count);
+ mutex_unlock(&smd_pkt_devp->rx_lock);
+ kfree(buf);
+ return -ETOOSMALL;
+ }
+
+ bytes_read = 0;
+ do {
+ r = smd_read(smd_pkt_devp->ch,
+ (buf + bytes_read),
+ (pkt_size - bytes_read));
+ if (r < 0) {
+ mutex_unlock(&smd_pkt_devp->rx_lock);
+ if (smd_pkt_devp->has_reset) {
+ E_SMD_PKT_SSR(smd_pkt_devp);
+ return notify_reset(smd_pkt_devp);
+ }
+ pr_err_ratelimited("%s Error while reading %d\n",
+ __func__, r);
+ kfree(buf);
+ return r;
+ }
+ bytes_read += r;
+ if (pkt_size != bytes_read)
+ wait_event(smd_pkt_devp->ch_read_wait_queue,
+ smd_read_avail(smd_pkt_devp->ch) ||
+ smd_pkt_devp->has_reset);
+ if (smd_pkt_devp->has_reset) {
+ mutex_unlock(&smd_pkt_devp->rx_lock);
+ E_SMD_PKT_SSR(smd_pkt_devp);
+ kfree(buf);
+ return notify_reset(smd_pkt_devp);
+ }
+ } while (pkt_size != bytes_read);
+ mutex_unlock(&smd_pkt_devp->rx_lock);
+
+ mutex_lock(&smd_pkt_devp->ch_lock);
+ spin_lock_irqsave(&smd_pkt_devp->pa_spinlock, flags);
+ if (smd_pkt_devp->poll_mode &&
+ !smd_cur_packet_size(smd_pkt_devp->ch)) {
+ __pm_relax(&smd_pkt_devp->pa_ws);
+ smd_pkt_devp->ws_locked = 0;
+ smd_pkt_devp->poll_mode = 0;
+ D_READ("%s unlocked smd_pkt_dev id:%d wakeup_source\n",
+ __func__, smd_pkt_devp->i);
+ }
+ spin_unlock_irqrestore(&smd_pkt_devp->pa_spinlock, flags);
+ mutex_unlock(&smd_pkt_devp->ch_lock);
+
+ r = copy_to_user(_buf, buf, bytes_read);
+ if (r) {
+ kfree(buf);
+ return -EFAULT;
+ }
+ D_READ("Finished %s on smd_pkt_dev id:%d %d bytes\n",
+ __func__, smd_pkt_devp->i, bytes_read);
+ kfree(buf);
+
+ /* check and wakeup read threads waiting on this device */
+ check_and_wakeup_reader(smd_pkt_devp);
+
+ return bytes_read;
+}
+
+ssize_t smd_pkt_write(struct file *file,
+ const char __user *_buf,
+ size_t count,
+ loff_t *ppos)
+{
+ int r = 0, bytes_written;
+ struct smd_pkt_dev *smd_pkt_devp;
+ DEFINE_WAIT(write_wait);
+ void *buf;
+
+ smd_pkt_devp = file->private_data;
+
+ if (!smd_pkt_devp) {
+ pr_err_ratelimited("%s on NULL smd_pkt_dev\n", __func__);
+ return -EINVAL;
+ }
+
+ if (!smd_pkt_devp->ch) {
+ pr_err_ratelimited("%s on a closed smd_pkt_dev id:%d\n",
+ __func__, smd_pkt_devp->i);
+ return -EINVAL;
+ }
+
+ if (smd_pkt_devp->do_reset_notification || smd_pkt_devp->has_reset) {
+ E_SMD_PKT_SSR(smd_pkt_devp);
+ /* notify client that a reset occurred */
+ return notify_reset(smd_pkt_devp);
+ }
+ D_WRITE("Begin %s on smd_pkt_dev id:%d data_size %zu\n",
+ __func__, smd_pkt_devp->i, count);
+
+ buf = kmalloc(count, GFP_KERNEL);
+ if (!buf)
+ return -ENOMEM;
+
+ r = copy_from_user(buf, _buf, count);
+ if (r) {
+ kfree(buf);
+ return -EFAULT;
+ }
+
+ mutex_lock(&smd_pkt_devp->tx_lock);
+ if (!smd_pkt_devp->blocking_write) {
+ if (smd_write_avail(smd_pkt_devp->ch) < count) {
+ pr_err_ratelimited("%s: Not enough space in smd_pkt_dev id:%d\n",
+ __func__, smd_pkt_devp->i);
+ mutex_unlock(&smd_pkt_devp->tx_lock);
+ kfree(buf);
+ return -ENOMEM;
+ }
+ }
+
+ r = smd_write_start(smd_pkt_devp->ch, count);
+ if (r < 0) {
+ mutex_unlock(&smd_pkt_devp->tx_lock);
+ pr_err_ratelimited("%s: Error:%d in smd_pkt_dev id:%d @ smd_write_start\n",
+ __func__, r, smd_pkt_devp->i);
+ kfree(buf);
+ return r;
+ }
+
+ bytes_written = 0;
+ do {
+ prepare_to_wait(&smd_pkt_devp->ch_write_wait_queue,
+ &write_wait, TASK_UNINTERRUPTIBLE);
+ if (!smd_write_segment_avail(smd_pkt_devp->ch) &&
+ !smd_pkt_devp->has_reset) {
+ smd_enable_read_intr(smd_pkt_devp->ch);
+ schedule();
+ }
+ finish_wait(&smd_pkt_devp->ch_write_wait_queue, &write_wait);
+ smd_disable_read_intr(smd_pkt_devp->ch);
+
+ if (smd_pkt_devp->has_reset) {
+ mutex_unlock(&smd_pkt_devp->tx_lock);
+ E_SMD_PKT_SSR(smd_pkt_devp);
+ kfree(buf);
+ return notify_reset(smd_pkt_devp);
+ }
+ r = smd_write_segment(smd_pkt_devp->ch,
+ (void *)(buf + bytes_written),
+ (count - bytes_written));
+ if (r < 0) {
+ mutex_unlock(&smd_pkt_devp->tx_lock);
+ if (smd_pkt_devp->has_reset) {
+ E_SMD_PKT_SSR(smd_pkt_devp);
+ return notify_reset(smd_pkt_devp);
+ }
+ pr_err_ratelimited("%s on smd_pkt_dev id:%d failed r:%d\n",
+ __func__, smd_pkt_devp->i, r);
+ kfree(buf);
+ return r;
+ }
+ bytes_written += r;
+ } while (bytes_written != count);
+ smd_write_end(smd_pkt_devp->ch);
+ mutex_unlock(&smd_pkt_devp->tx_lock);
+ D_WRITE("Finished %s on smd_pkt_dev id:%d %zu bytes\n",
+ __func__, smd_pkt_devp->i, count);
+
+ kfree(buf);
+ return count;
+}
+
+static unsigned int smd_pkt_poll(struct file *file, poll_table *wait)
+{
+ struct smd_pkt_dev *smd_pkt_devp;
+ unsigned int mask = 0;
+
+ smd_pkt_devp = file->private_data;
+ if (!smd_pkt_devp) {
+ pr_err_ratelimited("%s on a NULL device\n", __func__);
+ return POLLERR;
+ }
+
+ smd_pkt_devp->poll_mode = 1;
+ poll_wait(file, &smd_pkt_devp->ch_read_wait_queue, wait);
+ mutex_lock(&smd_pkt_devp->ch_lock);
+ if (smd_pkt_devp->has_reset || !smd_pkt_devp->ch) {
+ mutex_unlock(&smd_pkt_devp->ch_lock);
+ return POLLERR;
+ }
+
+ if (smd_read_avail(smd_pkt_devp->ch)) {
+ mask |= POLLIN | POLLRDNORM;
+ D_POLL("%s sets POLLIN for smd_pkt_dev id: %d\n",
+ __func__, smd_pkt_devp->i);
+ }
+ mutex_unlock(&smd_pkt_devp->ch_lock);
+
+ return mask;
+}
+
+static void check_and_wakeup_reader(struct smd_pkt_dev *smd_pkt_devp)
+{
+ int sz;
+ unsigned long flags;
+
+ if (!smd_pkt_devp) {
+ pr_err("%s on a NULL device\n", __func__);
+ return;
+ }
+
+ if (!smd_pkt_devp->ch) {
+ pr_err("%s on a closed smd_pkt_dev id:%d\n",
+ __func__, smd_pkt_devp->i);
+ return;
+ }
+
+ sz = smd_cur_packet_size(smd_pkt_devp->ch);
+ if (sz == 0) {
+ D_READ("%s: No packet in smd_pkt_dev id:%d\n",
+ __func__, smd_pkt_devp->i);
+ return;
+ }
+ if (!smd_read_avail(smd_pkt_devp->ch)) {
+ D_READ(
+ "%s: packet size is %d in smd_pkt_dev id:%d - but the data isn't here\n",
+ __func__, sz, smd_pkt_devp->i);
+ return;
+ }
+
+ /* here we have a packet of size sz ready */
+ spin_lock_irqsave(&smd_pkt_devp->pa_spinlock, flags);
+ __pm_stay_awake(&smd_pkt_devp->pa_ws);
+ smd_pkt_devp->ws_locked = 1;
+ spin_unlock_irqrestore(&smd_pkt_devp->pa_spinlock, flags);
+ wake_up(&smd_pkt_devp->ch_read_wait_queue);
+ schedule_work(&smd_pkt_devp->packet_arrival_work);
+ D_READ("%s: wake_up smd_pkt_dev id:%d\n", __func__, smd_pkt_devp->i);
+}
+
+static void check_and_wakeup_writer(struct smd_pkt_dev *smd_pkt_devp)
+{
+ int sz;
+
+ if (!smd_pkt_devp) {
+ pr_err("%s on a NULL device\n", __func__);
+ return;
+ }
+
+ if (!smd_pkt_devp->ch) {
+ pr_err("%s on a closed smd_pkt_dev id:%d\n",
+ __func__, smd_pkt_devp->i);
+ return;
+ }
+
+ sz = smd_write_segment_avail(smd_pkt_devp->ch);
+ if (sz) {
+ D_WRITE("%s: %d bytes write space in smd_pkt_dev id:%d\n",
+ __func__, sz, smd_pkt_devp->i);
+ smd_disable_read_intr(smd_pkt_devp->ch);
+ wake_up(&smd_pkt_devp->ch_write_wait_queue);
+ }
+}
+
+static void ch_notify(void *priv, unsigned int event)
+{
+ struct smd_pkt_dev *smd_pkt_devp = priv;
+
+ if (smd_pkt_devp->ch == 0) {
+ if (event != SMD_EVENT_CLOSE)
+ pr_err("%s on a closed smd_pkt_dev id:%d\n",
+ __func__, smd_pkt_devp->i);
+ return;
+ }
+
+ switch (event) {
+ case SMD_EVENT_DATA: {
+ D_STATUS("%s: DATA event in smd_pkt_dev id:%d\n",
+ __func__, smd_pkt_devp->i);
+ check_and_wakeup_reader(smd_pkt_devp);
+ if (smd_pkt_devp->blocking_write)
+ check_and_wakeup_writer(smd_pkt_devp);
+ break;
+ }
+ case SMD_EVENT_OPEN:
+ D_STATUS("%s: OPEN event in smd_pkt_dev id:%d\n",
+ __func__, smd_pkt_devp->i);
+ smd_pkt_devp->has_reset = 0;
+ smd_pkt_devp->is_open = 1;
+ wake_up_interruptible(&smd_pkt_devp->ch_opened_wait_queue);
+ break;
+ case SMD_EVENT_CLOSE:
+ D_STATUS("%s: CLOSE event in smd_pkt_dev id:%d\n",
+ __func__, smd_pkt_devp->i);
+ smd_pkt_devp->is_open = 0;
+ /* put port into reset state */
+ clean_and_signal(smd_pkt_devp);
+ if (!strcmp(smd_pkt_devp->ch_name, "LOOPBACK"))
+ schedule_delayed_work(&loopback_work,
+ msecs_to_jiffies(1000));
+ break;
+ }
+}
+
+static int smd_pkt_dummy_probe(struct platform_device *pdev)
+{
+ struct smd_pkt_dev *smd_pkt_devp;
+
+ mutex_lock(&smd_pkt_dev_lock_lha1);
+ list_for_each_entry(smd_pkt_devp, &smd_pkt_dev_list, dev_list) {
+ if (smd_pkt_devp->edge == pdev->id
+ && !strcmp(pdev->name, smd_pkt_devp->ch_name)) {
+ complete_all(&smd_pkt_devp->ch_allocated);
+ D_STATUS("%s allocated SMD ch for smd_pkt_dev id:%d\n",
+ __func__, smd_pkt_devp->i);
+ break;
+ }
+ }
+ mutex_unlock(&smd_pkt_dev_lock_lha1);
+ return 0;
+}
+
+static uint32_t is_modem_smsm_inited(void)
+{
+ uint32_t modem_state;
+ uint32_t ready_state = (SMSM_INIT | SMSM_SMDINIT);
+
+ modem_state = smsm_get_state(SMSM_MODEM_STATE);
+ return (modem_state & ready_state) == ready_state;
+}
+
+/**
+ * smd_pkt_add_driver() - Add platform drivers for smd pkt device
+ *
+ * @smd_pkt_devp: pointer to the smd pkt device structure
+ *
+ * @returns: 0 for success, standard Linux error code otherwise
+ *
+ * This function is used to register the platform driver once for all
+ * smd pkt devices which have the same name, and to increment the
+ * reference count for the 2nd to nth devices.
+ */
+static int smd_pkt_add_driver(struct smd_pkt_dev *smd_pkt_devp)
+{
+ int r = 0;
+ struct smd_pkt_driver *smd_pkt_driverp;
+ struct smd_pkt_driver *item;
+
+ if (!smd_pkt_devp) {
+ pr_err("%s on a NULL device\n", __func__);
+ return -EINVAL;
+ }
+ D_STATUS("Begin %s on smd_pkt_ch[%s]\n", __func__,
+ smd_pkt_devp->ch_name);
+
+ mutex_lock(&smd_pkt_driver_lock_lha1);
+ list_for_each_entry(item, &smd_pkt_driver_list, list) {
+ if (!strcmp(item->pdriver_name, smd_pkt_devp->ch_name)) {
+ D_STATUS("%s:%s Already Platform driver reg. cnt:%d\n",
+ __func__, smd_pkt_devp->ch_name, item->ref_cnt);
+ ++item->ref_cnt;
+ goto exit;
+ }
+ }
+
+ smd_pkt_driverp = kzalloc(sizeof(*smd_pkt_driverp), GFP_KERNEL);
+ if (IS_ERR_OR_NULL(smd_pkt_driverp)) {
+ pr_err("%s: kzalloc() failed for smd_pkt_driver[%s]\n",
+ __func__, smd_pkt_devp->ch_name);
+ r = -ENOMEM;
+ goto exit;
+ }
+
+ smd_pkt_driverp->driver.probe = smd_pkt_dummy_probe;
+ scnprintf(smd_pkt_driverp->pdriver_name, SMD_MAX_CH_NAME_LEN,
+ "%s", smd_pkt_devp->ch_name);
+ smd_pkt_driverp->driver.driver.name = smd_pkt_driverp->pdriver_name;
+ smd_pkt_driverp->driver.driver.owner = THIS_MODULE;
+ r = platform_driver_register(&smd_pkt_driverp->driver);
+ if (r) {
+ pr_err("%s: %s Platform driver reg. failed\n",
+ __func__, smd_pkt_devp->ch_name);
+ kfree(smd_pkt_driverp);
+ goto exit;
+ }
+ ++smd_pkt_driverp->ref_cnt;
+ list_add(&smd_pkt_driverp->list, &smd_pkt_driver_list);
+
+exit:
+ D_STATUS("End %s on smd_pkt_ch[%s]\n", __func__, smd_pkt_devp->ch_name);
+ mutex_unlock(&smd_pkt_driver_lock_lha1);
+ return r;
+}
+
+/**
+ * smd_pkt_remove_driver() - Remove the platform drivers for smd pkt device
+ *
+ * @smd_pkt_devp: pointer to the smd pkt device structure
+ *
+ * This function is used to decrement the reference count on
+ * platform drivers for smd pkt devices and removes the drivers
+ * when the reference count becomes zero.
+ */
+static void smd_pkt_remove_driver(struct smd_pkt_dev *smd_pkt_devp)
+{
+ struct smd_pkt_driver *smd_pkt_driverp;
+ bool found_item = false;
+
+ if (!smd_pkt_devp) {
+ pr_err("%s on a NULL device\n", __func__);
+ return;
+ }
+
+ D_STATUS("Begin %s on smd_pkt_ch[%s]\n", __func__,
+ smd_pkt_devp->ch_name);
+ mutex_lock(&smd_pkt_driver_lock_lha1);
+ list_for_each_entry(smd_pkt_driverp, &smd_pkt_driver_list, list) {
+ if (!strcmp(smd_pkt_driverp->pdriver_name,
+ smd_pkt_devp->ch_name)) {
+ found_item = true;
+ D_STATUS("%s:%s Platform driver cnt:%d\n",
+ __func__, smd_pkt_devp->ch_name,
+ smd_pkt_driverp->ref_cnt);
+ if (smd_pkt_driverp->ref_cnt > 0)
+ --smd_pkt_driverp->ref_cnt;
+ else
+ pr_warn("%s reference count <= 0\n", __func__);
+ break;
+ }
+ }
+ if (!found_item)
+ pr_err("%s:%s No item found in list.\n",
+ __func__, smd_pkt_devp->ch_name);
+
+ if (found_item && smd_pkt_driverp->ref_cnt == 0) {
+ platform_driver_unregister(&smd_pkt_driverp->driver);
+ smd_pkt_driverp->driver.probe = NULL;
+ list_del(&smd_pkt_driverp->list);
+ kfree(smd_pkt_driverp);
+ }
+ mutex_unlock(&smd_pkt_driver_lock_lha1);
+ D_STATUS("End %s on smd_pkt_ch[%s]\n", __func__, smd_pkt_devp->ch_name);
+}
+
+int smd_pkt_open(struct inode *inode, struct file *file)
+{
+ int r = 0;
+ struct smd_pkt_dev *smd_pkt_devp;
+ const char *peripheral = NULL;
+
+ smd_pkt_devp = container_of(inode->i_cdev, struct smd_pkt_dev, cdev);
+
+ if (!smd_pkt_devp) {
+ pr_err_ratelimited("%s on a NULL device\n", __func__);
+ return -EINVAL;
+ }
+ D_STATUS("Begin %s on smd_pkt_dev id:%d\n", __func__, smd_pkt_devp->i);
+
+ file->private_data = smd_pkt_devp;
+
+ mutex_lock(&smd_pkt_devp->ch_lock);
+ if (smd_pkt_devp->ch == 0) {
+ unsigned int open_wait_rem;
+
+ open_wait_rem = smd_pkt_devp->open_modem_wait * 1000;
+ reinit_completion(&smd_pkt_devp->ch_allocated);
+
+ r = smd_pkt_add_driver(smd_pkt_devp);
+ if (r) {
+ pr_err_ratelimited("%s: %s Platform driver reg. failed\n",
+ __func__, smd_pkt_devp->ch_name);
+ goto out;
+ }
+
+ peripheral = smd_edge_to_pil_str(smd_pkt_devp->edge);
+ if (!IS_ERR_OR_NULL(peripheral)) {
+ smd_pkt_devp->pil = subsystem_get(peripheral);
+ if (IS_ERR(smd_pkt_devp->pil)) {
+ r = PTR_ERR(smd_pkt_devp->pil);
+ pr_err_ratelimited("%s failed on smd_pkt_dev id:%d - subsystem_get failed for %s\n",
+ __func__, smd_pkt_devp->i, peripheral);
+ /*
+ * Sleep in order to reduce the frequency of
+ * retry by user-space modules and to avoid
+ * possible watchdog bite.
+ */
+ msleep(open_wait_rem);
+ goto release_pd;
+ }
+ }
+
+ /* Wait for the modem SMSM to be inited for the SMD
+ ** Loopback channel to be allocated at the modem. Since
+ ** the wait needs to be done at most once, using msleep
+ ** doesn't degrade the performance.
+ */
+ if (!strcmp(smd_pkt_devp->ch_name, "LOOPBACK")) {
+ if (!is_modem_smsm_inited())
+ msleep(5000);
+ smsm_change_state(SMSM_APPS_STATE,
+ 0, SMSM_SMD_LOOPBACK);
+ msleep(100);
+ }
+
+ /*
+ * Wait for a packet channel to be allocated so we know
+ * the modem is ready enough.
+ */
+ if (open_wait_rem) {
+ r = wait_for_completion_interruptible_timeout(
+ &smd_pkt_devp->ch_allocated,
+ msecs_to_jiffies(open_wait_rem));
+ if (r >= 0)
+ open_wait_rem = jiffies_to_msecs(r);
+ if (r == 0)
+ r = -ETIMEDOUT;
+ if (r == -ERESTARTSYS) {
+ pr_info_ratelimited("%s: wait on smd_pkt_dev id:%d allocation interrupted\n",
+ __func__, smd_pkt_devp->i);
+ goto release_pil;
+ }
+ if (r < 0) {
+ pr_err_ratelimited("%s: wait on smd_pkt_dev id:%d allocation failed rc:%d\n",
+ __func__, smd_pkt_devp->i, r);
+ goto release_pil;
+ }
+ }
+
+ r = smd_named_open_on_edge(smd_pkt_devp->ch_name,
+ smd_pkt_devp->edge,
+ &smd_pkt_devp->ch,
+ smd_pkt_devp,
+ ch_notify);
+ if (r < 0) {
+ pr_err_ratelimited("%s: %s open failed %d\n", __func__,
+ smd_pkt_devp->ch_name, r);
+ goto release_pil;
+ }
+
+ open_wait_rem = max_t(unsigned int, 2000, open_wait_rem);
+ r = wait_event_interruptible_timeout(
+ smd_pkt_devp->ch_opened_wait_queue,
+ smd_pkt_devp->is_open,
+ msecs_to_jiffies(open_wait_rem));
+ if (r == 0)
+ r = -ETIMEDOUT;
+
+ if (r < 0) {
+ /* close the ch to sync smd's state with smd_pkt */
+ smd_close(smd_pkt_devp->ch);
+ smd_pkt_devp->ch = NULL;
+ }
+
+ if (r == -ERESTARTSYS) {
+ pr_info_ratelimited("%s: wait on smd_pkt_dev id:%d OPEN interrupted\n",
+ __func__, smd_pkt_devp->i);
+ } else if (r < 0) {
+ pr_err_ratelimited("%s: wait on smd_pkt_dev id:%d OPEN event failed rc:%d\n",
+ __func__, smd_pkt_devp->i, r);
+ } else if (!smd_pkt_devp->is_open) {
+ pr_err_ratelimited("%s: Invalid OPEN event on smd_pkt_dev id:%d\n",
+ __func__, smd_pkt_devp->i);
+ r = -ENODEV;
+ } else {
+ smd_disable_read_intr(smd_pkt_devp->ch);
+ smd_pkt_devp->ch_size =
+ smd_write_avail(smd_pkt_devp->ch);
+ r = 0;
+ smd_pkt_devp->ref_cnt++;
+ D_STATUS("Finished %s on smd_pkt_dev id:%d\n",
+ __func__, smd_pkt_devp->i);
+ }
+ } else {
+ smd_pkt_devp->ref_cnt++;
+ }
+release_pil:
+ if (peripheral && (r < 0)) {
+ subsystem_put(smd_pkt_devp->pil);
+ smd_pkt_devp->pil = NULL;
+ }
+
+release_pd:
+ if (r < 0)
+ smd_pkt_remove_driver(smd_pkt_devp);
+out:
+ mutex_unlock(&smd_pkt_devp->ch_lock);
+
+
+ return r;
+}
+
+int smd_pkt_release(struct inode *inode, struct file *file)
+{
+ int r = 0;
+ struct smd_pkt_dev *smd_pkt_devp = file->private_data;
+ unsigned long flags;
+
+ if (!smd_pkt_devp) {
+ pr_err_ratelimited("%s on a NULL device\n", __func__);
+ return -EINVAL;
+ }
+ D_STATUS("Begin %s on smd_pkt_dev id:%d\n",
+ __func__, smd_pkt_devp->i);
+
+ mutex_lock(&smd_pkt_devp->ch_lock);
+ mutex_lock(&smd_pkt_devp->rx_lock);
+ mutex_lock(&smd_pkt_devp->tx_lock);
+ if (smd_pkt_devp->ref_cnt > 0)
+ smd_pkt_devp->ref_cnt--;
+
+ if (smd_pkt_devp->ch != 0 && smd_pkt_devp->ref_cnt == 0) {
+ clean_and_signal(smd_pkt_devp);
+ r = smd_close(smd_pkt_devp->ch);
+ smd_pkt_devp->ch = 0;
+ smd_pkt_devp->blocking_write = 0;
+ smd_pkt_devp->poll_mode = 0;
+ smd_pkt_remove_driver(smd_pkt_devp);
+ if (smd_pkt_devp->pil)
+ subsystem_put(smd_pkt_devp->pil);
+ smd_pkt_devp->has_reset = 0;
+ smd_pkt_devp->do_reset_notification = 0;
+ spin_lock_irqsave(&smd_pkt_devp->pa_spinlock, flags);
+ if (smd_pkt_devp->ws_locked) {
+ __pm_relax(&smd_pkt_devp->pa_ws);
+ smd_pkt_devp->ws_locked = 0;
+ }
+ spin_unlock_irqrestore(&smd_pkt_devp->pa_spinlock, flags);
+ }
+ mutex_unlock(&smd_pkt_devp->tx_lock);
+ mutex_unlock(&smd_pkt_devp->rx_lock);
+ mutex_unlock(&smd_pkt_devp->ch_lock);
+
+ if (flush_work(&smd_pkt_devp->packet_arrival_work))
+ D_STATUS("%s: Flushed work for smd_pkt_dev id:%d\n", __func__,
+ smd_pkt_devp->i);
+
+ D_STATUS("Finished %s on smd_pkt_dev id:%d\n",
+ __func__, smd_pkt_devp->i);
+
+ return r;
+}
+
+static const struct file_operations smd_pkt_fops = {
+ .owner = THIS_MODULE,
+ .open = smd_pkt_open,
+ .release = smd_pkt_release,
+ .read = smd_pkt_read,
+ .write = smd_pkt_write,
+ .poll = smd_pkt_poll,
+ .unlocked_ioctl = smd_pkt_ioctl,
+ .compat_ioctl = smd_pkt_ioctl,
+};
+
+static int smd_pkt_init_add_device(struct smd_pkt_dev *smd_pkt_devp, int i)
+{
+ int r = 0;
+
+ smd_pkt_devp->i = i;
+
+ init_waitqueue_head(&smd_pkt_devp->ch_read_wait_queue);
+ init_waitqueue_head(&smd_pkt_devp->ch_write_wait_queue);
+ smd_pkt_devp->is_open = 0;
+ smd_pkt_devp->poll_mode = 0;
+ smd_pkt_devp->ws_locked = 0;
+ init_waitqueue_head(&smd_pkt_devp->ch_opened_wait_queue);
+
+ spin_lock_init(&smd_pkt_devp->pa_spinlock);
+ mutex_init(&smd_pkt_devp->ch_lock);
+ mutex_init(&smd_pkt_devp->rx_lock);
+ mutex_init(&smd_pkt_devp->tx_lock);
+ wakeup_source_init(&smd_pkt_devp->pa_ws, smd_pkt_devp->dev_name);
+ INIT_WORK(&smd_pkt_devp->packet_arrival_work, packet_arrival_worker);
+ init_completion(&smd_pkt_devp->ch_allocated);
+
+ cdev_init(&smd_pkt_devp->cdev, &smd_pkt_fops);
+ smd_pkt_devp->cdev.owner = THIS_MODULE;
+
+ r = cdev_add(&smd_pkt_devp->cdev, (smd_pkt_number + i), 1);
+ if (r) {
+ pr_err("%s: cdev_add() failed for smd_pkt_dev id:%d ret:%i\n",
+ __func__, i, r);
+ return r;
+ }
+
+ smd_pkt_devp->devicep =
+ device_create(smd_pkt_classp,
+ NULL,
+ (smd_pkt_number + i),
+ NULL,
+ smd_pkt_devp->dev_name);
+
+ if (IS_ERR_OR_NULL(smd_pkt_devp->devicep)) {
+ pr_err("%s: device_create() failed for smd_pkt_dev id:%d\n",
+ __func__, i);
+ r = -ENOMEM;
+ cdev_del(&smd_pkt_devp->cdev);
+ wakeup_source_trash(&smd_pkt_devp->pa_ws);
+ return r;
+ }
+ if (device_create_file(smd_pkt_devp->devicep,
+ &dev_attr_open_timeout))
+ pr_err("%s: unable to create device attr for smd_pkt_dev id:%d\n",
+ __func__, i);
+
+ if (!strcmp(smd_pkt_devp->ch_name, "LOOPBACK")) {
+ if (device_create_file(smd_pkt_devp->devicep,
+ &dev_attr_loopback_edge))
+ pr_err("%s: unable to create device attr for smd_pkt_dev id:%d\n",
+ __func__, i);
+ }
+ mutex_lock(&smd_pkt_dev_lock_lha1);
+ list_add(&smd_pkt_devp->dev_list, &smd_pkt_dev_list);
+ mutex_unlock(&smd_pkt_dev_lock_lha1);
+
+ return r;
+}
+
+static void smd_pkt_core_deinit(void)
+{
+ struct smd_pkt_dev *smd_pkt_devp;
+ struct smd_pkt_dev *index;
+
+ mutex_lock(&smd_pkt_dev_lock_lha1);
+ list_for_each_entry_safe(smd_pkt_devp, index, &smd_pkt_dev_list,
+ dev_list) {
+ cdev_del(&smd_pkt_devp->cdev);
+ list_del(&smd_pkt_devp->dev_list);
+ device_destroy(smd_pkt_classp,
+ MKDEV(MAJOR(smd_pkt_number), smd_pkt_devp->i));
+ kfree(smd_pkt_devp);
+ }
+ mutex_unlock(&smd_pkt_dev_lock_lha1);
+
+ if (!IS_ERR_OR_NULL(smd_pkt_classp))
+ class_destroy(smd_pkt_classp);
+
+ unregister_chrdev_region(MAJOR(smd_pkt_number), num_smd_pkt_ports);
+}
+
+static int smd_pkt_alloc_chrdev_region(void)
+{
+ int r = alloc_chrdev_region(&smd_pkt_number,
+ 0,
+ num_smd_pkt_ports,
+ DEVICE_NAME);
+
+ if (r) {
+ pr_err("%s: alloc_chrdev_region() failed ret:%i\n",
+ __func__, r);
+ return r;
+ }
+
+ smd_pkt_classp = class_create(THIS_MODULE, DEVICE_NAME);
+ if (IS_ERR(smd_pkt_classp)) {
+ pr_err("%s: class_create() failed ENOMEM\n", __func__);
+ r = -ENOMEM;
+ unregister_chrdev_region(MAJOR(smd_pkt_number),
+ num_smd_pkt_ports);
+ return r;
+ }
+
+ return 0;
+}
+
+static int parse_smdpkt_devicetree(struct device_node *node,
+ struct smd_pkt_dev *smd_pkt_devp)
+{
+ int edge;
+ char *key;
+ const char *ch_name;
+ const char *dev_name;
+ const char *remote_ss;
+
+ key = "qcom,smdpkt-remote";
+ remote_ss = of_get_property(node, key, NULL);
+ if (!remote_ss)
+ goto error;
+
+ edge = smd_remote_ss_to_edge(remote_ss);
+ if (edge < 0)
+ goto error;
+
+ smd_pkt_devp->edge = edge;
+ D_STATUS("%s: %s = %d", __func__, key, edge);
+
+ key = "qcom,smdpkt-port-name";
+ ch_name = of_get_property(node, key, NULL);
+ if (!ch_name)
+ goto error;
+
+ strlcpy(smd_pkt_devp->ch_name, ch_name, SMD_MAX_CH_NAME_LEN);
+ D_STATUS("%s ch_name = %s\n", __func__, ch_name);
+
+ key = "qcom,smdpkt-dev-name";
+ dev_name = of_get_property(node, key, NULL);
+ if (!dev_name)
+ goto error;
+
+ strlcpy(smd_pkt_devp->dev_name, dev_name, SMD_MAX_CH_NAME_LEN);
+ D_STATUS("%s dev_name = %s\n", __func__, dev_name);
+
+ return 0;
+
+error:
+ pr_err("%s: missing key: %s\n", __func__, key);
+ return -ENODEV;
+
+}
+
+static int smd_pkt_devicetree_init(struct platform_device *pdev)
+{
+ int ret;
+ int i = 0;
+ struct device_node *node;
+ struct smd_pkt_dev *smd_pkt_devp;
+ int subnode_num = 0;
+
+ for_each_child_of_node(pdev->dev.of_node, node)
+ ++subnode_num;
+
+ num_smd_pkt_ports = subnode_num;
+
+ ret = smd_pkt_alloc_chrdev_region();
+ if (ret) {
+ pr_err("%s: smd_pkt_alloc_chrdev_region() failed ret:%i\n",
+ __func__, ret);
+ return ret;
+ }
+
+ for_each_child_of_node(pdev->dev.of_node, node) {
+ smd_pkt_devp = kzalloc(sizeof(struct smd_pkt_dev), GFP_KERNEL);
+ if (IS_ERR_OR_NULL(smd_pkt_devp)) {
+ pr_err("%s: kzalloc() failed for smd_pkt_dev id:%d\n",
+ __func__, i);
+ ret = -ENOMEM;
+ goto error_destroy;
+ }
+
+ ret = parse_smdpkt_devicetree(node, smd_pkt_devp);
+ if (ret) {
+ pr_err("%s: failed to parse smdpkt devicetree node %d\n",
+ __func__, i);
+ kfree(smd_pkt_devp);
+ goto error_destroy;
+ }
+
+ ret = smd_pkt_init_add_device(smd_pkt_devp, i);
+ if (ret < 0) {
+ pr_err("add device failed for idx:%d ret=%d\n", i, ret);
+ kfree(smd_pkt_devp);
+ goto error_destroy;
+ }
+ i++;
+ }
+
+ INIT_DELAYED_WORK(&loopback_work, loopback_probe_worker);
+
+ D_STATUS("SMD Packet Port Driver Initialized.\n");
+ return 0;
+
+error_destroy:
+ smd_pkt_core_deinit();
+ return ret;
+}
+
+static int msm_smd_pkt_probe(struct platform_device *pdev)
+{
+ int ret;
+
+ if (pdev) {
+ if (pdev->dev.of_node) {
+ D_STATUS("%s device tree implementation\n", __func__);
+ ret = smd_pkt_devicetree_init(pdev);
+ if (ret)
+ pr_err("%s: device tree init failed\n",
+ __func__);
+ }
+ }
+
+ return 0;
+}
+
+static const struct of_device_id msm_smd_pkt_match_table[] = {
+ { .compatible = "qcom,smdpkt" },
+ {},
+};
+
+static struct platform_driver msm_smd_pkt_driver = {
+ .probe = msm_smd_pkt_probe,
+ .driver = {
+ .name = MODULE_NAME,
+ .owner = THIS_MODULE,
+ .of_match_table = msm_smd_pkt_match_table,
+ },
+};
+
+static int __init smd_pkt_init(void)
+{
+ int rc;
+
+ INIT_LIST_HEAD(&smd_pkt_dev_list);
+ INIT_LIST_HEAD(&smd_pkt_driver_list);
+ rc = platform_driver_register(&msm_smd_pkt_driver);
+ if (rc) {
+ pr_err("%s: msm_smd_driver register failed %d\n",
+ __func__, rc);
+ return rc;
+ }
+
+ smd_pkt_ilctxt = ipc_log_context_create(SMD_PKT_IPC_LOG_PAGE_CNT,
+ "smd_pkt", 0);
+ return 0;
+}
+
+static void __exit smd_pkt_cleanup(void)
+{
+ smd_pkt_core_deinit();
+}
+
+module_init(smd_pkt_init);
+module_exit(smd_pkt_cleanup);
+
+MODULE_DESCRIPTION("MSM Shared Memory Packet Port");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/clk/mvebu/ap806-system-controller.c b/drivers/clk/mvebu/ap806-system-controller.c
index 02023ba..962e0c5 100644
--- a/drivers/clk/mvebu/ap806-system-controller.c
+++ b/drivers/clk/mvebu/ap806-system-controller.c
@@ -55,21 +55,39 @@ static int ap806_syscon_clk_probe(struct platform_device *pdev)
freq_mode = reg & AP806_SAR_CLKFREQ_MODE_MASK;
switch (freq_mode) {
- case 0x0 ... 0x5:
+ case 0x0:
+ case 0x1:
cpuclk_freq = 2000;
break;
- case 0x6 ... 0xB:
+ case 0x6:
+ case 0x7:
cpuclk_freq = 1800;
break;
- case 0xC ... 0x11:
+ case 0x4:
+ case 0xB:
+ case 0xD:
cpuclk_freq = 1600;
break;
- case 0x12 ... 0x16:
+ case 0x1a:
cpuclk_freq = 1400;
break;
- case 0x17 ... 0x19:
+ case 0x14:
+ case 0x17:
cpuclk_freq = 1300;
break;
+ case 0x19:
+ cpuclk_freq = 1200;
+ break;
+ case 0x13:
+ case 0x1d:
+ cpuclk_freq = 1000;
+ break;
+ case 0x1c:
+ cpuclk_freq = 800;
+ break;
+ case 0x1b:
+ cpuclk_freq = 600;
+ break;
default:
dev_err(&pdev->dev, "invalid SAR value\n");
return -EINVAL;
diff --git a/drivers/clk/qcom/Kconfig b/drivers/clk/qcom/Kconfig
index d47b66e..87d067a 100644
--- a/drivers/clk/qcom/Kconfig
+++ b/drivers/clk/qcom/Kconfig
@@ -235,4 +235,21 @@
subsystems via QMP mailboxes.
Say Y to support the clocks managed by AOP on platforms such as sdm845.
+config MDM_GCC_SDXPOORWILLS
+ tristate "SDXPOORWILLS Global Clock Controller"
+ depends on COMMON_CLK_QCOM
+ help
+ Support for the global clock controller on sdxpoorwills devices.
+ Say Y if you want to use peripheral devices such as UART, SPI,
+ I2C, USB, SD/eMMC, etc.
+
+config MDM_CLOCK_CPU_SDXPOORWILLS
+ tristate "SDXPOORWILLS CPU Clock Controller"
+ depends on COMMON_CLK_QCOM
+ help
+ Support for the CPU clock controller on sdxpoorwills
+ based devices.
+ Say Y if you want to support CPU clock scaling using
+ CPUfreq drivers for dynamic power management.
+
source "drivers/clk/qcom/mdss/Kconfig"
diff --git a/drivers/clk/qcom/Makefile b/drivers/clk/qcom/Makefile
index 6a8c43b..8cb46a7 100644
--- a/drivers/clk/qcom/Makefile
+++ b/drivers/clk/qcom/Makefile
@@ -22,7 +22,9 @@
obj-$(CONFIG_IPQ_GCC_4019) += gcc-ipq4019.o
obj-$(CONFIG_IPQ_GCC_806X) += gcc-ipq806x.o
obj-$(CONFIG_IPQ_LCC_806X) += lcc-ipq806x.o
+obj-$(CONFIG_MDM_CLOCK_CPU_SDXPOORWILLS) += clk-cpu-a7.o
obj-$(CONFIG_MDM_GCC_9615) += gcc-mdm9615.o
+obj-$(CONFIG_MDM_GCC_SDXPOORWILLS) += gcc-sdxpoorwills.o
obj-$(CONFIG_MDM_LCC_9615) += lcc-mdm9615.o
obj-$(CONFIG_MSM_CAMCC_SDM845) += camcc-sdm845.o
obj-$(CONFIG_MSM_CLK_AOP_QMP) += clk-aop-qmp.o
diff --git a/drivers/clk/qcom/camcc-sdm845.c b/drivers/clk/qcom/camcc-sdm845.c
index 5caa975..836c25c 100644
--- a/drivers/clk/qcom/camcc-sdm845.c
+++ b/drivers/clk/qcom/camcc-sdm845.c
@@ -1971,6 +1971,87 @@ static void cam_cc_sdm845_fixup_sdm845v2(void)
cam_cc_sdm845_clocks[CAM_CC_CSIPHY3_CLK] = &cam_cc_csiphy3_clk.clkr;
cam_cc_sdm845_clocks[CAM_CC_CSI3PHYTIMER_CLK_SRC] =
&cam_cc_csi3phytimer_clk_src.clkr;
+ cam_cc_bps_clk_src.clkr.hw.init->rate_max[VDD_CX_MIN] = 0;
+ cam_cc_bps_clk_src.clkr.hw.init->rate_max[VDD_CX_LOWER] = 0;
+ cam_cc_cci_clk_src.clkr.hw.init->rate_max[VDD_CX_MIN] = 0;
+ cam_cc_cci_clk_src.clkr.hw.init->rate_max[VDD_CX_LOWER] = 0;
+ cam_cc_cphy_rx_clk_src.freq_tbl = ftbl_cam_cc_cphy_rx_clk_src_sdm845_v2;
+ cam_cc_cphy_rx_clk_src.clkr.hw.init->rate_max[VDD_CX_MIN] = 0;
+ cam_cc_cphy_rx_clk_src.clkr.hw.init->rate_max[VDD_CX_LOWER] = 0;
+ cam_cc_cphy_rx_clk_src.clkr.hw.init->rate_max[VDD_CX_LOW] = 384000000;
+ cam_cc_csi0phytimer_clk_src.clkr.hw.init->rate_max[VDD_CX_MIN] = 0;
+ cam_cc_csi0phytimer_clk_src.clkr.hw.init->rate_max[VDD_CX_LOWER] = 0;
+ cam_cc_csi1phytimer_clk_src.clkr.hw.init->rate_max[VDD_CX_MIN] = 0;
+ cam_cc_csi1phytimer_clk_src.clkr.hw.init->rate_max[VDD_CX_LOWER] = 0;
+ cam_cc_csi2phytimer_clk_src.clkr.hw.init->rate_max[VDD_CX_MIN] = 0;
+ cam_cc_csi2phytimer_clk_src.clkr.hw.init->rate_max[VDD_CX_LOWER] = 0;
+ cam_cc_fast_ahb_clk_src.clkr.hw.init->rate_max[VDD_CX_MIN] = 0;
+ cam_cc_fast_ahb_clk_src.clkr.hw.init->rate_max[VDD_CX_LOWER] = 0;
+ cam_cc_fd_core_clk_src.freq_tbl = ftbl_cam_cc_fd_core_clk_src_sdm845_v2;
+ cam_cc_fd_core_clk_src.clkr.hw.init->rate_max[VDD_CX_MIN] = 0;
+ cam_cc_fd_core_clk_src.clkr.hw.init->rate_max[VDD_CX_LOWER] = 0;
+ cam_cc_icp_clk_src.freq_tbl = ftbl_cam_cc_icp_clk_src_sdm845_v2;
+ cam_cc_icp_clk_src.clkr.hw.init->rate_max[VDD_CX_MIN] = 0;
+ cam_cc_icp_clk_src.clkr.hw.init->rate_max[VDD_CX_LOWER] = 0;
+ cam_cc_icp_clk_src.clkr.hw.init->rate_max[VDD_CX_LOW_L1] = 600000000;
+ cam_cc_ife_0_clk_src.clkr.hw.init->rate_max[VDD_CX_MIN] = 0;
+ cam_cc_ife_0_clk_src.clkr.hw.init->rate_max[VDD_CX_LOWER] = 0;
+ cam_cc_ife_0_csid_clk_src.clkr.hw.init->rate_max[VDD_CX_MIN] = 0;
+ cam_cc_ife_0_csid_clk_src.clkr.hw.init->rate_max[VDD_CX_LOWER] = 0;
+ cam_cc_ife_0_csid_clk_src.clkr.hw.init->rate_max[VDD_CX_LOW] =
+ 384000000;
+ cam_cc_ife_1_clk_src.clkr.hw.init->rate_max[VDD_CX_MIN] = 0;
+ cam_cc_ife_1_clk_src.clkr.hw.init->rate_max[VDD_CX_LOWER] = 0;
+ cam_cc_ife_1_csid_clk_src.clkr.hw.init->rate_max[VDD_CX_MIN] = 0;
+ cam_cc_ife_1_csid_clk_src.clkr.hw.init->rate_max[VDD_CX_LOWER] = 0;
+ cam_cc_ife_1_csid_clk_src.clkr.hw.init->rate_max[VDD_CX_LOW] =
+ 384000000;
+ cam_cc_ife_lite_clk_src.clkr.hw.init->rate_max[VDD_CX_MIN] = 0;
+ cam_cc_ife_lite_clk_src.clkr.hw.init->rate_max[VDD_CX_LOWER] = 0;
+ cam_cc_ife_lite_csid_clk_src.clkr.hw.init->rate_max[VDD_CX_MIN] = 0;
+ cam_cc_ife_lite_csid_clk_src.clkr.hw.init->rate_max[VDD_CX_LOWER] = 0;
+ cam_cc_ife_lite_csid_clk_src.clkr.hw.init->rate_max[VDD_CX_LOW] =
+ 384000000;
+ cam_cc_ipe_0_clk_src.clkr.hw.init->rate_max[VDD_CX_MIN] = 0;
+ cam_cc_ipe_0_clk_src.clkr.hw.init->rate_max[VDD_CX_LOWER] = 0;
+ cam_cc_ipe_0_clk_src.clkr.hw.init->rate_max[VDD_CX_NOMINAL] = 600000000;
+ cam_cc_ipe_1_clk_src.clkr.hw.init->rate_max[VDD_CX_MIN] = 0;
+ cam_cc_ipe_1_clk_src.clkr.hw.init->rate_max[VDD_CX_LOWER] = 0;
+ cam_cc_ipe_1_clk_src.clkr.hw.init->rate_max[VDD_CX_NOMINAL] = 600000000;
+ cam_cc_jpeg_clk_src.clkr.hw.init->rate_max[VDD_CX_MIN] = 0;
+ cam_cc_jpeg_clk_src.clkr.hw.init->rate_max[VDD_CX_LOWER] = 0;
+ cam_cc_lrme_clk_src.freq_tbl = ftbl_cam_cc_lrme_clk_src_sdm845_v2;
+ cam_cc_lrme_clk_src.clkr.hw.init->rate_max[VDD_CX_MIN] = 0;
+ cam_cc_lrme_clk_src.clkr.hw.init->rate_max[VDD_CX_LOWER] = 0;
+ cam_cc_lrme_clk_src.clkr.hw.init->rate_max[VDD_CX_LOW] = 269333333;
+ cam_cc_lrme_clk_src.clkr.hw.init->rate_max[VDD_CX_LOW_L1] = 320000000;
+ cam_cc_lrme_clk_src.clkr.hw.init->rate_max[VDD_CX_NOMINAL] = 400000000;
+ cam_cc_mclk0_clk_src.clkr.hw.init->rate_max[VDD_CX_MIN] = 0;
+ cam_cc_mclk0_clk_src.clkr.hw.init->rate_max[VDD_CX_LOWER] = 0;
+ cam_cc_mclk0_clk_src.clkr.hw.init->rate_max[VDD_CX_LOW] = 34285714;
+ cam_cc_mclk1_clk_src.clkr.hw.init->rate_max[VDD_CX_MIN] = 0;
+ cam_cc_mclk1_clk_src.clkr.hw.init->rate_max[VDD_CX_LOWER] = 0;
+ cam_cc_mclk1_clk_src.clkr.hw.init->rate_max[VDD_CX_LOW] = 34285714;
+ cam_cc_mclk2_clk_src.clkr.hw.init->rate_max[VDD_CX_MIN] = 0;
+ cam_cc_mclk2_clk_src.clkr.hw.init->rate_max[VDD_CX_LOWER] = 0;
+ cam_cc_mclk2_clk_src.clkr.hw.init->rate_max[VDD_CX_LOW] = 34285714;
+ cam_cc_mclk3_clk_src.clkr.hw.init->rate_max[VDD_CX_MIN] = 0;
+ cam_cc_mclk3_clk_src.clkr.hw.init->rate_max[VDD_CX_LOWER] = 0;
+ cam_cc_mclk3_clk_src.clkr.hw.init->rate_max[VDD_CX_LOW] = 34285714;
+ cam_cc_slow_ahb_clk_src.clkr.hw.init->rate_max[VDD_CX_MIN] = 0;
+ cam_cc_slow_ahb_clk_src.clkr.hw.init->rate_max[VDD_CX_LOWER] = 0;
+ cam_cc_slow_ahb_clk_src.clkr.hw.init->rate_max[VDD_CX_LOW] = 80000000;
+ cam_cc_slow_ahb_clk_src.clkr.hw.init->rate_max[VDD_CX_LOW_L1] =
+ 80000000;
+}
+
+static void cam_cc_sdm845_fixup_sdm670(void)
+{
+ cam_cc_sdm845_clocks[CAM_CC_CSI3PHYTIMER_CLK] =
+ &cam_cc_csi3phytimer_clk.clkr;
+ cam_cc_sdm845_clocks[CAM_CC_CSIPHY3_CLK] = &cam_cc_csiphy3_clk.clkr;
+ cam_cc_sdm845_clocks[CAM_CC_CSI3PHYTIMER_CLK_SRC] =
+ &cam_cc_csi3phytimer_clk_src.clkr;
cam_cc_cphy_rx_clk_src.freq_tbl = ftbl_cam_cc_cphy_rx_clk_src_sdm845_v2;
cam_cc_cphy_rx_clk_src.clkr.hw.init->rate_max[VDD_CX_LOWER] = 384000000;
cam_cc_cphy_rx_clk_src.clkr.hw.init->rate_max[VDD_CX_LOW] = 384000000;
@@ -1991,11 +2072,6 @@ static void cam_cc_sdm845_fixup_sdm845v2(void)
80000000;
}
-static void cam_cc_sdm845_fixup_sdm670(void)
-{
- cam_cc_sdm845_fixup_sdm845v2();
-}
-
static int cam_cc_sdm845_fixup(struct platform_device *pdev)
{
const char *compat = NULL;
diff --git a/drivers/clk/qcom/clk-alpha-pll.c b/drivers/clk/qcom/clk-alpha-pll.c
index afb2c01..bf9b99d 100644
--- a/drivers/clk/qcom/clk-alpha-pll.c
+++ b/drivers/clk/qcom/clk-alpha-pll.c
@@ -22,6 +22,8 @@
#include "clk-alpha-pll.h"
#define PLL_MODE 0x00
+#define PLL_STANDBY 0x0
+#define PLL_RUN 0x1
# define PLL_OUTCTRL BIT(0)
# define PLL_BYPASSNL BIT(1)
# define PLL_RESET_N BIT(2)
@@ -51,25 +53,40 @@
#define PLL_TEST_CTL 0x1c
#define PLL_TEST_CTL_U 0x20
#define PLL_STATUS 0x24
+#define PLL_UPDATE BIT(22)
+#define PLL_ACK_LATCH BIT(29)
+#define PLL_CALIBRATION_MASK (0x7 << 3)
+#define PLL_CALIBRATION_CONTROL 2
+#define PLL_HW_UPDATE_LOGIC_BYPASS BIT(23)
+#define ALPHA_16_BIT_PLL_RATE_MARGIN 500
/*
* Even though 40 bits are present, use only 32 for ease of calculation.
*/
#define ALPHA_REG_BITWIDTH 40
#define ALPHA_BITWIDTH 32
-#define FABIA_BITWIDTH 16
+#define SUPPORTS_16BIT_ALPHA 16
#define FABIA_USER_CTL_LO 0xc
#define FABIA_USER_CTL_HI 0x10
#define FABIA_FRAC_VAL 0x38
#define FABIA_OPMODE 0x2c
-#define FABIA_PLL_STANDBY 0x0
-#define FABIA_PLL_RUN 0x1
#define FABIA_PLL_OUT_MASK 0x7
-#define FABIA_PLL_RATE_MARGIN 500
#define FABIA_PLL_ACK_LATCH BIT(29)
#define FABIA_PLL_UPDATE BIT(22)
-#define FABIA_PLL_HW_UPDATE_LOGIC_BYPASS BIT(23)
+
+#define TRION_PLL_CAL_VAL 0x44
+#define TRION_PLL_CAL_L_VAL 0x8
+#define TRION_PLL_USER_CTL 0xc
+#define TRION_PLL_USER_CTL_U 0x10
+#define TRION_PLL_USER_CTL_U1 0x14
+#define TRION_PLL_CONFIG_CTL_U 0x1c
+#define TRION_PLL_CONFIG_CTL_U1 0x20
+#define TRION_PLL_OPMODE 0x38
+#define TRION_PLL_ALPHA_VAL 0x40
+
+#define TRION_PLL_OUT_MASK 0x7
+#define TRION_PLL_ENABLE_STATE_READ BIT(4)
#define to_clk_alpha_pll(_hw) container_of(to_clk_regmap(_hw), \
struct clk_alpha_pll, clkr)
@@ -121,6 +138,10 @@ static int wait_for_pll_offline(struct clk_alpha_pll *pll, u32 mask)
return wait_for_pll(pll, mask, 0, "offline");
}
+static int wait_for_pll_latch_ack(struct clk_alpha_pll *pll, u32 mask)
+{
+ return wait_for_pll(pll, mask, 0, "latch_ack");
+}
/* alpha pll with hwfsm support */
@@ -294,8 +315,8 @@ static unsigned long alpha_pll_calc_rate(const struct clk_alpha_pll *pll,
{
int alpha_bw = ALPHA_BITWIDTH;
- if (pll->type == FABIA_PLL)
- alpha_bw = FABIA_BITWIDTH;
+ if (pll->type == FABIA_PLL || pll->type == TRION_PLL)
+ alpha_bw = SUPPORTS_16BIT_ALPHA;
return (prate * l) + ((prate * a) >> alpha_bw);
}
@@ -326,9 +347,9 @@ alpha_pll_round_rate(const struct clk_alpha_pll *pll, unsigned long rate,
return rate;
}
- /* Fabia PLLs only have 16 bits to program the fractional divider */
- if (pll->type == FABIA_PLL)
- alpha_bw = FABIA_BITWIDTH;
+ /* Some PLLs only have 16 bits to program the fractional divider */
+ if (pll->type == FABIA_PLL || pll->type == TRION_PLL)
+ alpha_bw = SUPPORTS_16BIT_ALPHA;
/* Upper ALPHA_BITWIDTH bits of Alpha */
quotient = remainder << alpha_bw;
@@ -415,7 +436,8 @@ static long clk_alpha_pll_round_rate(struct clk_hw *hw, unsigned long rate,
unsigned long min_freq, max_freq;
rate = alpha_pll_round_rate(pll, rate, *prate, &l, &a);
- if (pll->type == FABIA_PLL || alpha_pll_find_vco(pll, rate))
+ if (pll->type == FABIA_PLL || pll->type == TRION_PLL ||
+ alpha_pll_find_vco(pll, rate))
return rate;
min_freq = pll->vco_table[0].min_freq;
@@ -523,8 +545,8 @@ void clk_fabia_pll_configure(struct clk_alpha_pll *pll, struct regmap *regmap,
clk_fabia_pll_latch_input(pll, regmap);
regmap_update_bits(regmap, pll->offset + PLL_MODE,
- FABIA_PLL_HW_UPDATE_LOGIC_BYPASS,
- FABIA_PLL_HW_UPDATE_LOGIC_BYPASS);
+ PLL_HW_UPDATE_LOGIC_BYPASS,
+ PLL_HW_UPDATE_LOGIC_BYPASS);
regmap_update_bits(regmap, pll->offset + PLL_MODE,
PLL_RESET_N, PLL_RESET_N);
@@ -560,7 +582,7 @@ static int clk_fabia_pll_enable(struct clk_hw *hw)
return ret;
/* Set operation mode to STANDBY */
- regmap_write(pll->clkr.regmap, off + FABIA_OPMODE, FABIA_PLL_STANDBY);
+ regmap_write(pll->clkr.regmap, off + FABIA_OPMODE, PLL_STANDBY);
/* PLL should be in STANDBY mode before continuing */
mb();
@@ -572,7 +594,7 @@ static int clk_fabia_pll_enable(struct clk_hw *hw)
return ret;
/* Set operation mode to RUN */
- regmap_write(pll->clkr.regmap, off + FABIA_OPMODE, FABIA_PLL_RUN);
+ regmap_write(pll->clkr.regmap, off + FABIA_OPMODE, PLL_RUN);
ret = wait_for_pll_enable(pll, PLL_LOCK_DET);
if (ret)
@@ -624,7 +646,7 @@ static void clk_fabia_pll_disable(struct clk_hw *hw)
return;
/* Place the PLL mode in STANDBY */
- regmap_write(pll->clkr.regmap, off + FABIA_OPMODE, FABIA_PLL_STANDBY);
+ regmap_write(pll->clkr.regmap, off + FABIA_OPMODE, PLL_STANDBY);
}
static unsigned long
@@ -659,7 +681,7 @@ static int clk_fabia_pll_set_rate(struct clk_hw *hw, unsigned long rate,
* Due to limited number of bits for fractional rate programming, the
* rounded up rate could be marginally higher than the requested rate.
*/
- if (rrate > (rate + FABIA_PLL_RATE_MARGIN) || rrate < rate) {
+ if (rrate > (rate + ALPHA_16_BIT_PLL_RATE_MARGIN) || rrate < rate) {
pr_err("Call set rate on the PLL with rounded rates!\n");
return -EINVAL;
}
@@ -879,3 +901,436 @@ const struct clk_ops clk_generic_pll_postdiv_ops = {
.set_rate = clk_generic_pll_postdiv_set_rate,
};
EXPORT_SYMBOL_GPL(clk_generic_pll_postdiv_ops);
+
+static int trion_pll_is_enabled(struct clk_alpha_pll *pll,
+ struct regmap *regmap)
+{
+ u32 mode_val, opmode_val, off = pll->offset;
+ int ret;
+
+ ret = regmap_read(regmap, off + PLL_MODE, &mode_val);
+ ret |= regmap_read(regmap, off + TRION_PLL_OPMODE, &opmode_val);
+ if (ret)
+ return 0;
+
+ return ((opmode_val & PLL_RUN) && (mode_val & PLL_OUTCTRL));
+}
+
+int clk_trion_pll_configure(struct clk_alpha_pll *pll, struct regmap *regmap,
+ const struct pll_config *config)
+{
+ int ret = 0;
+
+ if (trion_pll_is_enabled(pll, regmap)) {
+ pr_debug("PLL is already enabled. Skipping configuration.\n");
+
+ /*
+ * Set the PLL_HW_UPDATE_LOGIC_BYPASS bit to latch the input
+ * before continuing.
+ */
+ regmap_update_bits(regmap, pll->offset + PLL_MODE,
+ PLL_HW_UPDATE_LOGIC_BYPASS,
+ PLL_HW_UPDATE_LOGIC_BYPASS);
+
+ pll->inited = true;
+ return ret;
+ }
+
+ /*
+ * Disable the PLL if it's already been initialized. Not doing so might
+ * lead to the PLL running with the old frequency configuration.
+ */
+ if (pll->inited) {
+ ret = regmap_update_bits(regmap, pll->offset + PLL_MODE,
+ PLL_RESET_N, 0);
+ if (ret)
+ return ret;
+ }
+
+ if (config->l)
+ regmap_write(regmap, pll->offset + PLL_L_VAL,
+ config->l);
+
+ regmap_write(regmap, pll->offset + TRION_PLL_CAL_L_VAL,
+ TRION_PLL_CAL_VAL);
+
+ if (config->frac)
+ regmap_write(regmap, pll->offset + TRION_PLL_ALPHA_VAL,
+ config->frac);
+
+ if (config->config_ctl_val)
+ regmap_write(regmap, pll->offset + PLL_CONFIG_CTL,
+ config->config_ctl_val);
+
+ if (config->config_ctl_hi_val)
+ regmap_write(regmap, pll->offset + TRION_PLL_CONFIG_CTL_U,
+ config->config_ctl_hi_val);
+
+ if (config->config_ctl_hi1_val)
+ regmap_write(regmap, pll->offset + TRION_PLL_CONFIG_CTL_U1,
+ config->config_ctl_hi1_val);
+
+ if (config->post_div_mask)
+ regmap_update_bits(regmap, pll->offset + TRION_PLL_USER_CTL,
+ config->post_div_mask, config->post_div_val);
+
+ /* Disable state read */
+ regmap_update_bits(regmap, pll->offset + TRION_PLL_USER_CTL_U,
+ TRION_PLL_ENABLE_STATE_READ, 0);
+
+ regmap_update_bits(regmap, pll->offset + PLL_MODE,
+ PLL_HW_UPDATE_LOGIC_BYPASS,
+ PLL_HW_UPDATE_LOGIC_BYPASS);
+
+ /* Set calibration control to Automatic */
+ regmap_update_bits(regmap, pll->offset + TRION_PLL_USER_CTL_U,
+ PLL_CALIBRATION_MASK, PLL_CALIBRATION_CONTROL);
+
+ /* Disable PLL output */
+ ret = regmap_update_bits(regmap, pll->offset + PLL_MODE,
+ PLL_OUTCTRL, 0);
+ if (ret)
+ return ret;
+
+ /* Set operation mode to OFF */
+ regmap_write(regmap, pll->offset + TRION_PLL_OPMODE, PLL_STANDBY);
+
+ /* PLL should be in OFF mode before continuing */
+ wmb();
+
+ /* Place the PLL in STANDBY mode */
+ ret = regmap_update_bits(regmap, pll->offset + PLL_MODE,
+ PLL_RESET_N, PLL_RESET_N);
+ if (ret)
+ return ret;
+
+ pll->inited = true;
+
+ return ret;
+}
+
+static int clk_alpha_pll_latch_l_val(struct clk_alpha_pll *pll)
+{
+ int ret;
+
+ /* Latch the input to the PLL */
+ ret = regmap_update_bits(pll->clkr.regmap, pll->offset + PLL_MODE,
+ PLL_UPDATE, PLL_UPDATE);
+ if (ret)
+ return ret;
+
+ /* Wait for 2 reference cycles before checking the ACK bit */
+ udelay(1);
+
+ ret = wait_for_pll_latch_ack(pll, PLL_ACK_LATCH);
+ if (ret)
+ return ret;
+
+ /* Return latch input to 0 */
+ ret = regmap_update_bits(pll->clkr.regmap, pll->offset + PLL_MODE,
+ PLL_UPDATE, (u32)~PLL_UPDATE);
+ if (ret)
+ return ret;
+
+ return 0;
+}
+
+static int clk_trion_pll_enable(struct clk_hw *hw)
+{
+ int ret = 0;
+ struct clk_alpha_pll *pll = to_clk_alpha_pll(hw);
+ u32 val, off = pll->offset;
+
+ ret = regmap_read(pll->clkr.regmap, off + PLL_MODE, &val);
+ if (ret)
+ return ret;
+
+ /* If in FSM mode, just vote for it */
+ if (val & PLL_VOTE_FSM_ENA) {
+ ret = clk_enable_regmap(hw);
+ if (ret)
+ return ret;
+ return wait_for_pll_enable(pll, PLL_ACTIVE_FLAG);
+ }
+
+ if (unlikely(!pll->inited)) {
+ ret = clk_trion_pll_configure(pll, pll->clkr.regmap,
+ pll->config);
+ if (ret) {
+ pr_err("Failed to configure %s\n", clk_hw_get_name(hw));
+ return ret;
+ }
+ }
+
+ /* Skip if the PLL is already running */
+ if (trion_pll_is_enabled(pll, pll->clkr.regmap))
+ return ret;
+
+ /* Set operation mode to RUN */
+ regmap_write(pll->clkr.regmap, off + TRION_PLL_OPMODE, PLL_RUN);
+
+ ret = wait_for_pll_enable(pll, PLL_LOCK_DET);
+ if (ret)
+ return ret;
+
+ /* Enable PLL main output */
+ ret = regmap_update_bits(pll->clkr.regmap, off + TRION_PLL_USER_CTL,
+ TRION_PLL_OUT_MASK, TRION_PLL_OUT_MASK);
+ if (ret)
+ return ret;
+
+ /* Enable Global PLL outputs */
+ ret = regmap_update_bits(pll->clkr.regmap, off + PLL_MODE,
+ PLL_OUTCTRL, PLL_OUTCTRL);
+ if (ret)
+ return ret;
+
+ /* Ensure that the write above goes through before returning. */
+ mb();
+ return ret;
+}
+
+static void clk_trion_pll_disable(struct clk_hw *hw)
+{
+ int ret;
+ struct clk_alpha_pll *pll = to_clk_alpha_pll(hw);
+ u32 val, off = pll->offset;
+
+ ret = regmap_read(pll->clkr.regmap, off + PLL_MODE, &val);
+ if (ret)
+ return;
+
+ /* If in FSM mode, just unvote it */
+ if (val & PLL_VOTE_FSM_ENA) {
+ clk_disable_regmap(hw);
+ return;
+ }
+
+ /* Disable Global PLL outputs */
+ ret = regmap_update_bits(pll->clkr.regmap, off + PLL_MODE,
+ PLL_OUTCTRL, 0);
+ if (ret)
+ return;
+
+ /* Disable the main PLL output */
+ ret = regmap_update_bits(pll->clkr.regmap, off + TRION_PLL_USER_CTL,
+ TRION_PLL_OUT_MASK, 0);
+ if (ret)
+ return;
+
+ /* Place the PLL into STANDBY mode */
+ regmap_write(pll->clkr.regmap, off + TRION_PLL_OPMODE, PLL_STANDBY);
+
+ regmap_update_bits(pll->clkr.regmap, off + PLL_MODE,
+ PLL_RESET_N, PLL_RESET_N);
+}
+
+static unsigned long
+clk_trion_pll_recalc_rate(struct clk_hw *hw, unsigned long parent_rate)
+{
+ u32 l, frac = 0;
+ u64 prate = parent_rate;
+ struct clk_alpha_pll *pll = to_clk_alpha_pll(hw);
+ u32 off = pll->offset;
+
+ regmap_read(pll->clkr.regmap, off + PLL_L_VAL, &l);
+ regmap_read(pll->clkr.regmap, off + TRION_PLL_ALPHA_VAL, &frac);
+
+ return alpha_pll_calc_rate(pll, prate, l, frac);
+}
+
+static int clk_trion_pll_set_rate(struct clk_hw *hw, unsigned long rate,
+ unsigned long prate)
+{
+ struct clk_alpha_pll *pll = to_clk_alpha_pll(hw);
+ unsigned long rrate;
+ bool is_enabled;
+ int ret;
+ u32 l, val, off = pll->offset;
+ u64 a;
+
+ rrate = alpha_pll_round_rate(pll, rate, prate, &l, &a);
+ /*
+ * Due to limited number of bits for fractional rate programming, the
+ * rounded up rate could be marginally higher than the requested rate.
+ */
+ if (rrate > (rate + ALPHA_16_BIT_PLL_RATE_MARGIN) || rrate < rate) {
+ pr_err("Trion_pll: Call clk_set_rate with rounded rates!\n");
+ return -EINVAL;
+ }
+
+ is_enabled = clk_hw_is_enabled(hw);
+
+ if (is_enabled)
+ hw->init->ops->disable(hw);
+
+ regmap_write(pll->clkr.regmap, off + PLL_L_VAL, l);
+ regmap_write(pll->clkr.regmap, off + TRION_PLL_ALPHA_VAL, a);
+
+ ret = regmap_read(pll->clkr.regmap, off + PLL_MODE, &val);
+ if (ret)
+ return ret;
+
+ /*
+ * If the PLL is in STANDBY or RUN mode, latch the new L value.
+ * Otherwise the PLL is in OFF mode and writing the L register
+ * is sufficient; per the HPG, no input latch is needed.
+ */
+ if (val & PLL_RESET_N)
+ clk_alpha_pll_latch_l_val(pll);
+
+ if (is_enabled)
+ hw->init->ops->enable(hw);
+
+ /* Wait for PLL output to stabilize */
+ udelay(100);
+
+ return ret;
+}
+
+static int clk_trion_pll_is_enabled(struct clk_hw *hw)
+{
+ struct clk_alpha_pll *pll = to_clk_alpha_pll(hw);
+
+ return trion_pll_is_enabled(pll, pll->clkr.regmap);
+}
+
+static void clk_trion_pll_list_registers(struct seq_file *f, struct clk_hw *hw)
+{
+ struct clk_alpha_pll *pll = to_clk_alpha_pll(hw);
+ int size, i, val;
+
+ static struct clk_register_data data[] = {
+ {"PLL_MODE", 0x0},
+ {"PLL_L_VAL", 0x4},
+ {"PLL_USER_CTL", 0xc},
+ {"PLL_USER_CTL_U", 0x10},
+ {"PLL_USER_CTL_U1", 0x14},
+ {"PLL_CONFIG_CTL", 0x18},
+ {"PLL_CONFIG_CTL_U", 0x1c},
+ {"PLL_CONFIG_CTL_U1", 0x20},
+ {"PLL_OPMODE", 0x38},
+ };
+
+ static struct clk_register_data data1[] = {
+ {"APSS_PLL_VOTE", 0x0},
+ };
+
+ size = ARRAY_SIZE(data);
+
+ for (i = 0; i < size; i++) {
+ regmap_read(pll->clkr.regmap, pll->offset + data[i].offset,
+ &val);
+ seq_printf(f, "%20s: 0x%.8x\n", data[i].name, val);
+ }
+
+ regmap_read(pll->clkr.regmap, pll->offset + data[0].offset, &val);
+
+ if (val & PLL_VOTE_FSM_ENA) {
+ regmap_read(pll->clkr.regmap, pll->clkr.enable_reg +
+ data1[0].offset, &val);
+ seq_printf(f, "%20s: 0x%.8x\n", data1[0].name, val);
+ }
+}
+
+const struct clk_ops clk_trion_pll_ops = {
+ .enable = clk_trion_pll_enable,
+ .disable = clk_trion_pll_disable,
+ .recalc_rate = clk_trion_pll_recalc_rate,
+ .round_rate = clk_alpha_pll_round_rate,
+ .set_rate = clk_trion_pll_set_rate,
+ .is_enabled = clk_trion_pll_is_enabled,
+ .list_registers = clk_trion_pll_list_registers,
+};
+EXPORT_SYMBOL(clk_trion_pll_ops);
+
+const struct clk_ops clk_trion_fixed_pll_ops = {
+ .enable = clk_trion_pll_enable,
+ .disable = clk_trion_pll_disable,
+ .recalc_rate = clk_trion_pll_recalc_rate,
+ .round_rate = clk_alpha_pll_round_rate,
+ .is_enabled = clk_trion_pll_is_enabled,
+ .list_registers = clk_trion_pll_list_registers,
+};
+EXPORT_SYMBOL(clk_trion_fixed_pll_ops);
+
+static unsigned long clk_trion_pll_postdiv_recalc_rate(struct clk_hw *hw,
+ unsigned long parent_rate)
+{
+ struct clk_alpha_pll_postdiv *pll = to_clk_alpha_pll_postdiv(hw);
+ u32 i, cal_div = 1, val;
+
+ if (!pll->post_div_table) {
+ pr_err("Missing the post_div_table for the PLL\n");
+ return -EINVAL;
+ }
+
+ regmap_read(pll->clkr.regmap, pll->offset + TRION_PLL_USER_CTL, &val);
+
+ val >>= pll->post_div_shift;
+ val &= PLL_POST_DIV_MASK;
+
+ for (i = 0; i < pll->num_post_div; i++) {
+ if (pll->post_div_table[i].val == val) {
+ cal_div = pll->post_div_table[i].div;
+ break;
+ }
+ }
+
+ return (parent_rate / cal_div);
+}
+
+static long clk_trion_pll_postdiv_round_rate(struct clk_hw *hw,
+ unsigned long rate, unsigned long *prate)
+{
+ struct clk_alpha_pll_postdiv *pll = to_clk_alpha_pll_postdiv(hw);
+
+ if (!pll->post_div_table)
+ return -EINVAL;
+
+ return divider_round_rate(hw, rate, prate, pll->post_div_table,
+ pll->width, CLK_DIVIDER_ROUND_CLOSEST);
+}
+
+static int clk_trion_pll_postdiv_set_rate(struct clk_hw *hw,
+ unsigned long rate, unsigned long parent_rate)
+{
+ struct clk_alpha_pll_postdiv *pll = to_clk_alpha_pll_postdiv(hw);
+ int i, val = 0, cal_div, ret;
+
+ /*
+ * If the PLL is in FSM mode, then treat the set_rate callback
+ * as a no-operation.
+ */
+ ret = regmap_read(pll->clkr.regmap, pll->offset + PLL_MODE, &val);
+ if (ret)
+ return ret;
+
+ if (val & PLL_VOTE_FSM_ENA)
+ return 0;
+
+ if (!pll->post_div_table) {
+ pr_err("Missing the post_div_table for the PLL\n");
+ return -EINVAL;
+ }
+
+ cal_div = DIV_ROUND_UP_ULL((u64)parent_rate, rate);
+ for (i = 0; i < pll->num_post_div; i++) {
+ if (pll->post_div_table[i].div == cal_div) {
+ val = pll->post_div_table[i].val;
+ break;
+ }
+ }
+
+ return regmap_update_bits(pll->clkr.regmap,
+ pll->offset + TRION_PLL_USER_CTL,
+ PLL_POST_DIV_MASK << pll->post_div_shift,
+ val << pll->post_div_shift);
+}
+
+const struct clk_ops clk_trion_pll_postdiv_ops = {
+ .recalc_rate = clk_trion_pll_postdiv_recalc_rate,
+ .round_rate = clk_trion_pll_postdiv_round_rate,
+ .set_rate = clk_trion_pll_postdiv_set_rate,
+};
+EXPORT_SYMBOL(clk_trion_pll_postdiv_ops);
diff --git a/drivers/clk/qcom/clk-alpha-pll.h b/drivers/clk/qcom/clk-alpha-pll.h
index 2656cd6..c5fecb1 100644
--- a/drivers/clk/qcom/clk-alpha-pll.h
+++ b/drivers/clk/qcom/clk-alpha-pll.h
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2015-2016, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2015-2017, The Linux Foundation. All rights reserved.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
@@ -27,6 +27,7 @@ struct pll_vco {
enum pll_type {
ALPHA_PLL,
FABIA_PLL,
+ TRION_PLL,
};
/**
@@ -35,7 +36,7 @@ enum pll_type {
* @inited: flag that's set when the PLL is initialized
* @vco_table: array of VCO settings
* @clkr: regmap clock handle
- * @is_fabia: Set if the PLL type is FABIA
+ * @pll_type: Specify the type of PLL
*/
struct clk_alpha_pll {
u32 offset;
@@ -79,10 +80,15 @@ extern const struct clk_ops clk_alpha_pll_postdiv_ops;
extern const struct clk_ops clk_fabia_pll_ops;
extern const struct clk_ops clk_fabia_fixed_pll_ops;
extern const struct clk_ops clk_generic_pll_postdiv_ops;
+extern const struct clk_ops clk_trion_pll_ops;
+extern const struct clk_ops clk_trion_fixed_pll_ops;
+extern const struct clk_ops clk_trion_pll_postdiv_ops;
void clk_alpha_pll_configure(struct clk_alpha_pll *pll, struct regmap *regmap,
const struct pll_config *config);
void clk_fabia_pll_configure(struct clk_alpha_pll *pll,
struct regmap *regmap, const struct pll_config *config);
+int clk_trion_pll_configure(struct clk_alpha_pll *pll, struct regmap *regmap,
+ const struct pll_config *config);
#endif
diff --git a/drivers/clk/qcom/clk-cpu-a7.c b/drivers/clk/qcom/clk-cpu-a7.c
new file mode 100644
index 0000000..c0cc00f
--- /dev/null
+++ b/drivers/clk/qcom/clk-cpu-a7.c
@@ -0,0 +1,718 @@
+/*
+ * Copyright (c) 2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/cpu.h>
+#include <linux/clk.h>
+#include <linux/clk-provider.h>
+#include <linux/module.h>
+#include <linux/pm_opp.h>
+#include <linux/platform_device.h>
+#include <linux/regmap.h>
+#include <dt-bindings/clock/qcom,cpu-a7.h>
+
+#include "clk-alpha-pll.h"
+#include "clk-debug.h"
+#include "clk-rcg.h"
+#include "clk-regmap-mux-div.h"
+#include "common.h"
+#include "vdd-level-sdm845.h"
+
+#define SYS_APC0_AUX_CLK_SRC 1
+
+#define PLL_MODE_REG 0x0
+#define PLL_OPMODE_RUN 0x1
+#define PLL_OPMODE_REG 0x38
+#define PLL_MODE_OUTCTRL BIT(0)
+
+#define to_clk_regmap_mux_div(_hw) \
+ container_of(to_clk_regmap(_hw), struct clk_regmap_mux_div, clkr)
+
+static DEFINE_VDD_REGULATORS(vdd_cx, VDD_CX_NUM, 1, vdd_corner);
+static DEFINE_VDD_REGS_INIT(vdd_cpu, 1);
+
+enum apcs_clk_parent_index {
+ XO_AO_INDEX,
+ SYS_APC0_AUX_CLK_INDEX,
+ APCS_CPU_PLL_INDEX,
+};
+
+enum {
+ P_SYS_APC0_AUX_CLK,
+ P_APCS_CPU_PLL,
+ P_BI_TCXO_AO,
+};
+
+static const struct parent_map apcs_clk_parent_map[] = {
+ [XO_AO_INDEX] = { P_BI_TCXO_AO, 0 },
+ [SYS_APC0_AUX_CLK_INDEX] = { P_SYS_APC0_AUX_CLK, 1 },
+ [APCS_CPU_PLL_INDEX] = { P_APCS_CPU_PLL, 5 },
+};
+
+static const char *const apcs_clk_parent_name[] = {
+ [XO_AO_INDEX] = "bi_tcxo_ao",
+ [SYS_APC0_AUX_CLK_INDEX] = "sys_apc0_aux_clk",
+ [APCS_CPU_PLL_INDEX] = "apcs_cpu_pll",
+};
+
+static int a7cc_clk_set_rate_and_parent(struct clk_hw *hw, unsigned long rate,
+ unsigned long prate, u8 index)
+{
+ struct clk_regmap_mux_div *cpuclk = to_clk_regmap_mux_div(hw);
+
+ return __mux_div_set_src_div(cpuclk, cpuclk->parent_map[index].cfg,
+ cpuclk->div);
+}
+
+static int a7cc_clk_set_parent(struct clk_hw *hw, u8 index)
+{
+ /*
+	 * Since a7cc_clk_set_rate_and_parent() is defined, set_parent()
+	 * will never be called from clk_change_rate(), so just return 0.
+ */
+ return 0;
+}
+
+static int a7cc_clk_set_rate(struct clk_hw *hw, unsigned long rate,
+ unsigned long prate)
+{
+ struct clk_regmap_mux_div *cpuclk = to_clk_regmap_mux_div(hw);
+
+ /*
+	 * The parent is the same as for the previous rate;
+	 * only the new divider needs to be configured here.
+ */
+ return __mux_div_set_src_div(cpuclk, cpuclk->src, cpuclk->div);
+}
+
+static int a7cc_clk_determine_rate(struct clk_hw *hw,
+ struct clk_rate_request *req)
+{
+ int ret;
+ u32 div = 1;
+ struct clk_hw *xo, *apc0_auxclk_hw, *apcs_cpu_pll_hw;
+ unsigned long apc0_auxclk_rate, rate = req->rate;
+ struct clk_rate_request parent_req = { };
+ struct clk_regmap_mux_div *cpuclk = to_clk_regmap_mux_div(hw);
+ unsigned long mask = BIT(cpuclk->hid_width) - 1;
+
+ xo = clk_hw_get_parent_by_index(hw, XO_AO_INDEX);
+ if (rate == clk_hw_get_rate(xo)) {
+ req->best_parent_hw = xo;
+ req->best_parent_rate = rate;
+ cpuclk->div = div;
+ cpuclk->src = cpuclk->parent_map[XO_AO_INDEX].cfg;
+ return 0;
+ }
+
+ apc0_auxclk_hw = clk_hw_get_parent_by_index(hw, SYS_APC0_AUX_CLK_INDEX);
+ apcs_cpu_pll_hw = clk_hw_get_parent_by_index(hw, APCS_CPU_PLL_INDEX);
+
+ apc0_auxclk_rate = clk_hw_get_rate(apc0_auxclk_hw);
+ if (rate <= apc0_auxclk_rate) {
+ req->best_parent_hw = apc0_auxclk_hw;
+ req->best_parent_rate = apc0_auxclk_rate;
+
+ div = DIV_ROUND_UP((2 * req->best_parent_rate), rate) - 1;
+ div = min_t(unsigned long, div, mask);
+
+ req->rate = clk_rcg2_calc_rate(req->best_parent_rate, 0,
+ 0, 0, div);
+ cpuclk->src = cpuclk->parent_map[SYS_APC0_AUX_CLK_INDEX].cfg;
+ } else {
+ parent_req.rate = rate;
+ parent_req.best_parent_hw = apcs_cpu_pll_hw;
+
+ req->best_parent_hw = apcs_cpu_pll_hw;
+ ret = __clk_determine_rate(req->best_parent_hw, &parent_req);
+ if (ret)
+ return ret;
+
+ req->best_parent_rate = parent_req.rate;
+ cpuclk->src = cpuclk->parent_map[APCS_CPU_PLL_INDEX].cfg;
+ }
+ cpuclk->div = div;
+
+ return 0;
+}
+
+static void a7cc_clk_list_registers(struct seq_file *f, struct clk_hw *hw)
+{
+ struct clk_regmap_mux_div *cpuclk = to_clk_regmap_mux_div(hw);
+ int i = 0, size = 0, val;
+
+ static struct clk_register_data data[] = {
+ {"CMD_RCGR", 0x0},
+ {"CFG_RCGR", 0x4},
+ };
+
+ size = ARRAY_SIZE(data);
+ for (i = 0; i < size; i++) {
+ regmap_read(cpuclk->clkr.regmap,
+ cpuclk->reg_offset + data[i].offset, &val);
+ seq_printf(f, "%20s: 0x%.8x\n", data[i].name, val);
+ }
+}
+
+static unsigned long a7cc_clk_recalc_rate(struct clk_hw *hw,
+ unsigned long prate)
+{
+ struct clk_regmap_mux_div *cpuclk = to_clk_regmap_mux_div(hw);
+ const char *name = clk_hw_get_name(hw);
+ struct clk_hw *parent;
+ int ret = 0;
+ unsigned long parent_rate;
+ u32 i, div, src = 0;
+ u32 num_parents = clk_hw_get_num_parents(hw);
+
+ ret = mux_div_get_src_div(cpuclk, &src, &div);
+ if (ret)
+ return ret;
+
+ for (i = 0; i < num_parents; i++) {
+ if (src == cpuclk->parent_map[i].cfg) {
+ parent = clk_hw_get_parent_by_index(hw, i);
+ parent_rate = clk_hw_get_rate(parent);
+ return clk_rcg2_calc_rate(parent_rate, 0, 0, 0, div);
+ }
+ }
+ pr_err("%s: Can't find parent %d\n", name, src);
+ return ret;
+}
+
+static int a7cc_clk_enable(struct clk_hw *hw)
+{
+ return clk_regmap_mux_div_ops.enable(hw);
+}
+
+static void a7cc_clk_disable(struct clk_hw *hw)
+{
+ clk_regmap_mux_div_ops.disable(hw);
+}
+
+static u8 a7cc_clk_get_parent(struct clk_hw *hw)
+{
+ return clk_regmap_mux_div_ops.get_parent(hw);
+}
+
+/*
+ * We use the notifier function for switching to a temporary safe configuration
+ * (mux and divider), while the APSS pll is reconfigured.
+ */
+static int a7cc_notifier_cb(struct notifier_block *nb, unsigned long event,
+ void *data)
+{
+ int ret = 0;
+ struct clk_regmap_mux_div *cpuclk = container_of(nb,
+ struct clk_regmap_mux_div, clk_nb);
+
+ if (event == PRE_RATE_CHANGE)
+		/* set the mux to the safe source (sys_apc0_aux_clk) and divider */
+ ret = __mux_div_set_src_div(cpuclk, SYS_APC0_AUX_CLK_SRC, 1);
+
+ if (event == ABORT_RATE_CHANGE)
+ pr_err("Error in configuring PLL - stay at safe src only\n");
+
+ return notifier_from_errno(ret);
+}
+
+static const struct clk_ops a7cc_clk_ops = {
+ .enable = a7cc_clk_enable,
+ .disable = a7cc_clk_disable,
+ .get_parent = a7cc_clk_get_parent,
+ .set_rate = a7cc_clk_set_rate,
+ .set_parent = a7cc_clk_set_parent,
+ .set_rate_and_parent = a7cc_clk_set_rate_and_parent,
+ .determine_rate = a7cc_clk_determine_rate,
+ .recalc_rate = a7cc_clk_recalc_rate,
+ .debug_init = clk_debug_measure_add,
+ .list_registers = a7cc_clk_list_registers,
+};
+
+/*
+ * As per hardware, sys_apc0_aux_clk runs at 300 MHz and is configured
+ * by boot code, so register it as a dummy clock.
+ */
+
+static struct clk_dummy sys_apc0_aux_clk = {
+ .rrate = 300000000,
+ .hw.init = &(struct clk_init_data){
+ .name = "sys_apc0_aux_clk",
+ .ops = &clk_dummy_ops,
+ },
+};
+
+/* Initial configuration for 1497.6MHz(Turbo) */
+static const struct pll_config apcs_cpu_pll_config = {
+ .l = 0x4E,
+};
+
+static struct pll_vco trion_vco[] = {
+ { 249600000, 2000000000, 0 },
+};
+
+static struct clk_alpha_pll apcs_cpu_pll = {
+ .type = TRION_PLL,
+ .vco_table = trion_vco,
+ .num_vco = ARRAY_SIZE(trion_vco),
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "apcs_cpu_pll",
+ .parent_names = (const char *[]){ "bi_tcxo_ao" },
+ .num_parents = 1,
+ .ops = &clk_trion_pll_ops,
+ VDD_CX_FMAX_MAP4(LOWER, 345600000,
+ LOW, 576000000,
+ NOMINAL, 1094400000,
+ HIGH, 1497600000),
+ },
+};
+
+static struct clk_regmap_mux_div apcs_clk = {
+ .hid_width = 5,
+ .hid_shift = 0,
+ .src_width = 3,
+ .src_shift = 8,
+ .safe_src = 1,
+ .safe_div = 1,
+ .parent_map = apcs_clk_parent_map,
+ .clk_nb.notifier_call = a7cc_notifier_cb,
+ .clkr.hw.init = &(struct clk_init_data) {
+ .name = "apcs_clk",
+ .parent_names = apcs_clk_parent_name,
+ .num_parents = 3,
+ .vdd_class = &vdd_cpu,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &a7cc_clk_ops,
+ },
+};
+
+static const struct of_device_id match_table[] = {
+ { .compatible = "qcom,cpu-sdxpoorwills" },
+ {}
+};
+
+static const struct regmap_config cpu_regmap_config = {
+ .reg_bits = 32,
+ .reg_stride = 4,
+ .val_bits = 32,
+ .max_register = 0x7F10,
+ .fast_io = true,
+};
+
+static struct clk_hw *cpu_clks_hws[] = {
+ [SYS_APC0_AUX_CLK] = &sys_apc0_aux_clk.hw,
+ [APCS_CPU_PLL] = &apcs_cpu_pll.clkr.hw,
+ [APCS_CLK] = &apcs_clk.clkr.hw,
+};
+
+static void a7cc_clk_get_speed_bin(struct platform_device *pdev, int *bin,
+ int *version)
+{
+ struct resource *res;
+ void __iomem *base;
+ u32 pte_efuse, valid;
+
+ *bin = 0;
+ *version = 0;
+
+ res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "efuse");
+ if (!res) {
+ dev_info(&pdev->dev,
+ "No speed/PVS binning available. Defaulting to 0!\n");
+ return;
+ }
+
+ base = devm_ioremap(&pdev->dev, res->start, resource_size(res));
+ if (!base) {
+ dev_info(&pdev->dev,
+ "Unable to read efuse data. Defaulting to 0!\n");
+ return;
+ }
+
+ pte_efuse = readl_relaxed(base);
+ devm_iounmap(&pdev->dev, base);
+
+ *bin = pte_efuse & 0x7;
+ valid = (pte_efuse >> 3) & 0x1;
+ *version = (pte_efuse >> 4) & 0x3;
+
+ if (!valid) {
+ dev_info(&pdev->dev, "Speed bin not set. Defaulting to 0!\n");
+ *bin = 0;
+ } else {
+ dev_info(&pdev->dev, "Speed bin: %d\n", *bin);
+ }
+
+ dev_info(&pdev->dev, "PVS version: %d\n", *version);
+}
+
+static int a7cc_clk_get_fmax_vdd_class(struct platform_device *pdev,
+ struct clk_init_data *clk_intd, char *prop_name)
+{
+ struct device_node *of = pdev->dev.of_node;
+ int prop_len, i, j;
+ struct clk_vdd_class *vdd = clk_intd->vdd_class;
+ int num = vdd->num_regulators + 1;
+ u32 *array;
+
+ if (!of_find_property(of, prop_name, &prop_len)) {
+ dev_err(&pdev->dev, "missing %s\n", prop_name);
+ return -EINVAL;
+ }
+
+ prop_len /= sizeof(u32);
+ if (prop_len % num) {
+ dev_err(&pdev->dev, "bad length %d\n", prop_len);
+ return -EINVAL;
+ }
+
+ prop_len /= num;
+ vdd->level_votes = devm_kzalloc(&pdev->dev, prop_len * sizeof(int),
+ GFP_KERNEL);
+ if (!vdd->level_votes)
+ return -ENOMEM;
+
+ vdd->vdd_uv = devm_kzalloc(&pdev->dev,
+ prop_len * sizeof(int) * (num - 1), GFP_KERNEL);
+ if (!vdd->vdd_uv)
+ return -ENOMEM;
+
+ clk_intd->rate_max = devm_kzalloc(&pdev->dev,
+ prop_len * sizeof(unsigned long), GFP_KERNEL);
+ if (!clk_intd->rate_max)
+ return -ENOMEM;
+
+ array = devm_kzalloc(&pdev->dev,
+ prop_len * sizeof(u32) * num, GFP_KERNEL);
+ if (!array)
+ return -ENOMEM;
+
+ of_property_read_u32_array(of, prop_name, array, prop_len * num);
+ for (i = 0; i < prop_len; i++) {
+ clk_intd->rate_max[i] = array[num * i];
+ for (j = 1; j < num; j++) {
+ vdd->vdd_uv[(num - 1) * i + (j - 1)] =
+ array[num * i + j];
+ }
+ }
+
+ devm_kfree(&pdev->dev, array);
+ vdd->num_levels = prop_len;
+ vdd->cur_level = prop_len;
+ clk_intd->num_rate_max = prop_len;
+
+ return 0;
+}
+
+/*
+ * Find the voltage level required for a given clock rate.
+ */
+static int find_vdd_level(struct clk_init_data *clk_intd, unsigned long rate)
+{
+ int level;
+
+ for (level = 0; level < clk_intd->num_rate_max; level++)
+ if (rate <= clk_intd->rate_max[level])
+ break;
+
+ if (level == clk_intd->num_rate_max) {
+ pr_err("Rate %lu for %s is greater than highest Fmax\n", rate,
+ clk_intd->name);
+ return -EINVAL;
+ }
+
+ return level;
+}
+
+static int
+a7cc_clk_add_opp(struct clk_hw *hw, struct device *dev, unsigned long max_rate)
+{
+ unsigned long rate = 0;
+ int level, uv, j = 1;
+ long ret;
+ struct clk_init_data *clk_intd = (struct clk_init_data *)hw->init;
+ struct clk_vdd_class *vdd = clk_intd->vdd_class;
+
+ if (IS_ERR_OR_NULL(dev)) {
+ pr_err("%s: Invalid parameters\n", __func__);
+ return -EINVAL;
+ }
+
+ while (1) {
+ rate = clk_intd->rate_max[j++];
+ level = find_vdd_level(clk_intd, rate);
+ if (level <= 0) {
+ pr_warn("clock-cpu: no corner for %lu.\n", rate);
+ return -EINVAL;
+ }
+
+ uv = vdd->vdd_uv[level];
+ if (uv < 0) {
+ pr_warn("clock-cpu: no uv for %lu.\n", rate);
+ return -EINVAL;
+ }
+
+ ret = dev_pm_opp_add(dev, rate, uv);
+ if (ret) {
+ pr_warn("clock-cpu: failed to add OPP for %lu\n", rate);
+			return ret;
+ }
+
+ if (rate >= max_rate)
+ break;
+ }
+
+ return 0;
+}
+
+static void a7cc_clk_print_opp_table(int a7_cpu)
+{
+ struct dev_pm_opp *oppfmax, *oppfmin;
+ unsigned long apc_fmax, apc_fmin;
+ u32 max_a7ss_index = apcs_clk.clkr.hw.init->num_rate_max;
+
+ apc_fmax = apcs_clk.clkr.hw.init->rate_max[max_a7ss_index - 1];
+ apc_fmin = apcs_clk.clkr.hw.init->rate_max[1];
+
+ rcu_read_lock();
+
+ oppfmax = dev_pm_opp_find_freq_exact(get_cpu_device(a7_cpu),
+ apc_fmax, true);
+ oppfmin = dev_pm_opp_find_freq_exact(get_cpu_device(a7_cpu),
+ apc_fmin, true);
+ pr_info("Clock_cpu: OPP voltage for %lu: %ld\n", apc_fmin,
+ dev_pm_opp_get_voltage(oppfmin));
+ pr_info("Clock_cpu: OPP voltage for %lu: %ld\n", apc_fmax,
+ dev_pm_opp_get_voltage(oppfmax));
+
+ rcu_read_unlock();
+}
+
+static void a7cc_clk_populate_opp_table(struct platform_device *pdev)
+{
+ unsigned long apc_fmax;
+ int cpu, a7_cpu = 0;
+ u32 max_a7ss_index = apcs_clk.clkr.hw.init->num_rate_max;
+
+ apc_fmax = apcs_clk.clkr.hw.init->rate_max[max_a7ss_index - 1];
+
+ for_each_possible_cpu(cpu) {
+ a7_cpu = cpu;
+ WARN(a7cc_clk_add_opp(&apcs_clk.clkr.hw, get_cpu_device(cpu),
+ apc_fmax),
+ "Failed to add OPP levels for apcs_clk\n");
+ }
+ /* One time print during bootup */
+ dev_info(&pdev->dev, "OPP tables populated (cpu %d)\n", a7_cpu);
+
+ a7cc_clk_print_opp_table(a7_cpu);
+}
+
+static int a7cc_driver_probe(struct platform_device *pdev)
+{
+ struct clk *clk;
+ void __iomem *base;
+ u32 opmode_regval, mode_regval;
+ struct resource *res;
+ struct clk_onecell_data *data;
+ struct device *dev = &pdev->dev;
+ struct device_node *of = pdev->dev.of_node;
+ int i, ret, speed_bin, version, cpu;
+ int num_clks = ARRAY_SIZE(cpu_clks_hws);
+ u32 a7cc_clk_init_rate = 0;
+ char prop_name[] = "qcom,speedX-bin-vX";
+ struct clk *ext_xo_clk;
+
+	/* The RPMH XO clock must be registered before this driver probes */
+ ext_xo_clk = devm_clk_get(dev, "xo_ao");
+ if (IS_ERR(ext_xo_clk)) {
+ if (PTR_ERR(ext_xo_clk) != -EPROBE_DEFER)
+ dev_err(dev, "Unable to get xo clock\n");
+ return PTR_ERR(ext_xo_clk);
+ }
+
+ /* Get speed bin information */
+ a7cc_clk_get_speed_bin(pdev, &speed_bin, &version);
+
+ /* Rail Regulator for apcs_pll */
+ vdd_cx.regulator[0] = devm_regulator_get(&pdev->dev, "vdd_dig_ao");
+ if (IS_ERR(vdd_cx.regulator[0])) {
+ if (!(PTR_ERR(vdd_cx.regulator[0]) == -EPROBE_DEFER))
+ dev_err(&pdev->dev,
+ "Unable to get vdd_dig_ao regulator\n");
+ return PTR_ERR(vdd_cx.regulator[0]);
+ }
+
+ /* Rail Regulator for APSS a7ss mux */
+ vdd_cpu.regulator[0] = devm_regulator_get(&pdev->dev, "cpu-vdd");
+ if (IS_ERR(vdd_cpu.regulator[0])) {
+ if (!(PTR_ERR(vdd_cpu.regulator[0]) == -EPROBE_DEFER))
+ dev_err(&pdev->dev,
+ "Unable to get cpu-vdd regulator\n");
+ return PTR_ERR(vdd_cpu.regulator[0]);
+ }
+
+ snprintf(prop_name, ARRAY_SIZE(prop_name),
+ "qcom,speed%d-bin-v%d", speed_bin, version);
+
+ ret = a7cc_clk_get_fmax_vdd_class(pdev,
+ (struct clk_init_data *)apcs_clk.clkr.hw.init, prop_name);
+ if (ret) {
+ dev_err(&pdev->dev,
+ "Can't get speed bin for apcs_clk. Falling back to zero\n");
+ ret = a7cc_clk_get_fmax_vdd_class(pdev,
+ (struct clk_init_data *)apcs_clk.clkr.hw.init,
+ "qcom,speed0-bin-v0");
+ if (ret) {
+ dev_err(&pdev->dev,
+ "Unable to get speed bin for apcs_clk freq-corner mapping info\n");
+ return ret;
+ }
+ }
+
+ ret = of_property_read_u32(of, "qcom,a7cc-init-rate",
+ &a7cc_clk_init_rate);
+ if (ret) {
+ dev_err(&pdev->dev,
+			"unable to find qcom,a7cc-init-rate property, ret=%d\n",
+ ret);
+ return -EINVAL;
+ }
+
+ res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "apcs_pll");
+ base = devm_ioremap_resource(dev, res);
+ if (IS_ERR(base)) {
+ dev_err(&pdev->dev, "Failed to map apcs_cpu_pll register base\n");
+ return PTR_ERR(base);
+ }
+
+ apcs_cpu_pll.clkr.regmap = devm_regmap_init_mmio(dev, base,
+ &cpu_regmap_config);
+ if (IS_ERR(apcs_cpu_pll.clkr.regmap)) {
+ dev_err(&pdev->dev, "Couldn't get regmap for apcs_cpu_pll\n");
+ return PTR_ERR(apcs_cpu_pll.clkr.regmap);
+ }
+
+ ret = of_property_read_u32(of, "qcom,rcg-reg-offset",
+ &apcs_clk.reg_offset);
+ if (ret) {
+ dev_err(&pdev->dev,
+			"unable to find qcom,rcg-reg-offset property, ret=%d\n",
+ ret);
+ return -EINVAL;
+ }
+
+ apcs_clk.clkr.regmap = apcs_cpu_pll.clkr.regmap;
+
+ /* Read PLLs OPMODE and mode register */
+ ret = regmap_read(apcs_cpu_pll.clkr.regmap, PLL_OPMODE_REG,
+ &opmode_regval);
+ if (ret)
+ return ret;
+
+ ret = regmap_read(apcs_cpu_pll.clkr.regmap, PLL_MODE_REG,
+ &mode_regval);
+ if (ret)
+ return ret;
+
+ /* Configure APSS PLL only if it is not enabled and running */
+ if (!(opmode_regval & PLL_OPMODE_RUN) &&
+ !(mode_regval & PLL_MODE_OUTCTRL))
+ clk_trion_pll_configure(&apcs_cpu_pll,
+ apcs_cpu_pll.clkr.regmap, &apcs_cpu_pll_config);
+
+ data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
+ if (!data)
+ return -ENOMEM;
+
+ data->clk_num = num_clks;
+
+ data->clks = devm_kzalloc(dev, num_clks * sizeof(struct clk *),
+ GFP_KERNEL);
+ if (!data->clks)
+ return -ENOMEM;
+
+ /* Register clocks with clock framework */
+ for (i = 0; i < num_clks; i++) {
+ clk = devm_clk_register(dev, cpu_clks_hws[i]);
+ if (IS_ERR(clk))
+ return PTR_ERR(clk);
+ data->clks[i] = clk;
+ }
+
+ ret = of_clk_add_provider(dev->of_node, of_clk_src_onecell_get, data);
+ if (ret) {
+		dev_err(&pdev->dev, "CPU clock driver registration failed\n");
+ return ret;
+ }
+
+ ret = clk_notifier_register(apcs_cpu_pll.clkr.hw.clk, &apcs_clk.clk_nb);
+ if (ret) {
+ dev_err(dev, "failed to register clock notifier: %d\n", ret);
+ return ret;
+ }
+
+ /* Put proxy vote for APSS PLL */
+ clk_prepare_enable(apcs_cpu_pll.clkr.hw.clk);
+
+ /* Set to TURBO boot frequency */
+ ret = clk_set_rate(apcs_clk.clkr.hw.clk, a7cc_clk_init_rate);
+ if (ret)
+ dev_err(&pdev->dev, "Unable to set init rate on apcs_clk\n");
+
+ /*
+ * We don't want the CPU clocks to be turned off at late init
+ * if CPUFREQ or HOTPLUG configs are disabled. So, bump up the
+ * refcount of these clocks. Any cpufreq/hotplug manager can assume
+ * that the clocks have already been prepared and enabled by the time
+ * they take over.
+ */
+
+ get_online_cpus();
+ for_each_online_cpu(cpu)
+ WARN(clk_prepare_enable(apcs_clk.clkr.hw.clk),
+ "Unable to turn on CPU clock\n");
+ put_online_cpus();
+
+ /* Remove proxy vote for APSS PLL */
+ clk_disable_unprepare(apcs_cpu_pll.clkr.hw.clk);
+
+ a7cc_clk_populate_opp_table(pdev);
+
+	dev_info(dev, "CPU clock driver probed successfully\n");
+
+ return ret;
+}
+
+static struct platform_driver a7_clk_driver = {
+ .probe = a7cc_driver_probe,
+ .driver = {
+ .name = "qcom-cpu-sdxpoorwills",
+ .of_match_table = match_table,
+ },
+};
+
+static int __init a7_clk_init(void)
+{
+ return platform_driver_register(&a7_clk_driver);
+}
+subsys_initcall(a7_clk_init);
+
+static void __exit a7_clk_exit(void)
+{
+ platform_driver_unregister(&a7_clk_driver);
+}
+module_exit(a7_clk_exit);
+
+MODULE_ALIAS("platform:cpu");
+MODULE_DESCRIPTION("A7 CPU clock Driver");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/clk/qcom/clk-cpu-osm.c b/drivers/clk/qcom/clk-cpu-osm.c
index ec4c83e..fb0b504 100644
--- a/drivers/clk/qcom/clk-cpu-osm.c
+++ b/drivers/clk/qcom/clk-cpu-osm.c
@@ -31,7 +31,9 @@
#include <linux/sched.h>
#include <linux/cpufreq.h>
#include <linux/slab.h>
+#include <linux/regulator/consumer.h>
#include <dt-bindings/clock/qcom,cpucc-sdm845.h>
+#include <dt-bindings/regulator/qcom,rpmh-regulator.h>
#include "common.h"
#include "clk-regmap.h"
@@ -53,6 +55,9 @@
#define VOLT_REG 0x114
#define CORE_DCVS_CTRL 0xbc
+#define EFUSE_SHIFT(v1) ((v1) ? 3 : 2)
+#define EFUSE_MASK 0x7
+
#define DCVS_PERF_STATE_DESIRED_REG_0_V1 0x780
#define DCVS_PERF_STATE_DESIRED_REG_0_V2 0x920
#define DCVS_PERF_STATE_DESIRED_REG(n, v1) \
@@ -65,6 +70,9 @@
(((v1) ? OSM_CYCLE_COUNTER_STATUS_REG_0_V1 \
: OSM_CYCLE_COUNTER_STATUS_REG_0_V2) + 4 * (n))
+static DEFINE_VDD_REGS_INIT(vdd_l3_mx_ao, 1);
+static DEFINE_VDD_REGS_INIT(vdd_pwrcl_mx_ao, 1);
+
struct osm_entry {
u16 virtual_corner;
u16 open_loop_volt;
@@ -85,6 +93,8 @@ struct clk_osm {
u64 total_cycle_counter;
u32 prev_cycle_counter;
u32 max_core_count;
+ u32 mx_turbo_freq;
+ unsigned int cpr_rc;
};
static bool is_sdm845v1;
@@ -131,6 +141,18 @@ static inline bool is_better_rate(unsigned long req, unsigned long best,
return (req <= new && new < best) || (best < req && best < new);
}
+static int clk_osm_search_table(struct osm_entry *table, int entries, long rate)
+{
+ int index;
+
+ for (index = 0; index < entries; index++) {
+ if (rate == table[index].frequency)
+ return index;
+ }
+
+ return -EINVAL;
+}
+
static long clk_osm_round_rate(struct clk_hw *hw, unsigned long rate,
unsigned long *parent_rate)
{
@@ -161,23 +183,62 @@ static long clk_osm_round_rate(struct clk_hw *hw, unsigned long rate,
return rrate;
}
-static int clk_osm_search_table(struct osm_entry *table, int entries, long rate)
+static int clk_cpu_set_rate(struct clk_hw *hw, unsigned long rate,
+ unsigned long parent_rate)
{
+ struct clk_osm *c = to_clk_osm(hw);
+ struct clk_hw *p_hw = clk_hw_get_parent(hw);
+ struct clk_osm *parent = to_clk_osm(p_hw);
int index = 0;
- for (index = 0; index < entries; index++) {
- if (rate == table[index].frequency)
- return index;
+ if (!c || !parent)
+ return -EINVAL;
+
+ index = clk_osm_search_table(parent->osm_table,
+ parent->num_entries, rate);
+ if (index < 0) {
+ pr_err("cannot set %s to %lu\n", clk_hw_get_name(hw), rate);
+ return -EINVAL;
}
- return -EINVAL;
+ clk_osm_write_reg(parent, index,
+ DCVS_PERF_STATE_DESIRED_REG(c->core_num,
+ is_sdm845v1));
+
+ /* Make sure the write goes through before proceeding */
+ clk_osm_mb(parent);
+
+ return 0;
}
-const struct clk_ops clk_ops_cpu_osm = {
- .round_rate = clk_osm_round_rate,
- .list_rate = clk_osm_list_rate,
- .debug_init = clk_debug_measure_add,
-};
+static unsigned long clk_cpu_recalc_rate(struct clk_hw *hw,
+ unsigned long parent_rate)
+{
+ struct clk_osm *c = to_clk_osm(hw);
+ struct clk_hw *p_hw = clk_hw_get_parent(hw);
+ struct clk_osm *parent = to_clk_osm(p_hw);
+ int index = 0;
+
+ if (!c || !parent)
+ return -EINVAL;
+
+ index = clk_osm_read_reg(parent,
+ DCVS_PERF_STATE_DESIRED_REG(c->core_num,
+ is_sdm845v1));
+ return parent->osm_table[index].frequency;
+}
+
+static long clk_cpu_round_rate(struct clk_hw *hw, unsigned long rate,
+ unsigned long *parent_rate)
+{
+ struct clk_hw *parent_hw = clk_hw_get_parent(hw);
+
+ if (!parent_hw)
+ return -EINVAL;
+
+ *parent_rate = rate;
+ return clk_hw_round_rate(parent_hw, rate);
+}
static int l3_clk_set_rate(struct clk_hw *hw, unsigned long rate,
unsigned long parent_rate)
@@ -233,7 +294,6 @@ static unsigned long l3_clk_recalc_rate(struct clk_hw *hw,
return cpuclk->osm_table[index].frequency;
}
-
static struct clk_ops clk_ops_l3_osm = {
.round_rate = clk_osm_round_rate,
.list_rate = clk_osm_list_rate,
@@ -242,18 +302,23 @@ static struct clk_ops clk_ops_l3_osm = {
.debug_init = clk_debug_measure_add,
};
+static struct clk_ops clk_ops_core;
+static struct clk_ops clk_ops_cpu_osm;
+
static struct clk_init_data osm_clks_init[] = {
[0] = {
.name = "l3_clk",
.parent_names = (const char *[]){ "bi_tcxo_ao" },
.num_parents = 1,
.ops = &clk_ops_l3_osm,
+ .vdd_class = &vdd_l3_mx_ao,
},
[1] = {
.name = "pwrcl_clk",
.parent_names = (const char *[]){ "bi_tcxo_ao" },
.num_parents = 1,
.ops = &clk_ops_cpu_osm,
+ .vdd_class = &vdd_pwrcl_mx_ao,
},
[2] = {
.name = "perfcl_clk",
@@ -287,7 +352,8 @@ static struct clk_osm cpu0_pwrcl_clk = {
.name = "cpu0_pwrcl_clk",
.parent_names = (const char *[]){ "pwrcl_clk" },
.num_parents = 1,
- .ops = &clk_dummy_ops,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_ops_core,
},
};
@@ -299,7 +365,8 @@ static struct clk_osm cpu1_pwrcl_clk = {
.name = "cpu1_pwrcl_clk",
.parent_names = (const char *[]){ "pwrcl_clk" },
.num_parents = 1,
- .ops = &clk_dummy_ops,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_ops_core,
},
};
@@ -311,7 +378,8 @@ static struct clk_osm cpu2_pwrcl_clk = {
.name = "cpu2_pwrcl_clk",
.parent_names = (const char *[]){ "pwrcl_clk" },
.num_parents = 1,
- .ops = &clk_dummy_ops,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_ops_core,
},
};
@@ -323,7 +391,8 @@ static struct clk_osm cpu3_pwrcl_clk = {
.name = "cpu3_pwrcl_clk",
.parent_names = (const char *[]){ "pwrcl_clk" },
.num_parents = 1,
- .ops = &clk_dummy_ops,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_ops_core,
},
};
@@ -335,7 +404,8 @@ static struct clk_osm cpu4_pwrcl_clk = {
.name = "cpu4_pwrcl_clk",
.parent_names = (const char *[]){ "pwrcl_clk" },
.num_parents = 1,
- .ops = &clk_dummy_ops,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_ops_core,
},
};
@@ -347,7 +417,8 @@ static struct clk_osm cpu5_pwrcl_clk = {
.name = "cpu5_pwrcl_clk",
.parent_names = (const char *[]){ "pwrcl_clk" },
.num_parents = 1,
- .ops = &clk_dummy_ops,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_ops_core,
},
};
@@ -366,7 +437,8 @@ static struct clk_osm cpu4_perfcl_clk = {
.name = "cpu4_perfcl_clk",
.parent_names = (const char *[]){ "perfcl_clk" },
.num_parents = 1,
- .ops = &clk_dummy_ops,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_ops_core,
},
};
@@ -378,7 +450,8 @@ static struct clk_osm cpu5_perfcl_clk = {
.name = "cpu5_perfcl_clk",
.parent_names = (const char *[]){ "perfcl_clk" },
.num_parents = 1,
- .ops = &clk_dummy_ops,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_ops_core,
},
};
@@ -390,7 +463,8 @@ static struct clk_osm cpu6_perfcl_clk = {
.name = "cpu6_perfcl_clk",
.parent_names = (const char *[]){ "perfcl_clk" },
.num_parents = 1,
- .ops = &clk_dummy_ops,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_ops_core,
},
};
@@ -402,7 +476,8 @@ static struct clk_osm cpu7_perfcl_clk = {
.name = "cpu7_perfcl_clk",
.parent_names = (const char *[]){ "perfcl_clk" },
.num_parents = 1,
- .ops = &clk_dummy_ops,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_ops_core,
},
};
@@ -515,13 +590,23 @@ static struct clk_osm *osm_configure_policy(struct cpufreq_policy *policy)
}
static void
-osm_set_index(struct clk_osm *c, unsigned int index, unsigned int num)
+osm_set_index(struct clk_osm *c, unsigned int index)
{
- clk_osm_write_reg(c, index,
- DCVS_PERF_STATE_DESIRED_REG(num, is_sdm845v1));
+ struct clk_hw *p_hw = clk_hw_get_parent(&c->hw);
+ struct clk_osm *parent = to_clk_osm(p_hw);
+ unsigned long rate = 0;
- /* Make sure the write goes through before proceeding */
- clk_osm_mb(c);
+ if (index >= OSM_TABLE_SIZE) {
+ pr_err("Passing an index (%u) that's greater than max (%d)\n",
+ index, OSM_TABLE_SIZE - 1);
+ return;
+ }
+
+ rate = parent->osm_table[index].frequency;
+ if (!rate)
+ return;
+
+ clk_set_rate(c->hw.clk, clk_round_rate(c->hw.clk, rate));
}
static int
@@ -529,7 +614,7 @@ osm_cpufreq_target_index(struct cpufreq_policy *policy, unsigned int index)
{
struct clk_osm *c = policy->driver_data;
- osm_set_index(c, index, c->core_num);
+ osm_set_index(c, index);
return 0;
}
@@ -849,6 +934,7 @@ static u64 clk_osm_get_cpu_cycle_counter(int cpu)
static int clk_osm_read_lut(struct platform_device *pdev, struct clk_osm *c)
{
u32 data, src, lval, i, j = OSM_TABLE_SIZE;
+ struct clk_vdd_class *vdd = osm_clks_init[c->cluster_num].vdd_class;
for (i = 0; i < OSM_TABLE_SIZE; i++) {
data = clk_osm_read_reg(c, FREQ_REG + i * OSM_REG_SIZE);
@@ -881,6 +967,29 @@ static int clk_osm_read_lut(struct platform_device *pdev, struct clk_osm *c)
if (!osm_clks_init[c->cluster_num].rate_max)
return -ENOMEM;
+ if (vdd) {
+ vdd->level_votes = devm_kcalloc(&pdev->dev, j,
+ sizeof(*vdd->level_votes), GFP_KERNEL);
+ if (!vdd->level_votes)
+ return -ENOMEM;
+
+ vdd->vdd_uv = devm_kcalloc(&pdev->dev, j, sizeof(*vdd->vdd_uv),
+ GFP_KERNEL);
+ if (!vdd->vdd_uv)
+ return -ENOMEM;
+
+ for (i = 0; i < j; i++) {
+ if (c->osm_table[i].frequency < c->mx_turbo_freq ||
+ (c->cpr_rc > 1))
+ vdd->vdd_uv[i] = RPMH_REGULATOR_LEVEL_NOM;
+ else
+ vdd->vdd_uv[i] = RPMH_REGULATOR_LEVEL_TURBO;
+ }
+ vdd->num_levels = j;
+ vdd->cur_level = j;
+ vdd->use_max_uV = true;
+ }
+
for (i = 0; i < j; i++)
osm_clks_init[c->cluster_num].rate_max[i] =
c->osm_table[i].frequency;
@@ -964,12 +1073,17 @@ static void clk_cpu_osm_driver_sdm670_fixup(void)
static int clk_cpu_osm_driver_probe(struct platform_device *pdev)
{
- int rc = 0, i;
- u32 val;
+ int rc = 0, i, cpu;
+ bool is_sdm670 = false;
+ u32 *array;
+ u32 val, pte_efuse;
+ void __iomem *vbase;
int num_clks = ARRAY_SIZE(osm_qcom_clk_hws);
struct clk *ext_xo_clk, *clk;
+ struct clk_osm *osm_clk;
struct device *dev = &pdev->dev;
struct clk_onecell_data *clk_data;
+ struct resource *res;
struct cpu_cycle_counter_cb cb = {
.get_cpu_cycle_counter = clk_osm_get_cpu_cycle_counter,
};
@@ -989,8 +1103,68 @@ static int clk_cpu_osm_driver_probe(struct platform_device *pdev)
"qcom,clk-cpu-osm");
if (of_device_is_compatible(pdev->dev.of_node,
- "qcom,clk-cpu-osm-sdm670"))
+ "qcom,clk-cpu-osm-sdm670")) {
+ is_sdm670 = true;
clk_cpu_osm_driver_sdm670_fixup();
+ }
+
+ res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "cpr_rc");
+ if (res) {
+ vbase = devm_ioremap(&pdev->dev, res->start,
+ resource_size(res));
+ if (!vbase) {
+ dev_err(&pdev->dev, "Unable to map cpr_rc base\n");
+ return -ENOMEM;
+ }
+ pte_efuse = readl_relaxed(vbase);
+ l3_clk.cpr_rc = pwrcl_clk.cpr_rc = perfcl_clk.cpr_rc =
+ ((pte_efuse >> EFUSE_SHIFT(is_sdm845v1 | is_sdm670))
+ & EFUSE_MASK);
+ pr_info("LOCAL_CPR_RC: %u\n", l3_clk.cpr_rc);
+ devm_iounmap(&pdev->dev, vbase);
+ } else {
+ dev_err(&pdev->dev,
+ "Unable to get platform resource for cpr_rc\n");
+ return -ENOMEM;
+ }
+
+ vdd_l3_mx_ao.regulator[0] = devm_regulator_get(&pdev->dev,
+ "vdd_l3_mx_ao");
+ if (IS_ERR(vdd_l3_mx_ao.regulator[0])) {
+ if (PTR_ERR(vdd_l3_mx_ao.regulator[0]) != -EPROBE_DEFER)
+ dev_err(&pdev->dev,
+ "Unable to get vdd_l3_mx_ao regulator\n");
+ return PTR_ERR(vdd_l3_mx_ao.regulator[0]);
+ }
+
+ vdd_pwrcl_mx_ao.regulator[0] = devm_regulator_get(&pdev->dev,
+ "vdd_pwrcl_mx_ao");
+ if (IS_ERR(vdd_pwrcl_mx_ao.regulator[0])) {
+ if (PTR_ERR(vdd_pwrcl_mx_ao.regulator[0]) != -EPROBE_DEFER)
+ dev_err(&pdev->dev,
+ "Unable to get vdd_pwrcl_mx_ao regulator\n");
+ return PTR_ERR(vdd_pwrcl_mx_ao.regulator[0]);
+ }
+
+ array = devm_kcalloc(&pdev->dev, MAX_CLUSTER_CNT, sizeof(*array),
+ GFP_KERNEL);
+ if (!array)
+ return -ENOMEM;
+
+ rc = of_property_read_u32_array(pdev->dev.of_node, "qcom,mx-turbo-freq",
+ array, MAX_CLUSTER_CNT);
+ if (rc) {
+ dev_err(&pdev->dev, "unable to find qcom,mx-turbo-freq property, rc=%d\n",
+ rc);
+ devm_kfree(&pdev->dev, array);
+ return rc;
+ }
+
+ l3_clk.mx_turbo_freq = array[l3_clk.cluster_num];
+ pwrcl_clk.mx_turbo_freq = array[pwrcl_clk.cluster_num];
+ perfcl_clk.mx_turbo_freq = array[perfcl_clk.cluster_num];
+
+ devm_kfree(&pdev->dev, array);
clk_data = devm_kzalloc(&pdev->dev, sizeof(struct clk_onecell_data),
GFP_KERNEL);
@@ -1014,11 +1188,11 @@ static int clk_cpu_osm_driver_probe(struct platform_device *pdev)
/* Check if per-core DCVS is enabled/not */
val = clk_osm_read_reg(&pwrcl_clk, CORE_DCVS_CTRL);
- if (val && BIT(0))
+ if (val & BIT(0))
pwrcl_clk.per_core_dcvs = true;
val = clk_osm_read_reg(&perfcl_clk, CORE_DCVS_CTRL);
- if (val && BIT(0))
+ if (val & BIT(0))
perfcl_clk.per_core_dcvs = true;
rc = clk_osm_read_lut(pdev, &l3_clk);
@@ -1046,6 +1220,16 @@ static int clk_cpu_osm_driver_probe(struct platform_device *pdev)
spin_lock_init(&pwrcl_clk.lock);
spin_lock_init(&perfcl_clk.lock);
+ clk_ops_core = clk_dummy_ops;
+ clk_ops_core.set_rate = clk_cpu_set_rate;
+ clk_ops_core.round_rate = clk_cpu_round_rate;
+ clk_ops_core.recalc_rate = clk_cpu_recalc_rate;
+
+ clk_ops_cpu_osm = clk_dummy_ops;
+ clk_ops_cpu_osm.round_rate = clk_osm_round_rate;
+ clk_ops_cpu_osm.list_rate = clk_osm_list_rate;
+ clk_ops_cpu_osm.debug_init = clk_debug_measure_add;
+
/* Register OSM l3, pwr and perf clocks with Clock Framework */
for (i = 0; i < num_clks; i++) {
if (!osm_qcom_clk_hws[i])
@@ -1076,6 +1260,16 @@ static int clk_cpu_osm_driver_probe(struct platform_device *pdev)
WARN(clk_prepare_enable(l3_misc_vote_clk.hw.clk),
"clk: Failed to enable misc clock for L3\n");
+ /*
+ * Explicitly call clk_prepare_enable on each online CPU's clock in
+ * order to place an implicit vote on MX
+ */
+ for_each_online_cpu(cpu) {
+ osm_clk = logical_cpu_to_clk(cpu);
+ if (!osm_clk)
+ return -EINVAL;
+ clk_prepare_enable(osm_clk->hw.clk);
+ }
populate_opp_table(pdev);
of_platform_populate(pdev->dev.of_node, NULL, NULL, &pdev->dev);
diff --git a/drivers/clk/qcom/clk-pll.h b/drivers/clk/qcom/clk-pll.h
index 9682799..70f7612 100644
--- a/drivers/clk/qcom/clk-pll.h
+++ b/drivers/clk/qcom/clk-pll.h
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2013, 2016, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2013, 2016-2017, The Linux Foundation. All rights reserved.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
@@ -83,6 +83,8 @@ struct pll_config {
u32 aux2_output_mask;
u32 early_output_mask;
u32 config_ctl_val;
+ u32 config_ctl_hi_val;
+ u32 config_ctl_hi1_val;
};
void clk_pll_configure_sr(struct clk_pll *pll, struct regmap *regmap,
diff --git a/drivers/clk/qcom/clk-rcg.h b/drivers/clk/qcom/clk-rcg.h
index 60758b4..aaf2324 100644
--- a/drivers/clk/qcom/clk-rcg.h
+++ b/drivers/clk/qcom/clk-rcg.h
@@ -188,4 +188,6 @@ extern const struct clk_ops clk_dp_ops;
extern int clk_rcg2_get_dfs_clock_rate(struct clk_rcg2 *clk,
struct device *dev, u8 rcg_flags);
+extern unsigned long
+clk_rcg2_calc_rate(unsigned long rate, u32 m, u32 n, u32 mode, u32 hid_div);
#endif
diff --git a/drivers/clk/qcom/clk-rcg2.c b/drivers/clk/qcom/clk-rcg2.c
index 8d5e527..35bcf5a 100644
--- a/drivers/clk/qcom/clk-rcg2.c
+++ b/drivers/clk/qcom/clk-rcg2.c
@@ -223,8 +223,8 @@ static void disable_unprepare_rcg_srcs(struct clk *curr, struct clk *new)
* rate = ----------- x ---
* hid_div n
*/
-static unsigned long
-calc_rate(unsigned long rate, u32 m, u32 n, u32 mode, u32 hid_div)
+unsigned long
+clk_rcg2_calc_rate(unsigned long rate, u32 m, u32 n, u32 mode, u32 hid_div)
{
if (hid_div) {
rate *= 2;
@@ -240,6 +240,7 @@ calc_rate(unsigned long rate, u32 m, u32 n, u32 mode, u32 hid_div)
return rate;
}
+EXPORT_SYMBOL(clk_rcg2_calc_rate);
static unsigned long
clk_rcg2_recalc_rate(struct clk_hw *hw, unsigned long parent_rate)
@@ -274,7 +275,7 @@ clk_rcg2_recalc_rate(struct clk_hw *hw, unsigned long parent_rate)
hid_div = cfg >> CFG_SRC_DIV_SHIFT;
hid_div &= mask;
- return calc_rate(parent_rate, m, n, mode, hid_div);
+ return clk_rcg2_calc_rate(parent_rate, m, n, mode, hid_div);
}
static int _freq_tbl_determine_rate(struct clk_hw *hw,
@@ -764,7 +765,7 @@ static int clk_edp_pixel_determine_rate(struct clk_hw *hw,
hid_div >>= CFG_SRC_DIV_SHIFT;
hid_div &= mask;
- req->rate = calc_rate(req->best_parent_rate,
+ req->rate = clk_rcg2_calc_rate(req->best_parent_rate,
frac->num, frac->den,
!!frac->den, hid_div);
return 0;
@@ -804,7 +805,7 @@ static int clk_byte_determine_rate(struct clk_hw *hw,
div = DIV_ROUND_UP((2 * parent_rate), req->rate) - 1;
div = min_t(u32, div, mask);
- req->rate = calc_rate(parent_rate, 0, 0, 0, div);
+ req->rate = clk_rcg2_calc_rate(parent_rate, 0, 0, 0, div);
return 0;
}
@@ -862,7 +863,7 @@ static int clk_byte2_determine_rate(struct clk_hw *hw,
div = DIV_ROUND_UP((2 * parent_rate), rate) - 1;
div = min_t(u32, div, mask);
- req->rate = calc_rate(parent_rate, 0, 0, 0, div);
+ req->rate = clk_rcg2_calc_rate(parent_rate, 0, 0, 0, div);
return 0;
}
@@ -1318,7 +1319,7 @@ int clk_rcg2_get_dfs_clock_rate(struct clk_rcg2 *clk, struct device *dev,
dfs_freq_tbl[i].n = n;
/* calculate the final frequency */
- calc_freq = calc_rate(prate, dfs_freq_tbl[i].m,
+ calc_freq = clk_rcg2_calc_rate(prate, dfs_freq_tbl[i].m,
dfs_freq_tbl[i].n, mode,
dfs_freq_tbl[i].pre_div);
diff --git a/drivers/clk/qcom/clk-regmap-mux-div.h b/drivers/clk/qcom/clk-regmap-mux-div.h
index 63a696a..6cd8d4f 100644
--- a/drivers/clk/qcom/clk-regmap-mux-div.h
+++ b/drivers/clk/qcom/clk-regmap-mux-div.h
@@ -42,6 +42,7 @@
* on and runs at only one rate.
* @parent_map: pointer to parent_map struct
* @clkr: handle between common and hardware-specific interfaces
+ * @clk_nb: clock notifier registered for clock rate change
*/
struct clk_regmap_mux_div {
@@ -57,6 +58,7 @@ struct clk_regmap_mux_div {
unsigned long safe_freq;
const struct parent_map *parent_map;
struct clk_regmap clkr;
+ struct notifier_block clk_nb;
};
extern const struct clk_ops clk_regmap_mux_div_ops;
diff --git a/drivers/clk/qcom/clk-rpmh.c b/drivers/clk/qcom/clk-rpmh.c
index 2109132..1f90d46 100644
--- a/drivers/clk/qcom/clk-rpmh.c
+++ b/drivers/clk/qcom/clk-rpmh.c
@@ -318,17 +318,30 @@ static const struct clk_rpmh_desc clk_rpmh_sdm845 = {
static const struct of_device_id clk_rpmh_match_table[] = {
{ .compatible = "qcom,rpmh-clk-sdm845", .data = &clk_rpmh_sdm845},
{ .compatible = "qcom,rpmh-clk-sdm670", .data = &clk_rpmh_sdm845},
+ { .compatible = "qcom,rpmh-clk-sdxpoorwills", .data = &clk_rpmh_sdm845},
{ }
};
MODULE_DEVICE_TABLE(of, clk_rpmh_match_table);
-static void clk_rpmh_sdm670_fixup_sdm670(void)
+static void clk_rpmh_sdm670_fixup(void)
{
sdm845_rpmh_clocks[RPMH_RF_CLK3] = NULL;
sdm845_rpmh_clocks[RPMH_RF_CLK3_A] = NULL;
}
-static int clk_rpmh_sdm670_fixup(struct platform_device *pdev)
+static void clk_rpmh_sdxpoorwills_fixup(void)
+{
+ sdm845_rpmh_clocks[RPMH_LN_BB_CLK2] = NULL;
+ sdm845_rpmh_clocks[RPMH_LN_BB_CLK2_A] = NULL;
+ sdm845_rpmh_clocks[RPMH_LN_BB_CLK3] = NULL;
+ sdm845_rpmh_clocks[RPMH_LN_BB_CLK3_A] = NULL;
+ sdm845_rpmh_clocks[RPMH_RF_CLK2] = NULL;
+ sdm845_rpmh_clocks[RPMH_RF_CLK2_A] = NULL;
+ sdm845_rpmh_clocks[RPMH_RF_CLK3] = NULL;
+ sdm845_rpmh_clocks[RPMH_RF_CLK3_A] = NULL;
+}
+
+static int clk_rpmh_fixup(struct platform_device *pdev)
{
const char *compat = NULL;
int compatlen = 0;
@@ -338,7 +351,9 @@ static int clk_rpmh_sdm670_fixup(struct platform_device *pdev)
return -EINVAL;
if (!strcmp(compat, "qcom,rpmh-clk-sdm670"))
- clk_rpmh_sdm670_fixup_sdm670();
+ clk_rpmh_sdm670_fixup();
+ else if (!strcmp(compat, "qcom,rpmh-clk-sdxpoorwills"))
+ clk_rpmh_sdxpoorwills_fixup();
return 0;
}
@@ -410,7 +425,7 @@ static int clk_rpmh_probe(struct platform_device *pdev)
goto err2;
}
- ret = clk_rpmh_sdm670_fixup(pdev);
+ ret = clk_rpmh_fixup(pdev);
if (ret)
return ret;
diff --git a/drivers/clk/qcom/dispcc-sdm845.c b/drivers/clk/qcom/dispcc-sdm845.c
index 3b13c9b..d4f27d7 100644
--- a/drivers/clk/qcom/dispcc-sdm845.c
+++ b/drivers/clk/qcom/dispcc-sdm845.c
@@ -390,7 +390,7 @@ static const struct freq_tbl ftbl_disp_cc_mdss_mdp_clk_src_sdm670[] = {
F(150000000, P_GPLL0_OUT_MAIN, 4, 0, 0),
F(171428571, P_GPLL0_OUT_MAIN, 3.5, 0, 0),
F(200000000, P_GPLL0_OUT_MAIN, 3, 0, 0),
- F(286670000, P_DISP_CC_PLL0_OUT_MAIN, 3, 0, 0),
+ F(286666667, P_DISP_CC_PLL0_OUT_MAIN, 3, 0, 0),
F(300000000, P_GPLL0_OUT_MAIN, 2, 0, 0),
F(344000000, P_DISP_CC_PLL0_OUT_MAIN, 2.5, 0, 0),
F(430000000, P_DISP_CC_PLL0_OUT_MAIN, 2, 0, 0),
diff --git a/drivers/clk/qcom/gcc-sdxpoorwills.c b/drivers/clk/qcom/gcc-sdxpoorwills.c
new file mode 100644
index 0000000..1b5cf61
--- /dev/null
+++ b/drivers/clk/qcom/gcc-sdxpoorwills.c
@@ -0,0 +1,1916 @@
+/*
+ * Copyright (c) 2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#define pr_fmt(fmt) "clk: %s: " fmt, __func__
+
+#include <linux/kernel.h>
+#include <linux/bitops.h>
+#include <linux/err.h>
+#include <linux/platform_device.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+#include <linux/clk.h>
+#include <linux/clk-provider.h>
+#include <linux/regmap.h>
+#include <linux/reset-controller.h>
+
+#include <dt-bindings/clock/qcom,gcc-sdxpoorwills.h>
+
+#include "common.h"
+#include "clk-regmap.h"
+#include "clk-pll.h"
+#include "clk-rcg.h"
+#include "clk-branch.h"
+#include "reset.h"
+
+#include "clk-alpha-pll.h"
+#include "vdd-level-sdm845.h"
+
+#define F(f, s, h, m, n) { (f), (s), (2 * (h) - 1), (m), (n) }
+
+static DEFINE_VDD_REGULATORS(vdd_cx, VDD_CX_NUM, 1, vdd_corner);
+
+enum {
+ P_BI_TCXO,
+ P_CORE_BI_PLL_TEST_SE,
+ P_GPLL0_OUT_EVEN,
+ P_GPLL0_OUT_MAIN,
+ P_GPLL4_OUT_EVEN,
+ P_SLEEP_CLK,
+};
+
+static const struct parent_map gcc_parent_map_0[] = {
+ { P_BI_TCXO, 0 },
+ { P_GPLL0_OUT_MAIN, 1 },
+ { P_GPLL0_OUT_EVEN, 6 },
+ { P_CORE_BI_PLL_TEST_SE, 7 },
+};
+
+static const char * const gcc_parent_names_0[] = {
+ "bi_tcxo",
+ "gpll0",
+ "gpll0_out_even",
+ "core_bi_pll_test_se",
+};
+
+static const struct parent_map gcc_parent_map_1[] = {
+ { P_BI_TCXO, 0 },
+ { P_CORE_BI_PLL_TEST_SE, 7 },
+};
+
+static const char * const gcc_parent_names_1[] = {
+ "bi_tcxo",
+ "core_bi_pll_test_se",
+};
+
+static const struct parent_map gcc_parent_map_2[] = {
+ { P_BI_TCXO, 0 },
+ { P_GPLL0_OUT_MAIN, 1 },
+ { P_SLEEP_CLK, 5 },
+ { P_GPLL0_OUT_EVEN, 6 },
+ { P_CORE_BI_PLL_TEST_SE, 7 },
+};
+
+static const char * const gcc_parent_names_2[] = {
+ "bi_tcxo",
+ "gpll0",
+ "core_pi_sleep_clk",
+ "gpll0_out_even",
+ "core_bi_pll_test_se",
+};
+
+static const struct parent_map gcc_parent_map_3[] = {
+ { P_BI_TCXO, 0 },
+ { P_SLEEP_CLK, 5 },
+ { P_CORE_BI_PLL_TEST_SE, 7 },
+};
+
+static const char * const gcc_parent_names_3[] = {
+ "bi_tcxo",
+ "core_pi_sleep_clk",
+ "core_bi_pll_test_se",
+};
+
+static const struct parent_map gcc_parent_map_4[] = {
+ { P_BI_TCXO, 0 },
+ { P_GPLL0_OUT_MAIN, 1 },
+ { P_GPLL4_OUT_EVEN, 2 },
+ { P_GPLL0_OUT_EVEN, 6 },
+ { P_CORE_BI_PLL_TEST_SE, 7 },
+};
+
+static const char * const gcc_parent_names_4[] = {
+ "bi_tcxo",
+ "gpll0",
+ "gpll4_out_even",
+ "gpll0_out_even",
+ "core_bi_pll_test_se",
+};
+
+static struct pll_vco trion_vco[] = {
+ { 249600000, 2000000000, 0 },
+};
+
+static struct clk_alpha_pll gpll0 = {
+ .offset = 0x0,
+ .vco_table = trion_vco,
+ .num_vco = ARRAY_SIZE(trion_vco),
+ .type = TRION_PLL,
+ .clkr = {
+ .enable_reg = 0x6d000,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gpll0",
+ .parent_names = (const char *[]){ "bi_tcxo" },
+ .num_parents = 1,
+ .ops = &clk_trion_fixed_pll_ops,
+ VDD_CX_FMAX_MAP4(
+ MIN, 615000000,
+ LOW, 1066000000,
+ LOW_L1, 1600000000,
+ NOMINAL, 2000000000),
+ },
+ },
+};
+
+static const struct clk_div_table post_div_table_trion_even[] = {
+ { 0x0, 1 },
+ { 0x1, 2 },
+ { 0x3, 4 },
+ { 0x7, 8 },
+ { }
+};
+
+static struct clk_alpha_pll_postdiv gpll0_out_even = {
+ .offset = 0x0,
+ .post_div_shift = 8,
+ .post_div_table = post_div_table_trion_even,
+ .num_post_div = ARRAY_SIZE(post_div_table_trion_even),
+ .width = 4,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gpll0_out_even",
+ .parent_names = (const char *[]){ "gpll0" },
+ .num_parents = 1,
+ .ops = &clk_trion_pll_postdiv_ops,
+ },
+};
+
+static struct clk_alpha_pll gpll4 = {
+ .offset = 0x76000,
+ .vco_table = trion_vco,
+ .num_vco = ARRAY_SIZE(trion_vco),
+ .type = TRION_PLL,
+ .clkr = {
+ .enable_reg = 0x6d000,
+ .enable_mask = BIT(4),
+ .hw.init = &(struct clk_init_data){
+ .name = "gpll4",
+ .parent_names = (const char *[]){ "bi_tcxo" },
+ .num_parents = 1,
+ .ops = &clk_trion_fixed_pll_ops,
+ VDD_CX_FMAX_MAP4(
+ MIN, 615000000,
+ LOW, 1066000000,
+ LOW_L1, 1600000000,
+ NOMINAL, 2000000000),
+ },
+ },
+};
+
+static struct clk_alpha_pll_postdiv gpll4_out_even = {
+ .offset = 0x76000,
+ .post_div_shift = 8,
+ .post_div_table = post_div_table_trion_even,
+ .num_post_div = ARRAY_SIZE(post_div_table_trion_even),
+ .width = 4,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gpll4_out_even",
+ .parent_names = (const char *[]){ "gpll4" },
+ .num_parents = 1,
+ .ops = &clk_trion_pll_postdiv_ops,
+ },
+};
+
+static const struct freq_tbl ftbl_gcc_blsp1_qup1_i2c_apps_clk_src[] = {
+ F(9600000, P_BI_TCXO, 2, 0, 0),
+ F(19200000, P_BI_TCXO, 1, 0, 0),
+ F(50000000, P_GPLL0_OUT_MAIN, 12, 0, 0),
+ { }
+};
+
+static struct clk_rcg2 gcc_blsp1_qup1_i2c_apps_clk_src = {
+ .cmd_rcgr = 0x11024,
+ .mnd_width = 8,
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_0,
+ .freq_tbl = ftbl_gcc_blsp1_qup1_i2c_apps_clk_src,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gcc_blsp1_qup1_i2c_apps_clk_src",
+ .parent_names = gcc_parent_names_0,
+ .num_parents = 4,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_rcg2_ops,
+ VDD_CX_FMAX_MAP3(
+ MIN, 9600000,
+ LOWER, 19200000,
+ LOW, 50000000),
+ },
+};
+
+static const struct freq_tbl ftbl_gcc_blsp1_qup1_spi_apps_clk_src[] = {
+ F(960000, P_BI_TCXO, 10, 1, 2),
+ F(4800000, P_BI_TCXO, 4, 0, 0),
+ F(9600000, P_BI_TCXO, 2, 0, 0),
+ F(15000000, P_GPLL0_OUT_EVEN, 5, 1, 4),
+ F(19200000, P_BI_TCXO, 1, 0, 0),
+ F(24000000, P_GPLL0_OUT_MAIN, 12.5, 1, 2),
+ F(25000000, P_GPLL0_OUT_MAIN, 12, 1, 2),
+ F(50000000, P_GPLL0_OUT_MAIN, 12, 0, 0),
+ { }
+};
+
+static struct clk_rcg2 gcc_blsp1_qup1_spi_apps_clk_src = {
+ .cmd_rcgr = 0x1100c,
+ .mnd_width = 8,
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_0,
+ .freq_tbl = ftbl_gcc_blsp1_qup1_spi_apps_clk_src,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gcc_blsp1_qup1_spi_apps_clk_src",
+ .parent_names = gcc_parent_names_0,
+ .num_parents = 4,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_rcg2_ops,
+ VDD_CX_FMAX_MAP4(
+ MIN, 9600000,
+ LOWER, 19200000,
+ LOW, 25000000,
+ NOMINAL, 50000000),
+ },
+};
+
+static struct clk_rcg2 gcc_blsp1_qup2_i2c_apps_clk_src = {
+ .cmd_rcgr = 0x13024,
+ .mnd_width = 8,
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_0,
+ .freq_tbl = ftbl_gcc_blsp1_qup1_i2c_apps_clk_src,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gcc_blsp1_qup2_i2c_apps_clk_src",
+ .parent_names = gcc_parent_names_0,
+ .num_parents = 4,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_rcg2_ops,
+ VDD_CX_FMAX_MAP3(
+ MIN, 9600000,
+ LOWER, 19200000,
+ LOW, 50000000),
+ },
+};
+
+static struct clk_rcg2 gcc_blsp1_qup2_spi_apps_clk_src = {
+ .cmd_rcgr = 0x1300c,
+ .mnd_width = 8,
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_0,
+ .freq_tbl = ftbl_gcc_blsp1_qup1_spi_apps_clk_src,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gcc_blsp1_qup2_spi_apps_clk_src",
+ .parent_names = gcc_parent_names_0,
+ .num_parents = 4,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_rcg2_ops,
+ VDD_CX_FMAX_MAP4(
+ MIN, 9600000,
+ LOWER, 19200000,
+ LOW, 25000000,
+ NOMINAL, 50000000),
+ },
+};
+
+static struct clk_rcg2 gcc_blsp1_qup3_i2c_apps_clk_src = {
+ .cmd_rcgr = 0x15024,
+ .mnd_width = 8,
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_0,
+ .freq_tbl = ftbl_gcc_blsp1_qup1_i2c_apps_clk_src,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gcc_blsp1_qup3_i2c_apps_clk_src",
+ .parent_names = gcc_parent_names_0,
+ .num_parents = 4,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_rcg2_ops,
+ VDD_CX_FMAX_MAP3(
+ MIN, 9600000,
+ LOWER, 19200000,
+ LOW, 50000000),
+ },
+};
+
+static struct clk_rcg2 gcc_blsp1_qup3_spi_apps_clk_src = {
+ .cmd_rcgr = 0x1500c,
+ .mnd_width = 8,
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_0,
+ .freq_tbl = ftbl_gcc_blsp1_qup1_spi_apps_clk_src,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gcc_blsp1_qup3_spi_apps_clk_src",
+ .parent_names = gcc_parent_names_0,
+ .num_parents = 4,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_rcg2_ops,
+ VDD_CX_FMAX_MAP4(
+ MIN, 9600000,
+ LOWER, 19200000,
+ LOW, 25000000,
+ NOMINAL, 50000000),
+ },
+};
+
+static struct clk_rcg2 gcc_blsp1_qup4_i2c_apps_clk_src = {
+ .cmd_rcgr = 0x17024,
+ .mnd_width = 8,
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_0,
+ .freq_tbl = ftbl_gcc_blsp1_qup1_i2c_apps_clk_src,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gcc_blsp1_qup4_i2c_apps_clk_src",
+ .parent_names = gcc_parent_names_0,
+ .num_parents = 4,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_rcg2_ops,
+ VDD_CX_FMAX_MAP3(
+ MIN, 9600000,
+ LOWER, 19200000,
+ LOW, 50000000),
+ },
+};
+
+static struct clk_rcg2 gcc_blsp1_qup4_spi_apps_clk_src = {
+ .cmd_rcgr = 0x1700c,
+ .mnd_width = 8,
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_0,
+ .freq_tbl = ftbl_gcc_blsp1_qup1_spi_apps_clk_src,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gcc_blsp1_qup4_spi_apps_clk_src",
+ .parent_names = gcc_parent_names_0,
+ .num_parents = 4,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_rcg2_ops,
+ VDD_CX_FMAX_MAP4(
+ MIN, 9600000,
+ LOWER, 19200000,
+ LOW, 25000000,
+ NOMINAL, 50000000),
+ },
+};
+
+static const struct freq_tbl ftbl_gcc_blsp1_uart1_apps_clk_src[] = {
+ F(3686400, P_GPLL0_OUT_EVEN, 1, 192, 15625),
+ F(7372800, P_GPLL0_OUT_EVEN, 1, 384, 15625),
+ F(9600000, P_BI_TCXO, 2, 0, 0),
+ F(14745600, P_GPLL0_OUT_EVEN, 1, 768, 15625),
+ F(16000000, P_GPLL0_OUT_EVEN, 1, 4, 75),
+ F(19200000, P_BI_TCXO, 1, 0, 0),
+ F(19354839, P_GPLL0_OUT_MAIN, 15.5, 1, 2),
+ F(20000000, P_GPLL0_OUT_MAIN, 15, 1, 2),
+ F(20689655, P_GPLL0_OUT_MAIN, 14.5, 1, 2),
+ F(21428571, P_GPLL0_OUT_MAIN, 14, 1, 2),
+ F(22222222, P_GPLL0_OUT_MAIN, 13.5, 1, 2),
+ F(23076923, P_GPLL0_OUT_MAIN, 13, 1, 2),
+ F(24000000, P_GPLL0_OUT_MAIN, 5, 1, 5),
+ F(25000000, P_GPLL0_OUT_MAIN, 12, 1, 2),
+ F(26086957, P_GPLL0_OUT_MAIN, 11.5, 1, 2),
+ F(27272727, P_GPLL0_OUT_MAIN, 11, 1, 2),
+ F(28571429, P_GPLL0_OUT_MAIN, 10.5, 1, 2),
+ F(32000000, P_GPLL0_OUT_MAIN, 1, 4, 75),
+ F(40000000, P_GPLL0_OUT_MAIN, 15, 0, 0),
+ F(46400000, P_GPLL0_OUT_MAIN, 1, 29, 375),
+ F(48000000, P_GPLL0_OUT_MAIN, 12.5, 0, 0),
+ F(51200000, P_GPLL0_OUT_MAIN, 1, 32, 375),
+ F(56000000, P_GPLL0_OUT_MAIN, 1, 7, 75),
+ F(58982400, P_GPLL0_OUT_MAIN, 1, 1536, 15625),
+ F(60000000, P_GPLL0_OUT_MAIN, 10, 0, 0),
+ F(63157895, P_GPLL0_OUT_MAIN, 9.5, 0, 0),
+ { }
+};
+
+static struct clk_rcg2 gcc_blsp1_uart1_apps_clk_src = {
+ .cmd_rcgr = 0x1200c,
+ .mnd_width = 16,
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_0,
+ .freq_tbl = ftbl_gcc_blsp1_uart1_apps_clk_src,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gcc_blsp1_uart1_apps_clk_src",
+ .parent_names = gcc_parent_names_0,
+ .num_parents = 4,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_rcg2_ops,
+ VDD_CX_FMAX_MAP4(
+ MIN, 9600000,
+ LOWER, 19200000,
+ LOW, 48000000,
+ NOMINAL, 63157895),
+ },
+};
+
+static struct clk_rcg2 gcc_blsp1_uart2_apps_clk_src = {
+ .cmd_rcgr = 0x1400c,
+ .mnd_width = 16,
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_0,
+ .freq_tbl = ftbl_gcc_blsp1_uart1_apps_clk_src,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gcc_blsp1_uart2_apps_clk_src",
+ .parent_names = gcc_parent_names_0,
+ .num_parents = 4,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_rcg2_ops,
+ VDD_CX_FMAX_MAP4(
+ MIN, 9600000,
+ LOWER, 19200000,
+ LOW, 48000000,
+ NOMINAL, 63157895),
+ },
+};
+
+static struct clk_rcg2 gcc_blsp1_uart3_apps_clk_src = {
+ .cmd_rcgr = 0x1600c,
+ .mnd_width = 16,
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_0,
+ .freq_tbl = ftbl_gcc_blsp1_uart1_apps_clk_src,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gcc_blsp1_uart3_apps_clk_src",
+ .parent_names = gcc_parent_names_0,
+ .num_parents = 4,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_rcg2_ops,
+ VDD_CX_FMAX_MAP4(
+ MIN, 9600000,
+ LOWER, 19200000,
+ LOW, 48000000,
+ NOMINAL, 63157895),
+ },
+};
+
+static struct clk_rcg2 gcc_blsp1_uart4_apps_clk_src = {
+ .cmd_rcgr = 0x1800c,
+ .mnd_width = 16,
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_0,
+ .freq_tbl = ftbl_gcc_blsp1_uart1_apps_clk_src,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gcc_blsp1_uart4_apps_clk_src",
+ .parent_names = gcc_parent_names_0,
+ .num_parents = 4,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_rcg2_ops,
+ VDD_CX_FMAX_MAP4(
+ MIN, 9600000,
+ LOWER, 19200000,
+ LOW, 48000000,
+ NOMINAL, 63157895),
+ },
+};
+
+static const struct freq_tbl ftbl_gcc_cpuss_ahb_clk_src[] = {
+ F(19200000, P_BI_TCXO, 1, 0, 0),
+ F(50000000, P_GPLL0_OUT_EVEN, 6, 0, 0),
+ F(100000000, P_GPLL0_OUT_MAIN, 6, 0, 0),
+ F(133333333, P_GPLL0_OUT_MAIN, 4.5, 0, 0),
+ { }
+};
+
+static struct clk_rcg2 gcc_cpuss_ahb_clk_src = {
+ .cmd_rcgr = 0x24010,
+ .mnd_width = 0,
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_0,
+ .freq_tbl = ftbl_gcc_cpuss_ahb_clk_src,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gcc_cpuss_ahb_clk_src",
+ .parent_names = gcc_parent_names_0,
+ .num_parents = 4,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_rcg2_ops,
+ VDD_CX_FMAX_MAP4(
+ MIN, 19200000,
+ LOWER, 50000000,
+ NOMINAL, 100000000,
+ HIGH, 133333333),
+ },
+};
+
+static const struct freq_tbl ftbl_gcc_cpuss_rbcpr_clk_src[] = {
+ F(19200000, P_BI_TCXO, 1, 0, 0),
+ F(50000000, P_GPLL0_OUT_MAIN, 12, 0, 0),
+ { }
+};
+
+static struct clk_rcg2 gcc_cpuss_rbcpr_clk_src = {
+ .cmd_rcgr = 0x2402c,
+ .mnd_width = 0,
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_0,
+ .freq_tbl = ftbl_gcc_cpuss_rbcpr_clk_src,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gcc_cpuss_rbcpr_clk_src",
+ .parent_names = gcc_parent_names_0,
+ .num_parents = 4,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_rcg2_ops,
+ VDD_CX_FMAX_MAP2(
+ MIN, 19200000,
+ NOMINAL, 50000000),
+ },
+};
+
+static const struct freq_tbl ftbl_gcc_emac_clk_src[] = {
+ F(19200000, P_BI_TCXO, 1, 0, 0),
+ F(50000000, P_GPLL0_OUT_EVEN, 6, 0, 0),
+ F(125000000, P_GPLL4_OUT_EVEN, 4, 0, 0),
+ F(250000000, P_GPLL4_OUT_EVEN, 2, 0, 0),
+ { }
+};
+
+static struct clk_rcg2 gcc_emac_clk_src = {
+ .cmd_rcgr = 0x47020,
+ .mnd_width = 8,
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_4,
+ .freq_tbl = ftbl_gcc_emac_clk_src,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gcc_emac_clk_src",
+ .parent_names = gcc_parent_names_4,
+ .num_parents = 5,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_rcg2_ops,
+ VDD_CX_FMAX_MAP4(
+ MIN, 19200000,
+ LOWER, 50000000,
+ LOW, 125000000,
+ NOMINAL, 250000000),
+ },
+};
+
+static struct clk_rcg2 gcc_emac_ptp_clk_src = {
+ .cmd_rcgr = 0x47038,
+ .mnd_width = 0,
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_4,
+ .freq_tbl = ftbl_gcc_emac_clk_src,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gcc_emac_ptp_clk_src",
+ .parent_names = gcc_parent_names_4,
+ .num_parents = 5,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_rcg2_ops,
+ VDD_CX_FMAX_MAP4(
+ MIN, 19200000,
+ LOWER, 50000000,
+ LOW, 125000000,
+ NOMINAL, 250000000),
+ },
+};
+
+static const struct freq_tbl ftbl_gcc_gp1_clk_src[] = {
+ F(19200000, P_BI_TCXO, 1, 0, 0),
+ F(25000000, P_GPLL0_OUT_EVEN, 12, 0, 0),
+ F(50000000, P_GPLL0_OUT_EVEN, 6, 0, 0),
+ F(100000000, P_GPLL0_OUT_MAIN, 6, 0, 0),
+ F(200000000, P_GPLL0_OUT_MAIN, 3, 0, 0),
+ { }
+};
+
+static struct clk_rcg2 gcc_gp1_clk_src = {
+ .cmd_rcgr = 0x2b004,
+ .mnd_width = 8,
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_2,
+ .freq_tbl = ftbl_gcc_gp1_clk_src,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gcc_gp1_clk_src",
+ .parent_names = gcc_parent_names_2,
+ .num_parents = 5,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_rcg2_ops,
+ VDD_CX_FMAX_MAP4(
+ MIN, 19200000,
+ LOWER, 50000000,
+ LOW, 100000000,
+ NOMINAL, 200000000),
+ },
+};
+
+static struct clk_rcg2 gcc_gp2_clk_src = {
+ .cmd_rcgr = 0x2c004,
+ .mnd_width = 8,
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_2,
+ .freq_tbl = ftbl_gcc_gp1_clk_src,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gcc_gp2_clk_src",
+ .parent_names = gcc_parent_names_2,
+ .num_parents = 5,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_rcg2_ops,
+ VDD_CX_FMAX_MAP4(
+ MIN, 19200000,
+ LOWER, 50000000,
+ LOW, 100000000,
+ NOMINAL, 200000000),
+ },
+};
+
+static struct clk_rcg2 gcc_gp3_clk_src = {
+ .cmd_rcgr = 0x2d004,
+ .mnd_width = 8,
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_2,
+ .freq_tbl = ftbl_gcc_gp1_clk_src,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gcc_gp3_clk_src",
+ .parent_names = gcc_parent_names_2,
+ .num_parents = 5,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_rcg2_ops,
+ VDD_CX_FMAX_MAP4(
+ MIN, 19200000,
+ LOWER, 50000000,
+ LOW, 100000000,
+ NOMINAL, 200000000),
+ },
+};
+
+static const struct freq_tbl ftbl_gcc_pcie_aux_phy_clk_src[] = {
+ F(19200000, P_BI_TCXO, 1, 0, 0),
+ { }
+};
+
+static struct clk_rcg2 gcc_pcie_aux_phy_clk_src = {
+ .cmd_rcgr = 0x37030,
+ .mnd_width = 16,
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_3,
+ .freq_tbl = ftbl_gcc_pcie_aux_phy_clk_src,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gcc_pcie_aux_phy_clk_src",
+ .parent_names = gcc_parent_names_3,
+ .num_parents = 3,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_rcg2_ops,
+ VDD_CX_FMAX_MAP1(
+ MIN, 19200000),
+ },
+};
+
+static const struct freq_tbl ftbl_gcc_pcie_phy_refgen_clk_src[] = {
+ F(19200000, P_BI_TCXO, 1, 0, 0),
+ F(100000000, P_GPLL0_OUT_MAIN, 6, 0, 0),
+ { }
+};
+
+static struct clk_rcg2 gcc_pcie_phy_refgen_clk_src = {
+ .cmd_rcgr = 0x39010,
+ .mnd_width = 0,
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_0,
+ .freq_tbl = ftbl_gcc_pcie_phy_refgen_clk_src,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gcc_pcie_phy_refgen_clk_src",
+ .parent_names = gcc_parent_names_0,
+ .num_parents = 4,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_rcg2_ops,
+ VDD_CX_FMAX_MAP2(
+ MIN, 19200000,
+ LOW, 100000000),
+ },
+};
+
+static const struct freq_tbl ftbl_gcc_pdm2_clk_src[] = {
+ F(9600000, P_BI_TCXO, 2, 0, 0),
+ F(19200000, P_BI_TCXO, 1, 0, 0),
+ F(60000000, P_GPLL0_OUT_MAIN, 10, 0, 0),
+ { }
+};
+
+static struct clk_rcg2 gcc_pdm2_clk_src = {
+ .cmd_rcgr = 0x19010,
+ .mnd_width = 0,
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_0,
+ .freq_tbl = ftbl_gcc_pdm2_clk_src,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gcc_pdm2_clk_src",
+ .parent_names = gcc_parent_names_0,
+ .num_parents = 4,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_rcg2_ops,
+ VDD_CX_FMAX_MAP3(
+ MIN, 9600000,
+ LOWER, 19200000,
+ LOW, 60000000),
+ },
+};
+
+static struct clk_rcg2 gcc_sdcc1_apps_clk_src = {
+ .cmd_rcgr = 0xf00c,
+ .mnd_width = 8,
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_0,
+ .freq_tbl = ftbl_gcc_gp1_clk_src,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gcc_sdcc1_apps_clk_src",
+ .parent_names = gcc_parent_names_0,
+ .num_parents = 4,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_rcg2_ops,
+ VDD_CX_FMAX_MAP4(
+ MIN, 19200000,
+ LOWER, 50000000,
+ LOW, 100000000,
+ NOMINAL, 200000000),
+ },
+};
+
+static struct clk_rcg2 gcc_spmi_fetcher_clk_src = {
+ .cmd_rcgr = 0x3f00c,
+ .mnd_width = 0,
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_1,
+ .freq_tbl = ftbl_gcc_pcie_aux_phy_clk_src,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gcc_spmi_fetcher_clk_src",
+ .parent_names = gcc_parent_names_1,
+ .num_parents = 2,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_rcg2_ops,
+ VDD_CX_FMAX_MAP1(
+ MIN, 19200000),
+ },
+};
+
+static const struct freq_tbl ftbl_gcc_usb30_master_clk_src[] = {
+ F(50000000, P_GPLL0_OUT_EVEN, 6, 0, 0),
+ F(75000000, P_GPLL0_OUT_EVEN, 4, 0, 0),
+ F(100000000, P_GPLL0_OUT_MAIN, 6, 0, 0),
+ F(200000000, P_GPLL0_OUT_MAIN, 3, 0, 0),
+ F(240000000, P_GPLL0_OUT_MAIN, 2.5, 0, 0),
+ { }
+};
+
+static struct clk_rcg2 gcc_usb30_master_clk_src = {
+ .cmd_rcgr = 0xb01c,
+ .mnd_width = 8,
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_0,
+ .freq_tbl = ftbl_gcc_usb30_master_clk_src,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gcc_usb30_master_clk_src",
+ .parent_names = gcc_parent_names_0,
+ .num_parents = 4,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_rcg2_ops,
+ VDD_CX_FMAX_MAP5(
+ MIN, 50000000,
+ LOWER, 75000000,
+ LOW, 100000000,
+ NOMINAL, 200000000,
+ HIGH, 240000000),
+ },
+};
+
+static const struct freq_tbl ftbl_gcc_usb30_mock_utmi_clk_src[] = {
+ F(19200000, P_BI_TCXO, 1, 0, 0),
+ F(40000000, P_GPLL0_OUT_EVEN, 7.5, 0, 0),
+ F(60000000, P_GPLL0_OUT_MAIN, 10, 0, 0),
+ { }
+};
+
+static struct clk_rcg2 gcc_usb30_mock_utmi_clk_src = {
+ .cmd_rcgr = 0xb034,
+ .mnd_width = 0,
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_0,
+ .freq_tbl = ftbl_gcc_usb30_mock_utmi_clk_src,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gcc_usb30_mock_utmi_clk_src",
+ .parent_names = gcc_parent_names_0,
+ .num_parents = 4,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_rcg2_ops,
+ VDD_CX_FMAX_MAP3(
+ MIN, 19200000,
+ LOWER, 40000000,
+ LOW, 60000000),
+ },
+};
+
+static const struct freq_tbl ftbl_gcc_usb3_phy_aux_clk_src[] = {
+ F(1000000, P_BI_TCXO, 1, 5, 96),
+ F(19200000, P_BI_TCXO, 1, 0, 0),
+ { }
+};
+
+static struct clk_rcg2 gcc_usb3_phy_aux_clk_src = {
+ .cmd_rcgr = 0xb05c,
+ .mnd_width = 16,
+ .hid_width = 5,
+ .parent_map = gcc_parent_map_3,
+ .freq_tbl = ftbl_gcc_usb3_phy_aux_clk_src,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gcc_usb3_phy_aux_clk_src",
+ .parent_names = gcc_parent_names_3,
+ .num_parents = 3,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_rcg2_ops,
+ VDD_CX_FMAX_MAP1(
+ MIN, 19200000),
+ },
+};
+
+static struct clk_branch gcc_blsp1_ahb_clk = {
+ .halt_reg = 0x10004,
+ .halt_check = BRANCH_HALT_VOTED,
+ .clkr = {
+ .enable_reg = 0x6d004,
+ .enable_mask = BIT(25),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_blsp1_ahb_clk",
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_blsp1_qup1_i2c_apps_clk = {
+ .halt_reg = 0x11008,
+ .halt_check = BRANCH_HALT,
+ .clkr = {
+ .enable_reg = 0x11008,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_blsp1_qup1_i2c_apps_clk",
+ .parent_names = (const char *[]){
+ "gcc_blsp1_qup1_i2c_apps_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_blsp1_qup1_spi_apps_clk = {
+ .halt_reg = 0x11004,
+ .halt_check = BRANCH_HALT,
+ .clkr = {
+ .enable_reg = 0x11004,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_blsp1_qup1_spi_apps_clk",
+ .parent_names = (const char *[]){
+ "gcc_blsp1_qup1_spi_apps_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_blsp1_qup2_i2c_apps_clk = {
+ .halt_reg = 0x13008,
+ .halt_check = BRANCH_HALT,
+ .clkr = {
+ .enable_reg = 0x13008,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_blsp1_qup2_i2c_apps_clk",
+ .parent_names = (const char *[]){
+ "gcc_blsp1_qup2_i2c_apps_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_blsp1_qup2_spi_apps_clk = {
+ .halt_reg = 0x13004,
+ .halt_check = BRANCH_HALT,
+ .clkr = {
+ .enable_reg = 0x13004,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_blsp1_qup2_spi_apps_clk",
+ .parent_names = (const char *[]){
+ "gcc_blsp1_qup2_spi_apps_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_blsp1_qup3_i2c_apps_clk = {
+ .halt_reg = 0x15008,
+ .halt_check = BRANCH_HALT,
+ .clkr = {
+ .enable_reg = 0x15008,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_blsp1_qup3_i2c_apps_clk",
+ .parent_names = (const char *[]){
+ "gcc_blsp1_qup3_i2c_apps_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_blsp1_qup3_spi_apps_clk = {
+ .halt_reg = 0x15004,
+ .halt_check = BRANCH_HALT,
+ .clkr = {
+ .enable_reg = 0x15004,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_blsp1_qup3_spi_apps_clk",
+ .parent_names = (const char *[]){
+ "gcc_blsp1_qup3_spi_apps_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_blsp1_qup4_i2c_apps_clk = {
+ .halt_reg = 0x17008,
+ .halt_check = BRANCH_HALT,
+ .clkr = {
+ .enable_reg = 0x17008,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_blsp1_qup4_i2c_apps_clk",
+ .parent_names = (const char *[]){
+ "gcc_blsp1_qup4_i2c_apps_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_blsp1_qup4_spi_apps_clk = {
+ .halt_reg = 0x17004,
+ .halt_check = BRANCH_HALT,
+ .clkr = {
+ .enable_reg = 0x17004,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_blsp1_qup4_spi_apps_clk",
+ .parent_names = (const char *[]){
+ "gcc_blsp1_qup4_spi_apps_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_blsp1_sleep_clk = {
+ .halt_reg = 0x10008,
+ .halt_check = BRANCH_HALT_VOTED,
+ .clkr = {
+ .enable_reg = 0x6d004,
+ .enable_mask = BIT(26),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_blsp1_sleep_clk",
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_blsp1_uart1_apps_clk = {
+ .halt_reg = 0x12004,
+ .halt_check = BRANCH_HALT,
+ .clkr = {
+ .enable_reg = 0x12004,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_blsp1_uart1_apps_clk",
+ .parent_names = (const char *[]){
+ "gcc_blsp1_uart1_apps_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_blsp1_uart2_apps_clk = {
+ .halt_reg = 0x14004,
+ .halt_check = BRANCH_HALT,
+ .clkr = {
+ .enable_reg = 0x14004,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_blsp1_uart2_apps_clk",
+ .parent_names = (const char *[]){
+ "gcc_blsp1_uart2_apps_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_blsp1_uart3_apps_clk = {
+ .halt_reg = 0x16004,
+ .halt_check = BRANCH_HALT,
+ .clkr = {
+ .enable_reg = 0x16004,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_blsp1_uart3_apps_clk",
+ .parent_names = (const char *[]){
+ "gcc_blsp1_uart3_apps_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_blsp1_uart4_apps_clk = {
+ .halt_reg = 0x18004,
+ .halt_check = BRANCH_HALT,
+ .clkr = {
+ .enable_reg = 0x18004,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_blsp1_uart4_apps_clk",
+ .parent_names = (const char *[]){
+ "gcc_blsp1_uart4_apps_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_boot_rom_ahb_clk = {
+ .halt_reg = 0x1c004,
+ .halt_check = BRANCH_HALT_VOTED,
+ .hwcg_reg = 0x1c004,
+ .hwcg_bit = 1,
+ .clkr = {
+ .enable_reg = 0x6d004,
+ .enable_mask = BIT(10),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_boot_rom_ahb_clk",
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_ce1_ahb_clk = {
+ .halt_reg = 0x2100c,
+ .halt_check = BRANCH_HALT_VOTED,
+ .hwcg_reg = 0x2100c,
+ .hwcg_bit = 1,
+ .clkr = {
+ .enable_reg = 0x6d004,
+ .enable_mask = BIT(3),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_ce1_ahb_clk",
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_ce1_axi_clk = {
+ .halt_reg = 0x21008,
+ .halt_check = BRANCH_HALT_VOTED,
+ .clkr = {
+ .enable_reg = 0x6d004,
+ .enable_mask = BIT(4),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_ce1_axi_clk",
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_ce1_clk = {
+ .halt_reg = 0x21004,
+ .halt_check = BRANCH_HALT_VOTED,
+ .clkr = {
+ .enable_reg = 0x6d004,
+ .enable_mask = BIT(5),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_ce1_clk",
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_cpuss_ahb_clk = {
+ .halt_reg = 0x24000,
+ .halt_check = BRANCH_HALT_VOTED,
+ .clkr = {
+ .enable_reg = 0x6d004,
+ .enable_mask = BIT(21),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_cpuss_ahb_clk",
+ .parent_names = (const char *[]){
+ "gcc_cpuss_ahb_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT | CLK_IS_CRITICAL,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_cpuss_gnoc_clk = {
+ .halt_reg = 0x24004,
+ .halt_check = BRANCH_HALT_VOTED,
+ .hwcg_reg = 0x24004,
+ .hwcg_bit = 1,
+ .clkr = {
+ .enable_reg = 0x6d004,
+ .enable_mask = BIT(22),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_cpuss_gnoc_clk",
+ .flags = CLK_IS_CRITICAL,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_cpuss_rbcpr_clk = {
+ .halt_reg = 0x24008,
+ .halt_check = BRANCH_HALT,
+ .clkr = {
+ .enable_reg = 0x24008,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_cpuss_rbcpr_clk",
+ .parent_names = (const char *[]){
+ "gcc_cpuss_rbcpr_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_eth_axi_clk = {
+ .halt_reg = 0x4701c,
+ .halt_check = BRANCH_HALT,
+ .clkr = {
+ .enable_reg = 0x4701c,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_eth_axi_clk",
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_eth_ptp_clk = {
+ .halt_reg = 0x47018,
+ .halt_check = BRANCH_HALT,
+ .clkr = {
+ .enable_reg = 0x47018,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_eth_ptp_clk",
+ .parent_names = (const char *[]){
+ "gcc_emac_ptp_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_eth_rgmii_clk = {
+ .halt_reg = 0x47010,
+ .halt_check = BRANCH_HALT,
+ .clkr = {
+ .enable_reg = 0x47010,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_eth_rgmii_clk",
+ .parent_names = (const char *[]){
+ "gcc_emac_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_eth_slave_ahb_clk = {
+ .halt_reg = 0x47014,
+ .halt_check = BRANCH_HALT,
+ .hwcg_reg = 0x47014,
+ .hwcg_bit = 1,
+ .clkr = {
+ .enable_reg = 0x47014,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_eth_slave_ahb_clk",
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_gp1_clk = {
+ .halt_reg = 0x2b000,
+ .halt_check = BRANCH_HALT,
+ .clkr = {
+ .enable_reg = 0x2b000,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_gp1_clk",
+ .parent_names = (const char *[]){
+ "gcc_gp1_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_gp2_clk = {
+ .halt_reg = 0x2c000,
+ .halt_check = BRANCH_HALT,
+ .clkr = {
+ .enable_reg = 0x2c000,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_gp2_clk",
+ .parent_names = (const char *[]){
+ "gcc_gp2_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_gp3_clk = {
+ .halt_reg = 0x2d000,
+ .halt_check = BRANCH_HALT,
+ .clkr = {
+ .enable_reg = 0x2d000,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_gp3_clk",
+ .parent_names = (const char *[]){
+ "gcc_gp3_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_mss_cfg_ahb_clk = {
+ .halt_reg = 0x40000,
+ .halt_check = BRANCH_HALT,
+ .hwcg_reg = 0x40000,
+ .hwcg_bit = 1,
+ .clkr = {
+ .enable_reg = 0x40000,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_mss_cfg_ahb_clk",
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_gate2 gcc_mss_gpll0_div_clk_src = {
+ .udelay = 500,
+ .clkr = {
+ .enable_reg = 0x6d004,
+ .enable_mask = BIT(17),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_mss_gpll0_div_clk_src",
+ .ops = &clk_gate2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_mss_snoc_axi_clk = {
+ .halt_reg = 0x40148,
+ .halt_check = BRANCH_HALT,
+ .clkr = {
+ .enable_reg = 0x40148,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_mss_snoc_axi_clk",
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_pcie_aux_clk = {
+ .halt_reg = 0x37020,
+ .halt_check = BRANCH_HALT_VOTED,
+ .clkr = {
+ .enable_reg = 0x6d00c,
+ .enable_mask = BIT(3),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_pcie_aux_clk",
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_pcie_cfg_ahb_clk = {
+ .halt_reg = 0x3701c,
+ .halt_check = BRANCH_HALT_VOTED,
+ .hwcg_reg = 0x3701c,
+ .hwcg_bit = 1,
+ .clkr = {
+ .enable_reg = 0x6d00c,
+ .enable_mask = BIT(2),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_pcie_cfg_ahb_clk",
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_pcie_mstr_axi_clk = {
+ .halt_reg = 0x37018,
+ .halt_check = BRANCH_HALT_VOTED,
+ .clkr = {
+ .enable_reg = 0x6d00c,
+ .enable_mask = BIT(1),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_pcie_mstr_axi_clk",
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_pcie_phy_refgen_clk = {
+ .halt_reg = 0x39028,
+ .halt_check = BRANCH_HALT,
+ .clkr = {
+ .enable_reg = 0x39028,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_pcie_phy_refgen_clk",
+ .parent_names = (const char *[]){
+ "gcc_pcie_phy_refgen_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_pcie_pipe_clk = {
+ .halt_reg = 0x37028,
+ .halt_check = BRANCH_HALT_VOTED,
+ .clkr = {
+ .enable_reg = 0x6d00c,
+ .enable_mask = BIT(4),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_pcie_pipe_clk",
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_pcie_sleep_clk = {
+ .halt_reg = 0x37024,
+ .halt_check = BRANCH_HALT_VOTED,
+ .clkr = {
+ .enable_reg = 0x6d00c,
+ .enable_mask = BIT(6),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_pcie_sleep_clk",
+ .parent_names = (const char *[]){
+ "gcc_pcie_aux_phy_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_pcie_slv_axi_clk = {
+ .halt_reg = 0x37014,
+ .halt_check = BRANCH_HALT_VOTED,
+ .hwcg_reg = 0x37014,
+ .hwcg_bit = 1,
+ .clkr = {
+ .enable_reg = 0x6d00c,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_pcie_slv_axi_clk",
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_pcie_slv_q2a_axi_clk = {
+ .halt_reg = 0x37010,
+ .halt_check = BRANCH_HALT_VOTED,
+ .clkr = {
+ .enable_reg = 0x6d00c,
+ .enable_mask = BIT(5),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_pcie_slv_q2a_axi_clk",
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_pdm2_clk = {
+ .halt_reg = 0x1900c,
+ .halt_check = BRANCH_HALT,
+ .clkr = {
+ .enable_reg = 0x1900c,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_pdm2_clk",
+ .parent_names = (const char *[]){
+ "gcc_pdm2_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_pdm_ahb_clk = {
+ .halt_reg = 0x19004,
+ .halt_check = BRANCH_HALT,
+ .hwcg_reg = 0x19004,
+ .hwcg_bit = 1,
+ .clkr = {
+ .enable_reg = 0x19004,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_pdm_ahb_clk",
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_pdm_xo4_clk = {
+ .halt_reg = 0x19008,
+ .halt_check = BRANCH_HALT,
+ .clkr = {
+ .enable_reg = 0x19008,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_pdm_xo4_clk",
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_prng_ahb_clk = {
+ .halt_reg = 0x1a004,
+ .halt_check = BRANCH_HALT_VOTED,
+ .clkr = {
+ .enable_reg = 0x6d004,
+ .enable_mask = BIT(13),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_prng_ahb_clk",
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_sdcc1_ahb_clk = {
+ .halt_reg = 0xf008,
+ .halt_check = BRANCH_HALT,
+ .clkr = {
+ .enable_reg = 0xf008,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_sdcc1_ahb_clk",
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_sdcc1_apps_clk = {
+ .halt_reg = 0xf004,
+ .halt_check = BRANCH_HALT,
+ .clkr = {
+ .enable_reg = 0xf004,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_sdcc1_apps_clk",
+ .parent_names = (const char *[]){
+ "gcc_sdcc1_apps_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_spmi_fetcher_ahb_clk = {
+ .halt_reg = 0x3f008,
+ .halt_check = BRANCH_HALT,
+ .clkr = {
+ .enable_reg = 0x3f008,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_spmi_fetcher_ahb_clk",
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_spmi_fetcher_clk = {
+ .halt_reg = 0x3f004,
+ .halt_check = BRANCH_HALT,
+ .clkr = {
+ .enable_reg = 0x3f004,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_spmi_fetcher_clk",
+ .parent_names = (const char *[]){
+ "gcc_spmi_fetcher_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_sys_noc_cpuss_ahb_clk = {
+ .halt_reg = 0x400c,
+ .halt_check = BRANCH_HALT_VOTED,
+ .clkr = {
+ .enable_reg = 0x6d004,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_sys_noc_cpuss_ahb_clk",
+ .parent_names = (const char *[]){
+ "gcc_cpuss_ahb_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT | CLK_IS_CRITICAL,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_sys_noc_usb3_clk = {
+ .halt_reg = 0x4018,
+ .halt_check = BRANCH_HALT,
+ .clkr = {
+ .enable_reg = 0x4018,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_sys_noc_usb3_clk",
+ .parent_names = (const char *[]){
+ "gcc_usb30_master_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_usb30_master_clk = {
+ .halt_reg = 0xb010,
+ .halt_check = BRANCH_HALT,
+ .clkr = {
+ .enable_reg = 0xb010,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_usb30_master_clk",
+ .parent_names = (const char *[]){
+ "gcc_usb30_master_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_usb30_mock_utmi_clk = {
+ .halt_reg = 0xb018,
+ .halt_check = BRANCH_HALT,
+ .clkr = {
+ .enable_reg = 0xb018,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_usb30_mock_utmi_clk",
+ .parent_names = (const char *[]){
+ "gcc_usb30_mock_utmi_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_usb30_sleep_clk = {
+ .halt_reg = 0xb014,
+ .halt_check = BRANCH_HALT,
+ .clkr = {
+ .enable_reg = 0xb014,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_usb30_sleep_clk",
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_usb3_phy_aux_clk = {
+ .halt_reg = 0xb050,
+ .halt_check = BRANCH_HALT,
+ .clkr = {
+ .enable_reg = 0xb050,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_usb3_phy_aux_clk",
+ .parent_names = (const char *[]){
+ "gcc_usb3_phy_aux_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_usb3_phy_pipe_clk = {
+ .halt_reg = 0xb054,
+ .halt_check = BRANCH_HALT,
+ .clkr = {
+ .enable_reg = 0xb054,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_usb3_phy_pipe_clk",
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_usb_phy_cfg_ahb2phy_clk = {
+ .halt_reg = 0xe004,
+ .halt_check = BRANCH_HALT,
+ .hwcg_reg = 0xe004,
+ .hwcg_bit = 1,
+ .clkr = {
+ .enable_reg = 0xe004,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_usb_phy_cfg_ahb2phy_clk",
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_regmap *gcc_sdxpoorwills_clocks[] = {
+ [GCC_BLSP1_AHB_CLK] = &gcc_blsp1_ahb_clk.clkr,
+ [GCC_BLSP1_QUP1_I2C_APPS_CLK] = &gcc_blsp1_qup1_i2c_apps_clk.clkr,
+ [GCC_BLSP1_QUP1_I2C_APPS_CLK_SRC] =
+ &gcc_blsp1_qup1_i2c_apps_clk_src.clkr,
+ [GCC_BLSP1_QUP1_SPI_APPS_CLK] = &gcc_blsp1_qup1_spi_apps_clk.clkr,
+ [GCC_BLSP1_QUP1_SPI_APPS_CLK_SRC] =
+ &gcc_blsp1_qup1_spi_apps_clk_src.clkr,
+ [GCC_BLSP1_QUP2_I2C_APPS_CLK] = &gcc_blsp1_qup2_i2c_apps_clk.clkr,
+ [GCC_BLSP1_QUP2_I2C_APPS_CLK_SRC] =
+ &gcc_blsp1_qup2_i2c_apps_clk_src.clkr,
+ [GCC_BLSP1_QUP2_SPI_APPS_CLK] = &gcc_blsp1_qup2_spi_apps_clk.clkr,
+ [GCC_BLSP1_QUP2_SPI_APPS_CLK_SRC] =
+ &gcc_blsp1_qup2_spi_apps_clk_src.clkr,
+ [GCC_BLSP1_QUP3_I2C_APPS_CLK] = &gcc_blsp1_qup3_i2c_apps_clk.clkr,
+ [GCC_BLSP1_QUP3_I2C_APPS_CLK_SRC] =
+ &gcc_blsp1_qup3_i2c_apps_clk_src.clkr,
+ [GCC_BLSP1_QUP3_SPI_APPS_CLK] = &gcc_blsp1_qup3_spi_apps_clk.clkr,
+ [GCC_BLSP1_QUP3_SPI_APPS_CLK_SRC] =
+ &gcc_blsp1_qup3_spi_apps_clk_src.clkr,
+ [GCC_BLSP1_QUP4_I2C_APPS_CLK] = &gcc_blsp1_qup4_i2c_apps_clk.clkr,
+ [GCC_BLSP1_QUP4_I2C_APPS_CLK_SRC] =
+ &gcc_blsp1_qup4_i2c_apps_clk_src.clkr,
+ [GCC_BLSP1_QUP4_SPI_APPS_CLK] = &gcc_blsp1_qup4_spi_apps_clk.clkr,
+ [GCC_BLSP1_QUP4_SPI_APPS_CLK_SRC] =
+ &gcc_blsp1_qup4_spi_apps_clk_src.clkr,
+ [GCC_BLSP1_SLEEP_CLK] = &gcc_blsp1_sleep_clk.clkr,
+ [GCC_BLSP1_UART1_APPS_CLK] = &gcc_blsp1_uart1_apps_clk.clkr,
+ [GCC_BLSP1_UART1_APPS_CLK_SRC] = &gcc_blsp1_uart1_apps_clk_src.clkr,
+ [GCC_BLSP1_UART2_APPS_CLK] = &gcc_blsp1_uart2_apps_clk.clkr,
+ [GCC_BLSP1_UART2_APPS_CLK_SRC] = &gcc_blsp1_uart2_apps_clk_src.clkr,
+ [GCC_BLSP1_UART3_APPS_CLK] = &gcc_blsp1_uart3_apps_clk.clkr,
+ [GCC_BLSP1_UART3_APPS_CLK_SRC] = &gcc_blsp1_uart3_apps_clk_src.clkr,
+ [GCC_BLSP1_UART4_APPS_CLK] = &gcc_blsp1_uart4_apps_clk.clkr,
+ [GCC_BLSP1_UART4_APPS_CLK_SRC] = &gcc_blsp1_uart4_apps_clk_src.clkr,
+ [GCC_BOOT_ROM_AHB_CLK] = &gcc_boot_rom_ahb_clk.clkr,
+ [GCC_CE1_AHB_CLK] = &gcc_ce1_ahb_clk.clkr,
+ [GCC_CE1_AXI_CLK] = &gcc_ce1_axi_clk.clkr,
+ [GCC_CE1_CLK] = &gcc_ce1_clk.clkr,
+ [GCC_CPUSS_AHB_CLK] = &gcc_cpuss_ahb_clk.clkr,
+ [GCC_CPUSS_AHB_CLK_SRC] = &gcc_cpuss_ahb_clk_src.clkr,
+ [GCC_CPUSS_GNOC_CLK] = &gcc_cpuss_gnoc_clk.clkr,
+ [GCC_CPUSS_RBCPR_CLK] = &gcc_cpuss_rbcpr_clk.clkr,
+ [GCC_CPUSS_RBCPR_CLK_SRC] = &gcc_cpuss_rbcpr_clk_src.clkr,
+ [GCC_EMAC_CLK_SRC] = &gcc_emac_clk_src.clkr,
+ [GCC_EMAC_PTP_CLK_SRC] = &gcc_emac_ptp_clk_src.clkr,
+ [GCC_ETH_AXI_CLK] = &gcc_eth_axi_clk.clkr,
+ [GCC_ETH_PTP_CLK] = &gcc_eth_ptp_clk.clkr,
+ [GCC_ETH_RGMII_CLK] = &gcc_eth_rgmii_clk.clkr,
+ [GCC_ETH_SLAVE_AHB_CLK] = &gcc_eth_slave_ahb_clk.clkr,
+ [GCC_GP1_CLK] = &gcc_gp1_clk.clkr,
+ [GCC_GP1_CLK_SRC] = &gcc_gp1_clk_src.clkr,
+ [GCC_GP2_CLK] = &gcc_gp2_clk.clkr,
+ [GCC_GP2_CLK_SRC] = &gcc_gp2_clk_src.clkr,
+ [GCC_GP3_CLK] = &gcc_gp3_clk.clkr,
+ [GCC_GP3_CLK_SRC] = &gcc_gp3_clk_src.clkr,
+ [GCC_MSS_CFG_AHB_CLK] = &gcc_mss_cfg_ahb_clk.clkr,
+ [GCC_MSS_GPLL0_DIV_CLK_SRC] = &gcc_mss_gpll0_div_clk_src.clkr,
+ [GCC_MSS_SNOC_AXI_CLK] = &gcc_mss_snoc_axi_clk.clkr,
+ [GCC_PCIE_AUX_CLK] = &gcc_pcie_aux_clk.clkr,
+ [GCC_PCIE_AUX_PHY_CLK_SRC] = &gcc_pcie_aux_phy_clk_src.clkr,
+ [GCC_PCIE_CFG_AHB_CLK] = &gcc_pcie_cfg_ahb_clk.clkr,
+ [GCC_PCIE_MSTR_AXI_CLK] = &gcc_pcie_mstr_axi_clk.clkr,
+ [GCC_PCIE_PHY_REFGEN_CLK] = &gcc_pcie_phy_refgen_clk.clkr,
+ [GCC_PCIE_PHY_REFGEN_CLK_SRC] = &gcc_pcie_phy_refgen_clk_src.clkr,
+ [GCC_PCIE_PIPE_CLK] = &gcc_pcie_pipe_clk.clkr,
+ [GCC_PCIE_SLEEP_CLK] = &gcc_pcie_sleep_clk.clkr,
+ [GCC_PCIE_SLV_AXI_CLK] = &gcc_pcie_slv_axi_clk.clkr,
+ [GCC_PCIE_SLV_Q2A_AXI_CLK] = &gcc_pcie_slv_q2a_axi_clk.clkr,
+ [GCC_PDM2_CLK] = &gcc_pdm2_clk.clkr,
+ [GCC_PDM2_CLK_SRC] = &gcc_pdm2_clk_src.clkr,
+ [GCC_PDM_AHB_CLK] = &gcc_pdm_ahb_clk.clkr,
+ [GCC_PDM_XO4_CLK] = &gcc_pdm_xo4_clk.clkr,
+ [GCC_PRNG_AHB_CLK] = &gcc_prng_ahb_clk.clkr,
+ [GCC_SDCC1_AHB_CLK] = &gcc_sdcc1_ahb_clk.clkr,
+ [GCC_SDCC1_APPS_CLK] = &gcc_sdcc1_apps_clk.clkr,
+ [GCC_SDCC1_APPS_CLK_SRC] = &gcc_sdcc1_apps_clk_src.clkr,
+ [GCC_SPMI_FETCHER_AHB_CLK] = &gcc_spmi_fetcher_ahb_clk.clkr,
+ [GCC_SPMI_FETCHER_CLK] = &gcc_spmi_fetcher_clk.clkr,
+ [GCC_SPMI_FETCHER_CLK_SRC] = &gcc_spmi_fetcher_clk_src.clkr,
+ [GCC_SYS_NOC_CPUSS_AHB_CLK] = &gcc_sys_noc_cpuss_ahb_clk.clkr,
+ [GCC_SYS_NOC_USB3_CLK] = &gcc_sys_noc_usb3_clk.clkr,
+ [GCC_USB30_MASTER_CLK] = &gcc_usb30_master_clk.clkr,
+ [GCC_USB30_MASTER_CLK_SRC] = &gcc_usb30_master_clk_src.clkr,
+ [GCC_USB30_MOCK_UTMI_CLK] = &gcc_usb30_mock_utmi_clk.clkr,
+ [GCC_USB30_MOCK_UTMI_CLK_SRC] = &gcc_usb30_mock_utmi_clk_src.clkr,
+ [GCC_USB30_SLEEP_CLK] = &gcc_usb30_sleep_clk.clkr,
+ [GCC_USB3_PHY_AUX_CLK] = &gcc_usb3_phy_aux_clk.clkr,
+ [GCC_USB3_PHY_AUX_CLK_SRC] = &gcc_usb3_phy_aux_clk_src.clkr,
+ [GCC_USB3_PHY_PIPE_CLK] = &gcc_usb3_phy_pipe_clk.clkr,
+ [GCC_USB_PHY_CFG_AHB2PHY_CLK] = &gcc_usb_phy_cfg_ahb2phy_clk.clkr,
+ [GPLL0] = &gpll0.clkr,
+ [GPLL0_OUT_EVEN] = &gpll0_out_even.clkr,
+ [GPLL4] = &gpll4.clkr,
+ [GPLL4_OUT_EVEN] = &gpll4_out_even.clkr,
+};
+
+static const struct qcom_reset_map gcc_sdxpoorwills_resets[] = {
+ [GCC_BLSP1_QUP1_BCR] = { 0x11000 },
+ [GCC_BLSP1_QUP2_BCR] = { 0x13000 },
+ [GCC_BLSP1_QUP3_BCR] = { 0x15000 },
+ [GCC_BLSP1_QUP4_BCR] = { 0x17000 },
+ [GCC_BLSP1_UART2_BCR] = { 0x14000 },
+ [GCC_BLSP1_UART3_BCR] = { 0x16000 },
+ [GCC_BLSP1_UART4_BCR] = { 0x18000 },
+ [GCC_CE1_BCR] = { 0x21000 },
+ [GCC_EMAC_BCR] = { 0x47000 },
+ [GCC_PCIE_BCR] = { 0x37000 },
+ [GCC_PCIE_PHY_BCR] = { 0x39000 },
+ [GCC_PDM_BCR] = { 0x19000 },
+ [GCC_PRNG_BCR] = { 0x1a000 },
+ [GCC_SDCC1_BCR] = { 0xf000 },
+ [GCC_SPMI_FETCHER_BCR] = { 0x3f000 },
+ [GCC_USB30_BCR] = { 0xb000 },
+ [GCC_USB_PHY_CFG_AHB2PHY_BCR] = { 0xe000 },
+};
+
+static const struct regmap_config gcc_sdxpoorwills_regmap_config = {
+ .reg_bits = 32,
+ .reg_stride = 4,
+ .val_bits = 32,
+ .max_register = 0x9b040,
+ .fast_io = true,
+};
+
+static const struct qcom_cc_desc gcc_sdxpoorwills_desc = {
+ .config = &gcc_sdxpoorwills_regmap_config,
+ .clks = gcc_sdxpoorwills_clocks,
+ .num_clks = ARRAY_SIZE(gcc_sdxpoorwills_clocks),
+ .resets = gcc_sdxpoorwills_resets,
+ .num_resets = ARRAY_SIZE(gcc_sdxpoorwills_resets),
+};
+
+static const struct of_device_id gcc_sdxpoorwills_match_table[] = {
+ { .compatible = "qcom,gcc-sdxpoorwills" },
+ { }
+};
+MODULE_DEVICE_TABLE(of, gcc_sdxpoorwills_match_table);
+
+static int gcc_sdxpoorwills_probe(struct platform_device *pdev)
+{
+ int ret = 0;
+ struct regmap *regmap;
+
+ regmap = qcom_cc_map(pdev, &gcc_sdxpoorwills_desc);
+ if (IS_ERR(regmap))
+ return PTR_ERR(regmap);
+
+ vdd_cx.regulator[0] = devm_regulator_get(&pdev->dev, "vdd_cx");
+ if (IS_ERR(vdd_cx.regulator[0])) {
+ if (PTR_ERR(vdd_cx.regulator[0]) != -EPROBE_DEFER)
+ dev_err(&pdev->dev,
+ "Unable to get vdd_cx regulator\n");
+ return PTR_ERR(vdd_cx.regulator[0]);
+ }
+
+ ret = qcom_cc_really_probe(pdev, &gcc_sdxpoorwills_desc, regmap);
+ if (ret) {
+ dev_err(&pdev->dev, "Failed to register GCC clocks\n");
+ return ret;
+ }
+
+ dev_info(&pdev->dev, "Registered GCC clocks\n");
+
+ return ret;
+}
+
+static struct platform_driver gcc_sdxpoorwills_driver = {
+ .probe = gcc_sdxpoorwills_probe,
+ .driver = {
+ .name = "gcc-sdxpoorwills",
+ .of_match_table = gcc_sdxpoorwills_match_table,
+ },
+};
+
+static int __init gcc_sdxpoorwills_init(void)
+{
+ return platform_driver_register(&gcc_sdxpoorwills_driver);
+}
+core_initcall(gcc_sdxpoorwills_init);
+
+static void __exit gcc_sdxpoorwills_exit(void)
+{
+ platform_driver_unregister(&gcc_sdxpoorwills_driver);
+}
+module_exit(gcc_sdxpoorwills_exit);
+
+MODULE_DESCRIPTION("QTI GCC SDXPOORWILLS Driver");
+MODULE_LICENSE("GPL v2");
+MODULE_ALIAS("platform:gcc-sdxpoorwills");
diff --git a/drivers/clk/qcom/mdss/mdss-dsi-pll-10nm.c b/drivers/clk/qcom/mdss/mdss-dsi-pll-10nm.c
index dd02a8f..97d0a0f 100644
--- a/drivers/clk/qcom/mdss/mdss-dsi-pll-10nm.c
+++ b/drivers/clk/qcom/mdss/mdss-dsi-pll-10nm.c
@@ -249,7 +249,7 @@ static inline int pclk_mux_read_sel(void *context, unsigned int reg,
if (rc)
pr_err("Failed to enable dsi pll resources, rc=%d\n", rc);
else
- *val = (MDSS_PLL_REG_R(rsc->pll_base, reg) & 0x3);
+ *val = (MDSS_PLL_REG_R(rsc->phy_base, reg) & 0x3);
(void)mdss_pll_resource_enable(rsc, false);
return rc;
@@ -741,12 +741,32 @@ static void vco_10nm_unprepare(struct clk_hw *hw)
pr_err("dsi pll resources not available\n");
return;
}
- pll->cached_cfg0 = MDSS_PLL_REG_R(pll->phy_base, PHY_CMN_CLK_CFG0);
- pll->cached_outdiv = MDSS_PLL_REG_R(pll->pll_base, PLL_PLL_OUTDIV_RATE);
- pr_debug("cfg0=%d,cfg1=%d, outdiv=%d\n", pll->cached_cfg0,
- pll->cached_cfg1, pll->cached_outdiv);
- pll->vco_cached_rate = clk_hw_get_rate(hw);
+ /*
+ * During unprepare in the continuous splash use case, we want the
+ * driver to pick all dividers itself instead of retaining the
+ * bootloader configuration.
+ */
+ if (!pll->handoff_resources) {
+ pll->cached_cfg0 = MDSS_PLL_REG_R(pll->phy_base,
+ PHY_CMN_CLK_CFG0);
+ pll->cached_outdiv = MDSS_PLL_REG_R(pll->pll_base,
+ PLL_PLL_OUTDIV_RATE);
+ pr_debug("cfg0=%d,cfg1=%d, outdiv=%d\n", pll->cached_cfg0,
+ pll->cached_cfg1, pll->cached_outdiv);
+
+ pll->vco_cached_rate = clk_hw_get_rate(hw);
+ }
+
+ /*
+ * When the continuous splash screen feature is enabled, we need to
+ * cache the mux configuration for the pixel_clk_src mux clock. The
+ * clock framework does not call back to re-configure the mux value if
+ * it does not change. For such use cases, we need to ensure that the
+ * cached value is programmed prior to the PLL being locked.
+ */
+ if (pll->handoff_resources)
+ pll->cached_cfg1 = MDSS_PLL_REG_R(pll->phy_base,
+ PHY_CMN_CLK_CFG1);
dsi_pll_disable(vco);
mdss_pll_resource_enable(pll, false);
}
@@ -1026,8 +1046,8 @@ static struct regmap_bus pll_regmap_bus = {
.reg_read = pll_reg_read,
};
-static struct regmap_bus pclk_mux_regmap_bus = {
- .reg_read = phy_reg_read,
+static struct regmap_bus pclk_src_mux_regmap_bus = {
+ .reg_read = pclk_mux_read_sel,
.reg_write = pclk_mux_write_sel,
};
@@ -1472,7 +1492,7 @@ int dsi_pll_clock_register_10nm(struct platform_device *pdev,
pll_res, &dsi_pll_10nm_config);
dsi0pll_pclk_mux.clkr.regmap = rmap;
- rmap = devm_regmap_init(&pdev->dev, &pclk_mux_regmap_bus,
+ rmap = devm_regmap_init(&pdev->dev, &pclk_src_mux_regmap_bus,
pll_res, &dsi_pll_10nm_config);
dsi0pll_pclk_src_mux.clkr.regmap = rmap;
rmap = devm_regmap_init(&pdev->dev, &mdss_mux_regmap_bus,
@@ -1510,11 +1530,11 @@ int dsi_pll_clock_register_10nm(struct platform_device *pdev,
pll_res, &dsi_pll_10nm_config);
dsi1pll_pclk_src.clkr.regmap = rmap;
- rmap = devm_regmap_init(&pdev->dev, &pclk_mux_regmap_bus,
+ rmap = devm_regmap_init(&pdev->dev, &mdss_mux_regmap_bus,
pll_res, &dsi_pll_10nm_config);
dsi1pll_pclk_mux.clkr.regmap = rmap;
- rmap = devm_regmap_init(&pdev->dev, &mdss_mux_regmap_bus,
+ rmap = devm_regmap_init(&pdev->dev, &pclk_src_mux_regmap_bus,
pll_res, &dsi_pll_10nm_config);
dsi1pll_pclk_src_mux.clkr.regmap = rmap;
rmap = devm_regmap_init(&pdev->dev, &mdss_mux_regmap_bus,
diff --git a/drivers/clk/samsung/clk-exynos5433.c b/drivers/clk/samsung/clk-exynos5433.c
index ea16086..2fe0573 100644
--- a/drivers/clk/samsung/clk-exynos5433.c
+++ b/drivers/clk/samsung/clk-exynos5433.c
@@ -2559,8 +2559,10 @@ static const struct samsung_fixed_rate_clock disp_fixed_clks[] __initconst = {
FRATE(0, "phyclk_mipidphy1_bitclkdiv8_phy", NULL, 0, 188000000),
FRATE(0, "phyclk_mipidphy1_rxclkesc0_phy", NULL, 0, 100000000),
/* PHY clocks from MIPI_DPHY0 */
- FRATE(0, "phyclk_mipidphy0_bitclkdiv8_phy", NULL, 0, 188000000),
- FRATE(0, "phyclk_mipidphy0_rxclkesc0_phy", NULL, 0, 100000000),
+ FRATE(CLK_PHYCLK_MIPIDPHY0_BITCLKDIV8_PHY, "phyclk_mipidphy0_bitclkdiv8_phy",
+ NULL, 0, 188000000),
+ FRATE(CLK_PHYCLK_MIPIDPHY0_RXCLKESC0_PHY, "phyclk_mipidphy0_rxclkesc0_phy",
+ NULL, 0, 100000000),
/* PHY clocks from HDMI_PHY */
FRATE(CLK_PHYCLK_HDMIPHY_TMDS_CLKO_PHY, "phyclk_hdmiphy_tmds_clko_phy",
NULL, 0, 300000000),
diff --git a/drivers/clk/sunxi-ng/ccu_common.c b/drivers/clk/sunxi-ng/ccu_common.c
index 51d4bac..01d0594 100644
--- a/drivers/clk/sunxi-ng/ccu_common.c
+++ b/drivers/clk/sunxi-ng/ccu_common.c
@@ -70,6 +70,11 @@ int sunxi_ccu_probe(struct device_node *node, void __iomem *reg,
goto err_clk_unreg;
reset = kzalloc(sizeof(*reset), GFP_KERNEL);
+ if (!reset) {
+ ret = -ENOMEM;
+ goto err_alloc_reset;
+ }
+
reset->rcdev.of_node = node;
reset->rcdev.ops = &ccu_reset_ops;
reset->rcdev.owner = THIS_MODULE;
@@ -85,6 +90,16 @@ int sunxi_ccu_probe(struct device_node *node, void __iomem *reg,
return 0;
err_of_clk_unreg:
+ kfree(reset);
+err_alloc_reset:
+ of_clk_del_provider(node);
err_clk_unreg:
+ while (--i >= 0) {
+ struct clk_hw *hw = desc->hw_clks->hws[i];
+
+ if (!hw)
+ continue;
+ clk_hw_unregister(hw);
+ }
return ret;
}
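The `sunxi_ccu_probe` hunk above adds a missing `kzalloc` NULL check and extends the error path so each `goto` label tears down exactly what was set up before the failure, in reverse order. A minimal userspace sketch of that goto-unwind idiom (struct and function names here are illustrative, not the driver's; `calloc` stands in for `kzalloc`):

```c
#include <assert.h>
#include <stdlib.h>

struct fake_hw {
	char *regmap;
	char *reset;
};

/* Probe in two steps; on failure, unwind only the steps already done. */
static int fake_probe(struct fake_hw *hw, int fail_reset_alloc)
{
	hw->regmap = calloc(1, 64);
	if (!hw->regmap)
		return -1;

	hw->reset = fail_reset_alloc ? NULL : calloc(1, 32);
	if (!hw->reset)
		goto err_free_regmap;	/* unwind the earlier step only */

	return 0;

err_free_regmap:
	free(hw->regmap);
	hw->regmap = NULL;
	return -1;
}

/* Exercise both the success and the failure path without leaking. */
static int probe_and_cleanup(int fail_reset_alloc)
{
	struct fake_hw hw = {0};
	int ret = fake_probe(&hw, fail_reset_alloc);

	if (ret == 0) {
		free(hw.reset);
		free(hw.regmap);
	}
	return ret;
}
```

The ordering mirrors the patch: the label that frees `reset` sits above the label that unregisters clocks, so a jump from any point releases only what that point had acquired.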
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index 062d297..e8c7af52 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -1245,8 +1245,6 @@ static int cpufreq_online(unsigned int cpu)
if (new_policy) {
/* related_cpus should at least include policy->cpus. */
cpumask_copy(policy->related_cpus, policy->cpus);
- /* Clear mask of registered CPUs */
- cpumask_clear(policy->real_cpus);
}
/*
diff --git a/drivers/cpuidle/lpm-levels.c b/drivers/cpuidle/lpm-levels.c
index 630cda2..0bff951 100644
--- a/drivers/cpuidle/lpm-levels.c
+++ b/drivers/cpuidle/lpm-levels.c
@@ -39,6 +39,7 @@
#include <soc/qcom/event_timer.h>
#include <soc/qcom/lpm-stats.h>
#include <soc/qcom/system_pm.h>
+#include <soc/qcom/minidump.h>
#include <asm/arch_timer.h>
#include <asm/suspend.h>
#include <asm/cpuidle.h>
@@ -121,9 +122,6 @@ static void cluster_prepare(struct lpm_cluster *cluster,
const struct cpumask *cpu, int child_idx, bool from_idle,
int64_t time);
-static bool menu_select;
-module_param_named(menu_select, menu_select, bool, 0664);
-
static int msm_pm_sleep_time_override;
module_param_named(sleep_time_override,
msm_pm_sleep_time_override, int, 0664);
@@ -644,7 +642,7 @@ static int cpu_power_select(struct cpuidle_device *dev,
next_wakeup_us = next_event_us - lvl_latency_us;
}
- if (!i) {
+ if (!i && !cpu_isolated(dev->cpu)) {
/*
* If the next_wake_us itself is not sufficient for
* deeper low power modes than clock gating do not
@@ -981,7 +979,7 @@ static int cluster_select(struct lpm_cluster *cluster, bool from_idle,
if (suspend_in_progress && from_idle && level->notify_rpm)
continue;
- if (level->is_reset && !system_sleep_allowed())
+ if (level->notify_rpm && !system_sleep_allowed())
continue;
best_level = i;
@@ -1356,7 +1354,8 @@ static int lpm_cpuidle_enter(struct cpuidle_device *dev,
struct lpm_cpu *cpu = per_cpu(cpu_lpm, dev->cpu);
bool success = false;
const struct cpumask *cpumask = get_cpu_mask(dev->cpu);
- int64_t start_time = ktime_to_ns(ktime_get()), end_time;
+ ktime_t start = ktime_get();
+ uint64_t start_time = ktime_to_ns(start), end_time;
struct power_params *pwr_params;
pwr_params = &cpu->levels[idx].pwr;
@@ -1381,9 +1380,7 @@ static int lpm_cpuidle_enter(struct cpuidle_device *dev,
cluster_unprepare(cpu->parent, cpumask, idx, true, end_time);
cpu_unprepare(cpu, idx, true);
sched_set_cpu_cstate(smp_processor_id(), 0, 0, 0);
- end_time = ktime_to_ns(ktime_get()) - start_time;
- do_div(end_time, 1000);
- dev->last_residency = end_time;
+ dev->last_residency = ktime_us_delta(ktime_get(), start);
update_history(dev, idx);
trace_cpu_idle_exit(idx, success);
local_irq_enable();
@@ -1642,6 +1639,7 @@ static int lpm_probe(struct platform_device *pdev)
int ret;
int size;
struct kobject *module_kobj = NULL;
+ struct md_region md_entry;
get_online_cpus();
lpm_root_node = lpm_of_parse_cluster(pdev);
@@ -1698,6 +1696,14 @@ static int lpm_probe(struct platform_device *pdev)
goto failed;
}
+ /* Add lpm_debug to Minidump */
+ strlcpy(md_entry.name, "KLPMDEBUG", sizeof(md_entry.name));
+ md_entry.virt_addr = (uintptr_t)lpm_debug;
+ md_entry.phys_addr = lpm_debug_phys;
+ md_entry.size = size;
+ if (msm_minidump_add_region(&md_entry))
+ pr_info("Failed to add lpm_debug to Minidump\n");
+
return 0;
failed:
free_cluster_node(lpm_root_node);
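The `lpm_cpuidle_enter` hunk replaces a manual "ns difference then `do_div(..., 1000)`" sequence with a direct `ktime_us_delta()` call. A userspace sketch of the equivalent arithmetic (names illustrative; `1000` is `NSEC_PER_USEC`):

```c
#include <assert.h>
#include <stdint.h>

/* Microsecond delta between two nanosecond timestamps, as
 * ktime_us_delta() computes it: divide the signed ns difference once. */
static int64_t us_delta(int64_t end_ns, int64_t start_ns)
{
	return (end_ns - start_ns) / 1000;
}
```

Computing the delta in one step avoids the separate `end_time` bookkeeping and the in-place `do_div()` mutation the old code needed.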
diff --git a/drivers/cpuidle/lpm-levels.h b/drivers/cpuidle/lpm-levels.h
index b3364b4..2f7a55d 100644
--- a/drivers/cpuidle/lpm-levels.h
+++ b/drivers/cpuidle/lpm-levels.h
@@ -33,7 +33,6 @@ struct lpm_cpu_level {
struct power_params pwr;
unsigned int psci_id;
bool is_reset;
- bool hyp_psci;
int reset_level;
};
@@ -125,7 +124,7 @@ uint32_t *get_per_cpu_max_residency(int cpu);
uint32_t *get_per_cpu_min_residency(int cpu);
extern struct lpm_cluster *lpm_root_node;
-#if CONFIG_SMP
+#if defined(CONFIG_SMP)
extern DEFINE_PER_CPU(bool, pending_ipi);
static inline bool is_IPI_pending(const struct cpumask *mask)
{
diff --git a/drivers/crypto/ccp/ccp-dev-v5.c b/drivers/crypto/ccp/ccp-dev-v5.c
index 17b19a6..71980c4 100644
--- a/drivers/crypto/ccp/ccp-dev-v5.c
+++ b/drivers/crypto/ccp/ccp-dev-v5.c
@@ -278,8 +278,7 @@ static int ccp5_perform_aes(struct ccp_op *op)
CCP_AES_ENCRYPT(&function) = op->u.aes.action;
CCP_AES_MODE(&function) = op->u.aes.mode;
CCP_AES_TYPE(&function) = op->u.aes.type;
- if (op->u.aes.mode == CCP_AES_MODE_CFB)
- CCP_AES_SIZE(&function) = 0x7f;
+ CCP_AES_SIZE(&function) = op->u.aes.size;
CCP5_CMD_FUNCTION(&desc) = function.raw;
diff --git a/drivers/crypto/ccp/ccp-dev.h b/drivers/crypto/ccp/ccp-dev.h
index e23c36c..347b771 100644
--- a/drivers/crypto/ccp/ccp-dev.h
+++ b/drivers/crypto/ccp/ccp-dev.h
@@ -470,6 +470,7 @@ struct ccp_aes_op {
enum ccp_aes_type type;
enum ccp_aes_mode mode;
enum ccp_aes_action action;
+ unsigned int size;
};
struct ccp_xts_aes_op {
diff --git a/drivers/crypto/ccp/ccp-ops.c b/drivers/crypto/ccp/ccp-ops.c
index 64deb00..7d4cd51 100644
--- a/drivers/crypto/ccp/ccp-ops.c
+++ b/drivers/crypto/ccp/ccp-ops.c
@@ -692,6 +692,14 @@ static int ccp_run_aes_cmd(struct ccp_cmd_queue *cmd_q, struct ccp_cmd *cmd)
goto e_ctx;
}
}
+ switch (aes->mode) {
+ case CCP_AES_MODE_CFB: /* CFB128 only */
+ case CCP_AES_MODE_CTR:
+ op.u.aes.size = AES_BLOCK_SIZE * BITS_PER_BYTE - 1;
+ break;
+ default:
+ op.u.aes.size = 0;
+ }
/* Prepare the input and output data workareas. For in-place
* operations we need to set the dma direction to BIDIRECTIONAL
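The ccp hunks replace a hard-coded `0x7f` (previously applied only to CFB) with a per-operation `size` field set to `AES_BLOCK_SIZE * BITS_PER_BYTE - 1` for CFB128 and CTR, i.e. the bit width of one AES block minus one. A quick check of that arithmetic (constants mirror the kernel's values):

```c
#include <assert.h>

#define AES_BLOCK_SIZE	16	/* bytes per AES block */
#define BITS_PER_BYTE	8

/* Size field for full-block stream modes (CFB128, CTR). */
static unsigned int aes_stream_size(void)
{
	return AES_BLOCK_SIZE * BITS_PER_BYTE - 1;
}
```

So the new expression encodes the same `0x7f` the old special case used, while defaulting to 0 for the other modes.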
diff --git a/drivers/crypto/msm/ice.c b/drivers/crypto/msm/ice.c
index 6fa91ae..182097c 100644
--- a/drivers/crypto/msm/ice.c
+++ b/drivers/crypto/msm/ice.c
@@ -25,26 +25,8 @@
#include <soc/qcom/scm.h>
#include <soc/qcom/qseecomi.h>
#include "iceregs.h"
-
-#ifdef CONFIG_PFK
#include <linux/pfk.h>
-#else
-#include <linux/bio.h>
-static inline int pfk_load_key_start(const struct bio *bio,
- struct ice_crypto_setting *ice_setting, bool *is_pfe, bool async)
-{
- return 0;
-}
-static inline int pfk_load_key_end(const struct bio *bio, bool *is_pfe)
-{
- return 0;
-}
-
-static inline void pfk_clear_on_reset(void)
-{
-}
-#endif
#define TZ_SYSCALL_CREATE_SMC_ID(o, s, f) \
((uint32_t)((((o & 0x3f) << 24) | (s & 0xff) << 8) | (f & 0xff)))
@@ -144,6 +126,9 @@ static int qti_ice_setting_config(struct request *req,
return -EPERM;
}
+ if (!setting)
+ return -EINVAL;
+
if ((short)(crypto_data->key_index) >= 0) {
memcpy(&setting->crypto_data, crypto_data,
@@ -1451,7 +1436,7 @@ static int qcom_ice_config_start(struct platform_device *pdev,
int ret = 0;
bool is_pfe = false;
- if (!pdev || !req || !setting) {
+ if (!pdev || !req) {
pr_err("%s: Invalid params passed\n", __func__);
return -EINVAL;
}
@@ -1470,6 +1455,7 @@ static int qcom_ice_config_start(struct platform_device *pdev,
/* It is not an error to have a request with no bio */
return 0;
}
ret = pfk_load_key_start(req->bio, &pfk_crypto_data, &is_pfe, async);
if (is_pfe) {
@@ -1633,7 +1619,7 @@ static struct ice_device *get_ice_device_from_storage_type
list_for_each_entry(ice_dev, &ice_devices, list) {
if (!strcmp(ice_dev->ice_instance_type, storage_type)) {
- pr_info("%s: found ice device %p\n", __func__, ice_dev);
+ pr_debug("%s: ice device %pK\n", __func__, ice_dev);
return ice_dev;
}
}
diff --git a/drivers/crypto/vmx/aes_ctr.c b/drivers/crypto/vmx/aes_ctr.c
index 38ed10d..7cf6d31 100644
--- a/drivers/crypto/vmx/aes_ctr.c
+++ b/drivers/crypto/vmx/aes_ctr.c
@@ -80,11 +80,13 @@ static int p8_aes_ctr_setkey(struct crypto_tfm *tfm, const u8 *key,
int ret;
struct p8_aes_ctr_ctx *ctx = crypto_tfm_ctx(tfm);
+ preempt_disable();
pagefault_disable();
enable_kernel_vsx();
ret = aes_p8_set_encrypt_key(key, keylen * 8, &ctx->enc_key);
disable_kernel_vsx();
pagefault_enable();
+ preempt_enable();
ret += crypto_blkcipher_setkey(ctx->fallback, key, keylen);
return ret;
@@ -99,11 +101,13 @@ static void p8_aes_ctr_final(struct p8_aes_ctr_ctx *ctx,
u8 *dst = walk->dst.virt.addr;
unsigned int nbytes = walk->nbytes;
+ preempt_disable();
pagefault_disable();
enable_kernel_vsx();
aes_p8_encrypt(ctrblk, keystream, &ctx->enc_key);
disable_kernel_vsx();
pagefault_enable();
+ preempt_enable();
crypto_xor(keystream, src, nbytes);
memcpy(dst, keystream, nbytes);
@@ -132,6 +136,7 @@ static int p8_aes_ctr_crypt(struct blkcipher_desc *desc,
blkcipher_walk_init(&walk, dst, src, nbytes);
ret = blkcipher_walk_virt_block(desc, &walk, AES_BLOCK_SIZE);
while ((nbytes = walk.nbytes) >= AES_BLOCK_SIZE) {
+ preempt_disable();
pagefault_disable();
enable_kernel_vsx();
aes_p8_ctr32_encrypt_blocks(walk.src.virt.addr,
@@ -143,6 +148,7 @@ static int p8_aes_ctr_crypt(struct blkcipher_desc *desc,
walk.iv);
disable_kernel_vsx();
pagefault_enable();
+ preempt_enable();
/* We need to update IV mostly for last bytes/round */
inc = (nbytes & AES_BLOCK_MASK) / AES_BLOCK_SIZE;
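The aes_ctr hunks wrap each VSX critical section so that `preempt_disable()` is the outermost pair around `pagefault_disable()`. The required LIFO nesting can be modeled in userspace with depth counters (the `fake_*` names are stand-ins, not the kernel APIs):

```c
#include <assert.h>

static int preempt_depth, pagefault_depth;
static int nesting_ok = 1;

static void fake_preempt_disable(void) { preempt_depth++; }

static void fake_pagefault_disable(void)
{
	/* pagefaults may only be disabled with preemption already off */
	if (preempt_depth == 0)
		nesting_ok = 0;
	pagefault_depth++;
}

static void fake_pagefault_enable(void) { pagefault_depth--; }

static void fake_preempt_enable(void)
{
	/* preemption must be re-enabled last */
	if (pagefault_depth != 0)
		nesting_ok = 0;
	preempt_depth--;
}

/* Returns 1 if a whole section obeys the nesting the patch enforces. */
static int run_vsx_section(void)
{
	fake_preempt_disable();
	fake_pagefault_disable();
	/* ... enable_kernel_vsx(); do the AES work here ... */
	fake_pagefault_enable();
	fake_preempt_enable();
	return nesting_ok && preempt_depth == 0 && pagefault_depth == 0;
}
```

The point of the patch is that the FPU/VSX state borrowed by `enable_kernel_vsx()` must not be preempted away mid-operation, so preemption is disabled before (and re-enabled after) the pagefault-disabled region.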
diff --git a/drivers/dma/dmatest.c b/drivers/dma/dmatest.c
index cf76fc6..fbb7551 100644
--- a/drivers/dma/dmatest.c
+++ b/drivers/dma/dmatest.c
@@ -666,6 +666,7 @@ static int dmatest_func(void *data)
* free it this time?" dancing. For now, just
* leave it dangling.
*/
+ WARN(1, "dmatest: Kernel stack may be corrupted!!\n");
dmaengine_unmap_put(um);
result("test timed out", total_tests, src_off, dst_off,
len, 0);
diff --git a/drivers/dma/qcom/gpi.c b/drivers/dma/qcom/gpi.c
index 94a8e6a..433f768 100644
--- a/drivers/dma/qcom/gpi.c
+++ b/drivers/dma/qcom/gpi.c
@@ -1207,6 +1207,7 @@ static void gpi_process_imed_data_event(struct gpii_chan *gpii_chan,
void *tre = ch_ring->base +
(ch_ring->el_size * imed_event->tre_index);
struct msm_gpi_dma_async_tx_cb_param *tx_cb_param;
+ unsigned long flags;
/*
* If channel not active don't process event but let
@@ -1221,13 +1222,13 @@ static void gpi_process_imed_data_event(struct gpii_chan *gpii_chan,
return;
}
- spin_lock_irq(&gpii_chan->vc.lock);
+ spin_lock_irqsave(&gpii_chan->vc.lock, flags);
vd = vchan_next_desc(&gpii_chan->vc);
if (!vd) {
struct gpi_ere *gpi_ere;
struct msm_gpi_tre *gpi_tre;
- spin_unlock_irq(&gpii_chan->vc.lock);
+ spin_unlock_irqrestore(&gpii_chan->vc.lock, flags);
GPII_ERR(gpii, gpii_chan->chid,
"event without a pending descriptor!\n");
gpi_ere = (struct gpi_ere *)imed_event;
@@ -1247,7 +1248,7 @@ static void gpi_process_imed_data_event(struct gpii_chan *gpii_chan,
/* Event TR RP gen. don't match descriptor TR */
if (gpi_desc->wp != tre) {
- spin_unlock_irq(&gpii_chan->vc.lock);
+ spin_unlock_irqrestore(&gpii_chan->vc.lock, flags);
GPII_ERR(gpii, gpii_chan->chid,
"EOT/EOB received for wrong TRE 0x%0llx != 0x%0llx\n",
to_physical(ch_ring, gpi_desc->wp),
@@ -1258,7 +1259,7 @@ static void gpi_process_imed_data_event(struct gpii_chan *gpii_chan,
}
list_del(&vd->node);
- spin_unlock_irq(&gpii_chan->vc.lock);
+ spin_unlock_irqrestore(&gpii_chan->vc.lock, flags);
sg_tre = gpi_desc->sg_tre;
client_tre = ((struct sg_tre *)sg_tre)->ptr;
@@ -1300,9 +1301,9 @@ static void gpi_process_imed_data_event(struct gpii_chan *gpii_chan,
tx_cb_param->status = imed_event->status;
}
- spin_lock_irq(&gpii_chan->vc.lock);
+ spin_lock_irqsave(&gpii_chan->vc.lock, flags);
vchan_cookie_complete(vd);
- spin_unlock_irq(&gpii_chan->vc.lock);
+ spin_unlock_irqrestore(&gpii_chan->vc.lock, flags);
}
/* processing transfer completion events */
@@ -1318,6 +1319,7 @@ static void gpi_process_xfer_compl_event(struct gpii_chan *gpii_chan,
struct msm_gpi_dma_async_tx_cb_param *tx_cb_param;
struct gpi_desc *gpi_desc;
void *sg_tre = NULL;
+ unsigned long flags;
/* only process events on active channel */
if (unlikely(gpii_chan->pm_state != ACTIVE_STATE)) {
@@ -1329,12 +1331,12 @@ static void gpi_process_xfer_compl_event(struct gpii_chan *gpii_chan,
return;
}
- spin_lock_irq(&gpii_chan->vc.lock);
+ spin_lock_irqsave(&gpii_chan->vc.lock, flags);
vd = vchan_next_desc(&gpii_chan->vc);
if (!vd) {
struct gpi_ere *gpi_ere;
- spin_unlock_irq(&gpii_chan->vc.lock);
+ spin_unlock_irqrestore(&gpii_chan->vc.lock, flags);
GPII_ERR(gpii, gpii_chan->chid,
"Event without a pending descriptor!\n");
gpi_ere = (struct gpi_ere *)compl_event;
@@ -1350,7 +1352,7 @@ static void gpi_process_xfer_compl_event(struct gpii_chan *gpii_chan,
/* TRE Event generated didn't match descriptor's TRE */
if (gpi_desc->wp != ev_rp) {
- spin_unlock_irq(&gpii_chan->vc.lock);
+ spin_unlock_irqrestore(&gpii_chan->vc.lock, flags);
GPII_ERR(gpii, gpii_chan->chid,
"EOT\EOB received for wrong TRE 0x%0llx != 0x%0llx\n",
to_physical(ch_ring, gpi_desc->wp),
@@ -1361,7 +1363,7 @@ static void gpi_process_xfer_compl_event(struct gpii_chan *gpii_chan,
}
list_del(&vd->node);
- spin_unlock_irq(&gpii_chan->vc.lock);
+ spin_unlock_irqrestore(&gpii_chan->vc.lock, flags);
sg_tre = gpi_desc->sg_tre;
client_tre = ((struct sg_tre *)sg_tre)->ptr;
@@ -1393,9 +1395,9 @@ static void gpi_process_xfer_compl_event(struct gpii_chan *gpii_chan,
tx_cb_param->status = compl_event->status;
}
- spin_lock_irq(&gpii_chan->vc.lock);
+ spin_lock_irqsave(&gpii_chan->vc.lock, flags);
vchan_cookie_complete(vd);
- spin_unlock_irq(&gpii_chan->vc.lock);
+ spin_unlock_irqrestore(&gpii_chan->vc.lock, flags);
}
/* process all events */
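The gpi.c hunks convert every `spin_lock_irq()` in the event-processing paths to `spin_lock_irqsave()`. The difference matters because `_irq` unconditionally re-enables interrupts on unlock, which is wrong if the caller already had them disabled; the save/restore variant preserves the caller's state. A userspace model of that behavior (all names are stand-ins for the kernel primitives):

```c
#include <assert.h>

static int irqs_enabled = 1;

static void lock_irq(void)   { irqs_enabled = 0; }
static void unlock_irq(void) { irqs_enabled = 1; }  /* blindly re-enables */

static void lock_irqsave(unsigned long *flags)
{
	*flags = (unsigned long)irqs_enabled;
	irqs_enabled = 0;
}

static void unlock_irqrestore(unsigned long flags)
{
	irqs_enabled = (int)flags;  /* restores the caller's state */
}

/* Caller runs with IRQs off; what is the state after lock/unlock? */
static int after_nested_irq_variant(void)
{
	irqs_enabled = 0;
	lock_irq();
	unlock_irq();
	return irqs_enabled;	/* 1: IRQs wrongly re-enabled */
}

static int after_nested_irqsave_variant(void)
{
	unsigned long flags;

	irqs_enabled = 0;
	lock_irqsave(&flags);
	unlock_irqrestore(flags);
	return irqs_enabled;	/* 0: caller's state preserved */
}
```

Since these GPI event handlers can be reached from contexts where interrupts are already disabled, only the save/restore form is safe.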
diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c
index ee181c5..6e197c1 100644
--- a/drivers/edac/amd64_edac.c
+++ b/drivers/edac/amd64_edac.c
@@ -2984,8 +2984,11 @@ static int __init amd64_edac_init(void)
int err = -ENODEV;
int i;
+ if (!x86_match_cpu(amd64_cpuids))
+ return -ENODEV;
+
if (amd_cache_northbridges() < 0)
- goto err_ret;
+ return -ENODEV;
opstate_init();
@@ -2998,14 +3001,16 @@ static int __init amd64_edac_init(void)
if (!msrs)
goto err_free;
- for (i = 0; i < amd_nb_num(); i++)
- if (probe_one_instance(i)) {
+ for (i = 0; i < amd_nb_num(); i++) {
+ err = probe_one_instance(i);
+ if (err) {
/* unwind properly */
while (--i >= 0)
remove_one_instance(i);
goto err_pci;
}
+ }
setup_pci_device();
@@ -3025,7 +3030,6 @@ static int __init amd64_edac_init(void)
kfree(ecc_stngs);
ecc_stngs = NULL;
-err_ret:
return err;
}
diff --git a/drivers/edac/amd64_edac.h b/drivers/edac/amd64_edac.h
index c088704..dcb5f94 100644
--- a/drivers/edac/amd64_edac.h
+++ b/drivers/edac/amd64_edac.h
@@ -16,6 +16,7 @@
#include <linux/slab.h>
#include <linux/mmzone.h>
#include <linux/edac.h>
+#include <asm/cpu_device_id.h>
#include <asm/msr.h>
#include "edac_core.h"
#include "mce_amd.h"
diff --git a/drivers/edac/kryo3xx_arm64_edac.c b/drivers/edac/kryo3xx_arm64_edac.c
index a300c7f..cf3fdde 100644
--- a/drivers/edac/kryo3xx_arm64_edac.c
+++ b/drivers/edac/kryo3xx_arm64_edac.c
@@ -30,10 +30,11 @@ module_param(poll_msec, int, 0444);
#endif
#ifdef CONFIG_EDAC_KRYO3XX_ARM64_PANIC_ON_CE
-#define ARM64_ERP_PANIC_ON_CE 1
+static bool panic_on_ce = 1;
#else
-#define ARM64_ERP_PANIC_ON_CE 0
+static bool panic_on_ce;
#endif
+module_param_named(panic_on_ce, panic_on_ce, bool, 0664);
#ifdef CONFIG_EDAC_KRYO3XX_ARM64_PANIC_ON_UE
#define ARM64_ERP_PANIC_ON_UE 1
@@ -238,6 +239,8 @@ static void dump_err_reg(int errorcode, int level, u64 errxstatus, u64 errxmisc,
else
edac_printk(KERN_CRIT, EDAC_CPU,
"Way: %d\n", (int) KRYO3XX_ERRXMISC_WAY(errxmisc) >> 2);
+
+ edev_ctl->panic_on_ce = panic_on_ce;
errors[errorcode].func(edev_ctl, smp_processor_id(),
level, errors[errorcode].msg);
}
@@ -427,7 +430,7 @@ static int kryo3xx_cpu_erp_probe(struct platform_device *pdev)
drv->edev_ctl->mod_name = dev_name(dev);
drv->edev_ctl->dev_name = dev_name(dev);
drv->edev_ctl->ctl_name = "cache";
- drv->edev_ctl->panic_on_ce = ARM64_ERP_PANIC_ON_CE;
+ drv->edev_ctl->panic_on_ce = panic_on_ce;
drv->edev_ctl->panic_on_ue = ARM64_ERP_PANIC_ON_UE;
drv->nb_pm.notifier_call = kryo3xx_pmu_cpu_pm_notify;
platform_set_drvdata(pdev, drv);
diff --git a/drivers/extcon/extcon-palmas.c b/drivers/extcon/extcon-palmas.c
index 634ba70..a128fd2 100644
--- a/drivers/extcon/extcon-palmas.c
+++ b/drivers/extcon/extcon-palmas.c
@@ -190,6 +190,11 @@ static int palmas_usb_probe(struct platform_device *pdev)
struct palmas_usb *palmas_usb;
int status;
+ if (!palmas) {
+ dev_err(&pdev->dev, "failed to get valid parent\n");
+ return -EINVAL;
+ }
+
palmas_usb = devm_kzalloc(&pdev->dev, sizeof(*palmas_usb), GFP_KERNEL);
if (!palmas_usb)
return -ENOMEM;
diff --git a/drivers/extcon/extcon.c b/drivers/extcon/extcon.c
index 0e1d428..5135571 100644
--- a/drivers/extcon/extcon.c
+++ b/drivers/extcon/extcon.c
@@ -921,35 +921,16 @@ int extcon_register_notifier(struct extcon_dev *edev, unsigned int id,
unsigned long flags;
int ret, idx = -EINVAL;
- if (!nb)
+ if (!edev || !nb)
return -EINVAL;
- if (edev) {
- idx = find_cable_index_by_id(edev, id);
- if (idx < 0)
- return idx;
+ idx = find_cable_index_by_id(edev, id);
+ if (idx < 0)
+ return idx;
- spin_lock_irqsave(&edev->lock, flags);
- ret = raw_notifier_chain_register(&edev->nh[idx], nb);
- spin_unlock_irqrestore(&edev->lock, flags);
- } else {
- struct extcon_dev *extd;
-
- mutex_lock(&extcon_dev_list_lock);
- list_for_each_entry(extd, &extcon_dev_list, entry) {
- idx = find_cable_index_by_id(extd, id);
- if (idx >= 0)
- break;
- }
- mutex_unlock(&extcon_dev_list_lock);
-
- if (idx >= 0) {
- edev = extd;
- return extcon_register_notifier(extd, id, nb);
- } else {
- ret = -ENODEV;
- }
- }
+ spin_lock_irqsave(&edev->lock, flags);
+ ret = raw_notifier_chain_register(&edev->nh[idx], nb);
+ spin_unlock_irqrestore(&edev->lock, flags);
return ret;
}
diff --git a/drivers/gpio/Kconfig b/drivers/gpio/Kconfig
index 9338ff7..642fa03 100644
--- a/drivers/gpio/Kconfig
+++ b/drivers/gpio/Kconfig
@@ -1206,6 +1206,8 @@
tristate "Microchip MCP23xxx I/O expander"
depends on OF_GPIO
select GPIOLIB_IRQCHIP
+ select REGMAP_I2C if I2C
+ select REGMAP if SPI_MASTER
help
SPI/I2C driver for Microchip MCP23S08/MCP23S17/MCP23008/MCP23017
I/O expanders.
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
index 7fe8fd8..743a12d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
@@ -315,6 +315,10 @@ static void amdgpu_vce_idle_work_handler(struct work_struct *work)
amdgpu_dpm_enable_vce(adev, false);
} else {
amdgpu_asic_set_vce_clocks(adev, 0, 0);
+ amdgpu_set_powergating_state(adev, AMD_IP_BLOCK_TYPE_VCE,
+ AMD_PG_STATE_GATE);
+ amdgpu_set_clockgating_state(adev, AMD_IP_BLOCK_TYPE_VCE,
+ AMD_CG_STATE_GATE);
}
} else {
schedule_delayed_work(&adev->vce.idle_work, VCE_IDLE_TIMEOUT);
@@ -340,6 +344,11 @@ void amdgpu_vce_ring_begin_use(struct amdgpu_ring *ring)
amdgpu_dpm_enable_vce(adev, true);
} else {
amdgpu_asic_set_vce_clocks(adev, 53300, 40000);
+ amdgpu_set_clockgating_state(adev, AMD_IP_BLOCK_TYPE_VCE,
+ AMD_CG_STATE_UNGATE);
+ amdgpu_set_powergating_state(adev, AMD_IP_BLOCK_TYPE_VCE,
+ AMD_PG_STATE_UNGATE);
+
}
}
mutex_unlock(&adev->vce.idle_mutex);
diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c b/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
index ab3df6d..3f445df91 100644
--- a/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
@@ -89,6 +89,10 @@ static int uvd_v6_0_early_init(void *handle)
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ if (!(adev->flags & AMD_IS_APU) &&
+ (RREG32_SMC(ixCC_HARVEST_FUSES) & CC_HARVEST_FUSES__UVD_DISABLE_MASK))
+ return -ENOENT;
+
uvd_v6_0_set_ring_funcs(adev);
uvd_v6_0_set_irq_funcs(adev);
diff --git a/drivers/gpu/drm/arm/malidp_planes.c b/drivers/gpu/drm/arm/malidp_planes.c
index afe0480..8b009b5 100644
--- a/drivers/gpu/drm/arm/malidp_planes.c
+++ b/drivers/gpu/drm/arm/malidp_planes.c
@@ -182,7 +182,8 @@ static void malidp_de_plane_update(struct drm_plane *plane,
/* setup the rotation and axis flip bits */
if (plane->state->rotation & DRM_ROTATE_MASK)
- val = ilog2(plane->state->rotation & DRM_ROTATE_MASK) << LAYER_ROT_OFFSET;
+ val |= ilog2(plane->state->rotation & DRM_ROTATE_MASK) <<
+ LAYER_ROT_OFFSET;
if (plane->state->rotation & DRM_REFLECT_X)
val |= LAYER_H_FLIP;
if (plane->state->rotation & DRM_REFLECT_Y)
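The malidp fix changes `val =` to `val |=` so the rotation field is ORed into the control word instead of clobbering bits set earlier. A sketch with a userspace `ilog2` built on `__builtin_ctz` (valid here because exactly one rotation bit remains after masking, as `DRM_ROTATE_MASK` guarantees; the constants below are illustrative, not the hardware's):

```c
#include <assert.h>

#define ROT_90			(1u << 1)	/* rotation bit, illustrative */
#define LAYER_ROT_OFFSET	8
#define LAYER_H_FLIP		(1u << 12)

/* ilog2 of a power of two: index of the single set bit. */
static unsigned int ilog2_u32(unsigned int x)
{
	return (unsigned int)__builtin_ctz(x);
}

static unsigned int layer_control(unsigned int rotation, int hflip)
{
	unsigned int val = 0;

	if (hflip)
		val |= LAYER_H_FLIP;
	/* '|=' keeps the flip bit; plain '=' would have discarded it */
	val |= ilog2_u32(rotation) << LAYER_ROT_OFFSET;
	return val;
}
```

With the old assignment, setting the rotation last would have silently erased the flip flags already accumulated in `val`.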
diff --git a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
index 213d892..a68f94d 100644
--- a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
+++ b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
@@ -325,7 +325,7 @@ static void adv7511_set_link_config(struct adv7511 *adv7511,
adv7511->rgb = config->input_colorspace == HDMI_COLORSPACE_RGB;
}
-static void adv7511_power_on(struct adv7511 *adv7511)
+static void __adv7511_power_on(struct adv7511 *adv7511)
{
adv7511->current_edid_segment = -1;
@@ -354,6 +354,11 @@ static void adv7511_power_on(struct adv7511 *adv7511)
regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER2,
ADV7511_REG_POWER2_HPD_SRC_MASK,
ADV7511_REG_POWER2_HPD_SRC_NONE);
+}
+
+static void adv7511_power_on(struct adv7511 *adv7511)
+{
+ __adv7511_power_on(adv7511);
/*
* Most of the registers are reset during power down or when HPD is low.
@@ -362,21 +367,23 @@ static void adv7511_power_on(struct adv7511 *adv7511)
if (adv7511->type == ADV7533)
adv7533_dsi_power_on(adv7511);
-
adv7511->powered = true;
}
-static void adv7511_power_off(struct adv7511 *adv7511)
+static void __adv7511_power_off(struct adv7511 *adv7511)
{
/* TODO: setup additional power down modes */
regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER,
ADV7511_POWER_POWER_DOWN,
ADV7511_POWER_POWER_DOWN);
regcache_mark_dirty(adv7511->regmap);
+}
+static void adv7511_power_off(struct adv7511 *adv7511)
+{
+ __adv7511_power_off(adv7511);
if (adv7511->type == ADV7533)
adv7533_dsi_power_off(adv7511);
-
adv7511->powered = false;
}
@@ -567,23 +574,20 @@ static int adv7511_get_modes(struct adv7511 *adv7511,
/* Reading the EDID only works if the device is powered */
if (!adv7511->powered) {
- regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER,
- ADV7511_POWER_POWER_DOWN, 0);
- if (adv7511->i2c_main->irq) {
- regmap_write(adv7511->regmap, ADV7511_REG_INT_ENABLE(0),
- ADV7511_INT0_EDID_READY);
- regmap_write(adv7511->regmap, ADV7511_REG_INT_ENABLE(1),
- ADV7511_INT1_DDC_ERROR);
- }
- adv7511->current_edid_segment = -1;
+ unsigned int edid_i2c_addr =
+ (adv7511->i2c_main->addr << 1) + 4;
+
+ __adv7511_power_on(adv7511);
+
+ /* Reset the EDID_I2C_ADDR register as it might be cleared */
+ regmap_write(adv7511->regmap, ADV7511_REG_EDID_I2C_ADDR,
+ edid_i2c_addr);
}
edid = drm_do_get_edid(connector, adv7511_get_edid_block, adv7511);
if (!adv7511->powered)
- regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER,
- ADV7511_POWER_POWER_DOWN,
- ADV7511_POWER_POWER_DOWN);
+ __adv7511_power_off(adv7511);
kfree(adv7511->edid);
adv7511->edid = edid;
diff --git a/drivers/gpu/drm/drm_drv.c b/drivers/gpu/drm/drm_drv.c
index 362b8cd..80a903b 100644
--- a/drivers/gpu/drm/drm_drv.c
+++ b/drivers/gpu/drm/drm_drv.c
@@ -218,7 +218,7 @@ static int drm_minor_register(struct drm_device *dev, unsigned int type)
ret = drm_debugfs_init(minor, minor->index, drm_debugfs_root);
if (ret) {
DRM_ERROR("DRM: Failed to initialize /sys/kernel/debug/dri.\n");
- return ret;
+ goto err_debugfs;
}
ret = device_add(minor->kdev);
diff --git a/drivers/gpu/drm/drm_mipi_dsi.c b/drivers/gpu/drm/drm_mipi_dsi.c
index 76a1e43..d9a5762 100644
--- a/drivers/gpu/drm/drm_mipi_dsi.c
+++ b/drivers/gpu/drm/drm_mipi_dsi.c
@@ -360,6 +360,7 @@ static ssize_t mipi_dsi_device_transfer(struct mipi_dsi_device *dsi,
if (dsi->mode_flags & MIPI_DSI_MODE_LPM)
msg->flags |= MIPI_DSI_MSG_USE_LPM;
+ msg->flags |= MIPI_DSI_MSG_LASTCOMMAND;
return ops->transfer(dsi->host, msg);
}
diff --git a/drivers/gpu/drm/drm_property.c b/drivers/gpu/drm/drm_property.c
index ef80ec6..9b79a5b 100644
--- a/drivers/gpu/drm/drm_property.c
+++ b/drivers/gpu/drm/drm_property.c
@@ -557,7 +557,7 @@ drm_property_create_blob(struct drm_device *dev, size_t length,
if (!length || length > ULONG_MAX - sizeof(struct drm_property_blob))
return ERR_PTR(-EINVAL);
- blob = vmalloc(sizeof(struct drm_property_blob)+length);
+ blob = vzalloc(sizeof(struct drm_property_blob)+length);
if (!blob)
return ERR_PTR(-ENOMEM);
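The drm_property fix swaps `vmalloc()` for `vzalloc()` so the blob allocation is zero-initialized and no stale kernel memory can leak to userspace through uninitialized header fields or padding. `calloc()` is the userspace analog used in this sketch (struct layout is illustrative):

```c
#include <assert.h>
#include <stdlib.h>

struct blob {
	unsigned long id;
	size_t length;
	unsigned char data[];
};

/* calloc, like vzalloc, hands back fully zeroed memory. */
static struct blob *blob_alloc(size_t length)
{
	return calloc(1, sizeof(struct blob) + length);
}

/* Verify every byte, including padding and the flexible array. */
static int blob_is_zeroed(const struct blob *b, size_t length)
{
	const unsigned char *p = (const unsigned char *)b;
	size_t i;

	for (i = 0; i < sizeof(*b) + length; i++)
		if (p[i])
			return 0;
	return 1;
}
```

With plain `vmalloc()` any field the driver forgot to set would carry whatever the pages last held, which is an information leak once the blob is copied to userspace.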
diff --git a/drivers/gpu/drm/exynos/exynos_drm_g2d.c b/drivers/gpu/drm/exynos/exynos_drm_g2d.c
index fbd13fa..603d842 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_g2d.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_g2d.c
@@ -1193,6 +1193,17 @@ int exynos_g2d_set_cmdlist_ioctl(struct drm_device *drm_dev, void *data,
if (!node)
return -ENOMEM;
+ /*
+ * To avoid an integer overflow for the later size computations, we
+ * enforce a maximum number of submitted commands here. This limit is
+ * sufficient for all conceivable usage cases of the G2D.
+ */
+ if (req->cmd_nr > G2D_CMDLIST_DATA_NUM ||
+ req->cmd_buf_nr > G2D_CMDLIST_DATA_NUM) {
+ dev_err(dev, "number of submitted G2D commands exceeds limit\n");
+ return -EINVAL;
+ }
+
node->event = NULL;
if (req->event_type != G2D_EVENT_NOT) {
@@ -1250,7 +1261,11 @@ int exynos_g2d_set_cmdlist_ioctl(struct drm_device *drm_dev, void *data,
cmdlist->data[cmdlist->last++] = G2D_INTEN_ACF;
}
- /* Check size of cmdlist: last 2 is about G2D_BITBLT_START */
+ /*
+ * Check the size of cmdlist. The 2 that is added last comes from
+ * the implicit G2D_BITBLT_START that is appended once we have
+ * checked all the submitted commands.
+ */
size = cmdlist->last + req->cmd_nr * 2 + req->cmd_buf_nr * 2 + 2;
if (size > G2D_CMDLIST_DATA_NUM) {
dev_err(dev, "cmdlist size is too big\n");
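The exynos G2D hunk bounds `cmd_nr` and `cmd_buf_nr` *before* computing `size = last + cmd_nr * 2 + cmd_buf_nr * 2 + 2`, so a huge userspace-supplied count cannot overflow the multiplication and wrap below the later size limit. A sketch of both checks together (the limit value here is illustrative, standing in for `G2D_CMDLIST_DATA_NUM`):

```c
#include <assert.h>
#include <limits.h>

#define CMDLIST_DATA_NUM 128	/* illustrative limit */

static int cmdlist_size_ok(unsigned int last, unsigned int cmd_nr,
			   unsigned int cmd_buf_nr)
{
	unsigned int size;

	/* reject oversized counts first: this is the overflow guard */
	if (cmd_nr > CMDLIST_DATA_NUM || cmd_buf_nr > CMDLIST_DATA_NUM)
		return 0;

	/* now the arithmetic is small enough that it cannot wrap */
	size = last + cmd_nr * 2 + cmd_buf_nr * 2 + 2;
	return size <= CMDLIST_DATA_NUM;
}
```

Without the up-front bound, `cmd_nr = UINT_MAX / 2 + 1` would make `cmd_nr * 2` wrap to a tiny value and slip past the `size` comparison.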
diff --git a/drivers/gpu/drm/fsl-dcu/fsl_tcon.c b/drivers/gpu/drm/fsl-dcu/fsl_tcon.c
index 3194e54..faacc81 100644
--- a/drivers/gpu/drm/fsl-dcu/fsl_tcon.c
+++ b/drivers/gpu/drm/fsl-dcu/fsl_tcon.c
@@ -89,9 +89,13 @@ struct fsl_tcon *fsl_tcon_init(struct device *dev)
goto err_node_put;
}
- of_node_put(np);
- clk_prepare_enable(tcon->ipg_clk);
+ ret = clk_prepare_enable(tcon->ipg_clk);
+ if (ret) {
+ dev_err(dev, "Couldn't enable the TCON clock\n");
+ goto err_node_put;
+ }
+ of_node_put(np);
dev_info(dev, "Using TCON in bypass mode\n");
return tcon;
diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
index afa3d01..7fdc42e 100644
--- a/drivers/gpu/drm/i915/intel_dp.c
+++ b/drivers/gpu/drm/i915/intel_dp.c
@@ -3558,9 +3558,16 @@ intel_edp_init_dpcd(struct intel_dp *intel_dp)
dev_priv->psr.psr2_support ? "supported" : "not supported");
}
- /* Read the eDP Display control capabilities registers */
- if ((intel_dp->dpcd[DP_EDP_CONFIGURATION_CAP] & DP_DPCD_DISPLAY_CONTROL_CAPABLE) &&
- drm_dp_dpcd_read(&intel_dp->aux, DP_EDP_DPCD_REV,
+ /*
+ * Read the eDP display control registers.
+ *
+ * Do this independent of DP_DPCD_DISPLAY_CONTROL_CAPABLE bit in
+ * DP_EDP_CONFIGURATION_CAP, because some buggy displays do not have it
+ * set, but require eDP 1.4+ detection (e.g. for supported link rates
+ * method). The display control registers should read zero if they're
+ * not supported anyway.
+ */
+ if (drm_dp_dpcd_read(&intel_dp->aux, DP_EDP_DPCD_REV,
intel_dp->edp_dpcd, sizeof(intel_dp->edp_dpcd)) ==
sizeof(intel_dp->edp_dpcd))
DRM_DEBUG_KMS("EDP DPCD : %*ph\n", (int) sizeof(intel_dp->edp_dpcd),
diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
index a19ec06..3ce9ba3 100644
--- a/drivers/gpu/drm/i915/intel_drv.h
+++ b/drivers/gpu/drm/i915/intel_drv.h
@@ -457,7 +457,6 @@ struct intel_crtc_scaler_state {
struct intel_pipe_wm {
struct intel_wm_level wm[5];
- struct intel_wm_level raw_wm[5];
uint32_t linetime;
bool fbc_wm_enabled;
bool pipe_enabled;
diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index 49de476..277a802 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -27,6 +27,7 @@
#include <linux/cpufreq.h>
#include <drm/drm_plane_helper.h>
+#include <drm/drm_atomic_helper.h>
#include "i915_drv.h"
#include "intel_drv.h"
#include "../../../platform/x86/intel_ips.h"
@@ -2017,9 +2018,9 @@ static void ilk_compute_wm_level(const struct drm_i915_private *dev_priv,
const struct intel_crtc *intel_crtc,
int level,
struct intel_crtc_state *cstate,
- struct intel_plane_state *pristate,
- struct intel_plane_state *sprstate,
- struct intel_plane_state *curstate,
+ const struct intel_plane_state *pristate,
+ const struct intel_plane_state *sprstate,
+ const struct intel_plane_state *curstate,
struct intel_wm_level *result)
{
uint16_t pri_latency = dev_priv->wm.pri_latency[level];
@@ -2341,28 +2342,24 @@ static int ilk_compute_pipe_wm(struct intel_crtc_state *cstate)
struct intel_pipe_wm *pipe_wm;
struct drm_device *dev = state->dev;
const struct drm_i915_private *dev_priv = to_i915(dev);
- struct intel_plane *intel_plane;
- struct intel_plane_state *pristate = NULL;
- struct intel_plane_state *sprstate = NULL;
- struct intel_plane_state *curstate = NULL;
+ struct drm_plane *plane;
+ const struct drm_plane_state *plane_state;
+ const struct intel_plane_state *pristate = NULL;
+ const struct intel_plane_state *sprstate = NULL;
+ const struct intel_plane_state *curstate = NULL;
int level, max_level = ilk_wm_max_level(dev), usable_level;
struct ilk_wm_maximums max;
pipe_wm = &cstate->wm.ilk.optimal;
- for_each_intel_plane_on_crtc(dev, intel_crtc, intel_plane) {
- struct intel_plane_state *ps;
+ drm_atomic_crtc_state_for_each_plane_state(plane, plane_state, &cstate->base) {
+ const struct intel_plane_state *ps = to_intel_plane_state(plane_state);
- ps = intel_atomic_get_existing_plane_state(state,
- intel_plane);
- if (!ps)
- continue;
-
- if (intel_plane->base.type == DRM_PLANE_TYPE_PRIMARY)
+ if (plane->type == DRM_PLANE_TYPE_PRIMARY)
pristate = ps;
- else if (intel_plane->base.type == DRM_PLANE_TYPE_OVERLAY)
+ else if (plane->type == DRM_PLANE_TYPE_OVERLAY)
sprstate = ps;
- else if (intel_plane->base.type == DRM_PLANE_TYPE_CURSOR)
+ else if (plane->type == DRM_PLANE_TYPE_CURSOR)
curstate = ps;
}
@@ -2384,11 +2381,9 @@ static int ilk_compute_pipe_wm(struct intel_crtc_state *cstate)
if (pipe_wm->sprites_scaled)
usable_level = 0;
- ilk_compute_wm_level(dev_priv, intel_crtc, 0, cstate,
- pristate, sprstate, curstate, &pipe_wm->raw_wm[0]);
-
memset(&pipe_wm->wm, 0, sizeof(pipe_wm->wm));
- pipe_wm->wm[0] = pipe_wm->raw_wm[0];
+ ilk_compute_wm_level(dev_priv, intel_crtc, 0, cstate,
+ pristate, sprstate, curstate, &pipe_wm->wm[0]);
if (IS_HASWELL(dev) || IS_BROADWELL(dev))
pipe_wm->linetime = hsw_compute_linetime_wm(cstate);
@@ -2398,8 +2393,8 @@ static int ilk_compute_pipe_wm(struct intel_crtc_state *cstate)
ilk_compute_wm_reg_maximums(dev, 1, &max);
- for (level = 1; level <= max_level; level++) {
- struct intel_wm_level *wm = &pipe_wm->raw_wm[level];
+ for (level = 1; level <= usable_level; level++) {
+ struct intel_wm_level *wm = &pipe_wm->wm[level];
ilk_compute_wm_level(dev_priv, intel_crtc, level, cstate,
pristate, sprstate, curstate, wm);
@@ -2409,13 +2404,10 @@ static int ilk_compute_pipe_wm(struct intel_crtc_state *cstate)
* register maximums since such watermarks are
* always invalid.
*/
- if (level > usable_level)
- continue;
-
- if (ilk_validate_wm_level(level, &max, wm))
- pipe_wm->wm[level] = *wm;
- else
- usable_level = level;
+ if (!ilk_validate_wm_level(level, &max, wm)) {
+ memset(wm, 0, sizeof(*wm));
+ break;
+ }
}
return 0;
diff --git a/drivers/gpu/drm/mgag200/mgag200_main.c b/drivers/gpu/drm/mgag200/mgag200_main.c
index e79cbc2..fb03e30 100644
--- a/drivers/gpu/drm/mgag200/mgag200_main.c
+++ b/drivers/gpu/drm/mgag200/mgag200_main.c
@@ -145,6 +145,8 @@ static int mga_vram_init(struct mga_device *mdev)
}
mem = pci_iomap(mdev->dev->pdev, 0, 0);
+ if (!mem)
+ return -ENOMEM;
mdev->mc.vram_size = mga_probe_vram(mdev, mem);
diff --git a/drivers/gpu/drm/msm/dp/dp_audio.c b/drivers/gpu/drm/msm/dp/dp_audio.c
index 6ac692f..ea2e72a 100644
--- a/drivers/gpu/drm/msm/dp/dp_audio.c
+++ b/drivers/gpu/drm/msm/dp/dp_audio.c
@@ -23,13 +23,6 @@
#include "dp_audio.h"
#include "dp_panel.h"
-#define HEADER_BYTE_2_BIT 0
-#define PARITY_BYTE_2_BIT 8
-#define HEADER_BYTE_1_BIT 16
-#define PARITY_BYTE_1_BIT 24
-#define HEADER_BYTE_3_BIT 16
-#define PARITY_BYTE_3_BIT 24
-
struct dp_audio_private {
struct platform_device *ext_pdev;
struct platform_device *pdev;
@@ -44,75 +37,12 @@ struct dp_audio_private {
u32 channels;
struct completion hpd_comp;
+ struct workqueue_struct *notify_workqueue;
+ struct delayed_work notify_delayed_work;
struct dp_audio dp_audio;
};
-static u8 dp_audio_get_g0_value(u8 data)
-{
- u8 c[4];
- u8 g[4];
- u8 ret_data = 0;
- u8 i;
-
- for (i = 0; i < 4; i++)
- c[i] = (data >> i) & 0x01;
-
- g[0] = c[3];
- g[1] = c[0] ^ c[3];
- g[2] = c[1];
- g[3] = c[2];
-
- for (i = 0; i < 4; i++)
- ret_data = ((g[i] & 0x01) << i) | ret_data;
-
- return ret_data;
-}
-
-static u8 dp_audio_get_g1_value(u8 data)
-{
- u8 c[4];
- u8 g[4];
- u8 ret_data = 0;
- u8 i;
-
- for (i = 0; i < 4; i++)
- c[i] = (data >> i) & 0x01;
-
- g[0] = c[0] ^ c[3];
- g[1] = c[0] ^ c[1] ^ c[3];
- g[2] = c[1] ^ c[2];
- g[3] = c[2] ^ c[3];
-
- for (i = 0; i < 4; i++)
- ret_data = ((g[i] & 0x01) << i) | ret_data;
-
- return ret_data;
-}
-
-static u8 dp_audio_calculate_parity(u32 data)
-{
- u8 x0 = 0;
- u8 x1 = 0;
- u8 ci = 0;
- u8 iData = 0;
- u8 i = 0;
- u8 parity_byte;
- u8 num_byte = (data & 0xFF00) > 0 ? 8 : 2;
-
- for (i = 0; i < num_byte; i++) {
- iData = (data >> i*4) & 0xF;
-
- ci = iData ^ x1;
- x1 = x0 ^ dp_audio_get_g1_value(ci);
- x0 = dp_audio_get_g0_value(ci);
- }
-
- parity_byte = x1 | (x0 << 4);
-
- return parity_byte;
-}
-
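The deleted helpers above implemented the SDP header parity (a small BCH-style code fed 4-bit nibbles through two generator functions); the patch replaces this per-file copy with the shared `dp_header_get_parity()`, which is assumed here to compute the same value. A standalone transcription of the removed logic, useful for sanity-checking the shared helper:

```c
#include <assert.h>
#include <stdint.h>

/* Nibble generator tables, transcribed from the removed
 * dp_audio_get_g0_value()/dp_audio_get_g1_value(). */
static uint8_t get_g0(uint8_t d)
{
	uint8_t c0 = d & 1, c1 = (d >> 1) & 1, c2 = (d >> 2) & 1, c3 = (d >> 3) & 1;

	return (uint8_t)(c3 | ((c0 ^ c3) << 1) | (c1 << 2) | (c2 << 3));
}

static uint8_t get_g1(uint8_t d)
{
	uint8_t c0 = d & 1, c1 = (d >> 1) & 1, c2 = (d >> 2) & 1, c3 = (d >> 3) & 1;

	return (uint8_t)((c0 ^ c3) | ((c0 ^ c1 ^ c3) << 1) |
			 ((c1 ^ c2) << 2) | ((c2 ^ c3) << 3));
}

/* Equivalent of the removed dp_audio_calculate_parity(): feed the data
 * through the feedback registers one nibble at a time - two nibbles for
 * a single header byte, eight if anything is set in bits 8-15. */
static uint8_t calc_parity(uint32_t data)
{
	uint8_t x0 = 0, x1 = 0, ci;
	int i, nibbles = (data & 0xFF00) ? 8 : 2;

	for (i = 0; i < nibbles; i++) {
		ci = ((data >> (i * 4)) & 0xF) ^ x1;
		x1 = x0 ^ get_g1(ci);
		x0 = get_g0(ci);
	}
	return (uint8_t)(x1 | (x0 << 4));
}
```

For example, `calc_parity(0x02)` (the stream SDP header byte 1 written above) yields 0xCE with this logic.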
static u32 dp_audio_get_header(struct dp_catalog_audio *catalog,
enum dp_catalog_audio_sdp_type sdp,
enum dp_catalog_audio_header_type header)
@@ -144,9 +74,10 @@ static void dp_audio_stream_sdp(struct dp_audio_private *audio)
/* Config header and parity byte 1 */
value = dp_audio_get_header(catalog,
DP_AUDIO_SDP_STREAM, DP_AUDIO_SDP_HEADER_1);
+ value &= 0x0000ffff;
new_value = 0x02;
- parity_byte = dp_audio_calculate_parity(new_value);
+ parity_byte = dp_header_get_parity(new_value);
value |= ((new_value << HEADER_BYTE_1_BIT)
| (parity_byte << PARITY_BYTE_1_BIT));
pr_debug("Header Byte 1: value = 0x%x, parity_byte = 0x%x\n",
@@ -157,8 +88,9 @@ static void dp_audio_stream_sdp(struct dp_audio_private *audio)
/* Config header and parity byte 2 */
value = dp_audio_get_header(catalog,
DP_AUDIO_SDP_STREAM, DP_AUDIO_SDP_HEADER_2);
- new_value = value;
- parity_byte = dp_audio_calculate_parity(new_value);
+ value &= 0xffff0000;
+ new_value = 0x0;
+ parity_byte = dp_header_get_parity(new_value);
value |= ((new_value << HEADER_BYTE_2_BIT)
| (parity_byte << PARITY_BYTE_2_BIT));
pr_debug("Header Byte 2: value = 0x%x, parity_byte = 0x%x\n",
@@ -170,9 +102,10 @@ static void dp_audio_stream_sdp(struct dp_audio_private *audio)
/* Config header and parity byte 3 */
value = dp_audio_get_header(catalog,
DP_AUDIO_SDP_STREAM, DP_AUDIO_SDP_HEADER_3);
+ value &= 0x0000ffff;
new_value = audio->channels - 1;
- parity_byte = dp_audio_calculate_parity(new_value);
+ parity_byte = dp_header_get_parity(new_value);
value |= ((new_value << HEADER_BYTE_3_BIT)
| (parity_byte << PARITY_BYTE_3_BIT));
pr_debug("Header Byte 3: value = 0x%x, parity_byte = 0x%x\n",
@@ -191,9 +124,10 @@ static void dp_audio_timestamp_sdp(struct dp_audio_private *audio)
/* Config header and parity byte 1 */
value = dp_audio_get_header(catalog,
DP_AUDIO_SDP_TIMESTAMP, DP_AUDIO_SDP_HEADER_1);
+ value &= 0x0000ffff;
new_value = 0x1;
- parity_byte = dp_audio_calculate_parity(new_value);
+ parity_byte = dp_header_get_parity(new_value);
value |= ((new_value << HEADER_BYTE_1_BIT)
| (parity_byte << PARITY_BYTE_1_BIT));
pr_debug("Header Byte 1: value = 0x%x, parity_byte = 0x%x\n",
@@ -204,9 +138,10 @@ static void dp_audio_timestamp_sdp(struct dp_audio_private *audio)
/* Config header and parity byte 2 */
value = dp_audio_get_header(catalog,
DP_AUDIO_SDP_TIMESTAMP, DP_AUDIO_SDP_HEADER_2);
+ value &= 0xffff0000;
new_value = 0x17;
- parity_byte = dp_audio_calculate_parity(new_value);
+ parity_byte = dp_header_get_parity(new_value);
value |= ((new_value << HEADER_BYTE_2_BIT)
| (parity_byte << PARITY_BYTE_2_BIT));
pr_debug("Header Byte 2: value = 0x%x, parity_byte = 0x%x\n",
@@ -217,9 +152,10 @@ static void dp_audio_timestamp_sdp(struct dp_audio_private *audio)
/* Config header and parity byte 3 */
value = dp_audio_get_header(catalog,
DP_AUDIO_SDP_TIMESTAMP, DP_AUDIO_SDP_HEADER_3);
+ value &= 0x0000ffff;
new_value = (0x0 | (0x11 << 2));
- parity_byte = dp_audio_calculate_parity(new_value);
+ parity_byte = dp_header_get_parity(new_value);
value |= ((new_value << HEADER_BYTE_3_BIT)
| (parity_byte << PARITY_BYTE_3_BIT));
pr_debug("Header Byte 3: value = 0x%x, parity_byte = 0x%x\n",
@@ -237,9 +173,10 @@ static void dp_audio_infoframe_sdp(struct dp_audio_private *audio)
/* Config header and parity byte 1 */
value = dp_audio_get_header(catalog,
DP_AUDIO_SDP_INFOFRAME, DP_AUDIO_SDP_HEADER_1);
+ value &= 0x0000ffff;
new_value = 0x84;
- parity_byte = dp_audio_calculate_parity(new_value);
+ parity_byte = dp_header_get_parity(new_value);
value |= ((new_value << HEADER_BYTE_1_BIT)
| (parity_byte << PARITY_BYTE_1_BIT));
pr_debug("Header Byte 1: value = 0x%x, parity_byte = 0x%x\n",
@@ -250,9 +187,10 @@ static void dp_audio_infoframe_sdp(struct dp_audio_private *audio)
/* Config header and parity byte 2 */
value = dp_audio_get_header(catalog,
DP_AUDIO_SDP_INFOFRAME, DP_AUDIO_SDP_HEADER_2);
+ value &= 0xffff0000;
new_value = 0x1b;
- parity_byte = dp_audio_calculate_parity(new_value);
+ parity_byte = dp_header_get_parity(new_value);
value |= ((new_value << HEADER_BYTE_2_BIT)
| (parity_byte << PARITY_BYTE_2_BIT));
pr_debug("Header Byte 2: value = 0x%x, parity_byte = 0x%x\n",
@@ -263,9 +201,10 @@ static void dp_audio_infoframe_sdp(struct dp_audio_private *audio)
/* Config header and parity byte 3 */
value = dp_audio_get_header(catalog,
DP_AUDIO_SDP_INFOFRAME, DP_AUDIO_SDP_HEADER_3);
+ value &= 0x0000ffff;
new_value = (0x0 | (0x11 << 2));
- parity_byte = dp_audio_calculate_parity(new_value);
+ parity_byte = dp_header_get_parity(new_value);
value |= ((new_value << HEADER_BYTE_3_BIT)
| (parity_byte << PARITY_BYTE_3_BIT));
pr_debug("Header Byte 3: value = 0x%x, parity_byte = 0x%x\n",
@@ -283,9 +222,10 @@ static void dp_audio_copy_management_sdp(struct dp_audio_private *audio)
/* Config header and parity byte 1 */
value = dp_audio_get_header(catalog,
DP_AUDIO_SDP_COPYMANAGEMENT, DP_AUDIO_SDP_HEADER_1);
+ value &= 0x0000ffff;
new_value = 0x05;
- parity_byte = dp_audio_calculate_parity(new_value);
+ parity_byte = dp_header_get_parity(new_value);
value |= ((new_value << HEADER_BYTE_1_BIT)
| (parity_byte << PARITY_BYTE_1_BIT));
pr_debug("Header Byte 1: value = 0x%x, parity_byte = 0x%x\n",
@@ -296,9 +236,10 @@ static void dp_audio_copy_management_sdp(struct dp_audio_private *audio)
/* Config header and parity byte 2 */
value = dp_audio_get_header(catalog,
DP_AUDIO_SDP_COPYMANAGEMENT, DP_AUDIO_SDP_HEADER_2);
+ value &= 0xffff0000;
new_value = 0x0F;
- parity_byte = dp_audio_calculate_parity(new_value);
+ parity_byte = dp_header_get_parity(new_value);
value |= ((new_value << HEADER_BYTE_2_BIT)
| (parity_byte << PARITY_BYTE_2_BIT));
pr_debug("Header Byte 2: value = 0x%x, parity_byte = 0x%x\n",
@@ -309,9 +250,10 @@ static void dp_audio_copy_management_sdp(struct dp_audio_private *audio)
/* Config header and parity byte 3 */
value = dp_audio_get_header(catalog,
DP_AUDIO_SDP_COPYMANAGEMENT, DP_AUDIO_SDP_HEADER_3);
+ value &= 0x0000ffff;
new_value = 0x0;
- parity_byte = dp_audio_calculate_parity(new_value);
+ parity_byte = dp_header_get_parity(new_value);
value |= ((new_value << HEADER_BYTE_3_BIT)
| (parity_byte << PARITY_BYTE_3_BIT));
pr_debug("Header Byte 3: value = 0x%x, parity_byte = 0x%x\n",
@@ -329,9 +271,10 @@ static void dp_audio_isrc_sdp(struct dp_audio_private *audio)
/* Config header and parity byte 1 */
value = dp_audio_get_header(catalog,
DP_AUDIO_SDP_ISRC, DP_AUDIO_SDP_HEADER_1);
+ value &= 0x0000ffff;
new_value = 0x06;
- parity_byte = dp_audio_calculate_parity(new_value);
+ parity_byte = dp_header_get_parity(new_value);
value |= ((new_value << HEADER_BYTE_1_BIT)
| (parity_byte << PARITY_BYTE_1_BIT));
pr_debug("Header Byte 1: value = 0x%x, parity_byte = 0x%x\n",
@@ -342,9 +285,10 @@ static void dp_audio_isrc_sdp(struct dp_audio_private *audio)
/* Config header and parity byte 2 */
value = dp_audio_get_header(catalog,
DP_AUDIO_SDP_ISRC, DP_AUDIO_SDP_HEADER_2);
+ value &= 0xffff0000;
new_value = 0x0F;
- parity_byte = dp_audio_calculate_parity(new_value);
+ parity_byte = dp_header_get_parity(new_value);
value |= ((new_value << HEADER_BYTE_2_BIT)
| (parity_byte << PARITY_BYTE_2_BIT));
pr_debug("Header Byte 2: value = 0x%x, parity_byte = 0x%x\n",
@@ -465,12 +409,16 @@ static int dp_audio_info_setup(struct platform_device *pdev,
goto end;
}
+ mutex_lock(&audio->dp_audio.ops_lock);
+
audio->channels = params->num_of_channels;
dp_audio_setup_sdp(audio);
dp_audio_setup_acr(audio);
dp_audio_safe_to_exit_level(audio);
dp_audio_enable(audio, true);
+
+ mutex_unlock(&audio->dp_audio.ops_lock);
end:
return rc;
}
@@ -545,7 +493,9 @@ static void dp_audio_teardown_done(struct platform_device *pdev)
if (IS_ERR(audio))
return;
+ mutex_lock(&audio->dp_audio.ops_lock);
dp_audio_enable(audio, false);
+ mutex_unlock(&audio->dp_audio.ops_lock);
complete_all(&audio->hpd_comp);
@@ -585,6 +535,24 @@ static int dp_audio_ack_done(struct platform_device *pdev, u32 ack)
return rc;
}
+static int dp_audio_codec_ready(struct platform_device *pdev)
+{
+ int rc = 0;
+ struct dp_audio_private *audio;
+
+ audio = dp_audio_get_data(pdev);
+ if (IS_ERR(audio)) {
+ pr_err("invalid input\n");
+ rc = PTR_ERR(audio);
+ goto end;
+ }
+
+ queue_delayed_work(audio->notify_workqueue,
+ &audio->notify_delayed_work, HZ/4);
+end:
+ return rc;
+}
+
static int dp_audio_init_ext_disp(struct dp_audio_private *audio)
{
int rc = 0;
@@ -606,6 +574,7 @@ static int dp_audio_init_ext_disp(struct dp_audio_private *audio)
ops->get_intf_id = dp_audio_get_intf_id;
ops->teardown_done = dp_audio_teardown_done;
ops->acknowledge = dp_audio_ack_done;
+ ops->ready = dp_audio_codec_ready;
if (!audio->pdev->dev.of_node) {
pr_err("cannot find audio dev.of_node\n");
@@ -637,6 +606,31 @@ static int dp_audio_init_ext_disp(struct dp_audio_private *audio)
return rc;
}
+static int dp_audio_notify(struct dp_audio_private *audio, u32 state)
+{
+ int rc = 0;
+ struct msm_ext_disp_init_data *ext = &audio->ext_audio_data;
+
+ rc = ext->intf_ops.audio_notify(audio->ext_pdev,
+ EXT_DISPLAY_TYPE_DP, state);
+ if (rc) {
+ pr_err("failed to notify audio. state=%d err=%d\n", state, rc);
+ goto end;
+ }
+
+ reinit_completion(&audio->hpd_comp);
+ rc = wait_for_completion_timeout(&audio->hpd_comp, HZ * 5);
+ if (!rc) {
+ pr_err("timeout. state=%d err=%d\n", state, rc);
+ rc = -ETIMEDOUT;
+ goto end;
+ }
+
+ pr_debug("success\n");
+end:
+ return rc;
+}
+
static int dp_audio_on(struct dp_audio *dp_audio)
{
int rc = 0;
@@ -645,11 +639,14 @@ static int dp_audio_on(struct dp_audio *dp_audio)
if (!dp_audio) {
pr_err("invalid input\n");
- rc = -EINVAL;
- goto end;
+ return -EINVAL;
}
audio = container_of(dp_audio, struct dp_audio_private, dp_audio);
+ if (IS_ERR(audio)) {
+ pr_err("invalid input\n");
+ return -EINVAL;
+ }
ext = &audio->ext_audio_data;
@@ -663,21 +660,9 @@ static int dp_audio_on(struct dp_audio *dp_audio)
goto end;
}
- rc = ext->intf_ops.audio_notify(audio->ext_pdev,
- EXT_DISPLAY_TYPE_DP,
- EXT_DISPLAY_CABLE_CONNECT);
- if (rc) {
- pr_err("failed to notify audio, err=%d\n", rc);
+ rc = dp_audio_notify(audio, EXT_DISPLAY_CABLE_CONNECT);
+ if (rc)
goto end;
- }
-
- reinit_completion(&audio->hpd_comp);
- rc = wait_for_completion_timeout(&audio->hpd_comp, HZ * 5);
- if (!rc) {
- pr_err("timeout\n");
- rc = -ETIMEDOUT;
- goto end;
- }
pr_debug("success\n");
end:
@@ -689,6 +674,7 @@ static int dp_audio_off(struct dp_audio *dp_audio)
int rc = 0;
struct dp_audio_private *audio;
struct msm_ext_disp_init_data *ext;
+ bool work_pending = false;
if (!dp_audio) {
pr_err("invalid input\n");
@@ -698,21 +684,13 @@ static int dp_audio_off(struct dp_audio *dp_audio)
audio = container_of(dp_audio, struct dp_audio_private, dp_audio);
ext = &audio->ext_audio_data;
- rc = ext->intf_ops.audio_notify(audio->ext_pdev,
- EXT_DISPLAY_TYPE_DP,
- EXT_DISPLAY_CABLE_DISCONNECT);
- if (rc) {
- pr_err("failed to notify audio, err=%d\n", rc);
- goto end;
- }
+ work_pending = cancel_delayed_work_sync(&audio->notify_delayed_work);
+ if (work_pending)
+ pr_debug("pending notification work completed\n");
- reinit_completion(&audio->hpd_comp);
- rc = wait_for_completion_timeout(&audio->hpd_comp, HZ * 5);
- if (!rc) {
- pr_err("timeout\n");
- rc = -ETIMEDOUT;
+ rc = dp_audio_notify(audio, EXT_DISPLAY_CABLE_DISCONNECT);
+ if (rc)
goto end;
- }
pr_debug("success\n");
end:
@@ -728,6 +706,35 @@ static int dp_audio_off(struct dp_audio *dp_audio)
return rc;
}
+static void dp_audio_notify_work_fn(struct work_struct *work)
+{
+ struct dp_audio_private *audio;
+ struct delayed_work *dw = to_delayed_work(work);
+
+ audio = container_of(dw, struct dp_audio_private, notify_delayed_work);
+
+ dp_audio_notify(audio, EXT_DISPLAY_CABLE_CONNECT);
+}
+
+static int dp_audio_create_notify_workqueue(struct dp_audio_private *audio)
+{
+ audio->notify_workqueue = create_workqueue("sdm_dp_audio_notify");
+ if (IS_ERR_OR_NULL(audio->notify_workqueue)) {
+ pr_err("Error creating notify_workqueue\n");
+ return -EPERM;
+ }
+
+ INIT_DELAYED_WORK(&audio->notify_delayed_work, dp_audio_notify_work_fn);
+
+ return 0;
+}
+
+static void dp_audio_destroy_notify_workqueue(struct dp_audio_private *audio)
+{
+ if (audio->notify_workqueue)
+ destroy_workqueue(audio->notify_workqueue);
+}
+
struct dp_audio *dp_audio_get(struct platform_device *pdev,
struct dp_panel *panel,
struct dp_catalog_audio *catalog)
@@ -748,6 +755,10 @@ struct dp_audio *dp_audio_get(struct platform_device *pdev,
goto error;
}
+ rc = dp_audio_create_notify_workqueue(audio);
+ if (rc)
+ goto error_notify_workqueue;
+
init_completion(&audio->hpd_comp);
audio->pdev = pdev;
@@ -756,18 +767,23 @@ struct dp_audio *dp_audio_get(struct platform_device *pdev,
dp_audio = &audio->dp_audio;
+ mutex_init(&dp_audio->ops_lock);
+
dp_audio->on = dp_audio_on;
dp_audio->off = dp_audio_off;
rc = dp_audio_init_ext_disp(audio);
if (rc) {
- devm_kfree(&pdev->dev, audio);
- goto error;
+ goto error_ext_disp;
}
catalog->init(catalog);
return dp_audio;
+error_ext_disp:
+ dp_audio_destroy_notify_workqueue(audio);
+error_notify_workqueue:
+ devm_kfree(&pdev->dev, audio);
error:
return ERR_PTR(rc);
}
@@ -780,6 +796,9 @@ void dp_audio_put(struct dp_audio *dp_audio)
return;
audio = container_of(dp_audio, struct dp_audio_private, dp_audio);
+ mutex_destroy(&dp_audio->ops_lock);
+
+ dp_audio_destroy_notify_workqueue(audio);
devm_kfree(&audio->pdev->dev, audio);
}
diff --git a/drivers/gpu/drm/msm/dp/dp_audio.h b/drivers/gpu/drm/msm/dp/dp_audio.h
index d6e6b74..807444b 100644
--- a/drivers/gpu/drm/msm/dp/dp_audio.h
+++ b/drivers/gpu/drm/msm/dp/dp_audio.h
@@ -29,6 +29,8 @@ struct dp_audio {
u32 lane_count;
u32 bw_code;
+ struct mutex ops_lock;
+
/**
* on()
*
diff --git a/drivers/gpu/drm/msm/dp/dp_aux.c b/drivers/gpu/drm/msm/dp/dp_aux.c
index acbaec4..2d76d13 100644
--- a/drivers/gpu/drm/msm/dp/dp_aux.c
+++ b/drivers/gpu/drm/msm/dp/dp_aux.c
@@ -42,6 +42,7 @@ struct dp_aux_private {
bool no_send_stop;
u32 offset;
u32 segment;
+ atomic_t aborted;
struct drm_dp_aux drm_aux;
};
@@ -279,6 +280,20 @@ static void dp_aux_reconfig(struct dp_aux *dp_aux)
aux->catalog->reset(aux->catalog);
}
+static void dp_aux_abort_transaction(struct dp_aux *dp_aux)
+{
+ struct dp_aux_private *aux;
+
+ if (!dp_aux) {
+ pr_err("invalid input\n");
+ return;
+ }
+
+ aux = container_of(dp_aux, struct dp_aux_private, dp_aux);
+
+ atomic_set(&aux->aborted, 1);
+}
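The new abort flag lets the HPD/disconnect path cancel in-flight AUX work: once set, `dp_aux_transfer()` refuses new transfers and skips the retry bookkeeping for failures caused by the abort itself. A minimal C11 model of that handshake (the structs and return convention are illustrative; `-ETIMEDOUT` mirrors the value the patch returns):

```c
#include <assert.h>
#include <errno.h>
#include <stdatomic.h>

struct aux_ctx {
	atomic_int aborted;
	int retry_cnt;
};

/* Counterpart of dp_aux_abort_transaction(): may be called from another
 * context (e.g. on cable disconnect) while a transfer is in flight. */
static void aux_abort(struct aux_ctx *aux)
{
	atomic_store(&aux->aborted, 1);
}

/* Sketch of the transfer path: bail out early once aborted, and do not
 * count retries for failures that happen after the abort. hw_ret stands
 * in for the FIFO transfer result. */
static long aux_transfer(struct aux_ctx *aux, int hw_ret)
{
	if (atomic_load(&aux->aborted))
		return -ETIMEDOUT;

	if (hw_ret < 0 && !atomic_load(&aux->aborted))
		aux->retry_cnt++;
	return hw_ret;
}
```

The flag is re-armed on init (`dp_aux_init()` clears it) so a replug starts with a clean slate, matching the `atomic_set(&aux->aborted, 0)` in the hunk below.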
+
static void dp_aux_update_offset_and_segment(struct dp_aux_private *aux,
struct drm_dp_aux_msg *input_msg)
{
@@ -330,17 +345,19 @@ static void dp_aux_transfer_helper(struct dp_aux_private *aux,
aux->no_send_stop = true;
/*
- * Send the segment address for every i2c read in which the
- * middle-of-tranaction flag is set. This is required to support EDID
- * reads of more than 2 blocks as the segment address is reset to 0
+ * Send the segment address for i2c reads for segment > 0 and for which
+ * the middle-of-transaction flag is set. This is required to support
+ * EDID reads of more than 2 blocks as the segment address is reset to 0
* since we are overriding the middle-of-transaction flag for read
* transactions.
*/
- memset(&helper_msg, 0, sizeof(helper_msg));
- helper_msg.address = segment_address;
- helper_msg.buffer = &aux->segment;
- helper_msg.size = 1;
- dp_aux_cmd_fifo_tx(aux, &helper_msg);
+ if (aux->segment) {
+ memset(&helper_msg, 0, sizeof(helper_msg));
+ helper_msg.address = segment_address;
+ helper_msg.buffer = &aux->segment;
+ helper_msg.size = 1;
+ dp_aux_cmd_fifo_tx(aux, &helper_msg);
+ }
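The guard above skips the E-DDC segment-pointer write when the segment is 0: unconditionally writing segment 0 was what reset multi-block EDID reads. A small model of the segment bookkeeping (EDID packs two 128-byte blocks per segment; the helper names are hypothetical):

```c
#include <assert.h>
#include <stdbool.h>

#define EDID_BLOCK_SIZE 128

/* Each E-DDC segment holds two EDID blocks, so block N lives in
 * segment N / 2; blocks 0 and 1 live in segment 0. */
static unsigned edid_segment(unsigned block)
{
	return block / 2;
}

/* Mirrors the new check: only emit the segment-pointer transaction
 * (I2C address 0x30) when a non-zero segment is actually needed. */
static bool needs_segment_write(unsigned block)
{
	return edid_segment(block) > 0;
}
```

So a 2-block (256-byte) EDID never triggers the helper message, while blocks 2 and 3 of a 4-block EDID do, which is exactly the "more than 2 blocks" case the comment above describes.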
/*
* Send the offset address for every i2c read in which the
@@ -377,6 +394,11 @@ static ssize_t dp_aux_transfer(struct drm_dp_aux *drm_aux,
mutex_lock(&aux->mutex);
+ if (atomic_read(&aux->aborted)) {
+ ret = -ETIMEDOUT;
+ goto unlock_exit;
+ }
+
aux->native = msg->request & (DP_AUX_NATIVE_WRITE & DP_AUX_NATIVE_READ);
/* Ignore address only message */
@@ -411,7 +433,7 @@ static ssize_t dp_aux_transfer(struct drm_dp_aux *drm_aux,
}
ret = dp_aux_cmd_fifo_tx(aux, msg);
- if ((ret < 0) && aux->native) {
+ if ((ret < 0) && aux->native && !atomic_read(&aux->aborted)) {
aux->retry_cnt++;
if (!(aux->retry_cnt % retry_count))
aux->catalog->update_aux_cfg(aux->catalog,
@@ -467,6 +489,7 @@ static void dp_aux_init(struct dp_aux *dp_aux, struct dp_aux_cfg *aux_cfg)
aux->catalog->setup(aux->catalog, aux_cfg);
aux->catalog->reset(aux->catalog);
aux->catalog->enable(aux->catalog, true);
+ atomic_set(&aux->aborted, 0);
aux->retry_cnt = 0;
}
@@ -481,6 +504,7 @@ static void dp_aux_deinit(struct dp_aux *dp_aux)
aux = container_of(dp_aux, struct dp_aux_private, dp_aux);
+ atomic_set(&aux->aborted, 1);
aux->catalog->enable(aux->catalog, false);
}
@@ -558,6 +582,7 @@ struct dp_aux *dp_aux_get(struct device *dev, struct dp_catalog_aux *catalog,
dp_aux->drm_aux_register = dp_aux_register;
dp_aux->drm_aux_deregister = dp_aux_deregister;
dp_aux->reconfig = dp_aux_reconfig;
+ dp_aux->abort = dp_aux_abort_transaction;
return dp_aux;
error:
diff --git a/drivers/gpu/drm/msm/dp/dp_aux.h b/drivers/gpu/drm/msm/dp/dp_aux.h
index 85761ce..e8cb1cc 100644
--- a/drivers/gpu/drm/msm/dp/dp_aux.h
+++ b/drivers/gpu/drm/msm/dp/dp_aux.h
@@ -36,6 +36,7 @@ struct dp_aux {
void (*init)(struct dp_aux *aux, struct dp_aux_cfg *aux_cfg);
void (*deinit)(struct dp_aux *aux);
void (*reconfig)(struct dp_aux *aux);
+ void (*abort)(struct dp_aux *aux);
};
struct dp_aux *dp_aux_get(struct device *dev, struct dp_catalog_aux *catalog,
diff --git a/drivers/gpu/drm/msm/dp/dp_catalog.c b/drivers/gpu/drm/msm/dp/dp_catalog.c
index cf6fefa..cfb4436 100644
--- a/drivers/gpu/drm/msm/dp/dp_catalog.c
+++ b/drivers/gpu/drm/msm/dp/dp_catalog.c
@@ -84,7 +84,7 @@ static u32 dp_catalog_aux_read_data(struct dp_catalog_aux *aux)
}
dp_catalog_get_priv(aux);
- base = catalog->io->ctrl_io.base;
+ base = catalog->io->dp_aux.base;
return dp_read(base + DP_AUX_DATA);
end:
@@ -104,7 +104,7 @@ static int dp_catalog_aux_write_data(struct dp_catalog_aux *aux)
}
dp_catalog_get_priv(aux);
- base = catalog->io->ctrl_io.base;
+ base = catalog->io->dp_aux.base;
dp_write(base + DP_AUX_DATA, aux->data);
end:
@@ -124,7 +124,7 @@ static int dp_catalog_aux_write_trans(struct dp_catalog_aux *aux)
}
dp_catalog_get_priv(aux);
- base = catalog->io->ctrl_io.base;
+ base = catalog->io->dp_aux.base;
dp_write(base + DP_AUX_TRANS_CTRL, aux->data);
end:
@@ -145,7 +145,7 @@ static int dp_catalog_aux_clear_trans(struct dp_catalog_aux *aux, bool read)
}
dp_catalog_get_priv(aux);
- base = catalog->io->ctrl_io.base;
+ base = catalog->io->dp_aux.base;
if (read) {
data = dp_read(base + DP_AUX_TRANS_CTRL);
@@ -195,7 +195,7 @@ static void dp_catalog_aux_reset(struct dp_catalog_aux *aux)
}
dp_catalog_get_priv(aux);
- base = catalog->io->ctrl_io.base;
+ base = catalog->io->dp_aux.base;
aux_ctrl = dp_read(base + DP_AUX_CTRL);
@@ -220,7 +220,7 @@ static void dp_catalog_aux_enable(struct dp_catalog_aux *aux, bool enable)
}
dp_catalog_get_priv(aux);
- base = catalog->io->ctrl_io.base;
+ base = catalog->io->dp_aux.base;
aux_ctrl = dp_read(base + DP_AUX_CTRL);
@@ -297,7 +297,7 @@ static void dp_catalog_aux_get_irq(struct dp_catalog_aux *aux, bool cmd_busy)
{
u32 ack;
struct dp_catalog_private *catalog;
- void __iomem *base;
+ void __iomem *ahb_base;
if (!aux) {
pr_err("invalid input\n");
@@ -305,14 +305,14 @@ static void dp_catalog_aux_get_irq(struct dp_catalog_aux *aux, bool cmd_busy)
}
dp_catalog_get_priv(aux);
- base = catalog->io->ctrl_io.base;
+ ahb_base = catalog->io->dp_ahb.base;
- aux->isr = dp_read(base + DP_INTR_STATUS);
+ aux->isr = dp_read(ahb_base + DP_INTR_STATUS);
aux->isr &= ~DP_INTR_MASK1;
ack = aux->isr & DP_INTERRUPT_STATUS1;
ack <<= 1;
ack |= DP_INTR_MASK1;
- dp_write(base + DP_INTR_STATUS, ack);
+ dp_write(ahb_base + DP_INTR_STATUS, ack);
}
/* controller related catalog functions */
@@ -327,7 +327,7 @@ static u32 dp_catalog_ctrl_read_hdcp_status(struct dp_catalog_ctrl *ctrl)
}
dp_catalog_get_priv(ctrl);
- base = catalog->io->ctrl_io.base;
+ base = catalog->io->dp_ahb.base;
return dp_read(base + DP_HDCP_STATUS);
}
@@ -337,7 +337,8 @@ static void dp_catalog_panel_setup_infoframe_sdp(struct dp_catalog_panel *panel)
struct dp_catalog_private *catalog;
struct drm_msm_ext_hdr_metadata *hdr;
void __iomem *base;
- u32 header, data;
+ u32 header, parity, data;
+ u8 buf[SZ_128], off = 0;
if (!panel) {
pr_err("invalid input\n");
@@ -345,70 +346,106 @@ static void dp_catalog_panel_setup_infoframe_sdp(struct dp_catalog_panel *panel)
}
dp_catalog_get_priv(panel);
- base = catalog->io->ctrl_io.base;
hdr = &panel->hdr_data.hdr_meta;
+ base = catalog->io->dp_link.base;
- header = dp_read(base + MMSS_DP_VSCEXT_0);
- header |= panel->hdr_data.vscext_header_byte1;
- dp_write(base + MMSS_DP_VSCEXT_0, header);
+ /* HEADER BYTE 1 */
+ header = panel->hdr_data.vscext_header_byte1;
+ parity = dp_header_get_parity(header);
+ data = ((header << HEADER_BYTE_1_BIT)
+ | (parity << PARITY_BYTE_1_BIT));
+ dp_write(base + MMSS_DP_VSCEXT_0, data);
+ memcpy(buf + off, &data, sizeof(data));
+ off += sizeof(data);
- header = dp_read(base + MMSS_DP_VSCEXT_1);
- header |= panel->hdr_data.vscext_header_byte2;
- dp_write(base + MMSS_DP_VSCEXT_1, header);
+ /* HEADER BYTE 2 */
+ header = panel->hdr_data.vscext_header_byte2;
+ parity = dp_header_get_parity(header);
+ data = ((header << HEADER_BYTE_2_BIT)
+ | (parity << PARITY_BYTE_2_BIT));
+ dp_write(base + MMSS_DP_VSCEXT_1, data);
- header = dp_read(base + MMSS_DP_VSCEXT_1);
- header |= panel->hdr_data.vscext_header_byte3;
- dp_write(base + MMSS_DP_VSCEXT_1, header);
+ /* HEADER BYTE 3 */
+ header = panel->hdr_data.vscext_header_byte3;
+ parity = dp_header_get_parity(header);
+ data = ((header << HEADER_BYTE_3_BIT)
+ | (parity << PARITY_BYTE_3_BIT));
+ data |= dp_read(base + MMSS_DP_VSCEXT_1);
+ dp_write(base + MMSS_DP_VSCEXT_1, data);
+ memcpy(buf + off, &data, sizeof(data));
+ off += sizeof(data);
- header = panel->hdr_data.version;
- header |= panel->hdr_data.length << 8;
- header |= hdr->eotf << 16;
- dp_write(base + MMSS_DP_VSCEXT_2, header);
+ data = panel->hdr_data.version;
+ data |= panel->hdr_data.length << 8;
+ data |= hdr->eotf << 16;
+ dp_write(base + MMSS_DP_VSCEXT_2, data);
+ memcpy(buf + off, &data, sizeof(data));
+ off += sizeof(data);
data = (DP_GET_LSB(hdr->display_primaries_x[0]) |
(DP_GET_MSB(hdr->display_primaries_x[0]) << 8) |
(DP_GET_LSB(hdr->display_primaries_y[0]) << 16) |
(DP_GET_MSB(hdr->display_primaries_y[0]) << 24));
dp_write(base + MMSS_DP_VSCEXT_3, data);
+ memcpy(buf + off, &data, sizeof(data));
+ off += sizeof(data);
data = (DP_GET_LSB(hdr->display_primaries_x[1]) |
(DP_GET_MSB(hdr->display_primaries_x[1]) << 8) |
(DP_GET_LSB(hdr->display_primaries_y[1]) << 16) |
(DP_GET_MSB(hdr->display_primaries_y[1]) << 24));
dp_write(base + MMSS_DP_VSCEXT_4, data);
+ memcpy(buf + off, &data, sizeof(data));
+ off += sizeof(data);
data = (DP_GET_LSB(hdr->display_primaries_x[2]) |
(DP_GET_MSB(hdr->display_primaries_x[2]) << 8) |
(DP_GET_LSB(hdr->display_primaries_y[2]) << 16) |
(DP_GET_MSB(hdr->display_primaries_y[2]) << 24));
dp_write(base + MMSS_DP_VSCEXT_5, data);
+ memcpy(buf + off, &data, sizeof(data));
+ off += sizeof(data);
data = (DP_GET_LSB(hdr->white_point_x) |
(DP_GET_MSB(hdr->white_point_x) << 8) |
(DP_GET_LSB(hdr->white_point_y) << 16) |
(DP_GET_MSB(hdr->white_point_y) << 24));
dp_write(base + MMSS_DP_VSCEXT_6, data);
+ memcpy(buf + off, &data, sizeof(data));
+ off += sizeof(data);
data = (DP_GET_LSB(hdr->max_luminance) |
(DP_GET_MSB(hdr->max_luminance) << 8) |
(DP_GET_LSB(hdr->min_luminance) << 16) |
(DP_GET_MSB(hdr->min_luminance) << 24));
dp_write(base + MMSS_DP_VSCEXT_7, data);
+ memcpy(buf + off, &data, sizeof(data));
+ off += sizeof(data);
data = (DP_GET_LSB(hdr->max_content_light_level) |
(DP_GET_MSB(hdr->max_content_light_level) << 8) |
(DP_GET_LSB(hdr->max_average_light_level) << 16) |
(DP_GET_MSB(hdr->max_average_light_level) << 24));
dp_write(base + MMSS_DP_VSCEXT_8, data);
+ memcpy(buf + off, &data, sizeof(data));
+ off += sizeof(data);
- dp_write(base + MMSS_DP_VSCEXT_9, 0x00);
+ data = 0;
+ dp_write(base + MMSS_DP_VSCEXT_9, data);
+ memcpy(buf + off, &data, sizeof(data));
+ off += sizeof(data);
+
+ print_hex_dump(KERN_DEBUG, "[drm-dp] VSCEXT: ",
+ DUMP_PREFIX_NONE, 16, 4, buf, off, false);
}
static void dp_catalog_panel_setup_vsc_sdp(struct dp_catalog_panel *panel)
{
struct dp_catalog_private *catalog;
void __iomem *base;
- u32 value;
+ u32 header, parity, data;
+ u8 bpc, off = 0;
+ u8 buf[SZ_128];
if (!panel) {
pr_err("invalid input\n");
@@ -416,95 +453,147 @@ static void dp_catalog_panel_setup_vsc_sdp(struct dp_catalog_panel *panel)
}
dp_catalog_get_priv(panel);
- base = catalog->io->ctrl_io.base;
+ base = catalog->io->dp_link.base;
- value = dp_read(base + MMSS_DP_GENERIC0_0);
- value |= panel->hdr_data.vsc_header_byte1;
- dp_write(base + MMSS_DP_GENERIC0_0, value);
+ /* HEADER BYTE 1 */
+ header = panel->hdr_data.vsc_header_byte1;
+ parity = dp_header_get_parity(header);
+ data = ((header << HEADER_BYTE_1_BIT)
+ | (parity << PARITY_BYTE_1_BIT));
+ dp_write(base + MMSS_DP_GENERIC0_0, data);
+ memcpy(buf + off, &data, sizeof(data));
+ off += sizeof(data);
- value = dp_read(base + MMSS_DP_GENERIC0_1);
- value |= panel->hdr_data.vsc_header_byte2;
- dp_write(base + MMSS_DP_GENERIC0_1, value);
+ /* HEADER BYTE 2 */
+ header = panel->hdr_data.vsc_header_byte2;
+ parity = dp_header_get_parity(header);
+ data = ((header << HEADER_BYTE_2_BIT)
+ | (parity << PARITY_BYTE_2_BIT));
+ dp_write(base + MMSS_DP_GENERIC0_1, data);
- value = dp_read(base + MMSS_DP_GENERIC0_1);
- value |= panel->hdr_data.vsc_header_byte3;
- dp_write(base + MMSS_DP_GENERIC0_1, value);
+ /* HEADER BYTE 3 */
+ header = panel->hdr_data.vsc_header_byte3;
+ parity = dp_header_get_parity(header);
+ data = ((header << HEADER_BYTE_3_BIT)
+ | (parity << PARITY_BYTE_3_BIT));
+ data |= dp_read(base + MMSS_DP_GENERIC0_1);
+ dp_write(base + MMSS_DP_GENERIC0_1, data);
+ memcpy(buf + off, &data, sizeof(data));
+ off += sizeof(data);
- dp_write(base + MMSS_DP_GENERIC0_2, 0x00);
- dp_write(base + MMSS_DP_GENERIC0_3, 0x00);
- dp_write(base + MMSS_DP_GENERIC0_4, 0x00);
- dp_write(base + MMSS_DP_GENERIC0_5, 0x00);
+ data = 0;
+ dp_write(base + MMSS_DP_GENERIC0_2, data);
+ memcpy(buf + off, &data, sizeof(data));
+ off += sizeof(data);
- value = (panel->hdr_data.colorimetry & 0xF) |
- ((panel->hdr_data.pixel_encoding & 0xF) << 4) |
- ((panel->hdr_data.bpc & 0x7) << 8) |
- ((panel->hdr_data.dynamic_range & 0x1) << 15) |
- ((panel->hdr_data.content_type & 0x7) << 16);
+ dp_write(base + MMSS_DP_GENERIC0_3, data);
+ memcpy(buf + off, &data, sizeof(data));
+ off += sizeof(data);
- dp_write(base + MMSS_DP_GENERIC0_6, value);
- dp_write(base + MMSS_DP_GENERIC0_7, 0x00);
- dp_write(base + MMSS_DP_GENERIC0_8, 0x00);
- dp_write(base + MMSS_DP_GENERIC0_9, 0x00);
-}
+ dp_write(base + MMSS_DP_GENERIC0_4, data);
+ memcpy(buf + off, &data, sizeof(data));
+ off += sizeof(data);
-static void dp_catalog_panel_config_hdr(struct dp_catalog_panel *panel)
-{
- struct dp_catalog_private *catalog;
- void __iomem *base;
- u32 cfg, cfg2;
-
- if (!panel) {
- pr_err("invalid input\n");
- return;
- }
-
- dp_catalog_get_priv(panel);
- base = catalog->io->ctrl_io.base;
-
- cfg = dp_read(base + MMSS_DP_SDP_CFG);
- /* VSCEXT_SDP_EN */
- cfg |= BIT(16);
-
- /* GEN0_SDP_EN */
- cfg |= BIT(17);
-
- dp_write(base + MMSS_DP_SDP_CFG, cfg);
-
- cfg2 = dp_read(base + MMSS_DP_SDP_CFG2);
- /* Generic0 SDP Payload is 19 bytes which is > 16, so Bit16 is 1 */
- cfg2 |= BIT(16);
- dp_write(base + MMSS_DP_SDP_CFG2, cfg2);
-
- dp_catalog_panel_setup_vsc_sdp(panel);
- dp_catalog_panel_setup_infoframe_sdp(panel);
-
- cfg = dp_read(base + DP_MISC1_MISC0);
- /* Indicates presence of VSC */
- cfg |= BIT(6) << 8;
-
- dp_write(base + DP_MISC1_MISC0, cfg);
-
- cfg = dp_read(base + DP_CONFIGURATION_CTRL);
- /* Send VSC */
- cfg |= BIT(7);
+ dp_write(base + MMSS_DP_GENERIC0_5, data);
+ memcpy(buf + off, &data, sizeof(data));
+ off += sizeof(data);
switch (panel->hdr_data.bpc) {
default:
case 10:
- cfg |= BIT(9);
+ bpc = BIT(1);
break;
case 8:
- cfg |= BIT(8);
+ bpc = BIT(0);
+ break;
+ case 6:
+ bpc = 0;
break;
}
- dp_write(base + DP_CONFIGURATION_CTRL, cfg);
+ data = (panel->hdr_data.colorimetry & 0xF) |
+ ((panel->hdr_data.pixel_encoding & 0xF) << 4) |
+ (bpc << 8) |
+ ((panel->hdr_data.dynamic_range & 0x1) << 15) |
+ ((panel->hdr_data.content_type & 0x7) << 16);
- cfg = dp_read(base + DP_COMPRESSION_MODE_CTRL);
+ dp_write(base + MMSS_DP_GENERIC0_6, data);
+ memcpy(buf + off, &data, sizeof(data));
+ off += sizeof(data);
- /* Trigger SDP values in registers */
- cfg |= BIT(8);
- dp_write(base + DP_COMPRESSION_MODE_CTRL, cfg);
+ data = 0;
+ dp_write(base + MMSS_DP_GENERIC0_7, data);
+ memcpy(buf + off, &data, sizeof(data));
+ off += sizeof(data);
+
+ dp_write(base + MMSS_DP_GENERIC0_8, data);
+ memcpy(buf + off, &data, sizeof(data));
+ off += sizeof(data);
+
+ dp_write(base + MMSS_DP_GENERIC0_9, data);
+ memcpy(buf + off, &data, sizeof(data));
+ off += sizeof(data);
+
+ print_hex_dump(KERN_DEBUG, "[drm-dp] VSC: ",
+ DUMP_PREFIX_NONE, 16, 4, buf, off, false);
+}
+
+static void dp_catalog_panel_config_hdr(struct dp_catalog_panel *panel, bool en)
+{
+ struct dp_catalog_private *catalog;
+ void __iomem *base;
+ u32 cfg, cfg2, misc;
+
+ if (!panel) {
+ pr_err("invalid input\n");
+ return;
+ }
+
+ dp_catalog_get_priv(panel);
+ base = catalog->io->dp_link.base;
+
+ cfg = dp_read(base + MMSS_DP_SDP_CFG);
+ cfg2 = dp_read(base + MMSS_DP_SDP_CFG2);
+ misc = dp_read(base + DP_MISC1_MISC0);
+
+ if (en) {
+ /* VSCEXT_SDP_EN, GEN0_SDP_EN */
+ cfg |= BIT(16) | BIT(17);
+ dp_write(base + MMSS_DP_SDP_CFG, cfg);
+
+ /* EXTN_SDPSIZE GENERIC0_SDPSIZE */
+ cfg2 |= BIT(15) | BIT(16);
+ dp_write(base + MMSS_DP_SDP_CFG2, cfg2);
+
+ dp_catalog_panel_setup_vsc_sdp(panel);
+ dp_catalog_panel_setup_infoframe_sdp(panel);
+
+ /* indicates presence of VSC (BIT(6) of MISC1) */
+ misc |= BIT(14);
+
+ if (panel->hdr_data.hdr_meta.eotf)
+ pr_debug("Enabled\n");
+ else
+ pr_debug("Reset\n");
+ } else {
+ /* VSCEXT_SDP_EN, GEN0_SDP_EN */
+ cfg &= ~BIT(16) & ~BIT(17);
+ dp_write(base + MMSS_DP_SDP_CFG, cfg);
+
+ /* EXTN_SDPSIZE GENERIC0_SDPSIZE */
+ cfg2 &= ~BIT(15) & ~BIT(16);
+ dp_write(base + MMSS_DP_SDP_CFG2, cfg2);
+
+ /* switch back to MSA */
+ misc &= ~BIT(14);
+
+ pr_debug("Disabled\n");
+ }
+
+ dp_write(base + DP_MISC1_MISC0, misc);
+
+	/* trigger SDP update by toggling MMSS_DP_SDP_CFG3 bit 0 */
+	dp_write(base + MMSS_DP_SDP_CFG3, 0x01);
+	dp_write(base + MMSS_DP_SDP_CFG3, 0x00);
}
static void dp_catalog_ctrl_update_transfer_unit(struct dp_catalog_ctrl *ctrl)
@@ -518,7 +607,7 @@ static void dp_catalog_ctrl_update_transfer_unit(struct dp_catalog_ctrl *ctrl)
}
dp_catalog_get_priv(ctrl);
- base = catalog->io->ctrl_io.base;
+ base = catalog->io->dp_link.base;
dp_write(base + DP_VALID_BOUNDARY, ctrl->valid_boundary);
dp_write(base + DP_TU, ctrl->dp_tu);
@@ -536,7 +625,7 @@ static void dp_catalog_ctrl_state_ctrl(struct dp_catalog_ctrl *ctrl, u32 state)
}
dp_catalog_get_priv(ctrl);
- base = catalog->io->ctrl_io.base;
+ base = catalog->io->dp_link.base;
dp_write(base + DP_STATE_CTRL, state);
}
@@ -544,7 +633,7 @@ static void dp_catalog_ctrl_state_ctrl(struct dp_catalog_ctrl *ctrl, u32 state)
static void dp_catalog_ctrl_config_ctrl(struct dp_catalog_ctrl *ctrl, u32 cfg)
{
struct dp_catalog_private *catalog;
- void __iomem *base;
+ void __iomem *link_base;
if (!ctrl) {
pr_err("invalid input\n");
@@ -552,11 +641,11 @@ static void dp_catalog_ctrl_config_ctrl(struct dp_catalog_ctrl *ctrl, u32 cfg)
}
dp_catalog_get_priv(ctrl);
- base = catalog->io->ctrl_io.base;
+ link_base = catalog->io->dp_link.base;
pr_debug("DP_CONFIGURATION_CTRL=0x%x\n", cfg);
- dp_write(base + DP_CONFIGURATION_CTRL, cfg);
+ dp_write(link_base + DP_CONFIGURATION_CTRL, cfg);
}
static void dp_catalog_ctrl_lane_mapping(struct dp_catalog_ctrl *ctrl)
@@ -570,9 +659,9 @@ static void dp_catalog_ctrl_lane_mapping(struct dp_catalog_ctrl *ctrl)
}
dp_catalog_get_priv(ctrl);
- base = catalog->io->ctrl_io.base;
+ base = catalog->io->dp_link.base;
- dp_write(base + DP_LOGICAL2PHYSCIAL_LANE_MAPPING, 0xe4);
+ dp_write(base + DP_LOGICAL2PHYSICAL_LANE_MAPPING, 0xe4);
}
static void dp_catalog_ctrl_mainlink_ctrl(struct dp_catalog_ctrl *ctrl,
@@ -588,7 +677,7 @@ static void dp_catalog_ctrl_mainlink_ctrl(struct dp_catalog_ctrl *ctrl,
}
dp_catalog_get_priv(ctrl);
- base = catalog->io->ctrl_io.base;
+ base = catalog->io->dp_link.base;
if (enable) {
dp_write(base + DP_MAINLINK_CTRL, 0x02000000);
@@ -619,7 +708,7 @@ static void dp_catalog_ctrl_config_misc(struct dp_catalog_ctrl *ctrl,
}
dp_catalog_get_priv(ctrl);
- base = catalog->io->ctrl_io.base;
+ base = catalog->io->dp_link.base;
misc_val |= (tb << 5);
misc_val |= BIT(0); /* Configure clock to synchronous mode */
@@ -685,7 +774,7 @@ static void dp_catalog_ctrl_config_msa(struct dp_catalog_ctrl *ctrl,
nvid *= 3;
}
- base_ctrl = catalog->io->ctrl_io.base;
+ base_ctrl = catalog->io->dp_link.base;
pr_debug("mvid=0x%x, nvid=0x%x\n", mvid, nvid);
dp_write(base_ctrl + DP_SOFTWARE_MVID, mvid);
dp_write(base_ctrl + DP_SOFTWARE_NVID, nvid);
@@ -705,7 +794,7 @@ static void dp_catalog_ctrl_set_pattern(struct dp_catalog_ctrl *ctrl,
}
dp_catalog_get_priv(ctrl);
- base = catalog->io->ctrl_io.base;
+ base = catalog->io->dp_link.base;
bit = 1;
bit <<= (pattern - 1);
@@ -759,7 +848,57 @@ static void dp_catalog_ctrl_usb_reset(struct dp_catalog_ctrl *ctrl, bool flip)
dp_write(base + USB3_DP_COM_RESET_OVRD_CTRL, 0x00);
/* make sure phy is brought out of reset */
wmb();
+}
+static void dp_catalog_panel_tpg_cfg(struct dp_catalog_panel *panel,
+ bool enable)
+{
+ struct dp_catalog_private *catalog;
+ void __iomem *base;
+
+ if (!panel) {
+ pr_err("invalid input\n");
+ return;
+ }
+
+ dp_catalog_get_priv(panel);
+ base = catalog->io->dp_p0.base;
+
+ if (!enable) {
+ dp_write(base + MMSS_DP_TPG_MAIN_CONTROL, 0x0);
+ dp_write(base + MMSS_DP_BIST_ENABLE, 0x0);
+ dp_write(base + MMSS_DP_TIMING_ENGINE_EN, 0x0);
+ wmb(); /* ensure Timing generator is turned off */
+ return;
+ }
+
+ dp_write(base + MMSS_DP_INTF_CONFIG, 0x0);
+ dp_write(base + MMSS_DP_INTF_HSYNC_CTL, panel->hsync_ctl);
+ dp_write(base + MMSS_DP_INTF_VSYNC_PERIOD_F0, panel->vsync_period *
+ panel->hsync_period);
+ dp_write(base + MMSS_DP_INTF_VSYNC_PULSE_WIDTH_F0, panel->v_sync_width *
+ panel->hsync_period);
+ dp_write(base + MMSS_DP_INTF_VSYNC_PERIOD_F1, 0);
+ dp_write(base + MMSS_DP_INTF_VSYNC_PULSE_WIDTH_F1, 0);
+ dp_write(base + MMSS_DP_INTF_DISPLAY_HCTL, panel->display_hctl);
+ dp_write(base + MMSS_DP_INTF_ACTIVE_HCTL, 0);
+ dp_write(base + MMSS_INTF_DISPLAY_V_START_F0, panel->display_v_start);
+ dp_write(base + MMSS_DP_INTF_DISPLAY_V_END_F0, panel->display_v_end);
+ dp_write(base + MMSS_INTF_DISPLAY_V_START_F1, 0);
+ dp_write(base + MMSS_DP_INTF_DISPLAY_V_END_F1, 0);
+ dp_write(base + MMSS_DP_INTF_ACTIVE_V_START_F0, 0);
+ dp_write(base + MMSS_DP_INTF_ACTIVE_V_END_F0, 0);
+ dp_write(base + MMSS_DP_INTF_ACTIVE_V_START_F1, 0);
+ dp_write(base + MMSS_DP_INTF_ACTIVE_V_END_F1, 0);
+ dp_write(base + MMSS_DP_INTF_POLARITY_CTL, 0);
+ wmb(); /* ensure TPG registers are programmed */
+
+ dp_write(base + MMSS_DP_TPG_MAIN_CONTROL, 0x100);
+ dp_write(base + MMSS_DP_TPG_VIDEO_CONFIG, 0x5);
+ wmb(); /* ensure TPG config is programmed */
+ dp_write(base + MMSS_DP_BIST_ENABLE, 0x1);
+ dp_write(base + MMSS_DP_TIMING_ENGINE_EN, 0x1);
+ wmb(); /* ensure Timing generator is turned on */
}
static void dp_catalog_ctrl_reset(struct dp_catalog_ctrl *ctrl)
@@ -774,7 +913,7 @@ static void dp_catalog_ctrl_reset(struct dp_catalog_ctrl *ctrl)
}
dp_catalog_get_priv(ctrl);
- base = catalog->io->ctrl_io.base;
+ base = catalog->io->dp_ahb.base;
sw_reset = dp_read(base + DP_SW_RESET);
@@ -799,7 +938,7 @@ static bool dp_catalog_ctrl_mainlink_ready(struct dp_catalog_ctrl *ctrl)
}
dp_catalog_get_priv(ctrl);
- base = catalog->io->ctrl_io.base;
+ base = catalog->io->dp_link.base;
while (--cnt) {
/* DP_MAINLINK_READY */
@@ -826,7 +965,7 @@ static void dp_catalog_ctrl_enable_irq(struct dp_catalog_ctrl *ctrl,
}
dp_catalog_get_priv(ctrl);
- base = catalog->io->ctrl_io.base;
+ base = catalog->io->dp_ahb.base;
if (enable) {
dp_write(base + DP_INTR_STATUS, DP_INTR_MASK1);
@@ -848,7 +987,7 @@ static void dp_catalog_ctrl_hpd_config(struct dp_catalog_ctrl *ctrl, bool en)
}
dp_catalog_get_priv(ctrl);
- base = catalog->io->ctrl_io.base;
+ base = catalog->io->dp_aux.base;
if (en) {
u32 reftimer = dp_read(base + DP_DP_HPD_REFTIMER);
@@ -879,7 +1018,7 @@ static void dp_catalog_ctrl_get_interrupt(struct dp_catalog_ctrl *ctrl)
}
dp_catalog_get_priv(ctrl);
- base = catalog->io->ctrl_io.base;
+ base = catalog->io->dp_ahb.base;
ctrl->isr = dp_read(base + DP_INTR_STATUS2);
ctrl->isr &= ~DP_INTR_MASK2;
@@ -900,7 +1039,7 @@ static void dp_catalog_ctrl_phy_reset(struct dp_catalog_ctrl *ctrl)
}
dp_catalog_get_priv(ctrl);
- base = catalog->io->ctrl_io.base;
+ base = catalog->io->dp_ahb.base;
dp_write(base + DP_PHY_CTRL, 0x5); /* bit 0 & 2 */
usleep_range(1000, 1010); /* h/w recommended delay */
@@ -989,7 +1128,7 @@ static void dp_catalog_ctrl_send_phy_pattern(struct dp_catalog_ctrl *ctrl,
dp_catalog_get_priv(ctrl);
- base = catalog->io->ctrl_io.base;
+ base = catalog->io->dp_link.base;
dp_write(base + DP_STATE_CTRL, 0x0);
@@ -1017,7 +1156,7 @@ static void dp_catalog_ctrl_send_phy_pattern(struct dp_catalog_ctrl *ctrl,
/* 1111100000111110 */
dp_write(base + DP_TEST_80BIT_CUSTOM_PATTERN_REG2, 0x0000F83E);
break;
- case DP_TEST_PHY_PATTERN_HBR2_CTS_EYE_PATTERN:
+ case DP_TEST_PHY_PATTERN_CP2520_PATTERN_1:
value = BIT(16);
dp_write(base + DP_HBR2_COMPLIANCE_SCRAMBLER_RESET, value);
value |= 0xFC;
@@ -1025,6 +1164,10 @@ static void dp_catalog_ctrl_send_phy_pattern(struct dp_catalog_ctrl *ctrl,
dp_write(base + DP_MAINLINK_LEVELS, 0x2);
dp_write(base + DP_STATE_CTRL, 0x10);
break;
+ case DP_TEST_PHY_PATTERN_CP2520_PATTERN_3:
+ dp_write(base + DP_MAINLINK_CTRL, 0x11);
+ dp_write(base + DP_STATE_CTRL, 0x8);
+ break;
default:
pr_debug("No valid test pattern requested: 0x%x\n", pattern);
return;
@@ -1046,7 +1189,7 @@ static u32 dp_catalog_ctrl_read_phy_pattern(struct dp_catalog_ctrl *ctrl)
dp_catalog_get_priv(ctrl);
- base = catalog->io->ctrl_io.base;
+ base = catalog->io->dp_link.base;
return dp_read(base + DP_MAINLINK_READY);
}
@@ -1063,7 +1206,7 @@ static int dp_catalog_panel_timing_cfg(struct dp_catalog_panel *panel)
}
dp_catalog_get_priv(panel);
- base = catalog->io->ctrl_io.base;
+ base = catalog->io->dp_link.base;
dp_write(base + DP_TOTAL_HOR_VER, panel->total);
dp_write(base + DP_START_HOR_VER_FROM_SYNC, panel->sync_start);
@@ -1123,7 +1266,9 @@ static void dp_catalog_audio_config_sdp(struct dp_catalog_audio *audio)
return;
dp_catalog_get_priv(audio);
- base = catalog->io->ctrl_io.base;
+ base = catalog->io->dp_link.base;
+
+ sdp_cfg = dp_read(base + MMSS_DP_SDP_CFG);
/* AUDIO_TIMESTAMP_SDP_EN */
sdp_cfg |= BIT(1);
@@ -1162,7 +1307,7 @@ static void dp_catalog_audio_get_header(struct dp_catalog_audio *audio)
dp_catalog_get_priv(audio);
- base = catalog->io->ctrl_io.base;
+ base = catalog->io->dp_link.base;
sdp_map = catalog->audio_map;
sdp = audio->sdp_type;
header = audio->sdp_header;
@@ -1184,7 +1329,7 @@ static void dp_catalog_audio_set_header(struct dp_catalog_audio *audio)
dp_catalog_get_priv(audio);
- base = catalog->io->ctrl_io.base;
+ base = catalog->io->dp_link.base;
sdp_map = catalog->audio_map;
sdp = audio->sdp_type;
header = audio->sdp_header;
@@ -1202,7 +1347,7 @@ static void dp_catalog_audio_config_acr(struct dp_catalog_audio *audio)
dp_catalog_get_priv(audio);
select = audio->data;
- base = catalog->io->ctrl_io.base;
+ base = catalog->io->dp_link.base;
acr_ctrl = select << 4 | BIT(31) | BIT(8) | BIT(14);
@@ -1219,7 +1364,7 @@ static void dp_catalog_audio_safe_to_exit_level(struct dp_catalog_audio *audio)
dp_catalog_get_priv(audio);
- base = catalog->io->ctrl_io.base;
+ base = catalog->io->dp_link.base;
safe_to_exit_level = audio->data;
mainlink_levels = dp_read(base + DP_MAINLINK_LEVELS);
@@ -1241,7 +1386,7 @@ static void dp_catalog_audio_enable(struct dp_catalog_audio *audio)
dp_catalog_get_priv(audio);
- base = catalog->io->ctrl_io.base;
+ base = catalog->io->dp_link.base;
enable = !!audio->data;
audio_ctrl = dp_read(base + MMSS_DP_AUDIO_CFG);
@@ -1258,6 +1403,131 @@ static void dp_catalog_audio_enable(struct dp_catalog_audio *audio)
wmb();
}
+static void dp_catalog_config_spd_header(struct dp_catalog_panel *panel)
+{
+ struct dp_catalog_private *catalog;
+ void __iomem *base;
+ u32 value, new_value;
+ u8 parity_byte;
+
+ if (!panel)
+ return;
+
+ dp_catalog_get_priv(panel);
+ base = catalog->io->dp_link.base;
+
+ /* Config header and parity byte 1 */
+ value = dp_read(base + MMSS_DP_GENERIC1_0);
+
+ new_value = 0x83;
+ parity_byte = dp_header_get_parity(new_value);
+ value |= ((new_value << HEADER_BYTE_1_BIT)
+ | (parity_byte << PARITY_BYTE_1_BIT));
+ pr_debug("Header Byte 1: value = 0x%x, parity_byte = 0x%x\n",
+ value, parity_byte);
+ dp_write(base + MMSS_DP_GENERIC1_0, value);
+
+ /* Config header and parity byte 2 */
+ value = dp_read(base + MMSS_DP_GENERIC1_1);
+
+ new_value = 0x1b;
+ parity_byte = dp_header_get_parity(new_value);
+ value |= ((new_value << HEADER_BYTE_2_BIT)
+ | (parity_byte << PARITY_BYTE_2_BIT));
+ pr_debug("Header Byte 2: value = 0x%x, parity_byte = 0x%x\n",
+ value, parity_byte);
+ dp_write(base + MMSS_DP_GENERIC1_1, value);
+
+ /* Config header and parity byte 3 */
+ value = dp_read(base + MMSS_DP_GENERIC1_1);
+
+ new_value = (0x0 | (0x12 << 2));
+ parity_byte = dp_header_get_parity(new_value);
+ value |= ((new_value << HEADER_BYTE_3_BIT)
+ | (parity_byte << PARITY_BYTE_3_BIT));
+ pr_debug("Header Byte 3: value = 0x%x, parity_byte = 0x%x\n",
+ new_value, parity_byte);
+ dp_write(base + MMSS_DP_GENERIC1_1, value);
+}
+
+static void dp_catalog_panel_config_spd(struct dp_catalog_panel *panel)
+{
+ struct dp_catalog_private *catalog;
+ void __iomem *base;
+ u32 spd_cfg = 0, spd_cfg2 = 0;
+ u8 *vendor = NULL, *product = NULL;
+ /*
+ * Source Device Information
+ * 00h unknown
+ * 01h Digital STB
+ * 02h DVD
+ * 03h D-VHS
+ * 04h HDD Video
+ * 05h DVC
+ * 06h DSC
+ * 07h Video CD
+ * 08h Game
+ * 09h PC general
+ *	 0ah Blu-ray Disc
+ * 0bh Super Audio CD
+ * 0ch HD DVD
+ * 0dh PMP
+ * 0eh-ffh reserved
+ */
+ u32 device_type = 0;
+
+ if (!panel)
+ return;
+
+ dp_catalog_get_priv(panel);
+ base = catalog->io->dp_link.base;
+
+ dp_catalog_config_spd_header(panel);
+
+ vendor = panel->spd_vendor_name;
+ product = panel->spd_product_description;
+
+ dp_write(base + MMSS_DP_GENERIC1_2, ((vendor[0] & 0x7f) |
+ ((vendor[1] & 0x7f) << 8) |
+ ((vendor[2] & 0x7f) << 16) |
+ ((vendor[3] & 0x7f) << 24)));
+ dp_write(base + MMSS_DP_GENERIC1_3, ((vendor[4] & 0x7f) |
+ ((vendor[5] & 0x7f) << 8) |
+ ((vendor[6] & 0x7f) << 16) |
+ ((vendor[7] & 0x7f) << 24)));
+ dp_write(base + MMSS_DP_GENERIC1_4, ((product[0] & 0x7f) |
+ ((product[1] & 0x7f) << 8) |
+ ((product[2] & 0x7f) << 16) |
+ ((product[3] & 0x7f) << 24)));
+ dp_write(base + MMSS_DP_GENERIC1_5, ((product[4] & 0x7f) |
+ ((product[5] & 0x7f) << 8) |
+ ((product[6] & 0x7f) << 16) |
+ ((product[7] & 0x7f) << 24)));
+ dp_write(base + MMSS_DP_GENERIC1_6, ((product[8] & 0x7f) |
+ ((product[9] & 0x7f) << 8) |
+ ((product[10] & 0x7f) << 16) |
+ ((product[11] & 0x7f) << 24)));
+ dp_write(base + MMSS_DP_GENERIC1_7, ((product[12] & 0x7f) |
+ ((product[13] & 0x7f) << 8) |
+ ((product[14] & 0x7f) << 16) |
+ ((product[15] & 0x7f) << 24)));
+ dp_write(base + MMSS_DP_GENERIC1_8, device_type);
+ dp_write(base + MMSS_DP_GENERIC1_9, 0x00);
+
+ spd_cfg = dp_read(base + MMSS_DP_SDP_CFG);
+ /* GENERIC1_SDP for SPD Infoframe */
+ spd_cfg |= BIT(18);
+ dp_write(base + MMSS_DP_SDP_CFG, spd_cfg);
+
+ spd_cfg2 = dp_read(base + MMSS_DP_SDP_CFG2);
+ /* 28 data bytes for SPD Infoframe with GENERIC1 set */
+ spd_cfg2 |= BIT(17);
+ dp_write(base + MMSS_DP_SDP_CFG2, spd_cfg2);
+
+	/* trigger SDP update by toggling MMSS_DP_SDP_CFG3 bit 0 */
+	dp_write(base + MMSS_DP_SDP_CFG3, 0x1);
+	dp_write(base + MMSS_DP_SDP_CFG3, 0x0);
+}
+
struct dp_catalog *dp_catalog_get(struct device *dev, struct dp_io *io)
{
int rc = 0;
@@ -1309,6 +1579,8 @@ struct dp_catalog *dp_catalog_get(struct device *dev, struct dp_io *io)
struct dp_catalog_panel panel = {
.timing_cfg = dp_catalog_panel_timing_cfg,
.config_hdr = dp_catalog_panel_config_hdr,
+ .tpg_config = dp_catalog_panel_tpg_cfg,
+ .config_spd = dp_catalog_panel_config_spd,
};
if (!io) {
diff --git a/drivers/gpu/drm/msm/dp/dp_catalog.h b/drivers/gpu/drm/msm/dp/dp_catalog.h
index eff8028..d03be6a 100644
--- a/drivers/gpu/drm/msm/dp/dp_catalog.h
+++ b/drivers/gpu/drm/msm/dp/dp_catalog.h
@@ -37,6 +37,11 @@
#define DP_INTR_CRC_UPDATED BIT(9)
struct dp_catalog_hdr_data {
+ u32 ext_header_byte0;
+ u32 ext_header_byte1;
+ u32 ext_header_byte2;
+ u32 ext_header_byte3;
+
u32 vsc_header_byte0;
u32 vsc_header_byte1;
u32 vsc_header_byte2;
@@ -109,6 +114,13 @@ struct dp_catalog_ctrl {
u32 (*read_phy_pattern)(struct dp_catalog_ctrl *ctrl);
};
+#define HEADER_BYTE_2_BIT 0
+#define PARITY_BYTE_2_BIT 8
+#define HEADER_BYTE_1_BIT 16
+#define PARITY_BYTE_1_BIT 24
+#define HEADER_BYTE_3_BIT 16
+#define PARITY_BYTE_3_BIT 24
+
enum dp_catalog_audio_sdp_type {
DP_AUDIO_SDP_STREAM,
DP_AUDIO_SDP_TIMESTAMP,
@@ -144,11 +156,24 @@ struct dp_catalog_panel {
u32 sync_start;
u32 width_blanking;
u32 dp_active;
+ u8 *spd_vendor_name;
+ u8 *spd_product_description;
struct dp_catalog_hdr_data hdr_data;
+ /* TPG */
+ u32 hsync_period;
+ u32 vsync_period;
+ u32 display_v_start;
+ u32 display_v_end;
+ u32 v_sync_width;
+ u32 hsync_ctl;
+ u32 display_hctl;
+
int (*timing_cfg)(struct dp_catalog_panel *panel);
- void (*config_hdr)(struct dp_catalog_panel *panel);
+ void (*config_hdr)(struct dp_catalog_panel *panel, bool en);
+ void (*tpg_config)(struct dp_catalog_panel *panel, bool enable);
+ void (*config_spd)(struct dp_catalog_panel *panel);
};
struct dp_catalog {
@@ -158,6 +183,71 @@ struct dp_catalog {
struct dp_catalog_panel panel;
};
+static inline u8 dp_ecc_get_g0_value(u8 data)
+{
+ u8 c[4];
+ u8 g[4];
+ u8 ret_data = 0;
+ u8 i;
+
+ for (i = 0; i < 4; i++)
+ c[i] = (data >> i) & 0x01;
+
+ g[0] = c[3];
+ g[1] = c[0] ^ c[3];
+ g[2] = c[1];
+ g[3] = c[2];
+
+ for (i = 0; i < 4; i++)
+ ret_data = ((g[i] & 0x01) << i) | ret_data;
+
+ return ret_data;
+}
+
+static inline u8 dp_ecc_get_g1_value(u8 data)
+{
+ u8 c[4];
+ u8 g[4];
+ u8 ret_data = 0;
+ u8 i;
+
+ for (i = 0; i < 4; i++)
+ c[i] = (data >> i) & 0x01;
+
+ g[0] = c[0] ^ c[3];
+ g[1] = c[0] ^ c[1] ^ c[3];
+ g[2] = c[1] ^ c[2];
+ g[3] = c[2] ^ c[3];
+
+ for (i = 0; i < 4; i++)
+ ret_data = ((g[i] & 0x01) << i) | ret_data;
+
+ return ret_data;
+}
+
+static inline u8 dp_header_get_parity(u32 data)
+{
+ u8 x0 = 0;
+ u8 x1 = 0;
+ u8 ci = 0;
+ u8 iData = 0;
+ u8 i = 0;
+ u8 parity_byte;
+ u8 num_byte = (data & 0xFF00) > 0 ? 8 : 2;
+
+ for (i = 0; i < num_byte; i++) {
+ iData = (data >> i*4) & 0xF;
+
+ ci = iData ^ x1;
+ x1 = x0 ^ dp_ecc_get_g1_value(ci);
+ x0 = dp_ecc_get_g0_value(ci);
+ }
+
+ parity_byte = x1 | (x0 << 4);
+
+ return parity_byte;
+}
+
struct dp_catalog *dp_catalog_get(struct device *dev, struct dp_io *io);
void dp_catalog_put(struct dp_catalog *catalog);
diff --git a/drivers/gpu/drm/msm/dp/dp_ctrl.c b/drivers/gpu/drm/msm/dp/dp_ctrl.c
index debc0a5..006f723 100644
--- a/drivers/gpu/drm/msm/dp/dp_ctrl.c
+++ b/drivers/gpu/drm/msm/dp/dp_ctrl.c
@@ -40,6 +40,7 @@
#define MR_LINK_SYMBOL_ERM 0x80
#define MR_LINK_PRBS7 0x100
#define MR_LINK_CUSTOM80 0x200
+#define MR_LINK_TRAINING4 0x40
struct dp_vc_tu_mapping_table {
u32 vic;
@@ -760,18 +761,18 @@ static int dp_ctrl_update_sink_vx_px(struct dp_ctrl_private *ctrl,
return drm_dp_dpcd_write(ctrl->aux->drm_aux, 0x103, buf, 4);
}
-static void dp_ctrl_update_vx_px(struct dp_ctrl_private *ctrl)
+static int dp_ctrl_update_vx_px(struct dp_ctrl_private *ctrl)
{
struct dp_link *link = ctrl->link;
ctrl->catalog->update_vx_px(ctrl->catalog,
link->phy_params.v_level, link->phy_params.p_level);
- dp_ctrl_update_sink_vx_px(ctrl, link->phy_params.v_level,
+ return dp_ctrl_update_sink_vx_px(ctrl, link->phy_params.v_level,
link->phy_params.p_level);
}
-static void dp_ctrl_train_pattern_set(struct dp_ctrl_private *ctrl,
+static int dp_ctrl_train_pattern_set(struct dp_ctrl_private *ctrl,
u8 pattern)
{
u8 buf[4];
@@ -779,7 +780,8 @@ static void dp_ctrl_train_pattern_set(struct dp_ctrl_private *ctrl,
pr_debug("sink: pattern=%x\n", pattern);
buf[0] = pattern;
- drm_dp_dpcd_write(ctrl->aux->drm_aux, DP_TRAINING_PATTERN_SET, buf, 1);
+ return drm_dp_dpcd_write(ctrl->aux->drm_aux,
+ DP_TRAINING_PATTERN_SET, buf, 1);
}
static int dp_ctrl_read_link_status(struct dp_ctrl_private *ctrl,
@@ -816,9 +818,18 @@ static int dp_ctrl_link_train_1(struct dp_ctrl_private *ctrl)
wmb();
ctrl->catalog->set_pattern(ctrl->catalog, 0x01);
- dp_ctrl_train_pattern_set(ctrl, DP_TRAINING_PATTERN_1 |
+ ret = dp_ctrl_train_pattern_set(ctrl, DP_TRAINING_PATTERN_1 |
DP_LINK_SCRAMBLING_DISABLE); /* train_1 */
- dp_ctrl_update_vx_px(ctrl);
+ if (ret <= 0) {
+ ret = -EINVAL;
+ return ret;
+ }
+
+ ret = dp_ctrl_update_vx_px(ctrl);
+ if (ret <= 0) {
+ ret = -EINVAL;
+ return ret;
+ }
tries = 0;
old_v_level = ctrl->link->phy_params.v_level;
@@ -855,7 +866,11 @@ static int dp_ctrl_link_train_1(struct dp_ctrl_private *ctrl)
pr_debug("clock recovery not done, adjusting vx px\n");
ctrl->link->adjust_levels(ctrl->link, link_status);
- dp_ctrl_update_vx_px(ctrl);
+ ret = dp_ctrl_update_vx_px(ctrl);
+ if (ret <= 0) {
+ ret = -EINVAL;
+ break;
+ }
}
return ret;
@@ -909,9 +924,18 @@ static int dp_ctrl_link_training_2(struct dp_ctrl_private *ctrl)
else
pattern = DP_TRAINING_PATTERN_2;
- dp_ctrl_update_vx_px(ctrl);
+ ret = dp_ctrl_update_vx_px(ctrl);
+ if (ret <= 0) {
+ ret = -EINVAL;
+ return ret;
+ }
ctrl->catalog->set_pattern(ctrl->catalog, pattern);
- dp_ctrl_train_pattern_set(ctrl, pattern | DP_RECOVERED_CLOCK_OUT_EN);
+ ret = dp_ctrl_train_pattern_set(ctrl,
+ pattern | DP_RECOVERED_CLOCK_OUT_EN);
+ if (ret <= 0) {
+ ret = -EINVAL;
+ return ret;
+ }
do {
drm_dp_link_train_channel_eq_delay(ctrl->panel->dpcd);
@@ -931,7 +955,11 @@ static int dp_ctrl_link_training_2(struct dp_ctrl_private *ctrl)
tries++;
ctrl->link->adjust_levels(ctrl->link, link_status);
- dp_ctrl_update_vx_px(ctrl);
+ ret = dp_ctrl_update_vx_px(ctrl);
+ if (ret <= 0) {
+ ret = -EINVAL;
+ break;
+ }
} while (1);
return ret;
@@ -953,9 +981,16 @@ static int dp_ctrl_link_train(struct dp_ctrl_private *ctrl)
ctrl->link->link_params.bw_code);
link_info.capabilities = ctrl->panel->link_info.capabilities;
- drm_dp_link_configure(ctrl->aux->drm_aux, &link_info);
- drm_dp_dpcd_write(ctrl->aux->drm_aux, DP_MAIN_LINK_CHANNEL_CODING_SET,
- &encoding, 1);
+ ret = drm_dp_link_configure(ctrl->aux->drm_aux, &link_info);
+ if (ret)
+ goto end;
+
+ ret = drm_dp_dpcd_write(ctrl->aux->drm_aux,
+ DP_MAIN_LINK_CHANNEL_CODING_SET, &encoding, 1);
+ if (ret <= 0) {
+ ret = -EINVAL;
+ goto end;
+ }
ret = dp_ctrl_link_train_1(ctrl);
if (ret) {
@@ -991,11 +1026,6 @@ static int dp_ctrl_setup_main_link(struct dp_ctrl_private *ctrl, bool train)
ctrl->catalog->mainlink_ctrl(ctrl->catalog, true);
- ret = ctrl->link->psm_config(ctrl->link,
- &ctrl->panel->link_info, false);
- if (ret)
- goto end;
-
if (ctrl->link->sink_request & DP_TEST_LINK_PHY_TEST_PATTERN)
goto end;
@@ -1072,7 +1102,8 @@ static int dp_ctrl_disable_mainlink_clocks(struct dp_ctrl_private *ctrl)
return ctrl->power->clk_enable(ctrl->power, DP_CTRL_PM, false);
}
-static int dp_ctrl_host_init(struct dp_ctrl *dp_ctrl, bool flip)
+static int dp_ctrl_host_init(struct dp_ctrl *dp_ctrl,
+ bool flip, bool multi_func)
{
struct dp_ctrl_private *ctrl;
struct dp_catalog_ctrl *catalog;
@@ -1087,8 +1118,10 @@ static int dp_ctrl_host_init(struct dp_ctrl *dp_ctrl, bool flip)
ctrl->orientation = flip;
catalog = ctrl->catalog;
- catalog->usb_reset(ctrl->catalog, flip);
- catalog->phy_reset(ctrl->catalog);
+ if (!multi_func) {
+ catalog->usb_reset(ctrl->catalog, flip);
+ catalog->phy_reset(ctrl->catalog);
+ }
catalog->enable_irq(ctrl->catalog, true);
return 0;
@@ -1135,9 +1168,17 @@ static bool dp_ctrl_use_fixed_nvid(struct dp_ctrl_private *ctrl)
return false;
}
-static int dp_ctrl_link_maintenance(struct dp_ctrl_private *ctrl)
+static int dp_ctrl_link_maintenance(struct dp_ctrl *dp_ctrl)
{
int ret = 0;
+ struct dp_ctrl_private *ctrl;
+
+ if (!dp_ctrl) {
+ pr_err("Invalid input data\n");
+ return -EINVAL;
+ }
+
+ ctrl = container_of(dp_ctrl, struct dp_ctrl_private, dp_ctrl);
ctrl->dp_ctrl.push_idle(&ctrl->dp_ctrl);
ctrl->dp_ctrl.reset(&ctrl->dp_ctrl);
@@ -1181,9 +1222,17 @@ static int dp_ctrl_link_maintenance(struct dp_ctrl_private *ctrl)
return ret;
}
-static void dp_ctrl_process_phy_test_request(struct dp_ctrl_private *ctrl)
+static void dp_ctrl_process_phy_test_request(struct dp_ctrl *dp_ctrl)
{
int ret = 0;
+ struct dp_ctrl_private *ctrl;
+
+ if (!dp_ctrl) {
+ pr_err("Invalid input data\n");
+ return;
+ }
+
+ ctrl = container_of(dp_ctrl, struct dp_ctrl_private, dp_ctrl);
if (!ctrl->link->phy_params.phy_test_pattern_sel) {
pr_debug("no test pattern selected by sink\n");
@@ -1214,9 +1263,6 @@ static void dp_ctrl_send_phy_test_pattern(struct dp_ctrl_private *ctrl)
u32 pattern_sent = 0x0;
u32 pattern_requested = ctrl->link->phy_params.phy_test_pattern_sel;
- pr_debug("request: %s\n",
- dp_link_get_phy_test_pattern(pattern_requested));
-
ctrl->catalog->update_vx_px(ctrl->catalog,
ctrl->link->phy_params.v_level,
ctrl->link->phy_params.p_level);
@@ -1224,6 +1270,9 @@ static void dp_ctrl_send_phy_test_pattern(struct dp_ctrl_private *ctrl)
ctrl->link->send_test_response(ctrl->link);
pattern_sent = ctrl->catalog->read_phy_pattern(ctrl->catalog);
+ pr_debug("pattern_request: %s. pattern_sent: 0x%x\n",
+ dp_link_get_phy_test_pattern(pattern_requested),
+ pattern_sent);
switch (pattern_sent) {
case MR_LINK_TRAINING1:
@@ -1235,7 +1284,7 @@ static void dp_ctrl_send_phy_test_pattern(struct dp_ctrl_private *ctrl)
if ((pattern_requested ==
DP_TEST_PHY_PATTERN_SYMBOL_ERR_MEASUREMENT_CNT)
|| (pattern_requested ==
- DP_TEST_PHY_PATTERN_HBR2_CTS_EYE_PATTERN))
+ DP_TEST_PHY_PATTERN_CP2520_PATTERN_1))
success = true;
break;
case MR_LINK_PRBS7:
@@ -1247,42 +1296,20 @@ static void dp_ctrl_send_phy_test_pattern(struct dp_ctrl_private *ctrl)
DP_TEST_PHY_PATTERN_80_BIT_CUSTOM_PATTERN)
success = true;
break;
+ case MR_LINK_TRAINING4:
+ if (pattern_requested ==
+ DP_TEST_PHY_PATTERN_CP2520_PATTERN_3)
+ success = true;
+ break;
default:
success = false;
- return;
+ break;
}
pr_debug("%s: %s\n", success ? "success" : "failed",
dp_link_get_phy_test_pattern(pattern_requested));
}
-static void dp_ctrl_handle_sink_request(struct dp_ctrl *dp_ctrl)
-{
- struct dp_ctrl_private *ctrl;
- u32 sink_request = 0x0;
-
- if (!dp_ctrl) {
- pr_err("invalid input\n");
- return;
- }
-
- ctrl = container_of(dp_ctrl, struct dp_ctrl_private, dp_ctrl);
- sink_request = ctrl->link->sink_request;
-
- if (sink_request & DP_TEST_LINK_PHY_TEST_PATTERN) {
- pr_info("PHY_TEST_PATTERN request\n");
- dp_ctrl_process_phy_test_request(ctrl);
- }
-
- if (sink_request & DP_LINK_STATUS_UPDATED)
- dp_ctrl_link_maintenance(ctrl);
-
- if (sink_request & DP_TEST_LINK_TRAINING) {
- ctrl->link->send_test_response(ctrl->link);
- dp_ctrl_link_maintenance(ctrl);
- }
-}
-
static void dp_ctrl_reset(struct dp_ctrl *dp_ctrl)
{
struct dp_ctrl_private *ctrl;
@@ -1455,7 +1482,8 @@ struct dp_ctrl *dp_ctrl_get(struct dp_ctrl_in *in)
dp_ctrl->abort = dp_ctrl_abort;
dp_ctrl->isr = dp_ctrl_isr;
dp_ctrl->reset = dp_ctrl_reset;
- dp_ctrl->handle_sink_request = dp_ctrl_handle_sink_request;
+ dp_ctrl->link_maintenance = dp_ctrl_link_maintenance;
+ dp_ctrl->process_phy_test_request = dp_ctrl_process_phy_test_request;
return dp_ctrl;
error:
diff --git a/drivers/gpu/drm/msm/dp/dp_ctrl.h b/drivers/gpu/drm/msm/dp/dp_ctrl.h
index d6d10ed..229c779 100644
--- a/drivers/gpu/drm/msm/dp/dp_ctrl.h
+++ b/drivers/gpu/drm/msm/dp/dp_ctrl.h
@@ -23,7 +23,7 @@
#include "dp_catalog.h"
struct dp_ctrl {
- int (*init)(struct dp_ctrl *dp_ctrl, bool flip);
+ int (*init)(struct dp_ctrl *dp_ctrl, bool flip, bool multi_func);
void (*deinit)(struct dp_ctrl *dp_ctrl);
int (*on)(struct dp_ctrl *dp_ctrl);
void (*off)(struct dp_ctrl *dp_ctrl);
@@ -31,7 +31,9 @@ struct dp_ctrl {
void (*push_idle)(struct dp_ctrl *dp_ctrl);
void (*abort)(struct dp_ctrl *dp_ctrl);
void (*isr)(struct dp_ctrl *dp_ctrl);
- void (*handle_sink_request)(struct dp_ctrl *dp_ctrl);
+ bool (*handle_sink_request)(struct dp_ctrl *dp_ctrl);
+ void (*process_phy_test_request)(struct dp_ctrl *dp_ctrl);
+ int (*link_maintenance)(struct dp_ctrl *dp_ctrl);
};
struct dp_ctrl_in {
diff --git a/drivers/gpu/drm/msm/dp/dp_debug.c b/drivers/gpu/drm/msm/dp/dp_debug.c
index 9ef070c..6342fef 100644
--- a/drivers/gpu/drm/msm/dp/dp_debug.c
+++ b/drivers/gpu/drm/msm/dp/dp_debug.c
@@ -23,6 +23,7 @@
#include "dp_ctrl.h"
#include "dp_debug.h"
#include "drm_connector.h"
+#include "sde_connector.h"
#include "dp_display.h"
#define DEBUG_NAME "drm_dp"
@@ -200,9 +201,13 @@ static ssize_t dp_debug_write_hpd(struct file *file,
if (kstrtoint(buf, 10, &hpd) != 0)
goto end;
- debug->usbpd->connect(debug->usbpd, hpd);
+ hpd &= 0x3;
+
+ debug->dp_debug.psm_enabled = !!(hpd & BIT(1));
+
+ debug->usbpd->simulate_connect(debug->usbpd, !!(hpd & BIT(0)));
end:
- return -len;
+ return len;
}
static ssize_t dp_debug_write_edid_modes(struct file *file,
@@ -280,6 +285,44 @@ static ssize_t dp_debug_bw_code_write(struct file *file,
return len;
}
+static ssize_t dp_debug_tpg_write(struct file *file,
+ const char __user *user_buff, size_t count, loff_t *ppos)
+{
+ struct dp_debug_private *debug = file->private_data;
+ char buf[SZ_8];
+ size_t len = 0;
+ u32 tpg_state = 0;
+
+ if (!debug)
+ return -ENODEV;
+
+ if (*ppos)
+ return 0;
+
+ /* Leave room for termination char */
+ len = min_t(size_t, count, SZ_8 - 1);
+ if (copy_from_user(buf, user_buff, len))
+ goto bail;
+
+ buf[len] = '\0';
+
+ if (kstrtoint(buf, 10, &tpg_state) != 0)
+ goto bail;
+
+ tpg_state &= 0x1;
+ pr_debug("tpg_state: %d\n", tpg_state);
+
+ if (tpg_state == debug->dp_debug.tpg_state)
+ goto bail;
+
+ if (debug->panel)
+ debug->panel->tpg_config(debug->panel, tpg_state);
+
+ debug->dp_debug.tpg_state = tpg_state;
+bail:
+ return len;
+}
+
static ssize_t dp_debug_read_connected(struct file *file,
char __user *user_buff, size_t count, loff_t *ppos)
{
@@ -335,6 +378,7 @@ static ssize_t dp_debug_read_edid_modes(struct file *file,
goto error;
}
+ mutex_lock(&connector->dev->mode_config.mutex);
list_for_each_entry(mode, &connector->modes, head) {
len += snprintf(buf + len, SZ_4K - len,
"%s %d %d %d %d %d %d %d %d %d %d 0x%x\n",
@@ -343,6 +387,7 @@ static ssize_t dp_debug_read_edid_modes(struct file *file,
mode->htotal, mode->vdisplay, mode->vsync_start,
mode->vsync_end, mode->vtotal, mode->flags);
}
+ mutex_unlock(&connector->dev->mode_config.mutex);
if (copy_to_user(user_buff, buf, len)) {
kfree(buf);
@@ -552,6 +597,223 @@ static ssize_t dp_debug_bw_code_read(struct file *file,
return len;
}
+static ssize_t dp_debug_tpg_read(struct file *file,
+ char __user *user_buff, size_t count, loff_t *ppos)
+{
+ struct dp_debug_private *debug = file->private_data;
+ char buf[SZ_8];
+ u32 len = 0;
+
+ if (!debug)
+ return -ENODEV;
+
+ if (*ppos)
+ return 0;
+
+ len += snprintf(buf, SZ_8, "%d\n", debug->dp_debug.tpg_state);
+
+ if (copy_to_user(user_buff, buf, len))
+ return -EFAULT;
+
+ *ppos += len;
+ return len;
+}
+
+static ssize_t dp_debug_write_hdr(struct file *file,
+ const char __user *user_buff, size_t count, loff_t *ppos)
+{
+ struct drm_connector *connector;
+ struct sde_connector *c_conn;
+ struct sde_connector_state *c_state;
+ struct dp_debug_private *debug = file->private_data;
+ char buf[SZ_1K];
+ size_t len = 0;
+
+ if (!debug)
+ return -ENODEV;
+
+ if (*ppos)
+ return 0;
+
+ connector = *debug->connector;
+ c_conn = to_sde_connector(connector);
+ c_state = to_sde_connector_state(connector->state);
+
+ /* Leave room for termination char */
+ len = min_t(size_t, count, SZ_1K - 1);
+ if (copy_from_user(buf, user_buff, len))
+ goto end;
+
+ buf[len] = '\0';
+
+ if (sscanf(buf, "%x %x %x %x %x %x %x %x %x %x %x %x %x %x %x",
+ &c_state->hdr_meta.hdr_supported,
+ &c_state->hdr_meta.hdr_state,
+ &c_state->hdr_meta.eotf,
+ &c_state->hdr_meta.display_primaries_x[0],
+ &c_state->hdr_meta.display_primaries_x[1],
+ &c_state->hdr_meta.display_primaries_x[2],
+ &c_state->hdr_meta.display_primaries_y[0],
+ &c_state->hdr_meta.display_primaries_y[1],
+ &c_state->hdr_meta.display_primaries_y[2],
+ &c_state->hdr_meta.white_point_x,
+ &c_state->hdr_meta.white_point_y,
+ &c_state->hdr_meta.max_luminance,
+ &c_state->hdr_meta.min_luminance,
+ &c_state->hdr_meta.max_content_light_level,
+ &c_state->hdr_meta.max_average_light_level) != 15) {
+ pr_err("invalid input\n");
+		return -EINVAL;
+ }
+end:
+ return len;
+}
+
+static ssize_t dp_debug_read_hdr(struct file *file,
+ char __user *user_buff, size_t count, loff_t *ppos)
+{
+ struct dp_debug_private *debug = file->private_data;
+ char *buf;
+ u32 len = 0, i;
+ u32 max_size = SZ_4K;
+ int rc = 0;
+ struct drm_connector *connector;
+ struct sde_connector *c_conn;
+ struct sde_connector_state *c_state;
+ struct drm_msm_ext_hdr_metadata *hdr;
+
+ if (!debug) {
+ pr_err("invalid data\n");
+ rc = -ENODEV;
+ goto error;
+ }
+
+ connector = *debug->connector;
+
+ if (!connector) {
+ pr_err("connector is NULL\n");
+ rc = -EINVAL;
+ goto error;
+ }
+
+ if (*ppos)
+ goto error;
+
+ buf = kzalloc(SZ_4K, GFP_KERNEL);
+ if (!buf) {
+ rc = -ENOMEM;
+ goto error;
+ }
+
+ c_conn = to_sde_connector(connector);
+ c_state = to_sde_connector_state(connector->state);
+
+ hdr = &c_state->hdr_meta;
+
+ rc = snprintf(buf + len, max_size,
+ "============SINK HDR PARAMETERS===========\n");
+ if (dp_debug_check_buffer_overflow(rc, &max_size, &len))
+ goto error;
+
+ rc = snprintf(buf + len, max_size, "eotf = %d\n",
+ connector->hdr_eotf);
+ if (dp_debug_check_buffer_overflow(rc, &max_size, &len))
+ goto error;
+
+ rc = snprintf(buf + len, max_size, "type_one = %d\n",
+ connector->hdr_metadata_type_one);
+ if (dp_debug_check_buffer_overflow(rc, &max_size, &len))
+ goto error;
+
+ rc = snprintf(buf + len, max_size, "max_luminance = %d\n",
+ connector->hdr_max_luminance);
+ if (dp_debug_check_buffer_overflow(rc, &max_size, &len))
+ goto error;
+
+ rc = snprintf(buf + len, max_size, "avg_luminance = %d\n",
+ connector->hdr_avg_luminance);
+ if (dp_debug_check_buffer_overflow(rc, &max_size, &len))
+ goto error;
+
+ rc = snprintf(buf + len, max_size, "min_luminance = %d\n",
+ connector->hdr_min_luminance);
+ if (dp_debug_check_buffer_overflow(rc, &max_size, &len))
+ goto error;
+
+ rc = snprintf(buf + len, max_size,
+ "============VIDEO HDR PARAMETERS===========\n");
+ if (dp_debug_check_buffer_overflow(rc, &max_size, &len))
+ goto error;
+
+ rc = snprintf(buf + len, max_size, "hdr_state = %d\n", hdr->hdr_state);
+ if (dp_debug_check_buffer_overflow(rc, &max_size, &len))
+ goto error;
+
+ rc = snprintf(buf + len, max_size, "hdr_supported = %d\n",
+ hdr->hdr_supported);
+ if (dp_debug_check_buffer_overflow(rc, &max_size, &len))
+ goto error;
+
+ rc = snprintf(buf + len, max_size, "eotf = %d\n", hdr->eotf);
+ if (dp_debug_check_buffer_overflow(rc, &max_size, &len))
+ goto error;
+
+ rc = snprintf(buf + len, max_size, "white_point_x = %d\n",
+ hdr->white_point_x);
+ if (dp_debug_check_buffer_overflow(rc, &max_size, &len))
+ goto error;
+
+ rc = snprintf(buf + len, max_size, "white_point_y = %d\n",
+ hdr->white_point_y);
+ if (dp_debug_check_buffer_overflow(rc, &max_size, &len))
+ goto error;
+
+ rc = snprintf(buf + len, max_size, "max_luminance = %d\n",
+ hdr->max_luminance);
+ if (dp_debug_check_buffer_overflow(rc, &max_size, &len))
+ goto error;
+
+ rc = snprintf(buf + len, max_size, "min_luminance = %d\n",
+ hdr->min_luminance);
+ if (dp_debug_check_buffer_overflow(rc, &max_size, &len))
+ goto error;
+
+ rc = snprintf(buf + len, max_size, "max_content_light_level = %d\n",
+ hdr->max_content_light_level);
+ if (dp_debug_check_buffer_overflow(rc, &max_size, &len))
+ goto error;
+
+ rc = snprintf(buf + len, max_size, "max_average_light_level = %d\n",
+ hdr->max_average_light_level);
+ if (dp_debug_check_buffer_overflow(rc, &max_size, &len))
+ goto error;
+
+ for (i = 0; i < HDR_PRIMARIES_COUNT; i++) {
+ rc = snprintf(buf + len, max_size, "primaries_x[%d] = %d\n",
+ i, hdr->display_primaries_x[i]);
+ if (dp_debug_check_buffer_overflow(rc, &max_size, &len))
+ goto error;
+
+ rc = snprintf(buf + len, max_size, "primaries_y[%d] = %d\n",
+ i, hdr->display_primaries_y[i]);
+ if (dp_debug_check_buffer_overflow(rc, &max_size, &len))
+ goto error;
+ }
+
+ if (copy_to_user(user_buff, buf, len)) {
+ kfree(buf);
+ rc = -EFAULT;
+ goto error;
+ }
+
+ *ppos += len;
+ kfree(buf);
+
+ return len;
+error:
+ return rc;
+}
+
static const struct file_operations dp_debug_fops = {
.open = simple_open,
.read = dp_debug_read_info,
@@ -589,6 +851,18 @@ static const struct file_operations bw_code_fops = {
.write = dp_debug_bw_code_write,
};
+static const struct file_operations tpg_fops = {
+ .open = simple_open,
+ .read = dp_debug_tpg_read,
+ .write = dp_debug_tpg_write,
+};
+
+static const struct file_operations hdr_fops = {
+ .open = simple_open,
+ .write = dp_debug_write_hdr,
+ .read = dp_debug_read_hdr,
+};
+
static int dp_debug_init(struct dp_debug *dp_debug)
{
int rc = 0;
@@ -598,7 +872,10 @@ static int dp_debug_init(struct dp_debug *dp_debug)
dir = debugfs_create_dir(DEBUG_NAME, NULL);
if (IS_ERR_OR_NULL(dir)) {
- rc = PTR_ERR(dir);
+ if (!dir)
+ rc = -EINVAL;
+ else
+ rc = PTR_ERR(dir);
pr_err("[%s] debugfs create dir failed, rc = %d\n",
DEBUG_NAME, rc);
goto error;
@@ -669,9 +946,30 @@ static int dp_debug_init(struct dp_debug *dp_debug)
goto error_remove_dir;
}
+ file = debugfs_create_file("tpg_ctrl", 0644, dir,
+ debug, &tpg_fops);
+ if (IS_ERR_OR_NULL(file)) {
+ rc = PTR_ERR(file);
+ pr_err("[%s] debugfs tpg failed, rc=%d\n",
+ DEBUG_NAME, rc);
+ goto error_remove_dir;
+ }
+
+ file = debugfs_create_file("hdr", 0644, dir,
+ debug, &hdr_fops);
+
+ if (IS_ERR_OR_NULL(file)) {
+ rc = PTR_ERR(file);
+ pr_err("[%s] debugfs hdr failed, rc=%d\n",
+ DEBUG_NAME, rc);
+ goto error_remove_dir;
+ }
+
return 0;
error_remove_dir:
+ if (!file)
+ rc = -EINVAL;
debugfs_remove_recursive(dir);
error:
return rc;
diff --git a/drivers/gpu/drm/msm/dp/dp_debug.h b/drivers/gpu/drm/msm/dp/dp_debug.h
index d5a9301..3b2d23e 100644
--- a/drivers/gpu/drm/msm/dp/dp_debug.h
+++ b/drivers/gpu/drm/msm/dp/dp_debug.h
@@ -25,13 +25,16 @@
* @vdisplay: used to filter out vdisplay value
* @hdisplay: used to filter out hdisplay value
* @vrefresh: used to filter out vrefresh value
+ * @tpg_state: specifies whether tpg feature is enabled
*/
struct dp_debug {
bool debug_en;
+ bool psm_enabled;
int aspect_ratio;
int vdisplay;
int hdisplay;
int vrefresh;
+ bool tpg_state;
};
/**
diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c
index 2c1ccfb..51cc57b 100644
--- a/drivers/gpu/drm/msm/dp/dp_display.c
+++ b/drivers/gpu/drm/msm/dp/dp_display.c
@@ -61,7 +61,6 @@ struct dp_display_private {
/* state variables */
bool core_initialized;
bool power_on;
- bool hpd_irq_on;
bool audio_supported;
struct platform_device *pdev;
@@ -84,9 +83,12 @@ struct dp_display_private {
struct dp_usbpd_cb usbpd_cb;
struct dp_display_mode mode;
struct dp_display dp_display;
+ struct msm_drm_private *priv;
- struct workqueue_struct *hdcp_workqueue;
+ struct workqueue_struct *wq;
struct delayed_work hdcp_cb_work;
+ struct work_struct connect_work;
+ struct work_struct attention_work;
struct mutex hdcp_mutex;
struct mutex session_lock;
int hdcp_status;
@@ -191,26 +193,13 @@ static void dp_display_notify_hdcp_status_cb(void *ptr,
dp->hdcp_status = status;
if (dp->dp_display.is_connected)
- queue_delayed_work(dp->hdcp_workqueue, &dp->hdcp_cb_work, HZ/4);
-}
-
-static int dp_display_create_hdcp_workqueue(struct dp_display_private *dp)
-{
- dp->hdcp_workqueue = create_workqueue("sdm_dp_hdcp");
- if (IS_ERR_OR_NULL(dp->hdcp_workqueue)) {
- pr_err("Error creating hdcp_workqueue\n");
- return -EPERM;
- }
-
- INIT_DELAYED_WORK(&dp->hdcp_cb_work, dp_display_hdcp_cb_work);
-
- return 0;
+ queue_delayed_work(dp->wq, &dp->hdcp_cb_work, HZ/4);
}
static void dp_display_destroy_hdcp_workqueue(struct dp_display_private *dp)
{
- if (dp->hdcp_workqueue)
- destroy_workqueue(dp->hdcp_workqueue);
+ if (dp->wq)
+ destroy_workqueue(dp->wq);
}
static void dp_display_update_hdcp_info(struct dp_display_private *dp)
@@ -277,7 +266,6 @@ static void dp_display_deinitialize_hdcp(struct dp_display_private *dp)
static int dp_display_initialize_hdcp(struct dp_display_private *dp)
{
struct sde_hdcp_init_data hdcp_init_data;
- struct resource *res;
int rc = 0;
if (!dp) {
@@ -287,29 +275,18 @@ static int dp_display_initialize_hdcp(struct dp_display_private *dp)
mutex_init(&dp->hdcp_mutex);
- rc = dp_display_create_hdcp_workqueue(dp);
- if (rc) {
- pr_err("Failed to create HDCP workqueue\n");
- goto error;
- }
-
- res = platform_get_resource_byname(dp->pdev,
- IORESOURCE_MEM, "dp_ctrl");
- if (!res) {
- pr_err("Error getting dp ctrl resource\n");
- rc = -EINVAL;
- goto error;
- }
-
- hdcp_init_data.phy_addr = res->start;
hdcp_init_data.client_id = HDCP_CLIENT_DP;
hdcp_init_data.drm_aux = dp->aux->drm_aux;
hdcp_init_data.cb_data = (void *)dp;
- hdcp_init_data.workq = dp->hdcp_workqueue;
+ hdcp_init_data.workq = dp->wq;
hdcp_init_data.mutex = &dp->hdcp_mutex;
hdcp_init_data.sec_access = true;
hdcp_init_data.notify_status = dp_display_notify_hdcp_status_cb;
hdcp_init_data.core_io = &dp->parser->io.ctrl_io;
+ hdcp_init_data.dp_ahb = &dp->parser->io.dp_ahb;
+ hdcp_init_data.dp_aux = &dp->parser->io.dp_aux;
+ hdcp_init_data.dp_link = &dp->parser->io.dp_link;
+ hdcp_init_data.dp_p0 = &dp->parser->io.dp_p0;
hdcp_init_data.qfprom_io = &dp->parser->io.qfprom_io;
hdcp_init_data.hdcp_io = &dp->parser->io.hdcp_io;
hdcp_init_data.revision = &dp->panel->link_info.revision;
@@ -324,8 +301,13 @@ static int dp_display_initialize_hdcp(struct dp_display_private *dp)
pr_debug("HDCP 1.3 initialized\n");
dp->hdcp.hdcp2 = sde_dp_hdcp2p2_init(&hdcp_init_data);
- if (!IS_ERR_OR_NULL(dp->hdcp.hdcp2))
- pr_debug("HDCP 2.2 initialized\n");
+ if (IS_ERR_OR_NULL(dp->hdcp.hdcp2)) {
+ pr_err("Error initializing HDCP 2.x\n");
+ rc = -EINVAL;
+ goto error;
+ }
+
+ pr_debug("HDCP 2.2 initialized\n");
dp->hdcp.feature_enabled = true;
@@ -341,7 +323,6 @@ static int dp_display_bind(struct device *dev, struct device *master,
int rc = 0;
struct dp_display_private *dp;
struct drm_device *drm;
- struct msm_drm_private *priv;
struct platform_device *pdev = to_platform_device(dev);
if (!dev || !pdev || !master) {
@@ -361,25 +342,7 @@ static int dp_display_bind(struct device *dev, struct device *master,
}
dp->dp_display.drm_dev = drm;
- priv = drm->dev_private;
-
- rc = dp->aux->drm_aux_register(dp->aux);
- if (rc) {
- pr_err("DRM DP AUX register failed\n");
- goto end;
- }
-
- rc = dp->power->power_client_init(dp->power, &priv->phandle);
- if (rc) {
- pr_err("Power client create failed\n");
- goto end;
- }
-
- rc = dp_display_initialize_hdcp(dp);
- if (rc) {
- pr_err("HDCP initialization failed\n");
- goto end;
- }
+ dp->priv = drm->dev_private;
end:
return rc;
}
@@ -423,32 +386,25 @@ static bool dp_display_is_sink_count_zero(struct dp_display_private *dp)
(dp->link->sink_count.count == 0);
}
-static void dp_display_send_hpd_event(struct dp_display *dp_display)
+static void dp_display_send_hpd_event(struct dp_display_private *dp)
{
struct drm_device *dev = NULL;
- struct dp_display_private *dp;
struct drm_connector *connector;
char name[HPD_STRING_SIZE], status[HPD_STRING_SIZE],
bpp[HPD_STRING_SIZE], pattern[HPD_STRING_SIZE];
char *envp[5];
- if (!dp_display) {
- pr_err("invalid input\n");
- return;
- }
-
- dp = container_of(dp_display, struct dp_display_private, dp_display);
- if (!dp) {
- pr_err("invalid params\n");
- return;
- }
connector = dp->dp_display.connector;
- dev = dp_display->connector->dev;
+
+ if (!connector) {
+ pr_err("connector not set\n");
+ return;
+ }
connector->status = connector->funcs->detect(connector, false);
- pr_debug("[%s] status updated to %s\n",
- connector->name,
- drm_get_connector_status_name(connector->status));
+
+ dev = dp->dp_display.connector->dev;
+
snprintf(name, HPD_STRING_SIZE, "name=%s", connector->name);
snprintf(status, HPD_STRING_SIZE, "status=%s",
drm_get_connector_status_name(connector->status));
@@ -458,8 +414,7 @@ static void dp_display_send_hpd_event(struct dp_display *dp_display)
snprintf(pattern, HPD_STRING_SIZE, "pattern=%d",
dp->link->test_video.test_video_pattern);
- pr_debug("generating hotplug event [%s]:[%s] [%s] [%s]\n",
- name, status, bpp, pattern);
+ pr_debug("[%s]:[%s] [%s] [%s]\n", name, status, bpp, pattern);
envp[0] = name;
envp[1] = status;
envp[2] = bpp;
@@ -469,27 +424,55 @@ static void dp_display_send_hpd_event(struct dp_display *dp_display)
envp);
}
+static void dp_display_post_open(struct dp_display *dp_display)
+{
+ struct drm_connector *connector;
+ struct dp_display_private *dp;
+
+ if (!dp_display) {
+ pr_err("invalid input\n");
+ return;
+ }
+
+ dp = container_of(dp_display, struct dp_display_private, dp_display);
+ if (IS_ERR_OR_NULL(dp)) {
+ pr_err("invalid params\n");
+ return;
+ }
+
+ connector = dp->dp_display.connector;
+
+ if (!connector) {
+ pr_err("connector not set\n");
+ return;
+ }
+
+ /* if cable is already connected, send notification */
+ if (dp_display->is_connected)
+ dp_display_send_hpd_event(dp);
+ else
+ dp_display->post_open = NULL;
+}
+
static int dp_display_send_hpd_notification(struct dp_display_private *dp,
bool hpd)
{
- if ((hpd && dp->dp_display.is_connected) ||
- (!hpd && !dp->dp_display.is_connected)) {
- pr_info("HPD already %s\n", (hpd ? "on" : "off"));
- return 0;
- }
-
- /* reset video pattern flag on disconnect */
- if (!hpd)
- dp->panel->video_test = false;
-
dp->dp_display.is_connected = hpd;
+
+ /* if the framework is not yet up, don't notify hpd */
+ if (dp->dp_display.post_open)
+ return 0;
+
reinit_completion(&dp->notification_comp);
- dp_display_send_hpd_event(&dp->dp_display);
+ dp_display_send_hpd_event(dp);
if (!wait_for_completion_timeout(&dp->notification_comp, HZ * 5)) {
pr_warn("%s timeout\n", hpd ? "connect" : "disconnect");
/* cancel any pending request */
dp->ctrl->abort(dp->ctrl);
+ dp->aux->abort(dp->aux);
+
return -EINVAL;
}
@@ -503,12 +486,23 @@ static int dp_display_process_hpd_high(struct dp_display_private *dp)
dp->aux->init(dp->aux, dp->parser->aux_cfg);
- if (dp->link->psm_enabled)
- goto notify;
+ if (dp->debug->psm_enabled) {
+ dp->link->psm_config(dp->link, &dp->panel->link_info, false);
+ dp->debug->psm_enabled = false;
+ }
+
+ if (!dp->dp_display.connector)
+ return 0;
rc = dp->panel->read_sink_caps(dp->panel, dp->dp_display.connector);
- if (rc)
- goto notify;
+ if (rc) {
+ if (rc == -ETIMEDOUT) {
+ pr_err("Sink cap read failed, skip notification\n");
+ goto end;
+ } else {
+ goto notify;
+ }
+ }
dp->link->process_request(dp->link);
@@ -545,7 +539,7 @@ static void dp_display_host_init(struct dp_display_private *dp)
flip = true;
dp->power->init(dp->power, flip);
- dp->ctrl->init(dp->ctrl, flip);
+ dp->ctrl->init(dp->ctrl, flip, dp->usbpd->multi_func);
enable_irq(dp->irq);
dp->core_initialized = true;
}
@@ -563,22 +557,28 @@ static void dp_display_host_deinit(struct dp_display_private *dp)
dp->core_initialized = false;
}
-static void dp_display_process_hpd_low(struct dp_display_private *dp)
+static int dp_display_process_hpd_low(struct dp_display_private *dp)
{
- /* cancel any pending request */
- dp->ctrl->abort(dp->ctrl);
+ int rc = 0;
- if (dp_display_is_hdcp_enabled(dp) && dp->hdcp.ops->off) {
- cancel_delayed_work_sync(&dp->hdcp_cb_work);
- dp->hdcp.ops->off(dp->hdcp.data);
+ if (!dp->dp_display.is_connected) {
+ pr_debug("HPD already off\n");
+ return 0;
}
+ if (dp_display_is_hdcp_enabled(dp) && dp->hdcp.ops->off)
+ dp->hdcp.ops->off(dp->hdcp.data);
+
if (dp->audio_supported)
dp->audio->off(dp->audio);
- dp_display_send_hpd_notification(dp, false);
+ rc = dp_display_send_hpd_notification(dp, false);
dp->aux->deinit(dp->aux);
+
+ dp->panel->video_test = false;
+
+ return rc;
}
static int dp_display_usbpd_configure_cb(struct device *dev)
@@ -602,7 +602,7 @@ static int dp_display_usbpd_configure_cb(struct device *dev)
dp_display_host_init(dp);
if (dp->usbpd->hpd_high)
- dp_display_process_hpd_high(dp);
+ queue_work(dp->wq, &dp->connect_work);
end:
return rc;
}
@@ -622,6 +622,24 @@ static void dp_display_clean(struct dp_display_private *dp)
dp->power_on = false;
}
+static int dp_display_handle_disconnect(struct dp_display_private *dp)
+{
+ int rc;
+
+ rc = dp_display_process_hpd_low(dp);
+
+ mutex_lock(&dp->session_lock);
+ if (rc && dp->power_on)
+ dp_display_clean(dp);
+
+ if (!dp->usbpd->alt_mode_cfg_done)
+ dp_display_host_deinit(dp);
+
+ mutex_unlock(&dp->session_lock);
+
+ return rc;
+}
+
static int dp_display_usbpd_disconnect_cb(struct device *dev)
{
int rc = 0;
@@ -640,64 +658,87 @@ static int dp_display_usbpd_disconnect_cb(struct device *dev)
goto end;
}
+ if (dp->debug->psm_enabled)
+ dp->link->psm_config(dp->link, &dp->panel->link_info, true);
+
/* cancel any pending request */
dp->ctrl->abort(dp->ctrl);
+ dp->aux->abort(dp->aux);
- if (dp->audio_supported)
- dp->audio->off(dp->audio);
+ /* wait for idle state */
+ flush_workqueue(dp->wq);
- rc = dp_display_send_hpd_notification(dp, false);
-
- mutex_lock(&dp->session_lock);
-
- /* if cable is disconnected, reset psm_enabled flag */
- if (!dp->usbpd->alt_mode_cfg_done)
- dp->link->psm_enabled = false;
-
- if ((rc < 0) && dp->power_on)
- dp_display_clean(dp);
-
- dp_display_host_deinit(dp);
-
- mutex_unlock(&dp->session_lock);
+ dp_display_handle_disconnect(dp);
end:
return rc;
}
-static void dp_display_handle_video_request(struct dp_display_private *dp)
+static void dp_display_handle_maintenance_req(struct dp_display_private *dp)
{
- if (dp->link->sink_request & DP_TEST_LINK_VIDEO_PATTERN) {
- /* force disconnect followed by connect */
- dp->usbpd->connect(dp->usbpd, false);
- dp->panel->video_test = true;
- dp->usbpd->connect(dp->usbpd, true);
- dp->link->send_test_response(dp->link);
- }
+ mutex_lock(&dp->audio->ops_lock);
+
+ if (dp->audio_supported)
+ dp->audio->off(dp->audio);
+
+ dp->ctrl->link_maintenance(dp->ctrl);
+
+ if (dp->audio_supported)
+ dp->audio->on(dp->audio);
+
+ mutex_unlock(&dp->audio->ops_lock);
}
-static int dp_display_handle_hpd_irq(struct dp_display_private *dp)
+static void dp_display_attention_work(struct work_struct *work)
{
+ struct dp_display_private *dp = container_of(work,
+ struct dp_display_private, attention_work);
+
+ if (dp_display_is_hdcp_enabled(dp) && dp->hdcp.ops->cp_irq) {
+ if (!dp->hdcp.ops->cp_irq(dp->hdcp.data))
+ return;
+ }
+
if (dp->link->sink_request & DS_PORT_STATUS_CHANGED) {
- dp_display_send_hpd_notification(dp, false);
+ dp_display_handle_disconnect(dp);
if (dp_display_is_sink_count_zero(dp)) {
pr_debug("sink count is zero, nothing to do\n");
- return 0;
+ return;
}
- return dp_display_process_hpd_high(dp);
+ queue_work(dp->wq, &dp->connect_work);
+ return;
}
- dp->ctrl->handle_sink_request(dp->ctrl);
+ if (dp->link->sink_request & DP_TEST_LINK_VIDEO_PATTERN) {
+ dp_display_handle_disconnect(dp);
- dp_display_handle_video_request(dp);
+ dp->panel->video_test = true;
+ dp_display_send_hpd_notification(dp, true);
+ dp->link->send_test_response(dp->link);
- return 0;
+ return;
+ }
+
+ if (dp->link->sink_request & DP_TEST_LINK_PHY_TEST_PATTERN) {
+ dp->ctrl->process_phy_test_request(dp->ctrl);
+ return;
+ }
+
+ if (dp->link->sink_request & DP_LINK_STATUS_UPDATED) {
+ dp_display_handle_maintenance_req(dp);
+ return;
+ }
+
+ if (dp->link->sink_request & DP_TEST_LINK_TRAINING) {
+ dp->link->send_test_response(dp->link);
+ dp_display_handle_maintenance_req(dp);
+ return;
+ }
}
static int dp_display_usbpd_attention_cb(struct device *dev)
{
- int rc = 0;
struct dp_display_private *dp;
if (!dev) {
@@ -711,32 +752,36 @@ static int dp_display_usbpd_attention_cb(struct device *dev)
return -ENODEV;
}
- if (dp->usbpd->hpd_irq) {
- dp->hpd_irq_on = true;
+ if (dp->usbpd->hpd_irq && dp->usbpd->hpd_high) {
+ dp->link->process_request(dp->link);
+ queue_work(dp->wq, &dp->attention_work);
+ } else if (dp->usbpd->hpd_high) {
+ queue_work(dp->wq, &dp->connect_work);
+ } else {
+ /* cancel any pending request */
+ dp->ctrl->abort(dp->ctrl);
+ dp->aux->abort(dp->aux);
- if (dp_display_is_hdcp_enabled(dp) && dp->hdcp.ops->cp_irq) {
- if (!dp->hdcp.ops->cp_irq(dp->hdcp.data))
- goto end;
- }
+ /* wait for idle state */
+ flush_workqueue(dp->wq);
- rc = dp->link->process_request(dp->link);
- /* check for any test request issued by sink */
- if (!rc)
- dp_display_handle_hpd_irq(dp);
-
- dp->hpd_irq_on = false;
- goto end;
+ dp_display_handle_disconnect(dp);
}
- if (!dp->usbpd->hpd_high) {
- dp_display_process_hpd_low(dp);
- goto end;
+ return 0;
+}
+
+static void dp_display_connect_work(struct work_struct *work)
+{
+ struct dp_display_private *dp = container_of(work,
+ struct dp_display_private, connect_work);
+
+ if (dp->dp_display.is_connected) {
+ pr_debug("HPD already on\n");
+ return;
}
- if (dp->usbpd->alt_mode_cfg_done)
- dp_display_process_hpd_high(dp);
-end:
- return rc;
+ dp_display_process_hpd_high(dp);
}
static void dp_display_deinit_sub_modules(struct dp_display_private *dp)
@@ -766,18 +811,6 @@ static int dp_init_sub_modules(struct dp_display_private *dp)
.dev = dev,
};
- cb->configure = dp_display_usbpd_configure_cb;
- cb->disconnect = dp_display_usbpd_disconnect_cb;
- cb->attention = dp_display_usbpd_attention_cb;
-
- dp->usbpd = dp_usbpd_get(dev, cb);
- if (IS_ERR(dp->usbpd)) {
- rc = PTR_ERR(dp->usbpd);
- pr_err("failed to initialize usbpd, rc = %d\n", rc);
- dp->usbpd = NULL;
- goto error;
- }
-
mutex_init(&dp->session_lock);
dp->parser = dp_parser_get(dp->pdev);
@@ -785,7 +818,7 @@ static int dp_init_sub_modules(struct dp_display_private *dp)
rc = PTR_ERR(dp->parser);
pr_err("failed to initialize parser, rc = %d\n", rc);
dp->parser = NULL;
- goto error_parser;
+ goto error;
}
rc = dp->parser->parse(dp->parser);
@@ -810,6 +843,12 @@ static int dp_init_sub_modules(struct dp_display_private *dp)
goto error_power;
}
+ rc = dp->power->power_client_init(dp->power, &dp->priv->phandle);
+ if (rc) {
+ pr_err("Power client create failed\n");
+ goto error_aux;
+ }
+
dp->aux = dp_aux_get(dev, &dp->catalog->aux, dp->parser->aux_cfg);
if (IS_ERR(dp->aux)) {
rc = PTR_ERR(dp->aux);
@@ -818,6 +857,12 @@ static int dp_init_sub_modules(struct dp_display_private *dp)
goto error_aux;
}
+ rc = dp->aux->drm_aux_register(dp->aux);
+ if (rc) {
+ pr_err("DRM DP AUX register failed\n");
+ goto error_link;
+ }
+
dp->link = dp_link_get(dev, dp->aux);
if (IS_ERR(dp->link)) {
rc = PTR_ERR(dp->link);
@@ -861,6 +906,18 @@ static int dp_init_sub_modules(struct dp_display_private *dp)
goto error_audio;
}
+ cb->configure = dp_display_usbpd_configure_cb;
+ cb->disconnect = dp_display_usbpd_disconnect_cb;
+ cb->attention = dp_display_usbpd_attention_cb;
+
+ dp->usbpd = dp_usbpd_get(dev, cb);
+ if (IS_ERR(dp->usbpd)) {
+ rc = PTR_ERR(dp->usbpd);
+ pr_err("failed to initialize usbpd, rc = %d\n", rc);
+ dp->usbpd = NULL;
+ goto error_usbpd;
+ }
+
dp->debug = dp_debug_get(dev, dp->panel, dp->usbpd,
dp->link, &dp->dp_display.connector);
if (IS_ERR(dp->debug)) {
@@ -872,6 +929,8 @@ static int dp_init_sub_modules(struct dp_display_private *dp)
return rc;
error_debug:
+ dp_usbpd_put(dp->usbpd);
+error_usbpd:
dp_audio_put(dp->audio);
error_audio:
dp_ctrl_put(dp->ctrl);
@@ -887,13 +946,40 @@ static int dp_init_sub_modules(struct dp_display_private *dp)
dp_catalog_put(dp->catalog);
error_catalog:
dp_parser_put(dp->parser);
-error_parser:
- dp_usbpd_put(dp->usbpd);
- mutex_destroy(&dp->session_lock);
error:
+ mutex_destroy(&dp->session_lock);
return rc;
}
+static void dp_display_post_init(struct dp_display *dp_display)
+{
+ int rc = 0;
+ struct dp_display_private *dp;
+
+ if (!dp_display) {
+ pr_err("invalid input\n");
+ rc = -EINVAL;
+ goto end;
+ }
+
+ dp = container_of(dp_display, struct dp_display_private, dp_display);
+ if (IS_ERR_OR_NULL(dp)) {
+ pr_err("invalid params\n");
+ rc = -EINVAL;
+ goto end;
+ }
+
+ rc = dp_init_sub_modules(dp);
+ if (rc)
+ goto end;
+
+ dp_display_initialize_hdcp(dp);
+
+ dp_display->post_init = NULL;
+end:
+ pr_debug("%s\n", rc ? "failed" : "success");
+}
+
static int dp_display_set_mode(struct dp_display *dp_display,
struct dp_display_mode *mode)
{
@@ -946,7 +1032,13 @@ static int dp_display_enable(struct dp_display *dp_display)
goto end;
}
+ dp->aux->init(dp->aux, dp->parser->aux_cfg);
+
rc = dp->ctrl->on(dp->ctrl);
+
+ if (dp->debug->tpg_state)
+ dp->panel->tpg_config(dp->panel, true);
+
if (!rc)
dp->power_on = true;
end:
@@ -972,25 +1064,27 @@ static int dp_display_post_enable(struct dp_display *dp_display)
goto end;
}
+ dp->panel->spd_config(dp->panel);
+
if (dp->audio_supported) {
dp->audio->bw_code = dp->link->link_params.bw_code;
dp->audio->lane_count = dp->link->link_params.lane_count;
dp->audio->on(dp->audio);
}
- complete_all(&dp->notification_comp);
-
dp_display_update_hdcp_info(dp);
if (dp_display_is_hdcp_enabled(dp)) {
cancel_delayed_work_sync(&dp->hdcp_cb_work);
dp->hdcp_status = HDCP_STATE_AUTHENTICATING;
- queue_delayed_work(dp->hdcp_workqueue,
- &dp->hdcp_cb_work, HZ / 2);
+ queue_delayed_work(dp->wq, &dp->hdcp_cb_work, HZ / 2);
}
-
end:
+ /* clear framework event notifier */
+ dp_display->post_open = NULL;
+
+ complete_all(&dp->notification_comp);
mutex_unlock(&dp->session_lock);
return 0;
}
@@ -1021,12 +1115,7 @@ static int dp_display_pre_disable(struct dp_display *dp_display)
dp->hdcp.ops->off(dp->hdcp.data);
}
- if (dp->usbpd->alt_mode_cfg_done && (dp->usbpd->hpd_high ||
- dp->usbpd->forced_disconnect))
- dp->link->psm_config(dp->link, &dp->panel->link_info, true);
-
dp->ctrl->push_idle(dp->ctrl);
-
end:
mutex_unlock(&dp->session_lock);
return 0;
@@ -1118,7 +1207,7 @@ static int dp_display_validate_mode(struct dp_display *dp, u32 mode_pclk_khz)
struct drm_dp_link *link_info;
u32 mode_rate_khz = 0, supported_rate_khz = 0, mode_bpp = 0;
- if (!dp || !mode_pclk_khz) {
+ if (!dp || !mode_pclk_khz || !dp->connector) {
pr_err("invalid params\n");
return -EINVAL;
}
@@ -1148,7 +1237,7 @@ static int dp_display_get_modes(struct dp_display *dp,
struct dp_display_private *dp_display;
int ret = 0;
- if (!dp) {
+ if (!dp || !dp->connector) {
pr_err("invalid params\n");
return 0;
}
@@ -1166,6 +1255,7 @@ static int dp_display_get_modes(struct dp_display *dp,
static int dp_display_pre_kickoff(struct dp_display *dp_display,
struct drm_msm_ext_hdr_metadata *hdr)
{
+ int rc = 0;
struct dp_display_private *dp;
if (!dp_display) {
@@ -1175,7 +1265,25 @@ static int dp_display_pre_kickoff(struct dp_display *dp_display,
dp = container_of(dp_display, struct dp_display_private, dp_display);
- return dp->panel->setup_hdr(dp->panel, hdr);
+ if (hdr->hdr_supported && dp->panel->hdr_supported(dp->panel))
+ rc = dp->panel->setup_hdr(dp->panel, hdr);
+
+ return rc;
+}
+
+static int dp_display_create_workqueue(struct dp_display_private *dp)
+{
+ dp->wq = create_singlethread_workqueue("drm_dp");
+ if (IS_ERR_OR_NULL(dp->wq)) {
+ pr_err("Error creating wq\n");
+ return -EPERM;
+ }
+
+ INIT_DELAYED_WORK(&dp->hdcp_cb_work, dp_display_hdcp_cb_work);
+ INIT_WORK(&dp->connect_work, dp_display_connect_work);
+ INIT_WORK(&dp->attention_work, dp_display_attention_work);
+
+ return 0;
}
static int dp_display_probe(struct platform_device *pdev)
@@ -1185,22 +1293,25 @@ static int dp_display_probe(struct platform_device *pdev)
if (!pdev || !pdev->dev.of_node) {
pr_err("pdev not found\n");
- return -ENODEV;
+ rc = -ENODEV;
+ goto bail;
}
dp = devm_kzalloc(&pdev->dev, sizeof(*dp), GFP_KERNEL);
- if (!dp)
- return -ENOMEM;
+ if (!dp) {
+ rc = -ENOMEM;
+ goto bail;
+ }
init_completion(&dp->notification_comp);
dp->pdev = pdev;
dp->name = "drm_dp";
- rc = dp_init_sub_modules(dp);
+ rc = dp_display_create_workqueue(dp);
if (rc) {
- devm_kfree(&pdev->dev, dp);
- return -EPROBE_DEFER;
+ pr_err("Failed to create workqueue\n");
+ goto error;
}
platform_set_drvdata(pdev, dp);
@@ -1218,16 +1329,20 @@ static int dp_display_probe(struct platform_device *pdev)
g_dp_display->unprepare = dp_display_unprepare;
g_dp_display->request_irq = dp_request_irq;
g_dp_display->get_debug = dp_get_debug;
- g_dp_display->send_hpd_event = dp_display_send_hpd_event;
+ g_dp_display->post_open = dp_display_post_open;
+ g_dp_display->post_init = dp_display_post_init;
g_dp_display->pre_kickoff = dp_display_pre_kickoff;
rc = component_add(&pdev->dev, &dp_display_comp_ops);
if (rc) {
pr_err("component add failed, rc=%d\n", rc);
- dp_display_deinit_sub_modules(dp);
- devm_kfree(&pdev->dev, dp);
+ goto error;
}
+ return 0;
+error:
+ devm_kfree(&pdev->dev, dp);
+bail:
return rc;
}
@@ -1290,7 +1405,7 @@ static int __init dp_display_init(void)
return ret;
}
-module_init(dp_display_init);
+late_initcall(dp_display_init);
static void __exit dp_display_cleanup(void)
{
diff --git a/drivers/gpu/drm/msm/dp/dp_display.h b/drivers/gpu/drm/msm/dp/dp_display.h
index 2d314c7..c55e6c8 100644
--- a/drivers/gpu/drm/msm/dp/dp_display.h
+++ b/drivers/gpu/drm/msm/dp/dp_display.h
@@ -42,9 +42,10 @@ struct dp_display {
int (*unprepare)(struct dp_display *dp_display);
int (*request_irq)(struct dp_display *dp_display);
struct dp_debug *(*get_debug)(struct dp_display *dp_display);
- void (*send_hpd_event)(struct dp_display *dp_display);
+ void (*post_open)(struct dp_display *dp_display);
int (*pre_kickoff)(struct dp_display *dp_display,
struct drm_msm_ext_hdr_metadata *hdr_meta);
+ void (*post_init)(struct dp_display *dp_display);
};
int dp_display_get_num_of_displays(void);
diff --git a/drivers/gpu/drm/msm/dp/dp_drm.c b/drivers/gpu/drm/msm/dp/dp_drm.c
index 1915254..7746b8e 100644
--- a/drivers/gpu/drm/msm/dp/dp_drm.c
+++ b/drivers/gpu/drm/msm/dp/dp_drm.c
@@ -184,6 +184,9 @@ static void dp_bridge_disable(struct drm_bridge *drm_bridge)
bridge = to_dp_bridge(drm_bridge);
dp = bridge->display;
+ if (dp && dp->connector)
+ sde_connector_helper_bridge_disable(dp->connector);
+
rc = dp->pre_disable(dp);
if (rc) {
pr_err("[%d] DP display pre disable failed, rc=%d\n",
@@ -287,15 +290,18 @@ int dp_connector_pre_kickoff(struct drm_connector *connector,
return dp->pre_kickoff(dp, params->hdr_meta);
}
-int dp_connector_post_init(struct drm_connector *connector,
- void *info, void *display, struct msm_mode_info *mode_info)
+int dp_connector_post_init(struct drm_connector *connector, void *display)
{
struct dp_display *dp_display = display;
- if (!info || !dp_display)
+ if (!dp_display)
return -EINVAL;
dp_display->connector = connector;
+
+ if (dp_display->post_init)
+ dp_display->post_init(dp_display);
+
return 0;
}
@@ -377,7 +383,7 @@ enum drm_connector_status dp_connector_detect(struct drm_connector *conn,
return status;
}
-void dp_connector_send_hpd_event(void *display)
+void dp_connector_post_open(void *display)
{
struct dp_display *dp;
@@ -388,8 +394,8 @@ void dp_connector_send_hpd_event(void *display)
dp = display;
- if (dp->send_hpd_event)
- dp->send_hpd_event(dp);
+ if (dp->post_open)
+ dp->post_open(dp);
}
int dp_connector_get_modes(struct drm_connector *connector,
@@ -510,9 +516,6 @@ enum drm_mode_status dp_connector_mode_valid(struct drm_connector *connector,
mode->vrefresh = drm_mode_vrefresh(mode);
- if (mode->vrefresh > 60)
- return MODE_BAD;
-
if (mode->clock > dp_disp->max_pclk_khz)
return MODE_BAD;
diff --git a/drivers/gpu/drm/msm/dp/dp_drm.h b/drivers/gpu/drm/msm/dp/dp_drm.h
index e856be1..89b0a7e 100644
--- a/drivers/gpu/drm/msm/dp/dp_drm.h
+++ b/drivers/gpu/drm/msm/dp/dp_drm.h
@@ -45,15 +45,10 @@ int dp_connector_pre_kickoff(struct drm_connector *connector,
/**
* dp_connector_post_init - callback to perform additional initialization steps
* @connector: Pointer to drm connector structure
- * @info: Pointer to sde connector info structure
* @display: Pointer to private display handle
- * @mode_info: Pointer to mode info structure
* Returns: Zero on success
*/
-int dp_connector_post_init(struct drm_connector *connector,
- void *info,
- void *display,
- struct msm_mode_info *mode_info);
+int dp_connector_post_init(struct drm_connector *connector, void *display);
/**
* dp_connector_detect - callback to determine if connector is connected
@@ -100,7 +95,11 @@ int dp_connector_get_mode_info(const struct drm_display_mode *drm_mode,
int dp_connector_get_info(struct msm_display_info *info, void *display);
-void dp_connector_send_hpd_event(void *display);
+/**
+ * dp_connector_post_open - handle the post open functionalities
+ * @display: Pointer to private display structure
+ */
+void dp_connector_post_open(void *display);
int dp_drm_bridge_init(void *display,
struct drm_encoder *encoder);
diff --git a/drivers/gpu/drm/msm/dp/dp_hdcp2p2.c b/drivers/gpu/drm/msm/dp/dp_hdcp2p2.c
index 016e1b8..0e1490f 100644
--- a/drivers/gpu/drm/msm/dp/dp_hdcp2p2.c
+++ b/drivers/gpu/drm/msm/dp/dp_hdcp2p2.c
@@ -234,7 +234,7 @@ static void dp_hdcp2p2_reset(struct dp_hdcp2p2_ctrl *ctrl)
static void dp_hdcp2p2_set_interrupts(struct dp_hdcp2p2_ctrl *ctrl, bool enable)
{
- void __iomem *base = ctrl->init_data.core_io->base;
+ void __iomem *base = ctrl->init_data.dp_ahb->base;
struct dp_hdcp2p2_interrupts *intr = ctrl->intr;
while (intr && intr->reg) {
@@ -740,13 +740,13 @@ static int dp_hdcp2p2_isr(void *input)
struct dp_hdcp2p2_interrupts *intr;
u32 hdcp_int_val = 0;
- if (!ctrl || !ctrl->init_data.core_io) {
+ if (!ctrl || !ctrl->init_data.dp_ahb) {
pr_err("invalid input\n");
rc = -EINVAL;
goto end;
}
- io = ctrl->init_data.core_io;
+ io = ctrl->init_data.dp_ahb;
intr = ctrl->intr;
while (intr && intr->reg) {
diff --git a/drivers/gpu/drm/msm/dp/dp_link.c b/drivers/gpu/drm/msm/dp/dp_link.c
index 0cf488d..3ca247c 100644
--- a/drivers/gpu/drm/msm/dp/dp_link.c
+++ b/drivers/gpu/drm/msm/dp/dp_link.c
@@ -680,7 +680,8 @@ static bool dp_link_is_phy_test_pattern_supported(u32 phy_test_pattern_sel)
case DP_TEST_PHY_PATTERN_SYMBOL_ERR_MEASUREMENT_CNT:
case DP_TEST_PHY_PATTERN_PRBS7:
case DP_TEST_PHY_PATTERN_80_BIT_CUSTOM_PATTERN:
- case DP_TEST_PHY_PATTERN_HBR2_CTS_EYE_PATTERN:
+ case DP_TEST_PHY_PATTERN_CP2520_PATTERN_1:
+ case DP_TEST_PHY_PATTERN_CP2520_PATTERN_3:
return true;
default:
return false;
@@ -986,8 +987,6 @@ static int dp_link_psm_config(struct dp_link *dp_link,
if (ret)
pr_err("Failed to %s low power mode\n",
(enable ? "enter" : "exit"));
- else
- dp_link->psm_enabled = enable;
return ret;
}
diff --git a/drivers/gpu/drm/msm/dp/dp_link.h b/drivers/gpu/drm/msm/dp/dp_link.h
index b1d9249..6f79b6a 100644
--- a/drivers/gpu/drm/msm/dp/dp_link.h
+++ b/drivers/gpu/drm/msm/dp/dp_link.h
@@ -86,7 +86,6 @@ struct dp_link_params {
struct dp_link {
u32 sink_request;
u32 test_response;
- bool psm_enabled;
struct dp_link_sink_count sink_count;
struct dp_link_test_video test_video;
@@ -121,9 +120,12 @@ static inline char *dp_link_get_phy_test_pattern(u32 phy_test_pattern_sel)
case DP_TEST_PHY_PATTERN_80_BIT_CUSTOM_PATTERN:
return DP_LINK_ENUM_STR(
DP_TEST_PHY_PATTERN_80_BIT_CUSTOM_PATTERN);
- case DP_TEST_PHY_PATTERN_HBR2_CTS_EYE_PATTERN:
- return DP_LINK_ENUM_STR(
- DP_TEST_PHY_PATTERN_HBR2_CTS_EYE_PATTERN);
+ case DP_TEST_PHY_PATTERN_CP2520_PATTERN_1:
+ return DP_LINK_ENUM_STR(DP_TEST_PHY_PATTERN_CP2520_PATTERN_1);
+ case DP_TEST_PHY_PATTERN_CP2520_PATTERN_2:
+ return DP_LINK_ENUM_STR(DP_TEST_PHY_PATTERN_CP2520_PATTERN_2);
+ case DP_TEST_PHY_PATTERN_CP2520_PATTERN_3:
+ return DP_LINK_ENUM_STR(DP_TEST_PHY_PATTERN_CP2520_PATTERN_3);
default:
return "unknown";
}
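The hunks above replace the single HBR2_CTS_EYE_PATTERN case with the three CP2520 patterns. A minimal sketch of the name lookup this enables, using the TEST_PHY_PATTERN codes as commonly documented for DP 1.4 (the numeric values below are my assumption, not taken from this patch):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical DPCD TEST_PHY_PATTERN (0x248) codes; values assumed from
 * common DP 1.4 documentation, not defined in this patch. */
enum phy_test_pattern {
	PATTERN_NONE                       = 0x0,
	PATTERN_D10_2                      = 0x1,
	PATTERN_SYMBOL_ERR_MEASUREMENT_CNT = 0x2,
	PATTERN_PRBS7                      = 0x3,
	PATTERN_80_BIT_CUSTOM              = 0x4,
	PATTERN_CP2520_1                   = 0x5, /* HBR2 compliance eye pattern */
	PATTERN_CP2520_2                   = 0x6,
	PATTERN_CP2520_3                   = 0x7, /* TPS4 */
};

/* Mirrors the shape of dp_link_get_phy_test_pattern() above. */
static const char *pattern_name(enum phy_test_pattern p)
{
	switch (p) {
	case PATTERN_CP2520_1: return "CP2520_PATTERN_1";
	case PATTERN_CP2520_2: return "CP2520_PATTERN_2";
	case PATTERN_CP2520_3: return "CP2520_PATTERN_3";
	default:               return "unknown";
	}
}
```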
diff --git a/drivers/gpu/drm/msm/dp/dp_panel.c b/drivers/gpu/drm/msm/dp/dp_panel.c
index c24356a..b5dd9bc 100644
--- a/drivers/gpu/drm/msm/dp/dp_panel.c
+++ b/drivers/gpu/drm/msm/dp/dp_panel.c
@@ -19,8 +19,46 @@
#define DP_PANEL_DEFAULT_BPP 24
#define DP_MAX_DS_PORT_COUNT 1
-enum {
- DP_LINK_RATE_MULTIPLIER = 27000000,
+#define DPRX_FEATURE_ENUMERATION_LIST 0x2210
+#define VSC_SDP_EXTENSION_FOR_COLORIMETRY_SUPPORTED BIT(3)
+#define VSC_EXT_VESA_SDP_SUPPORTED BIT(4)
+#define VSC_EXT_VESA_SDP_CHAINING_SUPPORTED BIT(5)
+
+enum dp_panel_hdr_pixel_encoding {
+ RGB,
+ YCbCr444,
+ YCbCr422,
+ YCbCr420,
+ YONLY,
+ RAW,
+};
+
+enum dp_panel_hdr_rgb_colorimetry {
+ sRGB,
+ RGB_WIDE_GAMUT_FIXED_POINT,
+ RGB_WIDE_GAMUT_FLOATING_POINT,
+ ADOBERGB,
+ DCI_P3,
+ CUSTOM_COLOR_PROFILE,
+ ITU_R_BT_2020_RGB,
+};
+
+enum dp_panel_hdr_dynamic_range {
+ VESA,
+ CEA,
+};
+
+enum dp_panel_hdr_content_type {
+ NOT_DEFINED,
+ GRAPHICS,
+ PHOTO,
+ VIDEO,
+ GAME,
+};
+
+enum dp_panel_hdr_state {
+ HDR_DISABLED,
+ HDR_ENABLED,
};
struct dp_panel_private {
@@ -29,9 +67,17 @@ struct dp_panel_private {
struct dp_aux *aux;
struct dp_link *link;
struct dp_catalog_panel *catalog;
- bool aux_cfg_update_done;
bool custom_edid;
bool custom_dpcd;
+ bool panel_on;
+ bool vsc_supported;
+ bool vscext_supported;
+ bool vscext_chaining_supported;
+ enum dp_panel_hdr_state hdr_state;
+ u8 spd_vendor_name[8];
+ u8 spd_product_description[16];
+ u8 major;
+ u8 minor;
};
static const struct dp_panel_info fail_safe = {
@@ -51,12 +97,19 @@ static const struct dp_panel_info fail_safe = {
.bpp = 24,
};
+/* OEM NAME */
+static const u8 vendor_name[8] = {81, 117, 97, 108, 99, 111, 109, 109};
+
+/* MODEL NAME */
+static const u8 product_desc[16] = {83, 110, 97, 112, 100, 114, 97, 103,
+ 111, 110, 0, 0, 0, 0, 0, 0};
+
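The two arrays above carry raw ASCII codes for the SPD infoframe. Decoding them shows the strings they encode (the `spd_equals` helper is mine, for illustration only):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Same byte values as the vendor_name/product_desc arrays in the patch. */
static const unsigned char spd_vendor[8] = {81, 117, 97, 108, 99, 111, 109, 109};
static const unsigned char spd_product[16] = {83, 110, 97, 112, 100, 114, 97, 103,
					      111, 110, 0, 0, 0, 0, 0, 0};

/* True when the first strlen(s) bytes match s and fit in the field. */
static int spd_equals(const unsigned char *bytes, size_t n, const char *s)
{
	size_t len = strlen(s);

	return len <= n && strncmp((const char *)bytes, s, len) == 0;
}
```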
static int dp_panel_read_dpcd(struct dp_panel *dp_panel)
{
int rlen, rc = 0;
struct dp_panel_private *panel;
struct drm_dp_link *link_info;
- u8 *dpcd, major = 0, minor = 0;
+ u8 *dpcd, rx_feature;
u32 dfp_count = 0;
unsigned long caps = DP_LINK_CAP_ENHANCED_FRAMING;
@@ -76,7 +129,11 @@ static int dp_panel_read_dpcd(struct dp_panel *dp_panel)
dp_panel->dpcd, (DP_RECEIVER_CAP_SIZE + 1));
if (rlen < (DP_RECEIVER_CAP_SIZE + 1)) {
pr_err("dpcd read failed, rlen=%d\n", rlen);
- rc = -EINVAL;
+ if (rlen == -ETIMEDOUT)
+ rc = rlen;
+ else
+ rc = -EINVAL;
+
goto end;
}
@@ -84,11 +141,33 @@ static int dp_panel_read_dpcd(struct dp_panel *dp_panel)
DUMP_PREFIX_NONE, 8, 1, dp_panel->dpcd, rlen, false);
}
+ rlen = drm_dp_dpcd_read(panel->aux->drm_aux,
+ DPRX_FEATURE_ENUMERATION_LIST, &rx_feature, 1);
+ if (rlen != 1) {
+ pr_debug("failed to read DPRX_FEATURE_ENUMERATION_LIST\n");
+ panel->vsc_supported = false;
+ panel->vscext_supported = false;
+ panel->vscext_chaining_supported = false;
+ } else {
+ panel->vsc_supported = !!(rx_feature &
+ VSC_SDP_EXTENSION_FOR_COLORIMETRY_SUPPORTED);
+
+ panel->vscext_supported = !!(rx_feature &
+ VSC_EXT_VESA_SDP_SUPPORTED);
+
+ panel->vscext_chaining_supported = !!(rx_feature &
+ VSC_EXT_VESA_SDP_CHAINING_SUPPORTED);
+ }
+
+ pr_debug("vsc=%d, vscext=%d, vscext_chaining=%d\n",
+ panel->vsc_supported, panel->vscext_supported,
+ panel->vscext_chaining_supported);
+
link_info->revision = dp_panel->dpcd[DP_DPCD_REV];
- major = (link_info->revision >> 4) & 0x0f;
- minor = link_info->revision & 0x0f;
- pr_debug("version: %d.%d\n", major, minor);
+ panel->major = (link_info->revision >> 4) & 0x0f;
+ panel->minor = link_info->revision & 0x0f;
+ pr_debug("version: %d.%d\n", panel->major, panel->minor);
link_info->rate =
drm_dp_bw_code_to_link_rate(dp_panel->dpcd[DP_MAX_LINK_RATE]);
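The revision byte read above packs the DPCD major/minor version into its high and low nibbles; together with the vsc/vscext feature bits it later gates HDR support in dp_panel_hdr_supported(). A minimal sketch of that logic (helper names are mine, not the driver's):

```c
#include <assert.h>
#include <stdbool.h>

/* High nibble of DP_DPCD_REV is the major version, low nibble the minor. */
static unsigned int dpcd_major(unsigned char rev) { return (rev >> 4) & 0x0f; }
static unsigned int dpcd_minor(unsigned char rev) { return rev & 0x0f; }

/* Same predicate as dp_panel_hdr_supported(): DP 1.4+ sinks only need VSC
 * colorimetry support; pre-1.4 sinks must also support the VSC EXT SDP. */
static bool hdr_supported(unsigned int major, unsigned int minor,
			  bool vsc, bool vscext)
{
	return major >= 1 && vsc && (minor >= 4 || vscext);
}
```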
@@ -192,8 +271,6 @@ static int dp_panel_set_dpcd(struct dp_panel *dp_panel, u8 *dpcd)
static int dp_panel_read_edid(struct dp_panel *dp_panel,
struct drm_connector *connector)
{
- int retry_cnt = 0;
- const int max_retry = 10;
struct dp_panel_private *panel;
if (!dp_panel) {
@@ -208,24 +285,19 @@ static int dp_panel_read_edid(struct dp_panel *dp_panel,
return 0;
}
- do {
- sde_get_edid(connector, &panel->aux->drm_aux->ddc,
- (void **)&dp_panel->edid_ctrl);
- if (!dp_panel->edid_ctrl->edid) {
- pr_err("EDID read failed\n");
- retry_cnt++;
- panel->aux->reconfig(panel->aux);
- panel->aux_cfg_update_done = true;
- } else {
- u8 *buf = (u8 *)dp_panel->edid_ctrl->edid;
- u32 size = buf[0x7F] ? 256 : 128;
+ sde_get_edid(connector, &panel->aux->drm_aux->ddc,
+ (void **)&dp_panel->edid_ctrl);
+ if (!dp_panel->edid_ctrl->edid) {
+ pr_err("EDID read failed\n");
+ } else {
+ u8 *buf = (u8 *)dp_panel->edid_ctrl->edid;
+ u32 size = buf[0x7E] ? 256 : 128;
- print_hex_dump(KERN_DEBUG, "[drm-dp] SINK EDID: ",
- DUMP_PREFIX_NONE, 16, 1, buf, size, false);
+ print_hex_dump(KERN_DEBUG, "[drm-dp] SINK EDID: ",
+ DUMP_PREFIX_NONE, 16, 1, buf, size, false);
- return 0;
- }
- } while (retry_cnt < max_retry);
+ return 0;
+ }
return -EINVAL;
}
@@ -249,6 +321,10 @@ static int dp_panel_read_sink_caps(struct dp_panel *dp_panel,
dp_panel->link_info.num_lanes) ||
((drm_dp_link_rate_to_bw_code(dp_panel->link_info.rate)) >
dp_panel->max_bw_code)) {
+ if ((rc == -ETIMEDOUT) || (rc == -ENODEV)) {
+ pr_err("DPCD read failed, return early\n");
+ return rc;
+ }
pr_err("panel dpcd read failed/incorrect, set default params\n");
dp_panel_set_default_link_params(dp_panel);
}
@@ -259,12 +335,6 @@ static int dp_panel_read_sink_caps(struct dp_panel *dp_panel,
return rc;
}
- if (panel->aux_cfg_update_done) {
- pr_debug("read DPCD with updated AUX config\n");
- dp_panel_read_dpcd(dp_panel);
- panel->aux_cfg_update_done = false;
- }
-
return 0;
}
@@ -400,6 +470,58 @@ static void dp_panel_handle_sink_request(struct dp_panel *dp_panel)
}
}
+static void dp_panel_tpg_config(struct dp_panel *dp_panel, bool enable)
+{
+ u32 hsync_start_x, hsync_end_x;
+ struct dp_catalog_panel *catalog;
+ struct dp_panel_private *panel;
+ struct dp_panel_info *pinfo;
+
+ if (!dp_panel) {
+ pr_err("invalid input\n");
+ return;
+ }
+
+ panel = container_of(dp_panel, struct dp_panel_private, dp_panel);
+ catalog = panel->catalog;
+ pinfo = &panel->dp_panel.pinfo;
+
+ if (!panel->panel_on) {
+ pr_debug("DP panel not enabled, handle TPG on next panel on\n");
+ return;
+ }
+
+ if (!enable) {
+ panel->catalog->tpg_config(catalog, false);
+ return;
+ }
+
+ /* TPG config */
+ catalog->hsync_period = pinfo->h_sync_width + pinfo->h_back_porch +
+ pinfo->h_active + pinfo->h_front_porch;
+ catalog->vsync_period = pinfo->v_sync_width + pinfo->v_back_porch +
+ pinfo->v_active + pinfo->v_front_porch;
+
+ catalog->display_v_start = ((pinfo->v_sync_width +
+ pinfo->v_back_porch) * catalog->hsync_period);
+ catalog->display_v_end = ((catalog->vsync_period -
+ pinfo->v_front_porch) * catalog->hsync_period) - 1;
+
+ catalog->display_v_start += pinfo->h_sync_width + pinfo->h_back_porch;
+ catalog->display_v_end -= pinfo->h_front_porch;
+
+ hsync_start_x = pinfo->h_back_porch + pinfo->h_sync_width;
+ hsync_end_x = catalog->hsync_period - pinfo->h_front_porch - 1;
+
+ catalog->v_sync_width = pinfo->v_sync_width;
+
+ catalog->hsync_ctl = (catalog->hsync_period << 16) |
+ pinfo->h_sync_width;
+ catalog->display_hctl = (hsync_end_x << 16) | hsync_start_x;
+
+ panel->catalog->tpg_config(catalog, true);
+}
+
static int dp_panel_timing_cfg(struct dp_panel *dp_panel)
{
int rc = 0;
@@ -459,6 +581,7 @@ static int dp_panel_timing_cfg(struct dp_panel *dp_panel)
catalog->dp_active = data;
panel->catalog->timing_cfg(catalog);
+ panel->panel_on = true;
end:
return rc;
}
@@ -533,6 +656,7 @@ static int dp_panel_deinit_panel_info(struct dp_panel *dp_panel)
sde_free_edid((void **)&dp_panel->edid_ctrl);
memset(&dp_panel->pinfo, 0, sizeof(dp_panel->pinfo));
+ panel->panel_on = false;
return rc;
}
@@ -563,37 +687,44 @@ static u32 dp_panel_get_min_req_link_rate(struct dp_panel *dp_panel)
return min_link_rate_khz;
}
-enum dp_panel_hdr_pixel_encoding {
- RGB,
- YCbCr444,
- YCbCr422,
- YCbCr420,
- YONLY,
- RAW,
-};
+static bool dp_panel_hdr_supported(struct dp_panel *dp_panel)
+{
+ struct dp_panel_private *panel;
-enum dp_panel_hdr_rgb_colorimetry {
- sRGB,
- RGB_WIDE_GAMUT_FIXED_POINT,
- RGB_WIDE_GAMUT_FLOATING_POINT,
- ADOBERGB,
- DCI_P3,
- CUSTOM_COLOR_PROFILE,
- ITU_R_BT_2020_RGB,
-};
+ if (!dp_panel) {
+ pr_err("invalid input\n");
+ return false;
+ }
-enum dp_panel_hdr_dynamic_range {
- VESA,
- CEA,
-};
+ panel = container_of(dp_panel, struct dp_panel_private, dp_panel);
-enum dp_panel_hdr_content_type {
- NOT_DEFINED,
- GRAPHICS,
- PHOTO,
- VIDEO,
- GAME,
-};
+ return panel->major >= 1 && panel->vsc_supported &&
+ (panel->minor >= 4 || panel->vscext_supported);
+}
+
+static bool dp_panel_is_validate_hdr_state(struct dp_panel_private *panel,
+ struct drm_msm_ext_hdr_metadata *hdr_meta)
+{
+ struct drm_msm_ext_hdr_metadata *panel_hdr_meta =
+ &panel->catalog->hdr_data.hdr_meta;
+
+ if (!hdr_meta)
+ goto end;
+
+ /* bail out if HDR not active */
+ if (hdr_meta->hdr_state == HDR_DISABLED &&
+ panel->hdr_state == HDR_DISABLED)
+ goto end;
+
+ /* bail out if same meta data is received */
+ if (hdr_meta->hdr_state == HDR_ENABLED &&
+ panel_hdr_meta->eotf == hdr_meta->eotf)
+ goto end;
+
+ return true;
+end:
+ return false;
+}
static int dp_panel_setup_hdr(struct dp_panel *dp_panel,
struct drm_msm_ext_hdr_metadata *hdr_meta)
@@ -602,9 +733,6 @@ static int dp_panel_setup_hdr(struct dp_panel *dp_panel,
struct dp_panel_private *panel;
struct dp_catalog_hdr_data *hdr;
- if (!hdr_meta || !hdr_meta->hdr_state)
- goto end;
-
if (!dp_panel) {
pr_err("invalid input\n");
rc = -EINVAL;
@@ -612,34 +740,76 @@ static int dp_panel_setup_hdr(struct dp_panel *dp_panel,
}
panel = container_of(dp_panel, struct dp_panel_private, dp_panel);
+
+ if (!dp_panel_is_validate_hdr_state(panel, hdr_meta))
+ goto end;
+
+ panel->hdr_state = hdr_meta->hdr_state;
+
hdr = &panel->catalog->hdr_data;
+ hdr->ext_header_byte0 = 0x00;
+ hdr->ext_header_byte1 = 0x04;
+ hdr->ext_header_byte2 = 0x1F;
+ hdr->ext_header_byte3 = 0x00;
+
hdr->vsc_header_byte0 = 0x00;
hdr->vsc_header_byte1 = 0x07;
hdr->vsc_header_byte2 = 0x05;
hdr->vsc_header_byte3 = 0x13;
+ hdr->vscext_header_byte0 = 0x00;
+ hdr->vscext_header_byte1 = 0x87;
+ hdr->vscext_header_byte2 = 0x1D;
+ hdr->vscext_header_byte3 = 0x13 << 2;
+
/* VSC SDP Payload for DB16 */
hdr->pixel_encoding = RGB;
hdr->colorimetry = ITU_R_BT_2020_RGB;
/* VSC SDP Payload for DB17 */
hdr->dynamic_range = CEA;
- hdr->bpc = 10;
/* VSC SDP Payload for DB18 */
hdr->content_type = GRAPHICS;
- hdr->vscext_header_byte0 = 0x00;
- hdr->vscext_header_byte1 = 0x87;
- hdr->vscext_header_byte2 = 0x1D;
- hdr->vscext_header_byte3 = 0x13 << 2;
+ hdr->bpc = dp_panel->pinfo.bpp / 3;
hdr->version = 0x01;
+ hdr->length = 0x1A;
- memcpy(&hdr->hdr_meta, hdr_meta, sizeof(hdr->hdr_meta));
+ if (panel->hdr_state)
+ memcpy(&hdr->hdr_meta, hdr_meta, sizeof(hdr->hdr_meta));
+ else
+ memset(&hdr->hdr_meta, 0, sizeof(hdr->hdr_meta));
- panel->catalog->config_hdr(panel->catalog);
+ panel->catalog->config_hdr(panel->catalog, panel->hdr_state);
+end:
+ return rc;
+}
+
+static int dp_panel_spd_config(struct dp_panel *dp_panel)
+{
+ int rc = 0;
+ struct dp_panel_private *panel;
+
+ if (!dp_panel) {
+ pr_err("invalid input\n");
+ rc = -EINVAL;
+ goto end;
+ }
+
+ if (!dp_panel->spd_enabled) {
+ pr_debug("SPD Infoframe not enabled\n");
+ goto end;
+ }
+
+ panel = container_of(dp_panel, struct dp_panel_private, dp_panel);
+
+ panel->catalog->spd_vendor_name = panel->spd_vendor_name;
+ panel->catalog->spd_product_description =
+ panel->spd_product_description;
+ panel->catalog->config_spd(panel->catalog);
end:
return rc;
}
@@ -668,8 +838,10 @@ struct dp_panel *dp_panel_get(struct dp_panel_in *in)
panel->link = in->link;
dp_panel = &panel->dp_panel;
- panel->aux_cfg_update_done = false;
dp_panel->max_bw_code = DP_LINK_BW_8_1;
+ dp_panel->spd_enabled = true;
+ memcpy(panel->spd_vendor_name, vendor_name, (sizeof(u8) * 8));
+ memcpy(panel->spd_product_description, product_desc, (sizeof(u8) * 16));
dp_panel->init = dp_panel_init_panel_info;
dp_panel->deinit = dp_panel_deinit_panel_info;
@@ -681,9 +853,12 @@ struct dp_panel *dp_panel_get(struct dp_panel_in *in)
dp_panel->handle_sink_request = dp_panel_handle_sink_request;
dp_panel->set_edid = dp_panel_set_edid;
dp_panel->set_dpcd = dp_panel_set_dpcd;
+ dp_panel->tpg_config = dp_panel_tpg_config;
+ dp_panel->spd_config = dp_panel_spd_config;
+ dp_panel->setup_hdr = dp_panel_setup_hdr;
+ dp_panel->hdr_supported = dp_panel_hdr_supported;
dp_panel_edid_register(panel);
- dp_panel->setup_hdr = dp_panel_setup_hdr;
return dp_panel;
error:
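The TPG timing math in dp_panel_tpg_config() above converts the panel porch/sync values into line-count-scaled start/end positions. A sketch with a conventional 1080p60 mode (the helper names and the sample timings are mine; the arithmetic mirrors the function):

```c
#include <assert.h>

/* hsync/vsync period in pixels/lines, as computed in dp_panel_tpg_config(). */
static unsigned int tpg_hsync_period(unsigned int h_sw, unsigned int h_bp,
				     unsigned int h_active, unsigned int h_fp)
{
	return h_sw + h_bp + h_active + h_fp;
}

/* Vertical start, scaled by the hsync period, then offset into the line. */
static unsigned int tpg_display_v_start(unsigned int v_sw, unsigned int v_bp,
					unsigned int h_sw, unsigned int h_bp,
					unsigned int hsync_period)
{
	return (v_sw + v_bp) * hsync_period + h_sw + h_bp;
}

/* Vertical end: last active pixel before the front porch begins. */
static unsigned int tpg_display_v_end(unsigned int vsync_period,
				      unsigned int v_fp, unsigned int h_fp,
				      unsigned int hsync_period)
{
	return (vsync_period - v_fp) * hsync_period - 1 - h_fp;
}
```

For 1080p60 (h: 44/148/1920/88, v: 5/36/1080/4) this yields an hsync period of 2200 and a vsync period of 1125, matching the usual CEA timing.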
diff --git a/drivers/gpu/drm/msm/dp/dp_panel.h b/drivers/gpu/drm/msm/dp/dp_panel.h
index 0bd1b1d..2583f61 100644
--- a/drivers/gpu/drm/msm/dp/dp_panel.h
+++ b/drivers/gpu/drm/msm/dp/dp_panel.h
@@ -68,6 +68,7 @@ struct dp_panel {
struct sde_edid_ctrl *edid_ctrl;
struct dp_panel_info pinfo;
bool video_test;
+ bool spd_enabled;
u32 vic;
u32 max_pclk_khz;
@@ -90,6 +91,9 @@ struct dp_panel {
int (*set_dpcd)(struct dp_panel *dp_panel, u8 *dpcd);
int (*setup_hdr)(struct dp_panel *dp_panel,
struct drm_msm_ext_hdr_metadata *hdr_meta);
+ void (*tpg_config)(struct dp_panel *dp_panel, bool enable);
+ int (*spd_config)(struct dp_panel *dp_panel);
+ bool (*hdr_supported)(struct dp_panel *dp_panel);
};
/**
diff --git a/drivers/gpu/drm/msm/dp/dp_parser.c b/drivers/gpu/drm/msm/dp/dp_parser.c
index 42d9429..c112cdc 100644
--- a/drivers/gpu/drm/msm/dp/dp_parser.c
+++ b/drivers/gpu/drm/msm/dp/dp_parser.c
@@ -22,7 +22,10 @@ static void dp_parser_unmap_io_resources(struct dp_parser *parser)
{
struct dp_io *io = &parser->io;
- msm_dss_iounmap(&io->ctrl_io);
+ msm_dss_iounmap(&io->dp_ahb);
+ msm_dss_iounmap(&io->dp_aux);
+ msm_dss_iounmap(&io->dp_link);
+ msm_dss_iounmap(&io->dp_p0);
msm_dss_iounmap(&io->phy_io);
msm_dss_iounmap(&io->ln_tx0_io);
msm_dss_iounmap(&io->ln_tx1_io);
@@ -47,7 +50,25 @@ static int dp_parser_ctrl_res(struct dp_parser *parser)
goto err;
}
- rc = msm_dss_ioremap_byname(pdev, &io->ctrl_io, "dp_ctrl");
+ rc = msm_dss_ioremap_byname(pdev, &io->dp_ahb, "dp_ahb");
+ if (rc) {
+ pr_err("unable to remap dp io resources\n");
+ goto err;
+ }
+
+ rc = msm_dss_ioremap_byname(pdev, &io->dp_aux, "dp_aux");
+ if (rc) {
+ pr_err("unable to remap dp io resources\n");
+ goto err;
+ }
+
+ rc = msm_dss_ioremap_byname(pdev, &io->dp_link, "dp_link");
+ if (rc) {
+ pr_err("unable to remap dp io resources\n");
+ goto err;
+ }
+
+ rc = msm_dss_ioremap_byname(pdev, &io->dp_p0, "dp_p0");
if (rc) {
pr_err("unable to remap dp io resources\n");
goto err;
diff --git a/drivers/gpu/drm/msm/dp/dp_parser.h b/drivers/gpu/drm/msm/dp/dp_parser.h
index 76a72a2..72da381 100644
--- a/drivers/gpu/drm/msm/dp/dp_parser.h
+++ b/drivers/gpu/drm/msm/dp/dp_parser.h
@@ -58,7 +58,10 @@ struct dp_display_data {
/**
* struct dp_ctrl_resource - controller's IO related data
*
- * @ctrl_io: controller's mapped memory address
+ * @dp_ahb: controller's ahb mapped memory address
+ * @dp_aux: controller's aux mapped memory address
+ * @dp_link: controller's link mapped memory address
+ * @dp_p0: controller's p0 mapped memory address
* @phy_io: phy's mapped memory address
* @ln_tx0_io: USB-DP lane TX0's mapped memory address
* @ln_tx1_io: USB-DP lane TX1's mapped memory address
@@ -70,6 +73,10 @@ struct dp_display_data {
*/
struct dp_io {
struct dss_io_data ctrl_io;
+ struct dss_io_data dp_ahb;
+ struct dss_io_data dp_aux;
+ struct dss_io_data dp_link;
+ struct dss_io_data dp_p0;
struct dss_io_data phy_io;
struct dss_io_data ln_tx0_io;
struct dss_io_data ln_tx1_io;
diff --git a/drivers/gpu/drm/msm/dp/dp_reg.h b/drivers/gpu/drm/msm/dp/dp_reg.h
index 25d035d..4e2194e 100644
--- a/drivers/gpu/drm/msm/dp/dp_reg.h
+++ b/drivers/gpu/drm/msm/dp/dp_reg.h
@@ -25,137 +25,158 @@
#define DP_INTR_STATUS2 (0x00000024)
#define DP_INTR_STATUS3 (0x00000028)
-#define DP_DP_HPD_CTRL (0x00000200)
-#define DP_DP_HPD_INT_STATUS (0x00000204)
-#define DP_DP_HPD_INT_ACK (0x00000208)
-#define DP_DP_HPD_INT_MASK (0x0000020C)
-#define DP_DP_HPD_REFTIMER (0x00000218)
-#define DP_DP_HPD_EVENT_TIME_0 (0x0000021C)
-#define DP_DP_HPD_EVENT_TIME_1 (0x00000220)
-#define DP_AUX_CTRL (0x00000230)
-#define DP_AUX_DATA (0x00000234)
-#define DP_AUX_TRANS_CTRL (0x00000238)
-#define DP_TIMEOUT_COUNT (0x0000023C)
-#define DP_AUX_LIMITS (0x00000240)
-#define DP_AUX_STATUS (0x00000244)
+#define DP_DP_HPD_CTRL (0x00000000)
+#define DP_DP_HPD_INT_STATUS (0x00000004)
+#define DP_DP_HPD_INT_ACK (0x00000008)
+#define DP_DP_HPD_INT_MASK (0x0000000C)
+#define DP_DP_HPD_REFTIMER (0x00000018)
+#define DP_DP_HPD_EVENT_TIME_0 (0x0000001C)
+#define DP_DP_HPD_EVENT_TIME_1 (0x00000020)
+#define DP_AUX_CTRL (0x00000030)
+#define DP_AUX_DATA (0x00000034)
+#define DP_AUX_TRANS_CTRL (0x00000038)
+#define DP_TIMEOUT_COUNT (0x0000003C)
+#define DP_AUX_LIMITS (0x00000040)
+#define DP_AUX_STATUS (0x00000044)
#define DP_DPCD_CP_IRQ (0x201)
#define DP_DPCD_RXSTATUS (0x69493)
-#define DP_INTERRUPT_TRANS_NUM (0x000002A0)
+#define DP_INTERRUPT_TRANS_NUM (0x000000A0)
-#define DP_MAINLINK_CTRL (0x00000400)
-#define DP_STATE_CTRL (0x00000404)
-#define DP_CONFIGURATION_CTRL (0x00000408)
-#define DP_SOFTWARE_MVID (0x00000410)
-#define DP_SOFTWARE_NVID (0x00000418)
-#define DP_TOTAL_HOR_VER (0x0000041C)
-#define DP_START_HOR_VER_FROM_SYNC (0x00000420)
-#define DP_HSYNC_VSYNC_WIDTH_POLARITY (0x00000424)
-#define DP_ACTIVE_HOR_VER (0x00000428)
-#define DP_MISC1_MISC0 (0x0000042C)
-#define DP_VALID_BOUNDARY (0x00000430)
-#define DP_VALID_BOUNDARY_2 (0x00000434)
-#define DP_LOGICAL2PHYSCIAL_LANE_MAPPING (0x00000438)
+#define DP_MAINLINK_CTRL (0x00000000)
+#define DP_STATE_CTRL (0x00000004)
+#define DP_CONFIGURATION_CTRL (0x00000008)
+#define DP_SOFTWARE_MVID (0x00000010)
+#define DP_SOFTWARE_NVID (0x00000018)
+#define DP_TOTAL_HOR_VER (0x0000001C)
+#define DP_START_HOR_VER_FROM_SYNC (0x00000020)
+#define DP_HSYNC_VSYNC_WIDTH_POLARITY (0x00000024)
+#define DP_ACTIVE_HOR_VER (0x00000028)
+#define DP_MISC1_MISC0 (0x0000002C)
+#define DP_VALID_BOUNDARY (0x00000030)
+#define DP_VALID_BOUNDARY_2 (0x00000034)
+#define DP_LOGICAL2PHYSICAL_LANE_MAPPING (0x00000038)
-#define DP_MAINLINK_READY (0x00000440)
-#define DP_MAINLINK_LEVELS (0x00000444)
-#define DP_TU (0x0000044C)
+#define DP_MAINLINK_READY (0x00000040)
+#define DP_MAINLINK_LEVELS (0x00000044)
+#define DP_TU (0x0000004C)
-#define DP_HBR2_COMPLIANCE_SCRAMBLER_RESET (0x00000454)
-#define DP_TEST_80BIT_CUSTOM_PATTERN_REG0 (0x000004C0)
-#define DP_TEST_80BIT_CUSTOM_PATTERN_REG1 (0x000004C4)
-#define DP_TEST_80BIT_CUSTOM_PATTERN_REG2 (0x000004C8)
+#define DP_HBR2_COMPLIANCE_SCRAMBLER_RESET (0x00000054)
+#define DP_TEST_80BIT_CUSTOM_PATTERN_REG0 (0x000000C0)
+#define DP_TEST_80BIT_CUSTOM_PATTERN_REG1 (0x000000C4)
+#define DP_TEST_80BIT_CUSTOM_PATTERN_REG2 (0x000000C8)
-#define MMSS_DP_MISC1_MISC0 (0x0000042C)
-#define MMSS_DP_AUDIO_TIMING_GEN (0x00000480)
-#define MMSS_DP_AUDIO_TIMING_RBR_32 (0x00000484)
-#define MMSS_DP_AUDIO_TIMING_HBR_32 (0x00000488)
-#define MMSS_DP_AUDIO_TIMING_RBR_44 (0x0000048C)
-#define MMSS_DP_AUDIO_TIMING_HBR_44 (0x00000490)
-#define MMSS_DP_AUDIO_TIMING_RBR_48 (0x00000494)
-#define MMSS_DP_AUDIO_TIMING_HBR_48 (0x00000498)
+#define MMSS_DP_MISC1_MISC0 (0x0000002C)
+#define MMSS_DP_AUDIO_TIMING_GEN (0x00000080)
+#define MMSS_DP_AUDIO_TIMING_RBR_32 (0x00000084)
+#define MMSS_DP_AUDIO_TIMING_HBR_32 (0x00000088)
+#define MMSS_DP_AUDIO_TIMING_RBR_44 (0x0000008C)
+#define MMSS_DP_AUDIO_TIMING_HBR_44 (0x00000090)
+#define MMSS_DP_AUDIO_TIMING_RBR_48 (0x00000094)
+#define MMSS_DP_AUDIO_TIMING_HBR_48 (0x00000098)
-#define MMSS_DP_PSR_CRC_RG (0x00000554)
-#define MMSS_DP_PSR_CRC_B (0x00000558)
+#define MMSS_DP_PSR_CRC_RG (0x00000154)
+#define MMSS_DP_PSR_CRC_B (0x00000158)
-#define DP_COMPRESSION_MODE_CTRL (0x00000580)
+#define DP_COMPRESSION_MODE_CTRL (0x00000180)
-#define MMSS_DP_AUDIO_CFG (0x00000600)
-#define MMSS_DP_AUDIO_STATUS (0x00000604)
-#define MMSS_DP_AUDIO_PKT_CTRL (0x00000608)
-#define MMSS_DP_AUDIO_PKT_CTRL2 (0x0000060C)
-#define MMSS_DP_AUDIO_ACR_CTRL (0x00000610)
-#define MMSS_DP_AUDIO_CTRL_RESET (0x00000614)
+#define MMSS_DP_AUDIO_CFG (0x00000200)
+#define MMSS_DP_AUDIO_STATUS (0x00000204)
+#define MMSS_DP_AUDIO_PKT_CTRL (0x00000208)
+#define MMSS_DP_AUDIO_PKT_CTRL2 (0x0000020C)
+#define MMSS_DP_AUDIO_ACR_CTRL (0x00000210)
+#define MMSS_DP_AUDIO_CTRL_RESET (0x00000214)
-#define MMSS_DP_SDP_CFG (0x00000628)
-#define MMSS_DP_SDP_CFG2 (0x0000062C)
-#define MMSS_DP_AUDIO_TIMESTAMP_0 (0x00000630)
-#define MMSS_DP_AUDIO_TIMESTAMP_1 (0x00000634)
+#define MMSS_DP_SDP_CFG (0x00000228)
+#define MMSS_DP_SDP_CFG2 (0x0000022C)
+#define MMSS_DP_SDP_CFG3 (0x0000024C)
+#define MMSS_DP_AUDIO_TIMESTAMP_0 (0x00000230)
+#define MMSS_DP_AUDIO_TIMESTAMP_1 (0x00000234)
-#define MMSS_DP_AUDIO_STREAM_0 (0x00000640)
-#define MMSS_DP_AUDIO_STREAM_1 (0x00000644)
+#define MMSS_DP_AUDIO_STREAM_0 (0x00000240)
+#define MMSS_DP_AUDIO_STREAM_1 (0x00000244)
-#define MMSS_DP_EXTENSION_0 (0x00000650)
-#define MMSS_DP_EXTENSION_1 (0x00000654)
-#define MMSS_DP_EXTENSION_2 (0x00000658)
-#define MMSS_DP_EXTENSION_3 (0x0000065C)
-#define MMSS_DP_EXTENSION_4 (0x00000660)
-#define MMSS_DP_EXTENSION_5 (0x00000664)
-#define MMSS_DP_EXTENSION_6 (0x00000668)
-#define MMSS_DP_EXTENSION_7 (0x0000066C)
-#define MMSS_DP_EXTENSION_8 (0x00000670)
-#define MMSS_DP_EXTENSION_9 (0x00000674)
-#define MMSS_DP_AUDIO_COPYMANAGEMENT_0 (0x00000678)
-#define MMSS_DP_AUDIO_COPYMANAGEMENT_1 (0x0000067C)
-#define MMSS_DP_AUDIO_COPYMANAGEMENT_2 (0x00000680)
-#define MMSS_DP_AUDIO_COPYMANAGEMENT_3 (0x00000684)
-#define MMSS_DP_AUDIO_COPYMANAGEMENT_4 (0x00000688)
-#define MMSS_DP_AUDIO_COPYMANAGEMENT_5 (0x0000068C)
-#define MMSS_DP_AUDIO_ISRC_0 (0x00000690)
-#define MMSS_DP_AUDIO_ISRC_1 (0x00000694)
-#define MMSS_DP_AUDIO_ISRC_2 (0x00000698)
-#define MMSS_DP_AUDIO_ISRC_3 (0x0000069C)
-#define MMSS_DP_AUDIO_ISRC_4 (0x000006A0)
-#define MMSS_DP_AUDIO_ISRC_5 (0x000006A4)
-#define MMSS_DP_AUDIO_INFOFRAME_0 (0x000006A8)
-#define MMSS_DP_AUDIO_INFOFRAME_1 (0x000006AC)
-#define MMSS_DP_AUDIO_INFOFRAME_2 (0x000006B0)
+#define MMSS_DP_EXTENSION_0 (0x00000250)
+#define MMSS_DP_EXTENSION_1 (0x00000254)
+#define MMSS_DP_EXTENSION_2 (0x00000258)
+#define MMSS_DP_EXTENSION_3 (0x0000025C)
+#define MMSS_DP_EXTENSION_4 (0x00000260)
+#define MMSS_DP_EXTENSION_5 (0x00000264)
+#define MMSS_DP_EXTENSION_6 (0x00000268)
+#define MMSS_DP_EXTENSION_7 (0x0000026C)
+#define MMSS_DP_EXTENSION_8 (0x00000270)
+#define MMSS_DP_EXTENSION_9 (0x00000274)
+#define MMSS_DP_AUDIO_COPYMANAGEMENT_0 (0x00000278)
+#define MMSS_DP_AUDIO_COPYMANAGEMENT_1 (0x0000027C)
+#define MMSS_DP_AUDIO_COPYMANAGEMENT_2 (0x00000280)
+#define MMSS_DP_AUDIO_COPYMANAGEMENT_3 (0x00000284)
+#define MMSS_DP_AUDIO_COPYMANAGEMENT_4 (0x00000288)
+#define MMSS_DP_AUDIO_COPYMANAGEMENT_5 (0x0000028C)
+#define MMSS_DP_AUDIO_ISRC_0 (0x00000290)
+#define MMSS_DP_AUDIO_ISRC_1 (0x00000294)
+#define MMSS_DP_AUDIO_ISRC_2 (0x00000298)
+#define MMSS_DP_AUDIO_ISRC_3 (0x0000029C)
+#define MMSS_DP_AUDIO_ISRC_4 (0x000002A0)
+#define MMSS_DP_AUDIO_ISRC_5 (0x000002A4)
+#define MMSS_DP_AUDIO_INFOFRAME_0 (0x000002A8)
+#define MMSS_DP_AUDIO_INFOFRAME_1 (0x000002AC)
+#define MMSS_DP_AUDIO_INFOFRAME_2 (0x000002B0)
-#define MMSS_DP_GENERIC0_0 (0x00000700)
-#define MMSS_DP_GENERIC0_1 (0x00000704)
-#define MMSS_DP_GENERIC0_2 (0x00000708)
-#define MMSS_DP_GENERIC0_3 (0x0000070C)
-#define MMSS_DP_GENERIC0_4 (0x00000710)
-#define MMSS_DP_GENERIC0_5 (0x00000714)
-#define MMSS_DP_GENERIC0_6 (0x00000718)
-#define MMSS_DP_GENERIC0_7 (0x0000071C)
-#define MMSS_DP_GENERIC0_8 (0x00000720)
-#define MMSS_DP_GENERIC0_9 (0x00000724)
-#define MMSS_DP_GENERIC1_0 (0x00000728)
-#define MMSS_DP_GENERIC1_1 (0x0000072C)
-#define MMSS_DP_GENERIC1_2 (0x00000730)
-#define MMSS_DP_GENERIC1_3 (0x00000734)
-#define MMSS_DP_GENERIC1_4 (0x00000738)
-#define MMSS_DP_GENERIC1_5 (0x0000073C)
-#define MMSS_DP_GENERIC1_6 (0x00000740)
-#define MMSS_DP_GENERIC1_7 (0x00000744)
-#define MMSS_DP_GENERIC1_8 (0x00000748)
-#define MMSS_DP_GENERIC1_9 (0x0000074C)
+#define MMSS_DP_GENERIC0_0 (0x00000300)
+#define MMSS_DP_GENERIC0_1 (0x00000304)
+#define MMSS_DP_GENERIC0_2 (0x00000308)
+#define MMSS_DP_GENERIC0_3 (0x0000030C)
+#define MMSS_DP_GENERIC0_4 (0x00000310)
+#define MMSS_DP_GENERIC0_5 (0x00000314)
+#define MMSS_DP_GENERIC0_6 (0x00000318)
+#define MMSS_DP_GENERIC0_7 (0x0000031C)
+#define MMSS_DP_GENERIC0_8 (0x00000320)
+#define MMSS_DP_GENERIC0_9 (0x00000324)
+#define MMSS_DP_GENERIC1_0 (0x00000328)
+#define MMSS_DP_GENERIC1_1 (0x0000032C)
+#define MMSS_DP_GENERIC1_2 (0x00000330)
+#define MMSS_DP_GENERIC1_3 (0x00000334)
+#define MMSS_DP_GENERIC1_4 (0x00000338)
+#define MMSS_DP_GENERIC1_5 (0x0000033C)
+#define MMSS_DP_GENERIC1_6 (0x00000340)
+#define MMSS_DP_GENERIC1_7 (0x00000344)
+#define MMSS_DP_GENERIC1_8 (0x00000348)
+#define MMSS_DP_GENERIC1_9 (0x0000034C)
-#define MMSS_DP_VSCEXT_0 (0x000006D0)
-#define MMSS_DP_VSCEXT_1 (0x000006D4)
-#define MMSS_DP_VSCEXT_2 (0x000006D8)
-#define MMSS_DP_VSCEXT_3 (0x000006DC)
-#define MMSS_DP_VSCEXT_4 (0x000006E0)
-#define MMSS_DP_VSCEXT_5 (0x000006E4)
-#define MMSS_DP_VSCEXT_6 (0x000006E8)
-#define MMSS_DP_VSCEXT_7 (0x000006EC)
-#define MMSS_DP_VSCEXT_8 (0x000006F0)
-#define MMSS_DP_VSCEXT_9 (0x000006F4)
+#define MMSS_DP_VSCEXT_0 (0x000002D0)
+#define MMSS_DP_VSCEXT_1 (0x000002D4)
+#define MMSS_DP_VSCEXT_2 (0x000002D8)
+#define MMSS_DP_VSCEXT_3 (0x000002DC)
+#define MMSS_DP_VSCEXT_4 (0x000002E0)
+#define MMSS_DP_VSCEXT_5 (0x000002E4)
+#define MMSS_DP_VSCEXT_6 (0x000002E8)
+#define MMSS_DP_VSCEXT_7 (0x000002EC)
+#define MMSS_DP_VSCEXT_8 (0x000002F0)
+#define MMSS_DP_VSCEXT_9 (0x000002F4)
-#define MMSS_DP_TIMING_ENGINE_EN (0x00000A10)
-#define MMSS_DP_ASYNC_FIFO_CONFIG (0x00000A88)
+#define MMSS_DP_BIST_ENABLE (0x00000000)
+#define MMSS_DP_TIMING_ENGINE_EN (0x00000010)
+#define MMSS_DP_INTF_CONFIG (0x00000014)
+#define MMSS_DP_INTF_HSYNC_CTL (0x00000018)
+#define MMSS_DP_INTF_VSYNC_PERIOD_F0 (0x0000001C)
+#define MMSS_DP_INTF_VSYNC_PERIOD_F1 (0x00000020)
+#define MMSS_DP_INTF_VSYNC_PULSE_WIDTH_F0 (0x00000024)
+#define MMSS_DP_INTF_VSYNC_PULSE_WIDTH_F1 (0x00000028)
+#define MMSS_INTF_DISPLAY_V_START_F0 (0x0000002C)
+#define MMSS_INTF_DISPLAY_V_START_F1 (0x00000030)
+#define MMSS_DP_INTF_DISPLAY_V_END_F0 (0x00000034)
+#define MMSS_DP_INTF_DISPLAY_V_END_F1 (0x00000038)
+#define MMSS_DP_INTF_ACTIVE_V_START_F0 (0x0000003C)
+#define MMSS_DP_INTF_ACTIVE_V_START_F1 (0x00000040)
+#define MMSS_DP_INTF_ACTIVE_V_END_F0 (0x00000044)
+#define MMSS_DP_INTF_ACTIVE_V_END_F1 (0x00000048)
+#define MMSS_DP_INTF_DISPLAY_HCTL (0x0000004C)
+#define MMSS_DP_INTF_ACTIVE_HCTL (0x00000050)
+#define MMSS_DP_INTF_POLARITY_CTL (0x00000058)
+#define MMSS_DP_TPG_MAIN_CONTROL (0x00000060)
+#define MMSS_DP_TPG_VIDEO_CONFIG (0x00000064)
+#define MMSS_DP_ASYNC_FIFO_CONFIG (0x00000088)
/*DP PHY Register offsets */
#define DP_PHY_REVISION_ID0 (0x00000000)
@@ -197,14 +218,14 @@
/* DP HDCP 1.3 registers */
#define DP_HDCP_CTRL (0x0A0)
#define DP_HDCP_STATUS (0x0A4)
-#define DP_HDCP_SW_UPPER_AKSV (0x298)
-#define DP_HDCP_SW_LOWER_AKSV (0x29C)
-#define DP_HDCP_ENTROPY_CTRL0 (0x750)
-#define DP_HDCP_ENTROPY_CTRL1 (0x75C)
+#define DP_HDCP_SW_UPPER_AKSV (0x098)
+#define DP_HDCP_SW_LOWER_AKSV (0x09C)
+#define DP_HDCP_ENTROPY_CTRL0 (0x350)
+#define DP_HDCP_ENTROPY_CTRL1 (0x35C)
#define DP_HDCP_SHA_STATUS (0x0C8)
#define DP_HDCP_RCVPORT_DATA2_0 (0x0B0)
-#define DP_HDCP_RCVPORT_DATA3 (0x2A4)
-#define DP_HDCP_RCVPORT_DATA4 (0x2A8)
+#define DP_HDCP_RCVPORT_DATA3 (0x0A4)
+#define DP_HDCP_RCVPORT_DATA4 (0x0A8)
#define DP_HDCP_RCVPORT_DATA5 (0x0C0)
#define DP_HDCP_RCVPORT_DATA6 (0x0C4)
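The register rewrite above rebases each offset so it is relative to its own sub-block (dp_ahb/dp_aux/dp_link/dp_p0) instead of one flat controller mapping. The block bases below are inferred from the offset deltas in this hunk (an assumption, not stated in the patch):

```c
#include <assert.h>

/* Apparent sub-block bases within the old flat dp_ctrl mapping, inferred
 * from old-minus-new offset deltas; treat these values as assumptions. */
#define DP_AUX_BLOCK_BASE  0x200 /* e.g. DP_AUX_CTRL: 0x230 -> 0x030 */
#define DP_LINK_BLOCK_BASE 0x400 /* e.g. DP_MAINLINK_CTRL: 0x400 -> 0x000 */
#define DP_P0_BLOCK_BASE   0xA00 /* e.g. MMSS_DP_TIMING_ENGINE_EN: 0xA10 -> 0x010 */

/* Old absolute offset = sub-block base + new relative offset. */
static unsigned int old_offset(unsigned int block_base, unsigned int rel)
{
	return block_base + rel;
}
```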
diff --git a/drivers/gpu/drm/msm/dp/dp_usbpd.c b/drivers/gpu/drm/msm/dp/dp_usbpd.c
index 98781abb..3ddc499 100644
--- a/drivers/gpu/drm/msm/dp/dp_usbpd.c
+++ b/drivers/gpu/drm/msm/dp/dp_usbpd.c
@@ -64,6 +64,7 @@ struct dp_usbpd_capabilities {
};
struct dp_usbpd_private {
+ bool forced_disconnect;
u32 vdo;
struct device *dev;
struct usbpd *pd;
@@ -345,7 +346,7 @@ static void dp_usbpd_response_cb(struct usbpd_svid_handler *hdlr, u8 cmd,
dp_usbpd_send_event(pd, DP_USBPD_EVT_STATUS);
break;
case USBPD_SVDM_ATTENTION:
- if (pd->dp_usbpd.forced_disconnect)
+ if (pd->forced_disconnect)
break;
pd->vdo = *vdos;
@@ -396,7 +397,7 @@ static void dp_usbpd_response_cb(struct usbpd_svid_handler *hdlr, u8 cmd,
}
}
-static int dp_usbpd_connect(struct dp_usbpd *dp_usbpd, bool hpd)
+static int dp_usbpd_simulate_connect(struct dp_usbpd *dp_usbpd, bool hpd)
{
int rc = 0;
struct dp_usbpd_private *pd;
@@ -410,7 +411,7 @@ static int dp_usbpd_connect(struct dp_usbpd *dp_usbpd, bool hpd)
pd = container_of(dp_usbpd, struct dp_usbpd_private, dp_usbpd);
dp_usbpd->hpd_high = hpd;
- dp_usbpd->forced_disconnect = !hpd;
+ pd->forced_disconnect = !hpd;
if (hpd)
pd->dp_cb->configure(pd->dev);
@@ -469,7 +470,7 @@ struct dp_usbpd *dp_usbpd_get(struct device *dev, struct dp_usbpd_cb *cb)
}
dp_usbpd = &usbpd->dp_usbpd;
- dp_usbpd->connect = dp_usbpd_connect;
+ dp_usbpd->simulate_connect = dp_usbpd_simulate_connect;
return dp_usbpd;
error:
diff --git a/drivers/gpu/drm/msm/dp/dp_usbpd.h b/drivers/gpu/drm/msm/dp/dp_usbpd.h
index 5b392f5..e70ad7d 100644
--- a/drivers/gpu/drm/msm/dp/dp_usbpd.h
+++ b/drivers/gpu/drm/msm/dp/dp_usbpd.h
@@ -49,7 +49,7 @@ enum dp_usbpd_port {
* @hpd_irq: Change in the status since last message
* @alt_mode_cfg_done: bool to specify alt mode status
* @debug_en: bool to specify debug mode
- * @connect: simulate disconnect or connect for debug mode
+ * @simulate_connect: simulate disconnect or connect for debug mode
*/
struct dp_usbpd {
enum dp_usbpd_port port;
@@ -63,9 +63,8 @@ struct dp_usbpd {
bool hpd_irq;
bool alt_mode_cfg_done;
bool debug_en;
- bool forced_disconnect;
- int (*connect)(struct dp_usbpd *dp_usbpd, bool hpd);
+ int (*simulate_connect)(struct dp_usbpd *dp_usbpd, bool hpd);
};
/**
diff --git a/drivers/gpu/drm/msm/dsi-staging/dsi_ctrl.c b/drivers/gpu/drm/msm/dsi-staging/dsi_ctrl.c
index a74216b..1f10e3c 100644
--- a/drivers/gpu/drm/msm/dsi-staging/dsi_ctrl.c
+++ b/drivers/gpu/drm/msm/dsi-staging/dsi_ctrl.c
@@ -955,12 +955,30 @@ static int dsi_message_tx(struct dsi_ctrl *dsi_ctrl,
u8 *cmdbuf;
struct dsi_mode_info *timing;
+ /* override cmd fetch mode during secure session */
+ if (dsi_ctrl->secure_mode) {
+ flags &= ~DSI_CTRL_CMD_FETCH_MEMORY;
+ flags |= DSI_CTRL_CMD_FIFO_STORE;
+ pr_debug("[%s] override to TPG during secure session\n",
+ dsi_ctrl->name);
+ }
+
rc = mipi_dsi_create_packet(&packet, msg);
if (rc) {
pr_err("Failed to create message packet, rc=%d\n", rc);
goto error;
}
+ /* reject commands larger than the supported size in TPG mode */
+ if ((flags & DSI_CTRL_CMD_FIFO_STORE) &&
+ (msg->tx_len > DSI_CTRL_MAX_CMD_FIFO_STORE_SIZE)) {
+ pr_err("[%s] TPG cmd size:%zd not supported, secure:%d\n",
+ dsi_ctrl->name, msg->tx_len,
+ dsi_ctrl->secure_mode);
+ rc = -ENOTSUPP;
+ goto error;
+ }
+
rc = dsi_ctrl_copy_and_pad_cmd(dsi_ctrl,
&packet,
&buffer,
@@ -1554,6 +1572,7 @@ static int dsi_ctrl_dev_probe(struct platform_device *pdev)
mutex_unlock(&dsi_ctrl_list_lock);
mutex_init(&dsi_ctrl->ctrl_lock);
+ dsi_ctrl->secure_mode = false;
dsi_ctrl->pdev = pdev;
platform_set_drvdata(pdev, dsi_ctrl);
diff --git a/drivers/gpu/drm/msm/dsi-staging/dsi_ctrl.h b/drivers/gpu/drm/msm/dsi-staging/dsi_ctrl.h
index a33bbfe..f5b08a0 100644
--- a/drivers/gpu/drm/msm/dsi-staging/dsi_ctrl.h
+++ b/drivers/gpu/drm/msm/dsi-staging/dsi_ctrl.h
@@ -46,6 +46,9 @@
#define DSI_CTRL_CMD_FETCH_MEMORY 0x20
#define DSI_CTRL_CMD_LAST_COMMAND 0x40
+/* max size supported for dsi cmd transfer using TPG */
+#define DSI_CTRL_MAX_CMD_FIFO_STORE_SIZE 64
+
/**
* enum dsi_power_state - defines power states for dsi controller.
* @DSI_CTRL_POWER_VREG_OFF: Digital and analog supplies for DSI controller
@@ -191,8 +194,9 @@ struct dsi_ctrl_interrupts {
* Origin is top left of this CTRL.
* @tx_cmd_buf: Tx command buffer.
* @cmd_buffer_iova: cmd buffer mapped address.
- * @vaddr: CPU virtual address of cmd buffer.
* @cmd_buffer_size: Size of command buffer.
+ * @vaddr: CPU virtual address of cmd buffer.
+ * @secure_mode: Indicates if a secure session is in progress.
* @debugfs_root: Root for debugfs entries.
* @misr_enable: Frame MISR enable/disable
* @misr_cache: Cached Frame MISR value
@@ -236,6 +240,7 @@ struct dsi_ctrl {
u32 cmd_buffer_iova;
u32 cmd_len;
void *vaddr;
+ u32 secure_mode;
/* Debug Information */
struct dentry *debugfs_root;
diff --git a/drivers/gpu/drm/msm/dsi-staging/dsi_display.c b/drivers/gpu/drm/msm/dsi-staging/dsi_display.c
index c60b41e..d92a71d 100644
--- a/drivers/gpu/drm/msm/dsi-staging/dsi_display.c
+++ b/drivers/gpu/drm/msm/dsi-staging/dsi_display.c
@@ -36,6 +36,8 @@
#define MISR_BUFF_SIZE 256
+#define MAX_NAME_SIZE 64
+
static DEFINE_MUTEX(dsi_display_list_lock);
static LIST_HEAD(dsi_display_list);
static char dsi_display_primary[MAX_CMDLINE_PARAM_LEN];
@@ -212,6 +214,135 @@ static int dsi_display_cmd_engine_disable(struct dsi_display *display)
return rc;
}
+static void dsi_display_aspace_cb_locked(void *cb_data, bool is_detach)
+{
+ struct dsi_display *display;
+ struct dsi_display_ctrl *display_ctrl;
+ int rc, cnt;
+
+ if (!cb_data) {
+ pr_err("aspace cb called with invalid cb_data\n");
+ return;
+ }
+ display = (struct dsi_display *)cb_data;
+
+ /*
+ * acquire panel_lock to make sure no commands are in-progress
+ * while detaching the non-secure context banks
+ */
+ dsi_panel_acquire_panel_lock(display->panel);
+
+ if (is_detach) {
+ /* invalidate the stored iova */
+ display->cmd_buffer_iova = 0;
+
+ /* return the virtual address mapping */
+ msm_gem_put_vaddr_locked(display->tx_cmd_buf);
+ msm_gem_vunmap(display->tx_cmd_buf);
+
+ } else {
+ rc = msm_gem_get_iova_locked(display->tx_cmd_buf,
+ display->aspace, &(display->cmd_buffer_iova));
+ if (rc) {
+ pr_err("failed to get the iova rc %d\n", rc);
+ goto end;
+ }
+
+ display->vaddr =
+ (void *) msm_gem_get_vaddr_locked(display->tx_cmd_buf);
+
+ if (IS_ERR_OR_NULL(display->vaddr)) {
+ pr_err("failed to get va rc %d\n", rc);
+ goto end;
+ }
+ }
+
+ for (cnt = 0; cnt < display->ctrl_count; cnt++) {
+ display_ctrl = &display->ctrl[cnt];
+ display_ctrl->ctrl->cmd_buffer_size = display->cmd_buffer_size;
+ display_ctrl->ctrl->cmd_buffer_iova = display->cmd_buffer_iova;
+ display_ctrl->ctrl->vaddr = display->vaddr;
+ }
+
+end:
+ /* release panel_lock */
+ dsi_panel_release_panel_lock(display->panel);
+}
+
+/* Allocate memory for cmd dma tx buffer */
+static int dsi_host_alloc_cmd_tx_buffer(struct dsi_display *display)
+{
+ int rc = 0, cnt = 0;
+ struct dsi_display_ctrl *display_ctrl;
+
+ mutex_lock(&display->drm_dev->struct_mutex);
+ display->tx_cmd_buf = msm_gem_new(display->drm_dev,
+ SZ_4K,
+ MSM_BO_UNCACHED);
+ mutex_unlock(&display->drm_dev->struct_mutex);
+
+ if ((display->tx_cmd_buf) == NULL) {
+ pr_err("Failed to allocate cmd tx buf memory\n");
+ rc = -ENOMEM;
+ goto error;
+ }
+
+ display->cmd_buffer_size = SZ_4K;
+
+ display->aspace = msm_gem_smmu_address_space_get(
+ display->drm_dev, MSM_SMMU_DOMAIN_UNSECURE);
+ if (!display->aspace) {
+ pr_err("failed to get aspace\n");
+ rc = -EINVAL;
+ goto free_gem;
+ }
+ /* register to aspace */
+ rc = msm_gem_address_space_register_cb(display->aspace,
+ dsi_display_aspace_cb_locked, (void *)display);
+ if (rc) {
+ pr_err("failed to register callback %d\n", rc);
+ goto free_gem;
+ }
+
+ rc = msm_gem_get_iova(display->tx_cmd_buf, display->aspace,
+ &(display->cmd_buffer_iova));
+ if (rc) {
+ pr_err("failed to get the iova rc %d\n", rc);
+ goto free_aspace_cb;
+ }
+
+ display->vaddr =
+ (void *) msm_gem_get_vaddr(display->tx_cmd_buf);
+ if (IS_ERR_OR_NULL(display->vaddr)) {
+ pr_err("failed to get va rc %d\n", rc);
+ rc = -EINVAL;
+ goto put_iova;
+ }
+
+ for (cnt = 0; cnt < display->ctrl_count; cnt++) {
+ display_ctrl = &display->ctrl[cnt];
+ display_ctrl->ctrl->cmd_buffer_size = SZ_4K;
+ display_ctrl->ctrl->cmd_buffer_iova =
+ display->cmd_buffer_iova;
+ display_ctrl->ctrl->vaddr = display->vaddr;
+ display_ctrl->ctrl->tx_cmd_buf = display->tx_cmd_buf;
+ }
+
+ return rc;
+
+put_iova:
+ msm_gem_put_iova(display->tx_cmd_buf, display->aspace);
+free_aspace_cb:
+ msm_gem_address_space_unregister_cb(display->aspace,
+ dsi_display_aspace_cb_locked, display);
+free_gem:
+ mutex_lock(&display->drm_dev->struct_mutex);
+ msm_gem_free_object(display->tx_cmd_buf);
+ mutex_unlock(&display->drm_dev->struct_mutex);
+error:
+ return rc;
+}
+
static bool dsi_display_validate_reg_read(struct dsi_panel *panel)
{
int i, j = 0;
@@ -261,6 +392,9 @@ static int dsi_display_read_status(struct dsi_display_ctrl *ctrl,
if (!panel)
return -EINVAL;
+ /* acquire panel_lock to make sure no commands are in progress */
+ dsi_panel_acquire_panel_lock(panel);
+
config = &(panel->esd_config);
lenp = config->status_valid_params ?: config->status_cmds_rlen;
count = config->status_cmd.count;
@@ -287,6 +421,8 @@ static int dsi_display_read_status(struct dsi_display_ctrl *ctrl,
}
error:
+ /* release panel_lock */
+ dsi_panel_release_panel_lock(panel);
return rc;
}
@@ -323,6 +459,14 @@ static int dsi_display_status_reg_read(struct dsi_display *display)
m_ctrl = &display->ctrl[display->cmd_master_idx];
+ if (display->tx_cmd_buf == NULL) {
+ rc = dsi_host_alloc_cmd_tx_buffer(display);
+ if (rc) {
+ pr_err("failed to allocate cmd tx buffer memory\n");
+ goto done;
+ }
+ }
+
rc = dsi_display_cmd_engine_enable(display);
if (rc) {
pr_err("cmd engine enable failed\n");
@@ -351,9 +495,9 @@ static int dsi_display_status_reg_read(struct dsi_display *display)
goto exit;
}
}
-
exit:
dsi_display_cmd_engine_disable(display);
+done:
return rc;
}
@@ -719,6 +863,8 @@ static int dsi_display_debugfs_init(struct dsi_display *display)
{
int rc = 0;
struct dentry *dir, *dump_file, *misr_data;
+ char name[MAX_NAME_SIZE];
+ int i;
dir = debugfs_create_dir(display->name, NULL);
if (IS_ERR_OR_NULL(dir)) {
@@ -752,6 +898,49 @@ static int dsi_display_debugfs_init(struct dsi_display *display)
goto error_remove_dir;
}
+ for (i = 0; i < display->ctrl_count; i++) {
+ struct msm_dsi_phy *phy = display->ctrl[i].phy;
+
+ if (!phy || !phy->name)
+ continue;
+
+ snprintf(name, ARRAY_SIZE(name),
+ "%s_allow_phy_power_off", phy->name);
+ dump_file = debugfs_create_bool(name, 0600, dir,
+ &phy->allow_phy_power_off);
+ if (IS_ERR_OR_NULL(dump_file)) {
+ rc = PTR_ERR(dump_file);
+ pr_err("[%s] debugfs create %s failed, rc=%d\n",
+ display->name, name, rc);
+ goto error_remove_dir;
+ }
+
+ snprintf(name, ARRAY_SIZE(name),
+ "%s_regulator_min_datarate_bps", phy->name);
+ dump_file = debugfs_create_u32(name, 0600, dir,
+ &phy->regulator_min_datarate_bps);
+ if (IS_ERR_OR_NULL(dump_file)) {
+ rc = PTR_ERR(dump_file);
+ pr_err("[%s] debugfs create %s failed, rc=%d\n",
+ display->name, name, rc);
+ goto error_remove_dir;
+ }
+ }
+
+ if (!debugfs_create_bool("ulps_enable", 0600, dir,
+ &display->panel->ulps_enabled)) {
+ pr_err("[%s] debugfs create ulps enable file failed\n",
+ display->name);
+ rc = -EFAULT;
+ goto error_remove_dir;
+ }
+
+ if (!debugfs_create_bool("ulps_suspend_enable", 0600, dir,
+ &display->panel->ulps_suspend_enabled)) {
+ pr_err("[%s] debugfs create ulps-suspend enable file failed\n",
+ display->name);
+ rc = -EFAULT;
+ goto error_remove_dir;
+ }
+
display->root = dir;
return rc;
error_remove_dir:
@@ -788,12 +977,17 @@ static int dsi_display_is_ulps_req_valid(struct dsi_display *display,
pr_debug("checking ulps req validity\n");
- if (!dsi_panel_ulps_feature_enabled(display->panel))
+ if (!dsi_panel_ulps_feature_enabled(display->panel) &&
+ !display->panel->ulps_suspend_enabled) {
+ pr_debug("%s: ULPS feature is not enabled\n", __func__);
return false;
+ }
- /* TODO: ULPS during suspend */
- if (!dsi_panel_initialized(display->panel))
+ if (!dsi_panel_initialized(display->panel) &&
+ !display->panel->ulps_suspend_enabled) {
+ pr_debug("%s: panel not yet initialized\n", __func__);
return false;
+ }
if (enable && display->ulps_enabled) {
pr_debug("ULPS already enabled\n");
@@ -969,7 +1163,7 @@ static int dsi_display_phy_enable(struct dsi_display *display);
/**
* dsi_display_phy_idle_on() - enable DSI PHY while coming out of idle screen.
* @dsi_display: DSI display handle.
- * @enable: enable/disable DSI PHY.
+ * @mmss_clamp: True if clamp is enabled.
*
* Return: error code.
*/
@@ -1016,7 +1210,6 @@ static int dsi_display_phy_idle_on(struct dsi_display *display,
/**
* dsi_display_phy_idle_off() - disable DSI PHY while going to idle screen.
* @dsi_display: DSI display handle.
- * @enable: enable/disable DSI PHY.
*
* Return: error code.
*/
@@ -1031,9 +1224,16 @@ static int dsi_display_phy_idle_off(struct dsi_display *display)
return -EINVAL;
}
- if (!display->panel->allow_phy_power_off) {
- pr_debug("panel doesn't support this feature\n");
- return 0;
+ for (i = 0; i < display->ctrl_count; i++) {
+ struct msm_dsi_phy *phy = display->ctrl[i].phy;
+
+ if (!phy)
+ continue;
+
+ if (!phy->allow_phy_power_off) {
+ pr_debug("phy doesn't support this feature\n");
+ return 0;
+ }
}
m_ctrl = &display->ctrl[display->cmd_master_idx];
@@ -1896,9 +2096,7 @@ static ssize_t dsi_host_transfer(struct mipi_dsi_host *host,
const struct mipi_dsi_msg *msg)
{
struct dsi_display *display = to_dsi_display(host);
- struct dsi_display_ctrl *display_ctrl;
- struct msm_gem_address_space *aspace = NULL;
- int rc = 0, cnt = 0;
+ int rc = 0;
if (!host || !msg) {
pr_err("Invalid params\n");
@@ -1928,49 +2126,10 @@ static ssize_t dsi_host_transfer(struct mipi_dsi_host *host,
}
if (display->tx_cmd_buf == NULL) {
- mutex_lock(&display->drm_dev->struct_mutex);
- display->tx_cmd_buf = msm_gem_new(display->drm_dev,
- SZ_4K,
- MSM_BO_UNCACHED);
- mutex_unlock(&display->drm_dev->struct_mutex);
-
- display->cmd_buffer_size = SZ_4K;
-
- if ((display->tx_cmd_buf) == NULL) {
- pr_err("value of display->tx_cmd_buf is NULL");
- goto error_disable_cmd_engine;
- }
-
- aspace = msm_gem_smmu_address_space_get(display->drm_dev,
- MSM_SMMU_DOMAIN_UNSECURE);
- if (!aspace) {
- pr_err("failed to get aspace\n");
- rc = -EINVAL;
- goto free_gem;
- }
-
- rc = msm_gem_get_iova(display->tx_cmd_buf, aspace,
- &(display->cmd_buffer_iova));
+ rc = dsi_host_alloc_cmd_tx_buffer(display);
if (rc) {
- pr_err("failed to get the iova rc %d\n", rc);
- goto free_gem;
- }
-
- display->vaddr =
- (void *) msm_gem_get_vaddr(display->tx_cmd_buf);
-
- if (IS_ERR_OR_NULL(display->vaddr)) {
- pr_err("failed to get va rc %d\n", rc);
- rc = -EINVAL;
- goto put_iova;
- }
-
- for (cnt = 0; cnt < display->ctrl_count; cnt++) {
- display_ctrl = &display->ctrl[cnt];
- display_ctrl->ctrl->cmd_buffer_size = SZ_4K;
- display_ctrl->ctrl->cmd_buffer_iova =
- display->cmd_buffer_iova;
- display_ctrl->ctrl->vaddr = display->vaddr;
+ pr_err("failed to allocate cmd tx buffer memory\n");
+ goto error_disable_cmd_engine;
}
}
@@ -2003,13 +2162,6 @@ static ssize_t dsi_host_transfer(struct mipi_dsi_host *host,
pr_err("[%s] failed to disable all DSI clocks, rc=%d\n",
display->name, rc);
}
- return rc;
-put_iova:
- msm_gem_put_iova(display->tx_cmd_buf, aspace);
-free_gem:
- mutex_lock(&display->drm_dev->struct_mutex);
- msm_gem_free_object(display->tx_cmd_buf);
- mutex_unlock(&display->drm_dev->struct_mutex);
error:
return rc;
}
@@ -2227,22 +2379,27 @@ int dsi_pre_clkoff_cb(void *priv,
if ((clk & DSI_LINK_CLK) && (new_state == DSI_CLK_OFF)) {
/*
* If ULPS feature is enabled, enter ULPS first.
+ * However, when blanking the panel, we should enter ULPS
+ * only if ULPS during suspend feature is enabled.
*/
- if (dsi_panel_initialized(display->panel) &&
- dsi_panel_ulps_feature_enabled(display->panel)) {
+ if (!dsi_panel_initialized(display->panel)) {
+ if (display->panel->ulps_suspend_enabled)
+ rc = dsi_display_set_ulps(display, true);
+ } else if (dsi_panel_ulps_feature_enabled(display->panel)) {
rc = dsi_display_set_ulps(display, true);
- if (rc) {
- pr_err("%s: failed enable ulps, rc = %d\n",
- __func__, rc);
- }
}
+ if (rc)
+ pr_err("%s: failed to enable ulps, rc = %d\n",
+ __func__, rc);
}
if ((clk & DSI_CORE_CLK) && (new_state == DSI_CLK_OFF)) {
/*
- * Enable DSI clamps only if entering idle power collapse.
+ * Enable DSI clamps only if entering idle power collapse or
+ * when ULPS during suspend is enabled.
*/
- if (dsi_panel_initialized(display->panel)) {
+ if (dsi_panel_initialized(display->panel) ||
+ display->panel->ulps_suspend_enabled) {
dsi_display_phy_idle_off(display);
rc = dsi_display_set_clamp(display, true);
if (rc)
@@ -3061,6 +3218,20 @@ static int dsi_display_set_mode_sub(struct dsi_display *display,
}
}
+ for (i = 0; i < display->ctrl_count; i++) {
+ ctrl = &display->ctrl[i];
+
+ if (!ctrl->phy || !ctrl->ctrl)
+ continue;
+
+ rc = dsi_phy_set_clk_freq(ctrl->phy, &ctrl->ctrl->clk_freq);
+ if (rc) {
+ pr_err("[%s] failed to set phy clk freq, rc=%d\n",
+ display->name, rc);
+ goto error;
+ }
+ }
+
if (priv_info->phy_timing_len) {
for (i = 0; i < display->ctrl_count; i++) {
ctrl = &display->ctrl[i];
@@ -3834,7 +4005,7 @@ int dsi_display_get_info(struct msm_display_info *info, void *disp)
return rc;
}
-int dsi_display_get_mode_count(struct dsi_display *display,
+static int dsi_display_get_mode_count_no_lock(struct dsi_display *display,
u32 *count)
{
struct dsi_dfps_capabilities dfps_caps;
@@ -3846,15 +4017,13 @@ int dsi_display_get_mode_count(struct dsi_display *display,
return -EINVAL;
}
- mutex_lock(&display->display_lock);
-
*count = display->panel->num_timing_nodes;
rc = dsi_panel_get_dfps_caps(display->panel, &dfps_caps);
if (rc) {
pr_err("[%s] failed to get dfps caps from panel\n",
display->name);
- goto done;
+ return rc;
}
num_dfps_rates = !dfps_caps.dfps_support ? 1 :
@@ -3864,7 +4033,22 @@ int dsi_display_get_mode_count(struct dsi_display *display,
/* Inflate num_of_modes by fps in dfps */
*count = display->panel->num_timing_nodes * num_dfps_rates;
-done:
+ return 0;
+}
+
+int dsi_display_get_mode_count(struct dsi_display *display,
+ u32 *count)
+{
+ int rc;
+
+ if (!display || !display->panel) {
+ pr_err("invalid display:%d panel:%d\n", display != NULL,
+ display ? display->panel != NULL : 0);
+ return -EINVAL;
+ }
+
+ mutex_lock(&display->display_lock);
+ rc = dsi_display_get_mode_count_no_lock(display, count);
mutex_unlock(&display->display_lock);
- return 0;
+ return rc;
@@ -3877,20 +4061,36 @@ void dsi_display_put_mode(struct dsi_display *display,
}
int dsi_display_get_modes(struct dsi_display *display,
- struct dsi_display_mode *modes)
+ struct dsi_display_mode **out_modes)
{
struct dsi_dfps_capabilities dfps_caps;
- u32 num_dfps_rates, panel_mode_count;
+ u32 num_dfps_rates, panel_mode_count, total_mode_count;
u32 mode_idx, array_idx = 0;
- int i, rc = 0;
+ int i, rc = -EINVAL;
- if (!display || !modes) {
+ if (!display || !out_modes) {
pr_err("Invalid params\n");
return -EINVAL;
}
+ *out_modes = NULL;
+
mutex_lock(&display->display_lock);
+ rc = dsi_display_get_mode_count_no_lock(display, &total_mode_count);
+ if (rc)
+ goto error;
+
+ /* free any previously probed modes */
+ kfree(display->modes);
+
+ display->modes = kcalloc(total_mode_count, sizeof(*display->modes),
+ GFP_KERNEL);
+ if (!display->modes) {
+ rc = -ENOMEM;
+ goto error;
+ }
+
rc = dsi_panel_get_dfps_caps(display->panel, &dfps_caps);
if (rc) {
pr_err("[%s] failed to get dfps caps from panel\n",
@@ -3931,12 +4131,14 @@ int dsi_display_get_modes(struct dsi_display *display,
}
for (i = 0; i < num_dfps_rates; i++) {
- struct dsi_display_mode *sub_mode = &modes[array_idx];
+ struct dsi_display_mode *sub_mode =
+ &display->modes[array_idx];
u32 curr_refresh_rate;
if (!sub_mode) {
pr_err("invalid mode data\n");
- return -EFAULT;
+ rc = -EFAULT;
+ goto error;
}
memcpy(sub_mode, &panel_mode, sizeof(panel_mode));
@@ -3960,11 +4162,57 @@ int dsi_display_get_modes(struct dsi_display *display,
}
}
+ *out_modes = display->modes;
+ rc = 0;
+
error:
+ if (rc) {
+ kfree(display->modes);
+ display->modes = NULL;
+ }
+
mutex_unlock(&display->display_lock);
return rc;
}
+int dsi_display_find_mode(struct dsi_display *display,
+ const struct dsi_display_mode *cmp,
+ struct dsi_display_mode **out_mode)
+{
+ u32 count, i;
+ int rc;
+
+ if (!display || !out_mode)
+ return -EINVAL;
+
+ *out_mode = NULL;
+
+ rc = dsi_display_get_mode_count(display, &count);
+ if (rc)
+ return rc;
+
+ mutex_lock(&display->display_lock);
+ for (i = 0; i < count; i++) {
+ struct dsi_display_mode *m = &display->modes[i];
+
+ if (cmp->timing.v_active == m->timing.v_active &&
+ cmp->timing.h_active == m->timing.h_active &&
+ cmp->timing.refresh_rate == m->timing.refresh_rate) {
+ *out_mode = m;
+ rc = 0;
+ break;
+ }
+ }
+ mutex_unlock(&display->display_lock);
+
+ if (!*out_mode) {
+ pr_err("[%s] failed to find mode for v_active %u h_active %u rate %u\n",
+ display->name, cmp->timing.v_active,
+ cmp->timing.h_active, cmp->timing.refresh_rate);
+ rc = -ENOENT;
+ }
+
+ return rc;
+}
+
/**
* dsi_display_validate_mode_vrr() - Validate if variable refresh case.
* @display: DSI display handle.
@@ -4329,8 +4577,9 @@ static void dsi_display_handle_lp_rx_timeout(struct work_struct *work)
void *data;
u32 version = 0;
- display = container_of(work, struct dsi_display, fifo_overflow_work);
- if (!display || (display->panel->panel_mode != DSI_OP_VIDEO_MODE))
+ display = container_of(work, struct dsi_display, lp_rx_timeout_work);
+ if (!display || !display->panel ||
+ (display->panel->panel_mode != DSI_OP_VIDEO_MODE))
return;
pr_debug("handle DSI LP RX Timeout error\n");
@@ -4511,17 +4760,27 @@ int dsi_display_prepare(struct dsi_display *display)
goto error_panel_post_unprep;
}
- rc = dsi_display_phy_sw_reset(display);
- if (rc) {
- pr_err("[%s] failed to reset phy, rc=%d\n", display->name, rc);
- goto error_ctrl_clk_off;
- }
+ /*
+ * If ULPS during suspend feature is enabled, then DSI PHY was
+ * left on during suspend. In this case, we do not need to reset/init
+ * PHY. This would have already been done when the CORE clocks are
+ * turned on. However, if cont splash is disabled, the first time DSI
+ * is powered on, phy init needs to be done unconditionally.
+ */
+ if (!display->panel->ulps_suspend_enabled || !display->ulps_enabled) {
+ rc = dsi_display_phy_sw_reset(display);
+ if (rc) {
+ pr_err("[%s] failed to reset phy, rc=%d\n",
+ display->name, rc);
+ goto error_ctrl_clk_off;
+ }
- rc = dsi_display_phy_enable(display);
- if (rc) {
- pr_err("[%s] failed to enable DSI PHY, rc=%d\n",
- display->name, rc);
- goto error_ctrl_clk_off;
+ rc = dsi_display_phy_enable(display);
+ if (rc) {
+ pr_err("[%s] failed to enable DSI PHY, rc=%d\n",
+ display->name, rc);
+ goto error_ctrl_clk_off;
+ }
}
rc = dsi_display_set_clk_src(display);
@@ -5008,10 +5267,12 @@ int dsi_display_unprepare(struct dsi_display *display)
pr_err("[%s] failed to deinit controller, rc=%d\n",
display->name, rc);
- rc = dsi_display_phy_disable(display);
- if (rc)
- pr_err("[%s] failed to disable DSI PHY, rc=%d\n",
- display->name, rc);
+ if (!display->panel->ulps_suspend_enabled) {
+ rc = dsi_display_phy_disable(display);
+ if (rc)
+ pr_err("[%s] failed to disable DSI PHY, rc=%d\n",
+ display->name, rc);
+ }
rc = dsi_display_clk_ctrl(display->dsi_clk_handle,
DSI_CORE_CLK, DSI_CLK_OFF);
diff --git a/drivers/gpu/drm/msm/dsi-staging/dsi_display.h b/drivers/gpu/drm/msm/dsi-staging/dsi_display.h
index 1600add..87b9fd5 100644
--- a/drivers/gpu/drm/msm/dsi-staging/dsi_display.h
+++ b/drivers/gpu/drm/msm/dsi-staging/dsi_display.h
@@ -124,6 +124,7 @@ struct dsi_display_clk_info {
* struct dsi_display - dsi display information
* @pdev: Pointer to platform device.
* @drm_dev: DRM device associated with the display.
+ * @drm_conn: Pointer to DRM connector associated with the display
* @name: Name of the display.
* @display_type: Display type as defined in device tree.
* @list: List pointer.
@@ -134,6 +135,7 @@ struct dsi_display_clk_info {
* @ctrl: Controller information for DSI display.
* @panel: Handle to DSI panel.
* @panel_of: pHandle to DSI panel.
+ * @modes: Array of probed DSI modes
* @type: DSI display type.
* @clk_master_idx: The master controller for controlling clocks. This is an
* index into the ctrl[MAX_DSI_CTRLS_PER_DISPLAY] array.
@@ -161,6 +163,7 @@ struct dsi_display_clk_info {
struct dsi_display {
struct platform_device *pdev;
struct drm_device *drm_dev;
+ struct drm_connector *drm_conn;
const char *name;
const char *display_type;
@@ -176,6 +179,8 @@ struct dsi_display {
struct dsi_panel *panel;
struct device_node *panel_of;
+ struct dsi_display_mode *modes;
+
enum dsi_display_type type;
u32 clk_master_idx;
u32 cmd_master_idx;
@@ -194,6 +199,7 @@ struct dsi_display {
u32 cmd_buffer_size;
u32 cmd_buffer_iova;
void *vaddr;
+ struct msm_gem_address_space *aspace;
struct mipi_dsi_host host;
struct dsi_bridge *bridge;
@@ -301,15 +307,13 @@ int dsi_display_get_mode_count(struct dsi_display *display, u32 *count);
/**
* dsi_display_get_modes() - get modes supported by display
* @display: Handle to display.
- * @modes; Pointer to array of modes. Memory allocated should be
- * big enough to store (count * struct dsi_display_mode)
- * elements. If modes pointer is NULL, number of modes will
- * be stored in the memory pointed to by count.
+ * @modes: Output param, list of DSI modes. Number of modes matches
+ * the count returned by dsi_display_get_mode_count().
*
* Return: error code.
*/
int dsi_display_get_modes(struct dsi_display *display,
- struct dsi_display_mode *modes);
+ struct dsi_display_mode **modes);
/**
* dsi_display_put_mode() - free up mode created for the display
@@ -322,6 +326,17 @@ void dsi_display_put_mode(struct dsi_display *display,
struct dsi_display_mode *mode);
/**
+ * dsi_display_find_mode() - retrieve cached DSI mode given relevant params
+ * @display: Handle to display.
+ * @cmp: Mode to use as comparison to find original
+ * @out_mode: Output parameter, pointer to retrieved mode
+ *
+ * Return: error code.
+ */
+int dsi_display_find_mode(struct dsi_display *display,
+ const struct dsi_display_mode *cmp,
+ struct dsi_display_mode **out_mode);
+/**
* dsi_display_validate_mode() - validates if mode is supported by display
* @display: Handle to display.
* @mode: Mode to be validated.
diff --git a/drivers/gpu/drm/msm/dsi-staging/dsi_display_test.c b/drivers/gpu/drm/msm/dsi-staging/dsi_display_test.c
index 6e41f36..1cec9e1 100644
--- a/drivers/gpu/drm/msm/dsi-staging/dsi_display_test.c
+++ b/drivers/gpu/drm/msm/dsi-staging/dsi_display_test.c
@@ -28,7 +28,6 @@ static void dsi_display_test_work(struct work_struct *work)
struct dsi_display *display;
struct dsi_display_mode *modes;
u32 count = 0;
- u32 size = 0;
int rc = 0;
test = container_of(work, struct dsi_display_test, test_work);
@@ -40,14 +39,7 @@ static void dsi_display_test_work(struct work_struct *work)
goto test_fail;
}
- size = count * sizeof(*modes);
- modes = kzalloc(size, GFP_KERNEL);
- if (!modes) {
- rc = -ENOMEM;
- goto test_fail;
- }
-
- rc = dsi_display_get_modes(display, modes);
+ rc = dsi_display_get_modes(display, &modes);
if (rc) {
pr_err("failed to get modes, rc=%d\n", rc);
- goto test_fail_free_modes;
+ goto test_fail;
diff --git a/drivers/gpu/drm/msm/dsi-staging/dsi_drm.c b/drivers/gpu/drm/msm/dsi-staging/dsi_drm.c
index 91da637..fd50256 100644
--- a/drivers/gpu/drm/msm/dsi-staging/dsi_drm.c
+++ b/drivers/gpu/drm/msm/dsi-staging/dsi_drm.c
@@ -197,12 +197,17 @@ static void dsi_bridge_enable(struct drm_bridge *bridge)
static void dsi_bridge_disable(struct drm_bridge *bridge)
{
int rc = 0;
+ struct dsi_display *display;
struct dsi_bridge *c_bridge = to_dsi_bridge(bridge);
if (!bridge) {
pr_err("Invalid params\n");
return;
}
+ display = c_bridge->display;
+
+ if (display && display->drm_conn)
+ sde_connector_helper_bridge_disable(display->drm_conn);
rc = dsi_display_pre_disable(c_bridge->display);
if (rc) {
@@ -263,16 +268,38 @@ static bool dsi_bridge_mode_fixup(struct drm_bridge *bridge,
{
int rc = 0;
struct dsi_bridge *c_bridge = to_dsi_bridge(bridge);
- struct dsi_display_mode dsi_mode, cur_dsi_mode;
+ struct dsi_display *display;
+ struct dsi_display_mode dsi_mode, cur_dsi_mode, *panel_dsi_mode;
struct drm_display_mode cur_mode;
+ struct drm_crtc_state *crtc_state;
+
if (!bridge || !mode || !adjusted_mode) {
pr_err("Invalid params\n");
return false;
}
+
+ crtc_state = container_of(mode, struct drm_crtc_state, mode);
+ display = c_bridge->display;
+ if (!display) {
+ pr_err("Invalid params\n");
+ return false;
+ }
+
convert_to_dsi_mode(mode, &dsi_mode);
+ /*
+ * retrieve the dsi mode from the dsi driver's cache, since it is not
+ * safe to take the drm mode config mutex in all paths
+ */
+ rc = dsi_display_find_mode(display, &dsi_mode, &panel_dsi_mode);
+ if (rc)
+ return false;
+
+ /* propagate the private info to the adjusted_mode derived dsi mode */
+ dsi_mode.priv_info = panel_dsi_mode->priv_info;
+ dsi_mode.dsi_mode_flags = panel_dsi_mode->dsi_mode_flags;
+
rc = dsi_display_validate_mode(c_bridge->display, &dsi_mode,
DSI_VALIDATE_FLAG_ALLOW_ADJUST);
if (rc) {
@@ -280,9 +307,10 @@ static bool dsi_bridge_mode_fixup(struct drm_bridge *bridge,
return false;
}
- if (bridge->encoder && bridge->encoder->crtc) {
+ if (bridge->encoder && bridge->encoder->crtc &&
+ crtc_state->crtc) {
- convert_to_dsi_mode(&bridge->encoder->crtc->state->mode,
+ convert_to_dsi_mode(&crtc_state->crtc->state->mode,
&cur_dsi_mode);
rc = dsi_display_validate_mode_vrr(c_bridge->display,
&cur_dsi_mode, &dsi_mode);
@@ -290,7 +318,7 @@ static bool dsi_bridge_mode_fixup(struct drm_bridge *bridge,
pr_debug("[%s] vrr mode mismatch failure rc=%d\n",
c_bridge->display->name, rc);
- cur_mode = bridge->encoder->crtc->mode;
+ cur_mode = crtc_state->crtc->mode;
if (!drm_mode_equal(&cur_mode, adjusted_mode) &&
(!(dsi_mode.dsi_mode_flags &
@@ -298,6 +326,7 @@ static bool dsi_bridge_mode_fixup(struct drm_bridge *bridge,
dsi_mode.dsi_mode_flags |= DSI_MODE_FLAG_DMS;
}
+ /* convert back to drm mode, propagating the private info & flags */
dsi_convert_to_drm_mode(&dsi_mode, adjusted_mode);
return true;
@@ -356,7 +385,7 @@ static const struct drm_bridge_funcs dsi_bridge_ops = {
.mode_set = dsi_bridge_mode_set,
};
-int dsi_conn_post_init(struct drm_connector *connector,
+int dsi_conn_set_info_blob(struct drm_connector *connector,
void *info, void *display, struct msm_mode_info *mode_info)
{
struct dsi_display *dsi_display = display;
@@ -365,6 +394,8 @@ int dsi_conn_post_init(struct drm_connector *connector,
if (!info || !dsi_display)
return -EINVAL;
+ dsi_display->drm_conn = connector;
+
sde_kms_info_add_keystr(info,
"display type", dsi_display->display_type);
@@ -529,7 +560,6 @@ int dsi_connector_get_modes(struct drm_connector *connector,
void *display)
{
u32 count = 0;
- u32 size = 0;
struct dsi_display_mode *modes = NULL;
struct drm_display_mode drm_mode;
int rc, i;
@@ -545,21 +575,14 @@ int dsi_connector_get_modes(struct drm_connector *connector,
rc = dsi_display_get_mode_count(display, &count);
if (rc) {
pr_err("failed to get num of modes, rc=%d\n", rc);
- goto error;
- }
-
- size = count * sizeof(*modes);
- modes = kzalloc(size, GFP_KERNEL);
- if (!modes) {
- count = 0;
goto end;
}
- rc = dsi_display_get_modes(display, modes);
+ rc = dsi_display_get_modes(display, &modes);
if (rc) {
pr_err("failed to get modes, rc=%d\n", rc);
count = 0;
- goto error;
+ goto end;
}
for (i = 0; i < count; i++) {
@@ -573,14 +596,12 @@ int dsi_connector_get_modes(struct drm_connector *connector,
drm_mode.hdisplay,
drm_mode.vdisplay);
count = -ENOMEM;
- goto error;
+ goto end;
}
m->width_mm = connector->display_info.width_mm;
m->height_mm = connector->display_info.height_mm;
drm_mode_probed_add(connector, m);
}
-error:
- kfree(modes);
end:
pr_debug("MODE COUNT =%d\n\n", count);
return count;
diff --git a/drivers/gpu/drm/msm/dsi-staging/dsi_drm.h b/drivers/gpu/drm/msm/dsi-staging/dsi_drm.h
index 9a47969..ec58479 100644
--- a/drivers/gpu/drm/msm/dsi-staging/dsi_drm.h
+++ b/drivers/gpu/drm/msm/dsi-staging/dsi_drm.h
@@ -33,14 +33,14 @@ struct dsi_bridge {
};
/**
- * dsi_conn_post_init - callback to perform additional initialization steps
+ * dsi_conn_set_info_blob - callback to perform info blob initialization
* @connector: Pointer to drm connector structure
* @info: Pointer to sde connector info structure
* @display: Pointer to private display handle
* @mode_info: Pointer to mode info structure
* Returns: Zero on success
*/
-int dsi_conn_post_init(struct drm_connector *connector,
+int dsi_conn_set_info_blob(struct drm_connector *connector,
void *info,
void *display,
struct msm_mode_info *mode_info);
diff --git a/drivers/gpu/drm/msm/dsi-staging/dsi_panel.c b/drivers/gpu/drm/msm/dsi-staging/dsi_panel.c
index af34ad0..7671496 100644
--- a/drivers/gpu/drm/msm/dsi-staging/dsi_panel.c
+++ b/drivers/gpu/drm/msm/dsi-staging/dsi_panel.c
@@ -1670,8 +1670,14 @@ static int dsi_panel_parse_misc_features(struct dsi_panel *panel,
panel->ulps_enabled =
of_property_read_bool(of_node, "qcom,ulps-enabled");
- if (panel->ulps_enabled)
- pr_debug("ulps_enabled:%d\n", panel->ulps_enabled);
+ pr_info("%s: ulps feature %s\n", __func__,
+ (panel->ulps_enabled ? "enabled" : "disabled"));
+
+ panel->ulps_suspend_enabled =
+ of_property_read_bool(of_node, "qcom,suspend-ulps-enabled");
+
+ pr_info("%s: ulps during suspend feature %s\n", __func__,
+ (panel->ulps_suspend_enabled ? "enabled" : "disabled"));
panel->te_using_watchdog_timer = of_property_read_bool(of_node,
"qcom,mdss-dsi-te-using-wd");
diff --git a/drivers/gpu/drm/msm/dsi-staging/dsi_panel.h b/drivers/gpu/drm/msm/dsi-staging/dsi_panel.h
index 0e81538..06199f4 100644
--- a/drivers/gpu/drm/msm/dsi-staging/dsi_panel.h
+++ b/drivers/gpu/drm/msm/dsi-staging/dsi_panel.h
@@ -170,6 +170,7 @@ struct dsi_panel {
bool lp11_init;
bool ulps_enabled;
+ bool ulps_suspend_enabled;
bool allow_phy_power_off;
bool panel_initialized;
@@ -191,6 +192,16 @@ static inline bool dsi_panel_initialized(struct dsi_panel *panel)
return panel->panel_initialized;
}
+static inline void dsi_panel_acquire_panel_lock(struct dsi_panel *panel)
+{
+ mutex_lock(&panel->panel_lock);
+}
+
+static inline void dsi_panel_release_panel_lock(struct dsi_panel *panel)
+{
+ mutex_unlock(&panel->panel_lock);
+}
+
struct dsi_panel *dsi_panel_get(struct device *parent,
struct device_node *of_node,
int topology_override);
diff --git a/drivers/gpu/drm/msm/dsi-staging/dsi_phy.c b/drivers/gpu/drm/msm/dsi-staging/dsi_phy.c
index 4210f77..2567f04 100644
--- a/drivers/gpu/drm/msm/dsi-staging/dsi_phy.c
+++ b/drivers/gpu/drm/msm/dsi-staging/dsi_phy.c
@@ -33,6 +33,8 @@
#define DSI_PHY_DEFAULT_LABEL "MDSS PHY CTRL"
+#define BITS_PER_BYTE 8
+
struct dsi_phy_list_item {
struct msm_dsi_phy *phy;
struct list_head list;
@@ -290,6 +292,14 @@ static int dsi_phy_settings_init(struct platform_device *pdev,
/* Actual timing values are dependent on panel */
timing->count_per_lane = phy->ver_info->timing_cfg_count;
+
+ phy->allow_phy_power_off = of_property_read_bool(pdev->dev.of_node,
+ "qcom,panel-allow-phy-poweroff");
+
+ of_property_read_u32(pdev->dev.of_node,
+ "qcom,dsi-phy-regulator-min-datarate-bps",
+ &phy->regulator_min_datarate_bps);
+
return 0;
err:
lane->count_per_lane = 0;
@@ -641,7 +651,8 @@ int dsi_phy_set_power_state(struct msm_dsi_phy *dsi_phy, bool enable)
goto error;
}
- if (dsi_phy->dsi_phy_state == DSI_PHY_ENGINE_OFF) {
+ if (dsi_phy->dsi_phy_state == DSI_PHY_ENGINE_OFF &&
+ dsi_phy->regulator_required) {
rc = dsi_pwr_enable_regulator(
&dsi_phy->pwr_info.phy_pwr, true);
if (rc) {
@@ -652,7 +663,8 @@ int dsi_phy_set_power_state(struct msm_dsi_phy *dsi_phy, bool enable)
}
}
} else {
- if (dsi_phy->dsi_phy_state == DSI_PHY_ENGINE_OFF) {
+ if (dsi_phy->dsi_phy_state == DSI_PHY_ENGINE_ON &&
+ dsi_phy->regulator_required) {
rc = dsi_pwr_enable_regulator(
&dsi_phy->pwr_info.phy_pwr, false);
if (rc) {
@@ -896,6 +908,8 @@ int dsi_phy_idle_ctrl(struct msm_dsi_phy *phy, bool enable)
return -EINVAL;
}
+ pr_debug("[%s] enable=%d\n", phy->name, enable);
+
mutex_lock(&phy->phy_lock);
if (enable) {
if (phy->hw.ops.phy_idle_on)
@@ -904,7 +918,17 @@ int dsi_phy_idle_ctrl(struct msm_dsi_phy *phy, bool enable)
if (phy->hw.ops.regulator_enable)
phy->hw.ops.regulator_enable(&phy->hw,
&phy->cfg.regulators);
+
+ if (phy->hw.ops.enable)
+ phy->hw.ops.enable(&phy->hw, &phy->cfg);
+
+ phy->dsi_phy_state = DSI_PHY_ENGINE_ON;
} else {
+ phy->dsi_phy_state = DSI_PHY_ENGINE_OFF;
+
+ if (phy->hw.ops.disable)
+ phy->hw.ops.disable(&phy->hw, &phy->cfg);
+
if (phy->hw.ops.phy_idle_off)
phy->hw.ops.phy_idle_off(&phy->hw);
}
@@ -914,6 +938,41 @@ int dsi_phy_idle_ctrl(struct msm_dsi_phy *phy, bool enable)
}
/**
+ * dsi_phy_set_clk_freq() - set the DSI PHY clock frequency
+ * @phy: DSI PHY handle
+ * @clk_freq: link clock frequency
+ *
+ * Return: error code.
+ */
+int dsi_phy_set_clk_freq(struct msm_dsi_phy *phy,
+ struct link_clk_freq *clk_freq)
+{
+ if (!phy || !clk_freq) {
+ pr_err("Invalid params\n");
+ return -EINVAL;
+ }
+
+ phy->regulator_required = clk_freq->byte_clk_rate >
+ (phy->regulator_min_datarate_bps / BITS_PER_BYTE);
+
+	/*
+	 * The DSI PLL needs the 0p9 LDO1A regulator to power the PLL block.
+	 * The PLL driver could vote for this regulator itself, but in the
+	 * use case where we come out of idle (static screen), having both
+	 * the PLL and the PHY vote for it adds latency, since both votes
+	 * go through RPM to enable the regulator.
+	 */
+ phy->regulator_required = true;
+ pr_debug("[%s] lane_datarate=%u min_datarate=%u required=%d\n",
+ phy->name,
+ clk_freq->byte_clk_rate * BITS_PER_BYTE,
+ phy->regulator_min_datarate_bps,
+ phy->regulator_required);
+
+ return 0;
+}
+
+/**
* dsi_phy_set_timing_params() - timing parameters for the panel
* @phy: DSI PHY handle
* @timing: array holding timing params.
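The threshold logic added in dsi_phy_set_clk_freq() above compares the requested byte-clock rate against the configured per-lane minimum data rate. A standalone sketch of that comparison (function and parameter names here are illustrative, not the driver's actual API):

```c
#include <stdbool.h>
#include <stdint.h>

#define BITS_PER_BYTE 8

/* Sketch: the PHY regulator is needed once the requested byte clock
 * (bytes/s) exceeds the minimum per-lane data rate (bits/s) divided
 * back down to bytes/s. Mirrors the comparison in the hunk above. */
static bool phy_regulator_required(uint32_t byte_clk_rate,
                                   uint32_t min_datarate_bps)
{
    return byte_clk_rate > (min_datarate_bps / BITS_PER_BYTE);
}
```

Note that the patch then forces `regulator_required` to true unconditionally (see the LDO1A comment in the same hunk), so in this revision the computed value only feeds the debug print.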
diff --git a/drivers/gpu/drm/msm/dsi-staging/dsi_phy.h b/drivers/gpu/drm/msm/dsi-staging/dsi_phy.h
index c462d4b..a158812 100644
--- a/drivers/gpu/drm/msm/dsi-staging/dsi_phy.h
+++ b/drivers/gpu/drm/msm/dsi-staging/dsi_phy.h
@@ -67,6 +67,9 @@ enum phy_engine_state {
* @mode: Current mode.
* @data_lanes: Number of data lanes used.
* @dst_format: Destination format.
+ * @allow_phy_power_off: True if PHY is allowed to power off when idle
+ * @regulator_min_datarate_bps: Minimum per lane data rate to turn on regulator
+ * @regulator_required: True if phy regulator is required
*/
struct msm_dsi_phy {
struct platform_device *pdev;
@@ -88,6 +91,10 @@ struct msm_dsi_phy {
struct dsi_mode_info mode;
enum dsi_data_lanes data_lanes;
enum dsi_pixel_format dst_format;
+
+ bool allow_phy_power_off;
+ u32 regulator_min_datarate_bps;
+ bool regulator_required;
};
/**
@@ -211,6 +218,16 @@ int dsi_phy_clk_cb_register(struct msm_dsi_phy *phy,
int dsi_phy_idle_ctrl(struct msm_dsi_phy *phy, bool enable);
/**
+ * dsi_phy_set_clk_freq() - set the DSI PHY clock frequency
+ * @phy: DSI PHY handle
+ * @clk_freq: link clock frequency
+ *
+ * Return: error code.
+ */
+int dsi_phy_set_clk_freq(struct msm_dsi_phy *phy,
+ struct link_clk_freq *clk_freq);
+
+/**
* dsi_phy_set_timing_params() - timing parameters for the panel
* @phy: DSI PHY handle
* @timing: array holding timing params.
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index 06742d6..33778f8e 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -1373,7 +1373,7 @@ void msm_mode_object_event_notify(struct drm_mode_object *obj,
if (node->event.type != event->type ||
obj->id != node->info.object_id)
continue;
- len = event->length + sizeof(struct drm_msm_event_resp);
+ len = event->length + sizeof(struct msm_drm_event);
if (node->base.file_priv->event_space < len) {
DRM_ERROR("Insufficient space %d for event %x len %d\n",
node->base.file_priv->event_space, event->type,
@@ -1387,7 +1387,8 @@ void msm_mode_object_event_notify(struct drm_mode_object *obj,
notify->base.event = ¬ify->event;
notify->base.pid = node->base.pid;
notify->event.type = node->event.type;
- notify->event.length = len;
+ notify->event.length = event->length +
+ sizeof(struct drm_msm_event_resp);
memcpy(¬ify->info, &node->info, sizeof(notify->info));
memcpy(notify->data, payload, event->length);
ret = drm_event_reserve_init_locked(dev, node->base.file_priv,
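The msm_drv.c hunk above separates two lengths that were previously conflated: the space checked against the client's event queue must cover the kernel-side wrapper (`struct msm_drm_event`), while the length written into the event header is what userspace consumes (`struct drm_msm_event_resp` plus payload). A sketch with placeholder layouts (the real structs live in the msm driver/UAPI headers):

```c
#include <stddef.h>
#include <stdint.h>

/* Placeholder layouts, for illustration only. */
struct resp_hdr {            /* stands in for drm_msm_event_resp */
    uint32_t type;
    uint32_t length;
};

struct kernel_event {        /* stands in for msm_drm_event */
    struct resp_hdr event;
    uint64_t info[4];        /* kernel-side bookkeeping */
};

/* Space that must be available in the client's event queue. */
static size_t reserved_len(size_t payload)
{
    return payload + sizeof(struct kernel_event);
}

/* Length userspace sees in the event header. */
static size_t reported_len(size_t payload)
{
    return payload + sizeof(struct resp_hdr);
}
```

Using the larger wrapper size for the space check while reporting only the userspace-visible size keeps the queue accounting conservative without lying to the reader of the event.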
diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index 975cfdd..e5c3082 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -122,6 +122,7 @@ enum msm_mdp_plane_property {
PLANE_PROP_BLEND_OP,
PLANE_PROP_SRC_CONFIG,
PLANE_PROP_FB_TRANSLATION_MODE,
+ PLANE_PROP_MULTIRECT_MODE,
/* total # of properties */
PLANE_PROP_COUNT
@@ -162,6 +163,7 @@ enum msm_mdp_crtc_property {
enum msm_mdp_conn_property {
/* blob properties, always put these first */
CONNECTOR_PROP_SDE_INFO,
+ CONNECTOR_PROP_MODE_INFO,
CONNECTOR_PROP_HDR_INFO,
CONNECTOR_PROP_EXT_HDR_INFO,
CONNECTOR_PROP_PP_DITHER,
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index cc75fb5..2581caf 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -34,8 +34,8 @@ static struct msm_gem_submit *submit_create(struct drm_device *dev,
struct msm_gpu *gpu, uint32_t nr_bos, uint32_t nr_cmds)
{
struct msm_gem_submit *submit;
- uint64_t sz = sizeof(*submit) + (nr_bos * sizeof(submit->bos[0])) +
- (nr_cmds * sizeof(submit->cmd[0]));
+ uint64_t sz = sizeof(*submit) + ((u64)nr_bos * sizeof(submit->bos[0])) +
+ ((u64)nr_cmds * sizeof(submit->cmd[0]));
if (sz > SIZE_MAX)
return NULL;
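The `(u64)` casts in submit_create() above matter on builds where the multiplication would otherwise be performed in 32 bits and wrap before being widened. A minimal demonstration of the difference, forcing 32-bit arithmetic explicitly:

```c
#include <stdint.h>

/* Wraps: the product is computed in 32-bit arithmetic, then widened. */
static uint64_t alloc_size_buggy(uint32_t nr, uint32_t elem_size)
{
    uint32_t prod = nr * elem_size;   /* 32-bit wraparound here */
    return prod;
}

/* Correct: widen an operand first, as the patch does with (u64). */
static uint64_t alloc_size_fixed(uint32_t nr, uint32_t elem_size)
{
    return (uint64_t)nr * elem_size;  /* 64-bit product */
}
```

The subsequent `if (sz > SIZE_MAX)` guard only works if `sz` actually holds the full 64-bit product, which is why the widening must happen before the multiply.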
diff --git a/drivers/gpu/drm/msm/msm_prop.c b/drivers/gpu/drm/msm/msm_prop.c
index ce84b7a..8e85c81 100644
--- a/drivers/gpu/drm/msm/msm_prop.c
+++ b/drivers/gpu/drm/msm/msm_prop.c
@@ -128,6 +128,20 @@ static void _msm_property_set_dirty_no_lock(
&property_state->dirty_list);
}
+bool msm_property_is_dirty(
+ struct msm_property_info *info,
+ struct msm_property_state *property_state,
+ uint32_t property_idx)
+{
+ if (!info || !property_state || !property_state->values ||
+ property_idx >= info->property_count) {
+ DRM_ERROR("invalid argument(s), idx %u\n", property_idx);
+ return false;
+ }
+
+ return !list_empty(&property_state->values[property_idx].dirty_node);
+}
+
/**
* _msm_property_install_integer - install standard drm range property
* @info: Pointer to property info container struct
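The new msm_property_is_dirty() above relies on the driver's dirty-tracking idiom: each property value embeds a list node, and the value is "dirty" exactly when that node is linked into the dirty list. A minimal standalone sketch of the idiom (hypothetical types, not the driver's):

```c
#include <stdbool.h>
#include <stddef.h>

struct list_node { struct list_node *prev, *next; };

static void list_init(struct list_node *n) { n->prev = n->next = n; }

static bool list_empty(const struct list_node *n) { return n->next == n; }

static void list_add(struct list_node *n, struct list_node *head)
{
    n->next = head->next;
    n->prev = head;
    head->next->prev = n;
    head->next = n;
}

struct prop_value { struct list_node dirty_node; };

/* Mirrors msm_property_is_dirty(): no separate flag is needed,
 * membership in the dirty list is the flag. */
static bool prop_is_dirty(const struct prop_value *v)
{
    return !list_empty(&v->dirty_node);
}
```

This is why the query is non-destructive and safe to call from atomic_check, before pop_dirty walks and unlinks the list.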
diff --git a/drivers/gpu/drm/msm/msm_prop.h b/drivers/gpu/drm/msm/msm_prop.h
index d257a8c..ecc1d0b 100644
--- a/drivers/gpu/drm/msm/msm_prop.h
+++ b/drivers/gpu/drm/msm/msm_prop.h
@@ -316,6 +316,19 @@ int msm_property_set_dirty(struct msm_property_info *info,
int property_idx);
/**
+ * msm_property_is_dirty - check whether a property is dirty
+ * Note: Intended for use during atomic_check before pop_dirty usage
+ * @info: Pointer to property info container struct
+ * @property_state: Pointer to property state container struct
+ * @property_idx: Property index
+ * Returns: true if dirty, false otherwise
+ */
+bool msm_property_is_dirty(
+ struct msm_property_info *info,
+ struct msm_property_state *property_state,
+ uint32_t property_idx);
+
+/**
* msm_property_atomic_set - helper function for atomic property set callback
* @info: Pointer to property info container struct
* @property_state: Pointer to local state structure
diff --git a/drivers/gpu/drm/msm/sde/sde_connector.c b/drivers/gpu/drm/msm/sde/sde_connector.c
index 04fa865..e2d937b 100644
--- a/drivers/gpu/drm/msm/sde/sde_connector.c
+++ b/drivers/gpu/drm/msm/sde/sde_connector.c
@@ -21,6 +21,7 @@
#include "dsi_drm.h"
#include "dsi_display.h"
#include "sde_crtc.h"
+#include "sde_rm.h"
#define BL_NODE_NAME_SIZE 32
@@ -424,6 +425,20 @@ void sde_connector_schedule_status_work(struct drm_connector *connector,
}
}
+void sde_connector_helper_bridge_disable(struct drm_connector *connector)
+{
+ int rc;
+
+ if (!connector)
+ return;
+
+ /* trigger a final connector pre-kickoff for power mode updates */
+ rc = sde_connector_pre_kickoff(connector);
+ if (rc)
+ SDE_ERROR("conn %d final pre kickoff failed %d\n",
+ connector->base.id, rc);
+}
+
static int _sde_connector_update_power_locked(struct sde_connector *c_conn)
{
struct drm_connector *connector;
@@ -475,12 +490,10 @@ static int _sde_connector_update_power_locked(struct sde_connector *c_conn)
return rc;
}
-static int _sde_connector_update_bl_scale(struct sde_connector *c_conn, int idx)
+static int _sde_connector_update_bl_scale(struct sde_connector *c_conn)
{
- struct drm_connector conn;
struct dsi_display *dsi_display;
struct dsi_backlight_config *bl_config;
- uint64_t value;
int rc = 0;
if (!c_conn) {
@@ -488,7 +501,6 @@ static int _sde_connector_update_bl_scale(struct sde_connector *c_conn, int idx)
return -EINVAL;
}
- conn = c_conn->base;
dsi_display = c_conn->display;
if (!dsi_display || !dsi_display->panel) {
SDE_ERROR("Invalid params(s) dsi_display %pK, panel %pK\n",
@@ -498,22 +510,16 @@ static int _sde_connector_update_bl_scale(struct sde_connector *c_conn, int idx)
}
bl_config = &dsi_display->panel->bl_config;
- value = sde_connector_get_property(conn.state, idx);
- if (idx == CONNECTOR_PROP_BL_SCALE) {
- if (value > MAX_BL_SCALE_LEVEL)
- bl_config->bl_scale = MAX_BL_SCALE_LEVEL;
- else
- bl_config->bl_scale = (u32)value;
- } else if (idx == CONNECTOR_PROP_AD_BL_SCALE) {
- if (value > MAX_AD_BL_SCALE_LEVEL)
- bl_config->bl_scale_ad = MAX_AD_BL_SCALE_LEVEL;
- else
- bl_config->bl_scale_ad = (u32)value;
- } else {
- SDE_DEBUG("invalid idx %d\n", idx);
- return 0;
- }
+ if (c_conn->bl_scale > MAX_BL_SCALE_LEVEL)
+ bl_config->bl_scale = MAX_BL_SCALE_LEVEL;
+ else
+ bl_config->bl_scale = c_conn->bl_scale;
+
+ if (c_conn->bl_scale_ad > MAX_AD_BL_SCALE_LEVEL)
+ bl_config->bl_scale_ad = MAX_AD_BL_SCALE_LEVEL;
+ else
+ bl_config->bl_scale_ad = c_conn->bl_scale_ad;
SDE_DEBUG("bl_scale = %u, bl_scale_ad = %u, bl_level = %u\n",
bl_config->bl_scale, bl_config->bl_scale_ad,
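The rewritten _sde_connector_update_bl_scale() above reduces to clamping each cached scale value to its ceiling before handing it to the backlight config. As a sketch (the real `MAX_BL_SCALE_LEVEL` / `MAX_AD_BL_SCALE_LEVEL` values come from the driver headers; 65535 in the test is just a placeholder):

```c
#include <stdint.h>

/* Clamp a backlight scale value to its ceiling, as the rewritten
 * _sde_connector_update_bl_scale() does for bl_scale and bl_scale_ad. */
static uint32_t clamp_bl_scale(uint32_t value, uint32_t max_level)
{
    return value > max_level ? max_level : value;
}
```

Caching the values on the connector (`c_conn->bl_scale`, `c_conn->bl_scale_ad`) and clamping here lets the property setter stay trivial and defers the bounds check to kickoff time.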
@@ -555,7 +561,7 @@ int sde_connector_pre_kickoff(struct drm_connector *connector)
break;
case CONNECTOR_PROP_BL_SCALE:
case CONNECTOR_PROP_AD_BL_SCALE:
- _sde_connector_update_bl_scale(c_conn, idx);
+ _sde_connector_update_bl_scale(c_conn);
break;
default:
/* nothing to do for most properties */
@@ -563,6 +569,12 @@ int sde_connector_pre_kickoff(struct drm_connector *connector)
}
}
+ /* Special handling for postproc properties */
+ if (c_conn->bl_scale_dirty) {
+ _sde_connector_update_bl_scale(c_conn);
+ c_conn->bl_scale_dirty = false;
+ }
+
if (!c_conn->ops.pre_kickoff)
return 0;
@@ -576,23 +588,26 @@ int sde_connector_pre_kickoff(struct drm_connector *connector)
return rc;
}
-void sde_connector_clk_ctrl(struct drm_connector *connector, bool enable)
+int sde_connector_clk_ctrl(struct drm_connector *connector, bool enable)
{
struct sde_connector *c_conn;
struct dsi_display *display;
u32 state = enable ? DSI_CLK_ON : DSI_CLK_OFF;
+ int rc = 0;
if (!connector) {
SDE_ERROR("invalid connector\n");
- return;
+ return -EINVAL;
}
c_conn = to_sde_connector(connector);
display = (struct dsi_display *) c_conn->display;
if (display && c_conn->ops.clk_ctrl)
- c_conn->ops.clk_ctrl(display->mdp_clk_handle,
+ rc = c_conn->ops.clk_ctrl(display->mdp_clk_handle,
DSI_ALL_CLKS, state);
+
+ return rc;
}
static void sde_connector_destroy(struct drm_connector *connector)
@@ -618,6 +633,10 @@ static void sde_connector_destroy(struct drm_connector *connector)
drm_property_unreference_blob(c_conn->blob_hdr);
if (c_conn->blob_dither)
drm_property_unreference_blob(c_conn->blob_dither);
+ if (c_conn->blob_mode_info)
+ drm_property_unreference_blob(c_conn->blob_mode_info);
+ if (c_conn->blob_ext_hdr)
+ drm_property_unreference_blob(c_conn->blob_ext_hdr);
msm_property_destroy(&c_conn->property_info);
if (c_conn->bl_device)
@@ -995,6 +1014,9 @@ static int sde_connector_atomic_set_property(struct drm_connector *connector,
}
break;
case CONNECTOR_PROP_RETIRE_FENCE:
+ if (!val)
+ goto end;
+
rc = sde_fence_create(&c_conn->retire_fence, &fence_fd, 0);
if (rc) {
SDE_ERROR("fence create failed rc:%d\n", rc);
@@ -1016,6 +1038,19 @@ static int sde_connector_atomic_set_property(struct drm_connector *connector,
if (rc)
SDE_ERROR_CONN(c_conn, "invalid roi_v1, rc: %d\n", rc);
break;
+	/*
+	 * CONNECTOR_PROP_BL_SCALE and CONNECTOR_PROP_AD_BL_SCALE are
+	 * color-processing properties. They need special handling because
+	 * they don't quite fit the standard atomic set-property framework.
+	 */
+ case CONNECTOR_PROP_BL_SCALE:
+ c_conn->bl_scale = val;
+ c_conn->bl_scale_dirty = true;
+ break;
+ case CONNECTOR_PROP_AD_BL_SCALE:
+ c_conn->bl_scale_ad = val;
+ c_conn->bl_scale_dirty = true;
+ break;
default:
break;
}
@@ -1075,12 +1110,14 @@ static int sde_connector_atomic_get_property(struct drm_connector *connector,
c_state = to_sde_connector_state(state);
idx = msm_property_index(&c_conn->property_info, property);
- if (idx == CONNECTOR_PROP_RETIRE_FENCE)
- rc = sde_fence_create(&c_conn->retire_fence, val, 0);
- else
+ if (idx == CONNECTOR_PROP_RETIRE_FENCE) {
+ *val = ~0;
+ rc = 0;
+ } else {
/* get cached property value */
rc = msm_property_atomic_get(&c_conn->property_info,
&c_state->property_state, property, val);
+ }
/* allow for custom override */
if (c_conn->ops.get_property)
@@ -1369,12 +1406,39 @@ static void sde_connector_early_unregister(struct drm_connector *connector)
/* debugfs under connector->debugfs are deleted by drm_debugfs */
}
+static int sde_connector_fill_modes(struct drm_connector *connector,
+ uint32_t max_width, uint32_t max_height)
+{
+ int rc, mode_count = 0;
+ struct sde_connector *sde_conn = NULL;
+
+ sde_conn = to_sde_connector(connector);
+ if (!sde_conn) {
+ SDE_ERROR("invalid arguments\n");
+ return 0;
+ }
+
+ mode_count = drm_helper_probe_single_connector_modes(connector,
+ max_width, max_height);
+
+ rc = sde_connector_set_blob_data(connector,
+ connector->state,
+ CONNECTOR_PROP_MODE_INFO);
+ if (rc) {
+ SDE_ERROR_CONN(sde_conn,
+ "failed to setup mode info prop, rc = %d\n", rc);
+ return 0;
+ }
+
+ return mode_count;
+}
+
static const struct drm_connector_funcs sde_connector_ops = {
.dpms = sde_connector_dpms,
.reset = sde_connector_atomic_reset,
.detect = sde_connector_detect,
.destroy = sde_connector_destroy,
- .fill_modes = drm_helper_probe_single_connector_modes,
+ .fill_modes = sde_connector_fill_modes,
.atomic_duplicate_state = sde_connector_atomic_duplicate_state,
.atomic_destroy_state = sde_connector_atomic_destroy_state,
.atomic_set_property = sde_connector_atomic_set_property,
@@ -1387,7 +1451,7 @@ static const struct drm_connector_funcs sde_connector_ops = {
static int sde_connector_get_modes(struct drm_connector *connector)
{
struct sde_connector *c_conn;
- int ret = 0;
+ int mode_count = 0;
if (!connector) {
SDE_ERROR("invalid connector\n");
@@ -1399,11 +1463,16 @@ static int sde_connector_get_modes(struct drm_connector *connector)
SDE_DEBUG("missing get_modes callback\n");
return 0;
}
- ret = c_conn->ops.get_modes(connector, c_conn->display);
- if (ret)
- sde_connector_update_hdr_props(connector);
- return ret;
+ mode_count = c_conn->ops.get_modes(connector, c_conn->display);
+ if (!mode_count) {
+ SDE_ERROR_CONN(c_conn, "failed to get modes\n");
+ return 0;
+ }
+
+ sde_connector_update_hdr_props(connector);
+
+ return mode_count;
}
static enum drm_mode_status
@@ -1500,13 +1569,90 @@ static const struct drm_connector_helper_funcs sde_connector_helper_ops = {
.best_encoder = sde_connector_best_encoder,
};
-int sde_connector_set_info(struct drm_connector *conn,
- struct drm_connector_state *state)
+static int sde_connector_populate_mode_info(struct drm_connector *conn,
+ struct sde_kms_info *info)
+{
+ struct msm_drm_private *priv;
+ struct sde_kms *sde_kms;
+ struct sde_connector *c_conn = NULL;
+ struct drm_display_mode *mode;
+ struct msm_mode_info mode_info;
+ int rc = 0;
+
+ if (!conn || !conn->dev || !conn->dev->dev_private) {
+ SDE_ERROR("invalid arguments\n");
+ return -EINVAL;
+ }
+
+ priv = conn->dev->dev_private;
+ sde_kms = to_sde_kms(priv->kms);
+
+ c_conn = to_sde_connector(conn);
+ if (!c_conn->ops.get_mode_info) {
+ SDE_ERROR_CONN(c_conn, "get_mode_info not defined\n");
+ return -EINVAL;
+ }
+
+ list_for_each_entry(mode, &conn->modes, head) {
+ int topology_idx = 0;
+
+ memset(&mode_info, 0, sizeof(mode_info));
+
+ rc = c_conn->ops.get_mode_info(mode, &mode_info,
+ sde_kms->catalog->max_mixer_width,
+ c_conn->display);
+ if (rc) {
+ SDE_ERROR_CONN(c_conn,
+ "failed to get mode info for mode %s\n",
+ mode->name);
+ continue;
+ }
+
+ sde_kms_info_add_keystr(info, "mode_name", mode->name);
+
+ topology_idx = (int)sde_rm_get_topology_name(
+ mode_info.topology);
+ if (topology_idx < SDE_RM_TOPOLOGY_MAX) {
+ sde_kms_info_add_keystr(info, "topology",
+ e_topology_name[topology_idx].name);
+ } else {
+ SDE_ERROR_CONN(c_conn, "invalid topology\n");
+ continue;
+ }
+
+ if (!mode_info.roi_caps.num_roi)
+ continue;
+
+ sde_kms_info_add_keyint(info, "partial_update_num_roi",
+ mode_info.roi_caps.num_roi);
+ sde_kms_info_add_keyint(info, "partial_update_xstart",
+ mode_info.roi_caps.align.xstart_pix_align);
+ sde_kms_info_add_keyint(info, "partial_update_walign",
+ mode_info.roi_caps.align.width_pix_align);
+ sde_kms_info_add_keyint(info, "partial_update_wmin",
+ mode_info.roi_caps.align.min_width);
+ sde_kms_info_add_keyint(info, "partial_update_ystart",
+ mode_info.roi_caps.align.ystart_pix_align);
+ sde_kms_info_add_keyint(info, "partial_update_halign",
+ mode_info.roi_caps.align.height_pix_align);
+ sde_kms_info_add_keyint(info, "partial_update_hmin",
+ mode_info.roi_caps.align.min_height);
+ sde_kms_info_add_keyint(info, "partial_update_roimerge",
+ mode_info.roi_caps.merge_rois);
+ }
+
+ return rc;
+}
+
+int sde_connector_set_blob_data(struct drm_connector *conn,
+ struct drm_connector_state *state,
+ enum msm_mdp_conn_property prop_id)
{
struct sde_kms_info *info;
struct sde_connector *c_conn = NULL;
struct sde_connector_state *sde_conn_state = NULL;
struct msm_mode_info mode_info;
+ struct drm_property_blob *blob = NULL;
int rc = 0;
c_conn = to_sde_connector(conn);
@@ -1515,43 +1661,63 @@ int sde_connector_set_info(struct drm_connector *conn,
return -EINVAL;
}
- if (!c_conn->ops.post_init) {
- SDE_DEBUG_CONN(c_conn, "post_init not defined\n");
- return 0;
- }
-
- memset(&mode_info, 0, sizeof(mode_info));
-
info = kzalloc(sizeof(*info), GFP_KERNEL);
if (!info)
return -ENOMEM;
- if (state) {
- sde_conn_state = to_sde_connector_state(state);
- memcpy(&mode_info, &sde_conn_state->mode_info,
- sizeof(sde_conn_state->mode_info));
- } else {
- /**
- * connector state is assigned only on first atomic_commit.
- * But this function is allowed to be invoked during
- * probe/init sequence. So not throwing an error.
- */
- SDE_DEBUG_CONN(c_conn, "invalid connector state\n");
- }
-
sde_kms_info_reset(info);
- rc = c_conn->ops.post_init(&c_conn->base, info,
- c_conn->display, &mode_info);
- if (rc) {
- SDE_ERROR_CONN(c_conn, "post-init failed, %d\n", rc);
+
+ switch (prop_id) {
+ case CONNECTOR_PROP_SDE_INFO:
+ memset(&mode_info, 0, sizeof(mode_info));
+
+ if (state) {
+ sde_conn_state = to_sde_connector_state(state);
+ memcpy(&mode_info, &sde_conn_state->mode_info,
+ sizeof(sde_conn_state->mode_info));
+ } else {
+			/*
+			 * Connector state is assigned only on the first
+			 * atomic_commit, but this function may also be
+			 * invoked during the probe/init sequence, so a
+			 * missing state is not treated as an error.
+			 */
+ }
+
+ if (c_conn->ops.set_info_blob) {
+ rc = c_conn->ops.set_info_blob(conn, info,
+ c_conn->display, &mode_info);
+ if (rc) {
+ SDE_ERROR_CONN(c_conn,
+ "set_info_blob failed, %d\n",
+ rc);
+ goto exit;
+ }
+ }
+
+ blob = c_conn->blob_caps;
+ break;
+ case CONNECTOR_PROP_MODE_INFO:
+ rc = sde_connector_populate_mode_info(conn, info);
+ if (rc) {
+ SDE_ERROR_CONN(c_conn,
+ "mode info population failed, %d\n",
+ rc);
+ goto exit;
+ }
+ blob = c_conn->blob_mode_info;
+ break;
+ default:
+ SDE_ERROR_CONN(c_conn, "invalid prop_id: %d\n", prop_id);
goto exit;
- }
+	}
msm_property_set_blob(&c_conn->property_info,
- &c_conn->blob_caps,
+ &blob,
SDE_KMS_INFO_DATA(info),
SDE_KMS_INFO_DATALEN(info),
- CONNECTOR_PROP_SDE_INFO);
+ prop_id);
exit:
kfree(info);
@@ -1664,18 +1830,33 @@ struct drm_connector *sde_connector_init(struct drm_device *dev,
CONNECTOR_PROP_COUNT, CONNECTOR_PROP_BLOBCOUNT,
sizeof(struct sde_connector_state));
+ if (c_conn->ops.post_init) {
+ rc = c_conn->ops.post_init(&c_conn->base, display);
+ if (rc) {
+ SDE_ERROR("post-init failed, %d\n", rc);
+ goto error_cleanup_fence;
+ }
+ }
+
msm_property_install_blob(&c_conn->property_info,
"capabilities",
DRM_MODE_PROP_IMMUTABLE,
CONNECTOR_PROP_SDE_INFO);
- rc = sde_connector_set_info(&c_conn->base, c_conn->base.state);
+ rc = sde_connector_set_blob_data(&c_conn->base,
+ NULL,
+ CONNECTOR_PROP_SDE_INFO);
if (rc) {
SDE_ERROR_CONN(c_conn,
"failed to setup connector info, rc = %d\n", rc);
goto error_cleanup_fence;
}
+ msm_property_install_blob(&c_conn->property_info,
+ "mode_properties",
+ DRM_MODE_PROP_IMMUTABLE,
+ CONNECTOR_PROP_MODE_INFO);
+
if (connector_type == DRM_MODE_CONNECTOR_DSI) {
dsi_display = (struct dsi_display *)(display);
if (dsi_display && dsi_display->panel &&
@@ -1709,10 +1890,19 @@ struct drm_connector *sde_connector_init(struct drm_device *dev,
_sde_connector_install_dither_property(dev, sde_kms, c_conn);
if (connector_type == DRM_MODE_CONNECTOR_DisplayPort) {
+ struct drm_msm_ext_hdr_properties hdr = {0};
+
msm_property_install_blob(&c_conn->property_info,
"ext_hdr_properties",
DRM_MODE_PROP_IMMUTABLE,
CONNECTOR_PROP_EXT_HDR_INFO);
+
+ /* set default values to avoid reading uninitialized data */
+ msm_property_set_blob(&c_conn->property_info,
+ &c_conn->blob_ext_hdr,
+ &hdr,
+ sizeof(hdr),
+ CONNECTOR_PROP_EXT_HDR_INFO);
}
msm_property_install_volatile_range(&c_conn->property_info,
@@ -1733,6 +1923,10 @@ struct drm_connector *sde_connector_init(struct drm_device *dev,
0x0, 0, MAX_AD_BL_SCALE_LEVEL, MAX_AD_BL_SCALE_LEVEL,
CONNECTOR_PROP_AD_BL_SCALE);
+ c_conn->bl_scale_dirty = false;
+ c_conn->bl_scale = MAX_BL_SCALE_LEVEL;
+ c_conn->bl_scale_ad = MAX_AD_BL_SCALE_LEVEL;
+
/* enum/bitmask properties */
msm_property_install_enum(&c_conn->property_info, "topology_name",
DRM_MODE_PROP_IMMUTABLE, 0, e_topology_name,
@@ -1770,6 +1964,10 @@ struct drm_connector *sde_connector_init(struct drm_device *dev,
drm_property_unreference_blob(c_conn->blob_hdr);
if (c_conn->blob_dither)
drm_property_unreference_blob(c_conn->blob_dither);
+ if (c_conn->blob_mode_info)
+ drm_property_unreference_blob(c_conn->blob_mode_info);
+ if (c_conn->blob_ext_hdr)
+ drm_property_unreference_blob(c_conn->blob_ext_hdr);
msm_property_destroy(&c_conn->property_info);
error_cleanup_fence:
diff --git a/drivers/gpu/drm/msm/sde/sde_connector.h b/drivers/gpu/drm/msm/sde/sde_connector.h
index 8ebec78..b92c342 100644
--- a/drivers/gpu/drm/msm/sde/sde_connector.h
+++ b/drivers/gpu/drm/msm/sde/sde_connector.h
@@ -36,12 +36,21 @@ struct sde_connector_ops {
/**
* post_init - perform additional initialization steps
* @connector: Pointer to drm connector structure
+ * @display: Pointer to private display handle
+ * Returns: Zero on success
+ */
+ int (*post_init)(struct drm_connector *connector,
+ void *display);
+
+ /**
+ * set_info_blob - initialize given info blob
+ * @connector: Pointer to drm connector structure
* @info: Pointer to sde connector info structure
* @display: Pointer to private display handle
* @mode_info: Pointer to mode info structure
* Returns: Zero on success
*/
- int (*post_init)(struct drm_connector *connector,
+ int (*set_info_blob)(struct drm_connector *connector,
void *info,
void *display,
struct msm_mode_info *mode_info);
@@ -211,10 +220,10 @@ struct sde_connector_ops {
int (*post_kickoff)(struct drm_connector *connector);
/**
- * send_hpd_event - send HPD uevent notification to userspace
+	 * post_open - perform connector processing after the device is opened
* @display: Pointer to private display structure
*/
- void (*send_hpd_event)(void *display);
+ void (*post_open)(void *display);
/**
* check_status - check status of connected display panel
@@ -271,12 +280,16 @@ struct sde_connector_evt {
* @blob_hdr: Pointer to blob structure for 'hdr_properties' property
* @blob_ext_hdr: Pointer to blob structure for 'ext_hdr_properties' property
* @blob_dither: Pointer to blob structure for default dither config
+ * @blob_mode_info: Pointer to blob structure for mode info
* @fb_kmap: true if kernel mapping of framebuffer is requested
* @event_table: Array of registered events
* @event_lock: Lock object for event_table
* @bl_device: backlight device node
* @status_work: work object to perform status checks
* @force_panel_dead: variable to trigger forced ESD recovery
+ * @bl_scale_dirty: Flag indicating a PP BL scale value has changed
+ * @bl_scale: BL scale value for ABA feature
+ * @bl_scale_ad: BL scale value for AD feature
*/
struct sde_connector {
struct drm_connector base;
@@ -304,6 +317,7 @@ struct sde_connector {
struct drm_property_blob *blob_hdr;
struct drm_property_blob *blob_ext_hdr;
struct drm_property_blob *blob_dither;
+ struct drm_property_blob *blob_mode_info;
bool fb_kmap;
struct sde_connector_evt event_table[SDE_CONN_EVENT_COUNT];
@@ -312,6 +326,10 @@ struct sde_connector {
struct backlight_device *bl_device;
struct delayed_work status_work;
u32 force_panel_dead;
+
+ bool bl_scale_dirty;
+ u32 bl_scale;
+ u32 bl_scale_ad;
};
/**
@@ -545,8 +563,9 @@ int sde_connector_get_info(struct drm_connector *connector,
* sde_connector_clk_ctrl - enables/disables the connector clks
* @connector: Pointer to drm connector object
* @enable: true/false to enable/disable
+ * Returns: Zero on success
*/
-void sde_connector_clk_ctrl(struct drm_connector *connector, bool enable);
+int sde_connector_clk_ctrl(struct drm_connector *connector, bool enable);
/**
* sde_connector_get_dpms - query dpms setting
@@ -644,13 +663,15 @@ int sde_connector_get_dither_cfg(struct drm_connector *conn,
struct drm_connector_state *state, void **cfg, size_t *len);
/**
- * sde_connector_set_info - set connector property value
+ * sde_connector_set_blob_data - set connector blob property data
* @conn: Pointer to drm_connector struct
* @state: Pointer to the drm_connector_state struct
+ * @prop_id: property id to be populated
* Returns: Zero on success
*/
-int sde_connector_set_info(struct drm_connector *conn,
- struct drm_connector_state *state);
+int sde_connector_set_blob_data(struct drm_connector *conn,
+ struct drm_connector_state *state,
+ enum msm_mdp_conn_property prop_id);
/**
* sde_connector_roi_v1_check_roi - validate connector ROI
@@ -691,4 +712,11 @@ int sde_connector_get_mode_info(struct drm_connector_state *conn_state,
* conn: Pointer to drm_connector struct
*/
void sde_conn_timeline_status(struct drm_connector *conn);
+
+/**
+ * sde_connector_helper_bridge_disable - helper function for drm bridge disable
+ * @connector: Pointer to DRM connector object
+ */
+void sde_connector_helper_bridge_disable(struct drm_connector *connector);
+
#endif /* _SDE_CONNECTOR_H_ */
diff --git a/drivers/gpu/drm/msm/sde/sde_core_irq.c b/drivers/gpu/drm/msm/sde/sde_core_irq.c
index b6c6234..a6f22c9 100644
--- a/drivers/gpu/drm/msm/sde/sde_core_irq.c
+++ b/drivers/gpu/drm/msm/sde/sde_core_irq.c
@@ -460,6 +460,7 @@ void sde_core_irq_preinstall(struct sde_kms *sde_kms)
{
struct msm_drm_private *priv;
int i;
+ int rc;
if (!sde_kms) {
SDE_ERROR("invalid sde_kms\n");
@@ -473,7 +474,14 @@ void sde_core_irq_preinstall(struct sde_kms *sde_kms)
}
priv = sde_kms->dev->dev_private;
- sde_power_resource_enable(&priv->phandle, sde_kms->core_client, true);
+ rc = sde_power_resource_enable(&priv->phandle, sde_kms->core_client,
+ true);
+ if (rc) {
+ SDE_ERROR("failed to enable power resource %d\n", rc);
+ SDE_EVT32(rc, SDE_EVTLOG_ERROR);
+ return;
+ }
+
sde_clear_all_irqs(sde_kms);
sde_disable_all_irqs(sde_kms);
sde_power_resource_enable(&priv->phandle, sde_kms->core_client, false);
@@ -504,6 +512,7 @@ void sde_core_irq_uninstall(struct sde_kms *sde_kms)
{
struct msm_drm_private *priv;
int i;
+ int rc;
if (!sde_kms) {
SDE_ERROR("invalid sde_kms\n");
@@ -517,7 +526,14 @@ void sde_core_irq_uninstall(struct sde_kms *sde_kms)
}
priv = sde_kms->dev->dev_private;
- sde_power_resource_enable(&priv->phandle, sde_kms->core_client, true);
+ rc = sde_power_resource_enable(&priv->phandle, sde_kms->core_client,
+ true);
+ if (rc) {
+ SDE_ERROR("failed to enable power resource %d\n", rc);
+ SDE_EVT32(rc, SDE_EVTLOG_ERROR);
+ return;
+ }
+
for (i = 0; i < sde_kms->irq_obj.total_irqs; i++)
if (atomic_read(&sde_kms->irq_obj.enable_counts[i]) ||
!list_empty(&sde_kms->irq_obj.irq_cb_tbl[i]))
diff --git a/drivers/gpu/drm/msm/sde/sde_crtc.c b/drivers/gpu/drm/msm/sde/sde_crtc.c
index aced5cd..94c7f40 100644
--- a/drivers/gpu/drm/msm/sde/sde_crtc.c
+++ b/drivers/gpu/drm/msm/sde/sde_crtc.c
@@ -741,7 +741,7 @@ static void _sde_crtc_setup_dim_layer_cfg(struct drm_crtc *crtc,
for (i = 0; i < sde_crtc->num_mixers; i++) {
split_dim_layer.flags = dim_layer->flags;
- sde_kms_rect_intersect(&cstate->lm_bounds[i], &dim_layer->rect,
+ sde_kms_rect_intersect(&cstate->lm_roi[i], &dim_layer->rect,
&split_dim_layer.rect);
if (sde_kms_rect_is_null(&split_dim_layer.rect)) {
/*
@@ -764,9 +764,26 @@ static void _sde_crtc_setup_dim_layer_cfg(struct drm_crtc *crtc,
} else {
split_dim_layer.rect.x =
split_dim_layer.rect.x -
- cstate->lm_bounds[i].x;
+ cstate->lm_roi[i].x;
+ split_dim_layer.rect.y =
+ split_dim_layer.rect.y -
+ cstate->lm_roi[i].y;
}
+ SDE_EVT32_VERBOSE(DRMID(crtc),
+ cstate->lm_roi[i].x,
+ cstate->lm_roi[i].y,
+ cstate->lm_roi[i].w,
+ cstate->lm_roi[i].h,
+ dim_layer->rect.x,
+ dim_layer->rect.y,
+ dim_layer->rect.w,
+ dim_layer->rect.h,
+ split_dim_layer.rect.x,
+ split_dim_layer.rect.y,
+ split_dim_layer.rect.w,
+ split_dim_layer.rect.h);
+
SDE_DEBUG("split_dim_layer - LM:%d, rect:{%d,%d,%d,%d}}\n",
i, split_dim_layer.rect.x, split_dim_layer.rect.y,
split_dim_layer.rect.w, split_dim_layer.rect.h);
@@ -789,6 +806,21 @@ void sde_crtc_get_crtc_roi(struct drm_crtc_state *state,
*crtc_roi = &crtc_state->crtc_roi;
}
+bool sde_crtc_is_crtc_roi_dirty(struct drm_crtc_state *state)
+{
+ struct sde_crtc_state *cstate;
+ struct sde_crtc *sde_crtc;
+
+ if (!state || !state->crtc)
+ return false;
+
+ sde_crtc = to_sde_crtc(state->crtc);
+ cstate = to_sde_crtc_state(state);
+
+ return msm_property_is_dirty(&sde_crtc->property_info,
+ &cstate->property_state, CRTC_PROP_ROI_V1);
+}
+
static int _sde_crtc_set_roi_v1(struct drm_crtc_state *state,
void __user *usr_ptr)
{
@@ -877,6 +909,8 @@ static int _sde_crtc_set_crtc_roi(struct drm_crtc *crtc,
struct sde_crtc_state *crtc_state;
struct sde_rect *crtc_roi;
int i, num_attached_conns = 0;
+ bool is_crtc_roi_dirty;
+ bool is_any_conn_roi_dirty;
if (!crtc || !state)
return -EINVAL;
@@ -885,7 +919,11 @@ static int _sde_crtc_set_crtc_roi(struct drm_crtc *crtc,
crtc_state = to_sde_crtc_state(state);
crtc_roi = &crtc_state->crtc_roi;
+ is_crtc_roi_dirty = sde_crtc_is_crtc_roi_dirty(state);
+ is_any_conn_roi_dirty = false;
+
for_each_connector_in_state(state->state, conn, conn_state, i) {
+ struct sde_connector *sde_conn;
struct sde_connector_state *sde_conn_state;
struct sde_rect conn_roi;
@@ -900,8 +938,15 @@ static int _sde_crtc_set_crtc_roi(struct drm_crtc *crtc,
}
++num_attached_conns;
+ sde_conn = to_sde_connector(conn_state->connector);
sde_conn_state = to_sde_connector_state(conn_state);
+ is_any_conn_roi_dirty = is_any_conn_roi_dirty ||
+ msm_property_is_dirty(
+ &sde_conn->property_info,
+ &sde_conn_state->property_state,
+ CONNECTOR_PROP_ROI_V1);
+
/*
* current driver only supports same connector and crtc size,
* but if support for different sizes is added, driver needs
@@ -921,8 +966,24 @@ static int _sde_crtc_set_crtc_roi(struct drm_crtc *crtc,
conn_roi.w, conn_roi.h);
}
+ /*
+ * Check against CRTC ROI and Connector ROI not being updated together.
+ * This restriction should be relaxed when Connector ROI scaling is
+ * supported.
+ */
+ if (is_any_conn_roi_dirty != is_crtc_roi_dirty) {
+ SDE_ERROR("connector/crtc rois not updated together\n");
+ return -EINVAL;
+ }
+
sde_kms_rect_merge_rectangles(&crtc_state->user_roi_list, crtc_roi);
+ /* clear the ROI to null if it matches full screen anyways */
+ if (crtc_roi->x == 0 && crtc_roi->y == 0 &&
+ crtc_roi->w == state->adjusted_mode.hdisplay &&
+ crtc_roi->h == state->adjusted_mode.vdisplay)
+ memset(crtc_roi, 0, sizeof(*crtc_roi));
+
SDE_DEBUG("%s: crtc roi (%d,%d,%d,%d)\n", sde_crtc->name,
crtc_roi->x, crtc_roi->y, crtc_roi->w, crtc_roi->h);
SDE_EVT32_VERBOSE(DRMID(crtc), crtc_roi->x, crtc_roi->y, crtc_roi->w,
@@ -1186,8 +1247,6 @@ static int _sde_crtc_check_rois(struct drm_crtc *crtc,
struct sde_crtc *sde_crtc;
struct sde_crtc_state *sde_crtc_state;
struct msm_mode_info mode_info;
- struct drm_connector *conn;
- struct drm_connector_state *conn_state;
int rc, lm_idx, i;
if (!crtc || !state)
@@ -1196,6 +1255,7 @@ static int _sde_crtc_check_rois(struct drm_crtc *crtc,
memset(&mode_info, 0, sizeof(mode_info));
sde_crtc = to_sde_crtc(crtc);
+ sde_crtc_state = to_sde_crtc_state(state);
if (hweight_long(state->connector_mask) != 1) {
SDE_ERROR("invalid connector count(%d) for crtc: %d\n",
@@ -1204,8 +1264,17 @@ static int _sde_crtc_check_rois(struct drm_crtc *crtc,
return -EINVAL;
}
- for_each_connector_in_state(state->state, conn, conn_state, i) {
- rc = sde_connector_get_mode_info(conn_state, &mode_info);
+ /*
+ * check connector array cached at modeset time since incoming atomic
+ * state may not include any connectors if they aren't modified
+ */
+ for (i = 0; i < ARRAY_SIZE(sde_crtc_state->connectors); i++) {
+ struct drm_connector *conn = sde_crtc_state->connectors[i];
+
+ if (!conn || !conn->state)
+ continue;
+
+ rc = sde_connector_get_mode_info(conn->state, &mode_info);
if (rc) {
SDE_ERROR("failed to get mode info\n");
return -EINVAL;
@@ -1216,7 +1285,6 @@ static int _sde_crtc_check_rois(struct drm_crtc *crtc,
if (!mode_info.roi_caps.enabled)
return 0;
- sde_crtc_state = to_sde_crtc_state(state);
if (sde_crtc_state->user_roi_list.num_rects >
mode_info.roi_caps.num_roi) {
SDE_ERROR("roi count is more than supported limit, %d > %d\n",
@@ -3048,6 +3116,11 @@ static void sde_crtc_atomic_begin(struct drm_crtc *crtc,
return;
}
+ if (!sde_kms_power_resource_is_enabled(crtc->dev)) {
+ SDE_ERROR("power resource is not enabled\n");
+ return;
+ }
+
SDE_DEBUG("crtc%d\n", crtc->base.id);
sde_crtc = to_sde_crtc(crtc);
@@ -3137,6 +3210,11 @@ static void sde_crtc_atomic_flush(struct drm_crtc *crtc,
return;
}
+ if (!sde_kms_power_resource_is_enabled(crtc->dev)) {
+ SDE_ERROR("power resource is not enabled\n");
+ return;
+ }
+
SDE_DEBUG("crtc%d\n", crtc->base.id);
sde_crtc = to_sde_crtc(crtc);
@@ -3343,12 +3421,13 @@ static int _sde_crtc_commit_kickoff_rot(struct drm_crtc *crtc,
if (!master_ctl || master_ctl->idx > ctl->idx)
master_ctl = ctl;
+
+ if (ctl->ops.setup_sbuf_cfg)
+ ctl->ops.setup_sbuf_cfg(ctl, &cstate->sbuf_cfg);
}
/* only update sbuf_cfg and flush for master ctl */
- if (master_ctl && master_ctl->ops.setup_sbuf_cfg &&
- master_ctl->ops.update_pending_flush) {
- master_ctl->ops.setup_sbuf_cfg(master_ctl, &cstate->sbuf_cfg);
+ if (master_ctl && master_ctl->ops.update_pending_flush) {
master_ctl->ops.update_pending_flush(master_ctl, flush_mask);
/* explicitly trigger rotator for async modes */
@@ -3396,16 +3475,18 @@ static void _sde_crtc_remove_pipe_flush(struct sde_crtc *sde_crtc)
* _sde_crtc_reset_hw - attempt hardware reset on errors
* @crtc: Pointer to DRM crtc instance
* @old_state: Pointer to crtc state for previous commit
+ * @dump_status: Whether or not to dump debug status before reset
* Returns: Zero if current commit should still be attempted
*/
static int _sde_crtc_reset_hw(struct drm_crtc *crtc,
- struct drm_crtc_state *old_state)
+ struct drm_crtc_state *old_state, bool dump_status)
{
struct drm_plane *plane_halt[MAX_PLANES];
struct drm_plane *plane;
const struct drm_plane_state *pstate;
struct sde_crtc *sde_crtc;
struct sde_hw_ctl *ctl;
+ enum sde_ctl_rot_op_mode old_rot_op_mode;
signed int i, plane_count;
int rc;
@@ -3413,6 +3494,13 @@ static int _sde_crtc_reset_hw(struct drm_crtc *crtc,
return -EINVAL;
sde_crtc = to_sde_crtc(crtc);
+ old_rot_op_mode = to_sde_crtc_state(old_state)->sbuf_cfg.rot_op_mode;
+ SDE_EVT32(DRMID(crtc), old_rot_op_mode,
+ dump_status, SDE_EVTLOG_FUNC_ENTRY);
+
+ if (dump_status)
+ SDE_DBG_DUMP("all", "dbg_bus", "vbif_dbg_bus");
+
for (i = 0; i < sde_crtc->num_mixers; ++i) {
ctl = sde_crtc->mixers[i].hw_ctl;
if (!ctl || !ctl->ops.reset)
@@ -3428,11 +3516,19 @@ static int _sde_crtc_reset_hw(struct drm_crtc *crtc,
}
}
- /* early out if simple ctl reset succeeded */
- if (i == sde_crtc->num_mixers) {
- SDE_EVT32(DRMID(crtc), i);
+ /*
+ * Early out if simple ctl reset succeeded and previous commit
+ * did not involve the rotator.
+ *
+ * If the previous commit had rotation enabled, then the ctl
+ * reset would also have reset the rotator h/w. The rotator
+ * programming for the current commit may need to be repeated,
+ * depending on the rotation mode; don't handle this for now
+ * and just force a hard reset in those cases.
+ */
+ if (i == sde_crtc->num_mixers &&
+ old_rot_op_mode == SDE_CTL_ROT_OP_MODE_OFFLINE)
return false;
- }
SDE_DEBUG("crtc%d: issuing hard reset\n", DRMID(crtc));
@@ -3457,7 +3553,7 @@ static int _sde_crtc_reset_hw(struct drm_crtc *crtc,
}
/* reset both previous... */
- for_each_plane_in_state(old_state->state, plane, pstate, i) {
+ drm_atomic_crtc_state_for_each_plane_state(plane, pstate, old_state) {
if (pstate->crtc != crtc)
continue;
@@ -3605,7 +3701,8 @@ void sde_crtc_commit_kickoff(struct drm_crtc *crtc,
* preparing for the kickoff
*/
if (reset_req) {
- if (_sde_crtc_reset_hw(crtc, old_state))
+ if (_sde_crtc_reset_hw(crtc, old_state,
+ !sde_crtc->reset_request))
is_error = true;
/* force offline rotation mode since the commit has no pipes */
@@ -3613,6 +3710,7 @@ void sde_crtc_commit_kickoff(struct drm_crtc *crtc,
cstate->sbuf_cfg.rot_op_mode =
SDE_CTL_ROT_OP_MODE_OFFLINE;
}
+ sde_crtc->reset_request = reset_req;
/* wait for frame_event_done completion */
SDE_ATRACE_BEGIN("wait_for_frame_done_event");
@@ -3928,12 +4026,6 @@ static void sde_crtc_handle_power_event(u32 event_type, void *arg)
sde_cp_crtc_post_ipc(crtc);
- event.type = DRM_EVENT_SDE_POWER;
- event.length = sizeof(power_on);
- power_on = 1;
- msm_mode_object_event_notify(&crtc->base, crtc->dev, &event,
- (u8 *)&power_on);
-
for (i = 0; i < sde_crtc->num_mixers; ++i) {
m = &sde_crtc->mixers[i];
if (!m->hw_lm || !m->hw_lm->ops.setup_misr ||
@@ -4032,6 +4124,12 @@ static void sde_crtc_disable(struct drm_crtc *crtc)
SDE_ERROR("invalid crtc\n");
return;
}
+
+ if (!sde_kms_power_resource_is_enabled(crtc->dev)) {
+ SDE_ERROR("power resource is not enabled\n");
+ return;
+ }
+
sde_crtc = to_sde_crtc(crtc);
cstate = to_sde_crtc_state(crtc->state);
priv = crtc->dev->dev_private;
@@ -4146,6 +4244,11 @@ static void sde_crtc_enable(struct drm_crtc *crtc)
}
priv = crtc->dev->dev_private;
+ if (!sde_kms_power_resource_is_enabled(crtc->dev)) {
+ SDE_ERROR("power resource is not enabled\n");
+ return;
+ }
+
SDE_DEBUG("crtc%d\n", crtc->base.id);
SDE_EVT32_VERBOSE(DRMID(crtc));
sde_crtc = to_sde_crtc(crtc);
@@ -4455,6 +4558,17 @@ static int sde_crtc_atomic_check(struct drm_crtc *crtc,
sde_crtc->name, plane->base.id, rc);
goto end;
}
+
+ /* identify attached planes that are not in the delta state */
+ if (!drm_atomic_get_existing_plane_state(state->state, plane)) {
+ rc = sde_plane_confirm_hw_rsvps(plane, pstate);
+ if (rc) {
+ SDE_ERROR("crtc%d confirmation hw failed %d\n",
+ crtc->base.id, rc);
+ goto end;
+ }
+ }
+
if (cnt >= SDE_PSTATES_MAX)
continue;
@@ -5087,6 +5201,9 @@ static int sde_crtc_atomic_set_property(struct drm_crtc *crtc,
cstate->bw_split_vote = true;
break;
case CRTC_PROP_OUTPUT_FENCE:
+ if (!val)
+ goto exit;
+
ret = _sde_crtc_get_output_fence(crtc, state, &fence_fd);
if (ret) {
SDE_ERROR("fence create failed rc:%d\n", ret);
@@ -5160,7 +5277,8 @@ static int sde_crtc_atomic_get_property(struct drm_crtc *crtc,
i = msm_property_index(&sde_crtc->property_info, property);
if (i == CRTC_PROP_OUTPUT_FENCE) {
- ret = _sde_crtc_get_output_fence(crtc, state, val);
+ *val = ~0;
+ ret = 0;
} else {
ret = msm_property_atomic_get(&sde_crtc->property_info,
&cstate->property_state, property, val);
@@ -5817,8 +5935,15 @@ static int _sde_crtc_event_enable(struct sde_kms *kms,
priv = kms->dev->dev_private;
ret = 0;
if (crtc_drm->enabled) {
- sde_power_resource_enable(&priv->phandle, kms->core_client,
- true);
+ ret = sde_power_resource_enable(&priv->phandle,
+ kms->core_client, true);
+ if (ret) {
+ SDE_ERROR("failed to enable power resource %d\n", ret);
+ SDE_EVT32(ret, SDE_EVTLOG_ERROR);
+ kfree(node);
+ return ret;
+ }
+
INIT_LIST_HEAD(&node->irq.list);
ret = node->func(crtc_drm, true, &node->irq);
sde_power_resource_enable(&priv->phandle, kms->core_client,
@@ -5872,7 +5997,15 @@ static int _sde_crtc_event_disable(struct sde_kms *kms,
return 0;
}
priv = kms->dev->dev_private;
- sde_power_resource_enable(&priv->phandle, kms->core_client, true);
+ ret = sde_power_resource_enable(&priv->phandle, kms->core_client, true);
+ if (ret) {
+ SDE_ERROR("failed to enable power resource %d\n", ret);
+ SDE_EVT32(ret, SDE_EVTLOG_ERROR);
+ list_del(&node->list);
+ kfree(node);
+ return ret;
+ }
+
ret = node->func(crtc_drm, false, &node->irq);
list_del(&node->list);
kfree(node);
diff --git a/drivers/gpu/drm/msm/sde/sde_crtc.h b/drivers/gpu/drm/msm/sde/sde_crtc.h
index 1d5b65e..9501d0f 100644
--- a/drivers/gpu/drm/msm/sde/sde_crtc.h
+++ b/drivers/gpu/drm/msm/sde/sde_crtc.h
@@ -188,6 +188,8 @@ struct sde_crtc_event {
* @enabled : whether the SDE CRTC is currently enabled. updated in the
* commit-thread, not state-swap time which is earlier, so
* safe to make decisions on during VBLANK on/off work
+ * @reset_request : whether or not a h/w reset was requested for the
+ * previous frame
* @ds_reconfig : force reconfiguration of the destination scaler block
* @feature_list : list of color processing features supported on a crtc
* @active_list : list of color processing features are active
@@ -247,6 +249,7 @@ struct sde_crtc {
bool vblank_requested;
bool suspend;
bool enabled;
+ bool reset_request;
bool ds_reconfig;
struct list_head feature_list;
@@ -681,6 +684,14 @@ void sde_crtc_res_put(struct drm_crtc_state *state, u32 type, u64 tag);
void sde_crtc_get_crtc_roi(struct drm_crtc_state *state,
const struct sde_rect **crtc_roi);
+/**
+ * sde_crtc_is_crtc_roi_dirty - retrieve whether crtc_roi was updated this frame
+ * Note: Only use during atomic_check since dirty properties may be popped
+ * @state: Pointer to crtc state
+ * Return: true if roi is dirty, false otherwise
+ */
+bool sde_crtc_is_crtc_roi_dirty(struct drm_crtc_state *state);
+
/** sde_crtc_get_secure_level - retrieve the secure level from the given state
* object, this is used to determine the secure state of the crtc
* @crtc : Pointer to drm crtc structure
diff --git a/drivers/gpu/drm/msm/sde/sde_encoder.c b/drivers/gpu/drm/msm/sde/sde_encoder.c
index 799936a..4d09642 100644
--- a/drivers/gpu/drm/msm/sde/sde_encoder.c
+++ b/drivers/gpu/drm/msm/sde/sde_encoder.c
@@ -247,6 +247,70 @@ struct sde_encoder_virt {
#define to_sde_encoder_virt(x) container_of(x, struct sde_encoder_virt, base)
+static void _sde_encoder_pm_qos_add_request(struct drm_encoder *drm_enc)
+{
+ struct msm_drm_private *priv;
+ struct sde_kms *sde_kms;
+ struct pm_qos_request *req;
+ u32 cpu_mask;
+ u32 cpu_dma_latency;
+ int cpu;
+
+ if (!drm_enc->dev || !drm_enc->dev->dev_private) {
+ SDE_ERROR("drm device invalid\n");
+ return;
+ }
+
+ priv = drm_enc->dev->dev_private;
+ if (!priv->kms) {
+ SDE_ERROR("invalid kms\n");
+ return;
+ }
+
+ sde_kms = to_sde_kms(priv->kms);
+ if (!sde_kms || !sde_kms->catalog)
+ return;
+
+ cpu_mask = sde_kms->catalog->perf.cpu_mask;
+ cpu_dma_latency = sde_kms->catalog->perf.cpu_dma_latency;
+ if (!cpu_mask)
+ return;
+
+ req = &sde_kms->pm_qos_cpu_req;
+ req->type = PM_QOS_REQ_AFFINE_CORES;
+ cpumask_clear(&req->cpus_affine);
+ for_each_possible_cpu(cpu) {
+ if ((1 << cpu) & cpu_mask)
+ cpumask_set_cpu(cpu, &req->cpus_affine);
+ }
+ pm_qos_add_request(req, PM_QOS_CPU_DMA_LATENCY, cpu_dma_latency);
+
+ SDE_EVT32_VERBOSE(DRMID(drm_enc), cpu_mask, cpu_dma_latency);
+}
+
+static void _sde_encoder_pm_qos_remove_request(struct drm_encoder *drm_enc)
+{
+ struct msm_drm_private *priv;
+ struct sde_kms *sde_kms;
+
+ if (!drm_enc->dev || !drm_enc->dev->dev_private) {
+ SDE_ERROR("drm device invalid\n");
+ return;
+ }
+
+ priv = drm_enc->dev->dev_private;
+ if (!priv->kms) {
+ SDE_ERROR("invalid kms\n");
+ return;
+ }
+
+ sde_kms = to_sde_kms(priv->kms);
+ if (!sde_kms || !sde_kms->catalog || !sde_kms->catalog->perf.cpu_mask)
+ return;
+
+ pm_qos_remove_request(&sde_kms->pm_qos_cpu_req);
+}
+
static struct drm_connector_state *_sde_encoder_get_conn_state(
struct drm_encoder *drm_enc)
{
@@ -716,24 +780,6 @@ void sde_encoder_helper_split_config(
}
}
-static void _sde_encoder_adjust_mode(struct drm_connector *connector,
- struct drm_display_mode *adj_mode)
-{
- struct drm_display_mode *cur_mode;
-
- if (!connector || !adj_mode)
- return;
-
- list_for_each_entry(cur_mode, &connector->modes, head) {
- if (cur_mode->vdisplay == adj_mode->vdisplay &&
- cur_mode->hdisplay == adj_mode->hdisplay &&
- cur_mode->vrefresh == adj_mode->vrefresh) {
- adj_mode->private = cur_mode->private;
- adj_mode->private_flags |= cur_mode->private_flags;
- }
- }
-}
-
static int sde_encoder_virt_atomic_check(
struct drm_encoder *drm_enc,
struct drm_crtc_state *crtc_state,
@@ -769,15 +815,6 @@ static int sde_encoder_virt_atomic_check(
SDE_EVT32(DRMID(drm_enc), drm_atomic_crtc_needs_modeset(crtc_state));
- /*
- * display drivers may populate private fields of the drm display mode
- * structure while registering possible modes of a connector with DRM.
- * These private fields are not populated back while DRM invokes
- * the mode_set callbacks. This module retrieves and populates the
- * private fields of the given mode.
- */
- _sde_encoder_adjust_mode(conn_state->connector, adj_mode);
-
/* perform atomic check on the first physical encoder (master) */
for (i = 0; i < sde_enc->num_phys_encs; i++) {
struct sde_encoder_phys *phys = sde_enc->phys_encs[i];
@@ -882,7 +919,9 @@ static int sde_encoder_virt_atomic_check(
return ret;
}
- ret = sde_connector_set_info(conn_state->connector, conn_state);
+ ret = sde_connector_set_blob_data(conn_state->connector,
+ conn_state,
+ CONNECTOR_PROP_SDE_INFO);
if (ret) {
SDE_ERROR_ENC(sde_enc,
"connector failed to update info, rc: %d\n",
@@ -1669,37 +1708,61 @@ static void _sde_encoder_resource_control_rsc_update(
}
}
-static void _sde_encoder_resource_control_helper(struct drm_encoder *drm_enc,
+static int _sde_encoder_resource_control_helper(struct drm_encoder *drm_enc,
bool enable)
{
struct msm_drm_private *priv;
struct sde_kms *sde_kms;
struct sde_encoder_virt *sde_enc;
+ int rc;
+ bool is_cmd_mode, is_primary;
sde_enc = to_sde_encoder_virt(drm_enc);
priv = drm_enc->dev->dev_private;
sde_kms = to_sde_kms(priv->kms);
+ is_cmd_mode = sde_enc->disp_info.capabilities &
+ MSM_DISPLAY_CAP_CMD_MODE;
+ is_primary = sde_enc->disp_info.is_primary;
+
SDE_DEBUG_ENC(sde_enc, "enable:%d\n", enable);
SDE_EVT32(DRMID(drm_enc), enable);
if (!sde_enc->cur_master) {
SDE_ERROR("encoder master not set\n");
- return;
+ return -EINVAL;
}
if (enable) {
/* enable SDE core clks */
- sde_power_resource_enable(&priv->phandle,
+ rc = sde_power_resource_enable(&priv->phandle,
sde_kms->core_client, true);
+ if (rc) {
+ SDE_ERROR("failed to enable power resource %d\n", rc);
+ SDE_EVT32(rc, SDE_EVTLOG_ERROR);
+ return rc;
+ }
/* enable DSI clks */
- sde_connector_clk_ctrl(sde_enc->cur_master->connector, true);
+ rc = sde_connector_clk_ctrl(sde_enc->cur_master->connector,
+ true);
+ if (rc) {
+ SDE_ERROR("failed to enable clk control %d\n", rc);
+ sde_power_resource_enable(&priv->phandle,
+ sde_kms->core_client, false);
+ return rc;
+ }
/* enable all the irq */
_sde_encoder_irq_control(drm_enc, true);
+ if (is_cmd_mode && is_primary)
+ _sde_encoder_pm_qos_add_request(drm_enc);
+
} else {
+ if (is_cmd_mode && is_primary)
+ _sde_encoder_pm_qos_remove_request(drm_enc);
+
/* disable all the irq */
_sde_encoder_irq_control(drm_enc, false);
@@ -1711,6 +1774,7 @@ static void _sde_encoder_resource_control_helper(struct drm_encoder *drm_enc,
sde_kms->core_client, false);
}
+ return 0;
}
static int sde_encoder_resource_control(struct drm_encoder *drm_enc,
@@ -1789,7 +1853,19 @@ static int sde_encoder_resource_control(struct drm_encoder *drm_enc,
_sde_encoder_irq_control(drm_enc, true);
} else {
/* enable all the clks and resources */
- _sde_encoder_resource_control_helper(drm_enc, true);
+ ret = _sde_encoder_resource_control_helper(drm_enc,
+ true);
+ if (ret) {
+ SDE_ERROR_ENC(sde_enc,
+ "sw_event:%d, rc in state %d\n",
+ sw_event, sde_enc->rc_state);
+ SDE_EVT32(DRMID(drm_enc), sw_event,
+ sde_enc->rc_state,
+ SDE_EVTLOG_ERROR);
+ mutex_unlock(&sde_enc->rc_lock);
+ return ret;
+ }
+
_sde_encoder_resource_control_rsc_update(drm_enc, true);
}
@@ -1947,7 +2023,18 @@ static int sde_encoder_resource_control(struct drm_encoder *drm_enc,
/* return if the resource control is already in ON state */
if (sde_enc->rc_state != SDE_ENC_RC_STATE_ON) {
/* enable all the clks and resources */
- _sde_encoder_resource_control_helper(drm_enc, true);
+ ret = _sde_encoder_resource_control_helper(drm_enc,
+ true);
+ if (ret) {
+ SDE_ERROR_ENC(sde_enc,
+ "sw_event:%d, rc in state %d\n",
+ sw_event, sde_enc->rc_state);
+ SDE_EVT32(DRMID(drm_enc), sw_event,
+ sde_enc->rc_state,
+ SDE_EVTLOG_ERROR);
+ mutex_unlock(&sde_enc->rc_lock);
+ return ret;
+ }
_sde_encoder_resource_control_rsc_update(drm_enc, true);
@@ -2076,6 +2163,11 @@ static void sde_encoder_virt_mode_set(struct drm_encoder *drm_enc,
return;
}
+ if (!sde_kms_power_resource_is_enabled(drm_enc->dev)) {
+ SDE_ERROR("power resource is not enabled\n");
+ return;
+ }
+
sde_enc = to_sde_encoder_virt(drm_enc);
SDE_DEBUG_ENC(sde_enc, "\n");
@@ -2283,6 +2375,11 @@ static void sde_encoder_virt_enable(struct drm_encoder *drm_enc)
}
sde_enc = to_sde_encoder_virt(drm_enc);
+ if (!sde_kms_power_resource_is_enabled(drm_enc->dev)) {
+ SDE_ERROR("power resource is not enabled\n");
+ return;
+ }
+
ret = _sde_encoder_get_mode_info(drm_enc, &mode_info);
if (ret) {
SDE_ERROR_ENC(sde_enc, "failed to get mode info\n");
@@ -2377,6 +2474,11 @@ static void sde_encoder_virt_disable(struct drm_encoder *drm_enc)
return;
}
+ if (!sde_kms_power_resource_is_enabled(drm_enc->dev)) {
+ SDE_ERROR("power resource is not enabled\n");
+ return;
+ }
+
sde_enc = to_sde_encoder_virt(drm_enc);
SDE_DEBUG_ENC(sde_enc, "\n");
@@ -2497,6 +2599,13 @@ static void sde_encoder_underrun_callback(struct drm_encoder *drm_enc,
SDE_ATRACE_BEGIN("encoder_underrun_callback");
atomic_inc(&phy_enc->underrun_cnt);
SDE_EVT32(DRMID(drm_enc), atomic_read(&phy_enc->underrun_cnt));
+
+ trace_sde_encoder_underrun(DRMID(drm_enc),
+ atomic_read(&phy_enc->underrun_cnt));
+
+ SDE_DBG_CTRL("stop_ftrace");
+ SDE_DBG_CTRL("panic_underrun");
+
SDE_ATRACE_END("encoder_underrun_callback");
}
@@ -3269,6 +3378,7 @@ int sde_encoder_prepare_for_kickoff(struct drm_encoder *drm_enc,
struct sde_encoder_virt *sde_enc;
struct sde_encoder_phys *phys;
bool needs_hw_reset = false;
+ uint32_t ln_cnt1, ln_cnt2;
unsigned int i;
int rc, ret = 0;
@@ -3281,6 +3391,13 @@ int sde_encoder_prepare_for_kickoff(struct drm_encoder *drm_enc,
SDE_DEBUG_ENC(sde_enc, "\n");
SDE_EVT32(DRMID(drm_enc));
+ /* save this for later, in case of errors */
+ if (sde_enc->cur_master && sde_enc->cur_master->ops.get_wr_line_count)
+ ln_cnt1 = sde_enc->cur_master->ops.get_wr_line_count(
+ sde_enc->cur_master);
+ else
+ ln_cnt1 = -EINVAL;
+
/* prepare for next kickoff, may include waiting on previous kickoff */
SDE_ATRACE_BEGIN("enc_prepare_for_kickoff");
for (i = 0; i < sde_enc->num_phys_encs; i++) {
@@ -3299,11 +3416,24 @@ int sde_encoder_prepare_for_kickoff(struct drm_encoder *drm_enc,
}
SDE_ATRACE_END("enc_prepare_for_kickoff");
- sde_encoder_resource_control(drm_enc, SDE_ENC_RC_EVENT_KICKOFF);
+ rc = sde_encoder_resource_control(drm_enc, SDE_ENC_RC_EVENT_KICKOFF);
+ if (rc) {
+ SDE_ERROR_ENC(sde_enc, "resource kickoff failed rc %d\n", rc);
+ return rc;
+ }
/* if any phys needs reset, reset all phys, in-order */
if (needs_hw_reset) {
- SDE_EVT32(DRMID(drm_enc), SDE_EVTLOG_FUNC_CASE1);
+ /* query line count before cur_master is updated */
+ if (sde_enc->cur_master &&
+ sde_enc->cur_master->ops.get_wr_line_count)
+ ln_cnt2 = sde_enc->cur_master->ops.get_wr_line_count(
+ sde_enc->cur_master);
+ else
+ ln_cnt2 = -EINVAL;
+
+ SDE_EVT32(DRMID(drm_enc), ln_cnt1, ln_cnt2,
+ SDE_EVTLOG_FUNC_CASE1);
for (i = 0; i < sde_enc->num_phys_encs; i++) {
phys = sde_enc->phys_encs[i];
if (phys && phys->ops.hw_reset)
diff --git a/drivers/gpu/drm/msm/sde/sde_encoder_phys.h b/drivers/gpu/drm/msm/sde/sde_encoder_phys.h
index edfdc0b..cfe2126 100644
--- a/drivers/gpu/drm/msm/sde/sde_encoder_phys.h
+++ b/drivers/gpu/drm/msm/sde/sde_encoder_phys.h
@@ -132,7 +132,8 @@ struct sde_encoder_virt_ops {
* @restore: Restore all the encoder configs.
* @is_autorefresh_enabled: provides the autorefresh current
* enable/disable state.
- * @get_line_count: Obtain current vertical line count
+ * @get_line_count: Obtain current internal vertical line count
+ * @get_wr_line_count: Obtain current output vertical line count
* @wait_dma_trigger: Returns true if lut dma has to trigger and wait
 * until transaction is complete.
* @wait_for_active: Wait for display scan line to be in active area
@@ -182,6 +183,7 @@ struct sde_encoder_phys_ops {
void (*restore)(struct sde_encoder_phys *phys);
bool (*is_autorefresh_enabled)(struct sde_encoder_phys *phys);
int (*get_line_count)(struct sde_encoder_phys *phys);
+ int (*get_wr_line_count)(struct sde_encoder_phys *phys);
bool (*wait_dma_trigger)(struct sde_encoder_phys *phys);
int (*wait_for_active)(struct sde_encoder_phys *phys);
};
diff --git a/drivers/gpu/drm/msm/sde/sde_encoder_phys_cmd.c b/drivers/gpu/drm/msm/sde/sde_encoder_phys_cmd.c
index a10a6e3..d7cbfbe 100644
--- a/drivers/gpu/drm/msm/sde/sde_encoder_phys_cmd.c
+++ b/drivers/gpu/drm/msm/sde/sde_encoder_phys_cmd.c
@@ -448,9 +448,7 @@ static int _sde_encoder_phys_cmd_handle_ppdone_timeout(
cmd_enc->pp_timeout_report_cnt = PP_TIMEOUT_MAX_TRIALS;
frame_event |= SDE_ENCODER_FRAME_EVENT_PANEL_DEAD;
- sde_encoder_helper_unregister_irq(phys_enc, INTR_IDX_RDPTR);
SDE_DBG_DUMP("panic");
- sde_encoder_helper_register_irq(phys_enc, INTR_IDX_RDPTR);
} else if (cmd_enc->pp_timeout_report_cnt == 1) {
/* to avoid flooding, only log first time, and "dead" time */
SDE_ERROR_CMDENC(cmd_enc,
@@ -461,10 +459,6 @@ static int _sde_encoder_phys_cmd_handle_ppdone_timeout(
atomic_read(&phys_enc->pending_kickoff_cnt));
SDE_EVT32(DRMID(phys_enc->parent), SDE_EVTLOG_FATAL);
-
- sde_encoder_helper_unregister_irq(phys_enc, INTR_IDX_RDPTR);
- SDE_DBG_DUMP("all", "dbg_bus", "vbif_dbg_bus");
- sde_encoder_helper_register_irq(phys_enc, INTR_IDX_RDPTR);
}
atomic_add_unless(&phys_enc->pending_kickoff_cnt, -1, 0);
@@ -958,6 +952,28 @@ static int sde_encoder_phys_cmd_get_line_count(
return hw_pp->ops.get_line_count(hw_pp);
}
+static int sde_encoder_phys_cmd_get_write_line_count(
+ struct sde_encoder_phys *phys_enc)
+{
+ struct sde_hw_pingpong *hw_pp;
+ struct sde_hw_pp_vsync_info info;
+
+ if (!phys_enc || !phys_enc->hw_pp)
+ return -EINVAL;
+
+ if (!sde_encoder_phys_cmd_is_master(phys_enc))
+ return -EINVAL;
+
+ hw_pp = phys_enc->hw_pp;
+ if (!hw_pp->ops.get_vsync_info)
+ return -EINVAL;
+
+ if (hw_pp->ops.get_vsync_info(hw_pp, &info))
+ return -EINVAL;
+
+ return (int)info.wr_ptr_line_count;
+}
+
static void sde_encoder_phys_cmd_disable(struct sde_encoder_phys *phys_enc)
{
struct sde_encoder_phys_cmd *cmd_enc =
@@ -1061,6 +1077,7 @@ static int _sde_encoder_phys_cmd_wait_for_ctl_start(
to_sde_encoder_phys_cmd(phys_enc);
struct sde_encoder_wait_info wait_info;
int ret;
+ bool frame_pending = true;
if (!phys_enc || !phys_enc->hw_ctl) {
SDE_ERROR("invalid argument(s)\n");
@@ -1078,10 +1095,17 @@ static int _sde_encoder_phys_cmd_wait_for_ctl_start(
ret = sde_encoder_helper_wait_for_irq(phys_enc, INTR_IDX_CTL_START,
&wait_info);
if (ret == -ETIMEDOUT) {
- SDE_ERROR_CMDENC(cmd_enc, "ctl start interrupt wait failed\n");
- ret = -EINVAL;
- } else if (!ret)
- ret = 0;
+ struct sde_hw_ctl *ctl = phys_enc->hw_ctl;
+
+ if (ctl && ctl->ops.get_start_state)
+ frame_pending = ctl->ops.get_start_state(ctl);
+
+ if (frame_pending)
+ SDE_ERROR_CMDENC(cmd_enc,
+ "ctl start interrupt wait failed\n");
+ else
+ ret = 0;
+ }
return ret;
}
@@ -1294,6 +1318,7 @@ static void sde_encoder_phys_cmd_init_ops(
ops->is_autorefresh_enabled =
sde_encoder_phys_cmd_is_autorefresh_enabled;
ops->get_line_count = sde_encoder_phys_cmd_get_line_count;
+ ops->get_wr_line_count = sde_encoder_phys_cmd_get_write_line_count;
ops->wait_for_active = NULL;
}
diff --git a/drivers/gpu/drm/msm/sde/sde_encoder_phys_vid.c b/drivers/gpu/drm/msm/sde/sde_encoder_phys_vid.c
index 47aa5e9..aaf50f6 100644
--- a/drivers/gpu/drm/msm/sde/sde_encoder_phys_vid.c
+++ b/drivers/gpu/drm/msm/sde/sde_encoder_phys_vid.c
@@ -823,19 +823,9 @@ static int sde_encoder_phys_vid_prepare_for_kickoff(
if (vid_enc->error_count >= KICKOFF_MAX_ERRORS) {
vid_enc->error_count = KICKOFF_MAX_ERRORS;
- sde_encoder_helper_unregister_irq(
- phys_enc, INTR_IDX_VSYNC);
SDE_DBG_DUMP("panic");
- sde_encoder_helper_register_irq(
- phys_enc, INTR_IDX_VSYNC);
} else if (vid_enc->error_count == 1) {
SDE_EVT32(DRMID(phys_enc->parent), SDE_EVTLOG_FATAL);
-
- sde_encoder_helper_unregister_irq(
- phys_enc, INTR_IDX_VSYNC);
- SDE_DBG_DUMP("all", "dbg_bus", "vbif_dbg_bus");
- sde_encoder_helper_register_irq(
- phys_enc, INTR_IDX_VSYNC);
}
/* request a ctl reset before the next flush */
@@ -1111,6 +1101,7 @@ static void sde_encoder_phys_vid_init_ops(struct sde_encoder_phys_ops *ops)
ops->trigger_flush = sde_encoder_helper_trigger_flush;
ops->hw_reset = sde_encoder_helper_hw_reset;
ops->get_line_count = sde_encoder_phys_vid_get_line_count;
+ ops->get_wr_line_count = sde_encoder_phys_vid_get_line_count;
ops->wait_dma_trigger = sde_encoder_phys_vid_wait_dma_trigger;
ops->wait_for_active = sde_encoder_phys_vid_wait_for_active;
}
diff --git a/drivers/gpu/drm/msm/sde/sde_encoder_phys_wb.c b/drivers/gpu/drm/msm/sde/sde_encoder_phys_wb.c
index bf7d3da..42cf015 100644
--- a/drivers/gpu/drm/msm/sde/sde_encoder_phys_wb.c
+++ b/drivers/gpu/drm/msm/sde/sde_encoder_phys_wb.c
@@ -27,7 +27,8 @@
#define to_sde_encoder_phys_wb(x) \
container_of(x, struct sde_encoder_phys_wb, base)
-#define WBID(wb_enc) ((wb_enc) ? wb_enc->wb_dev->wb_idx : -1)
+#define WBID(wb_enc) \
+ ((wb_enc && wb_enc->wb_dev) ? wb_enc->wb_dev->wb_idx - WB_0 : -1)
#define TO_S15D16(_x_) ((_x_) << 7)
@@ -867,11 +868,11 @@ static int sde_encoder_phys_wb_wait_for_commit_done(
wb_enc->irq_idx, true);
if (irq_status) {
SDE_DEBUG("wb:%d done but irq not triggered\n",
- wb_enc->wb_dev->wb_idx - WB_0);
+ WBID(wb_enc));
sde_encoder_phys_wb_done_irq(wb_enc, wb_enc->irq_idx);
} else {
SDE_ERROR("wb:%d kickoff timed out\n",
- wb_enc->wb_dev->wb_idx - WB_0);
+ WBID(wb_enc));
atomic_add_unless(
&phys_enc->pending_retire_fence_cnt, -1, 0);
@@ -904,8 +905,7 @@ static int sde_encoder_phys_wb_wait_for_commit_done(
if (!rc) {
wb_time = (u64)ktime_to_us(wb_enc->end_time) -
(u64)ktime_to_us(wb_enc->start_time);
- SDE_DEBUG("wb:%d took %llu us\n",
- wb_enc->wb_dev->wb_idx - WB_0, wb_time);
+ SDE_DEBUG("wb:%d took %llu us\n", WBID(wb_enc), wb_time);
}
/* cleanup writeback framebuffer */
diff --git a/drivers/gpu/drm/msm/sde/sde_hw_ctl.c b/drivers/gpu/drm/msm/sde/sde_hw_ctl.c
index 8d68f9b..426ecf1 100644
--- a/drivers/gpu/drm/msm/sde/sde_hw_ctl.c
+++ b/drivers/gpu/drm/msm/sde/sde_hw_ctl.c
@@ -124,6 +124,11 @@ static inline void sde_hw_ctl_trigger_start(struct sde_hw_ctl *ctx)
SDE_REG_WRITE(&ctx->hw, CTL_START, 0x1);
}
+static inline int sde_hw_ctl_get_start_state(struct sde_hw_ctl *ctx)
+{
+ return SDE_REG_READ(&ctx->hw, CTL_START);
+}
+
static inline void sde_hw_ctl_trigger_pending(struct sde_hw_ctl *ctx)
{
SDE_REG_WRITE(&ctx->hw, CTL_PREPARE, 0x1);
@@ -649,6 +654,7 @@ static void _setup_ctl_ops(struct sde_hw_ctl_ops *ops,
ops->get_bitmask_cdm = sde_hw_ctl_get_bitmask_cdm;
ops->get_bitmask_wb = sde_hw_ctl_get_bitmask_wb;
ops->reg_dma_flush = sde_hw_reg_dma_flush;
+ ops->get_start_state = sde_hw_ctl_get_start_state;
if (cap & BIT(SDE_CTL_SBUF)) {
ops->get_bitmask_rot = sde_hw_ctl_get_bitmask_rot;
diff --git a/drivers/gpu/drm/msm/sde/sde_hw_ctl.h b/drivers/gpu/drm/msm/sde/sde_hw_ctl.h
index 255337f..f8594da 100644
--- a/drivers/gpu/drm/msm/sde/sde_hw_ctl.h
+++ b/drivers/gpu/drm/msm/sde/sde_hw_ctl.h
@@ -225,6 +225,12 @@ struct sde_hw_ctl_ops {
*/
void (*reg_dma_flush)(struct sde_hw_ctl *ctx, bool blocking);
+ /**
+ * get_start_state - checks the ctl start trigger state to confirm
+ * whether a frame is still pending
+ * @ctx : ctl path ctx pointer
+ */
+ int (*get_start_state)(struct sde_hw_ctl *ctx);
};
/**
diff --git a/drivers/gpu/drm/msm/sde/sde_hw_sspp.c b/drivers/gpu/drm/msm/sde/sde_hw_sspp.c
index acecf1a..e7aa6ea 100644
--- a/drivers/gpu/drm/msm/sde/sde_hw_sspp.c
+++ b/drivers/gpu/drm/msm/sde/sde_hw_sspp.c
@@ -259,7 +259,8 @@ static void _sspp_setup_csc10_opmode(struct sde_hw_pipe *ctx,
* Setup source pixel format, flip,
*/
static void sde_hw_sspp_setup_format(struct sde_hw_pipe *ctx,
- const struct sde_format *fmt, u32 flags,
+ const struct sde_format *fmt,
+ bool blend_enabled, u32 flags,
enum sde_sspp_multirect_index rect_mode)
{
struct sde_hw_blk_reg_map *c;
@@ -328,7 +329,8 @@ static void sde_hw_sspp_setup_format(struct sde_hw_pipe *ctx,
SDE_FETCH_CONFIG_RESET_VALUE |
ctx->mdp->highest_bank_bit << 18);
if (IS_UBWC_20_SUPPORTED(ctx->catalog->ubwc_version)) {
- fast_clear = fmt->alpha_enable ? BIT(31) : 0;
+ fast_clear = (fmt->alpha_enable && blend_enabled) ?
+ BIT(31) : 0;
SDE_REG_WRITE(c, SSPP_UBWC_STATIC_CTRL,
fast_clear | (ctx->mdp->ubwc_swizzle) |
(ctx->mdp->highest_bank_bit << 4));
diff --git a/drivers/gpu/drm/msm/sde/sde_hw_sspp.h b/drivers/gpu/drm/msm/sde/sde_hw_sspp.h
index d32c9d8..fdf215f 100644
--- a/drivers/gpu/drm/msm/sde/sde_hw_sspp.h
+++ b/drivers/gpu/drm/msm/sde/sde_hw_sspp.h
@@ -293,12 +293,14 @@ struct sde_hw_sspp_ops {
/**
* setup_format - setup pixel format cropping rectangle, flip
* @ctx: Pointer to pipe context
- * @cfg: Pointer to pipe config structure
+ * @fmt: Pointer to sde_format structure
+ * @blend_enabled: flag indicating blend enabled or disabled on plane
* @flags: Extra flags for format config
* @index: rectangle index in multirect
*/
void (*setup_format)(struct sde_hw_pipe *ctx,
- const struct sde_format *fmt, u32 flags,
+ const struct sde_format *fmt,
+ bool blend_enabled, u32 flags,
enum sde_sspp_multirect_index index);
/**
diff --git a/drivers/gpu/drm/msm/sde/sde_kms.c b/drivers/gpu/drm/msm/sde/sde_kms.c
index 2acbb0c..a54e39a9 100644
--- a/drivers/gpu/drm/msm/sde/sde_kms.c
+++ b/drivers/gpu/drm/msm/sde/sde_kms.c
@@ -103,6 +103,7 @@ static int _sde_danger_signal_status(struct seq_file *s,
struct msm_drm_private *priv;
struct sde_danger_safe_status status;
int i;
+ int rc;
if (!kms || !kms->dev || !kms->dev->dev_private || !kms->hw_mdp) {
SDE_ERROR("invalid arg(s)\n");
@@ -112,7 +113,13 @@ static int _sde_danger_signal_status(struct seq_file *s,
priv = kms->dev->dev_private;
memset(&status, 0, sizeof(struct sde_danger_safe_status));
- sde_power_resource_enable(&priv->phandle, kms->core_client, true);
+ rc = sde_power_resource_enable(&priv->phandle, kms->core_client, true);
+ if (rc) {
+ SDE_ERROR("failed to enable power resource %d\n", rc);
+ SDE_EVT32(rc, SDE_EVTLOG_ERROR);
+ return rc;
+ }
+
if (danger_status) {
seq_puts(s, "\nDanger signal status:\n");
if (kms->hw_mdp->ops.get_danger_status)
@@ -541,7 +548,13 @@ static void sde_kms_prepare_commit(struct msm_kms *kms,
return;
priv = dev->dev_private;
- sde_power_resource_enable(&priv->phandle, sde_kms->core_client, true);
+ rc = sde_power_resource_enable(&priv->phandle, sde_kms->core_client,
+ true);
+ if (rc) {
+ SDE_ERROR("failed to enable power resource %d\n", rc);
+ SDE_EVT32(rc, SDE_EVTLOG_ERROR);
+ return;
+ }
for_each_crtc_in_state(state, crtc, crtc_state, i) {
list_for_each_entry(encoder, &dev->mode_config.encoder_list,
@@ -587,10 +600,20 @@ static void sde_kms_prepare_commit(struct msm_kms *kms,
static void sde_kms_commit(struct msm_kms *kms,
struct drm_atomic_state *old_state)
{
+ struct sde_kms *sde_kms;
struct drm_crtc *crtc;
struct drm_crtc_state *old_crtc_state;
int i;
+ if (!kms || !old_state)
+ return;
+ sde_kms = to_sde_kms(kms);
+
+ if (!sde_kms_power_resource_is_enabled(sde_kms->dev)) {
+ SDE_ERROR("power resource is not enabled\n");
+ return;
+ }
+
for_each_crtc_in_state(old_state, crtc, old_crtc_state, i) {
if (crtc->state->active) {
SDE_EVT32(DRMID(crtc));
@@ -618,6 +641,11 @@ static void sde_kms_complete_commit(struct msm_kms *kms,
return;
priv = sde_kms->dev->dev_private;
+ if (!sde_kms_power_resource_is_enabled(sde_kms->dev)) {
+ SDE_ERROR("power resource is not enabled\n");
+ return;
+ }
+
for_each_crtc_in_state(old_state, crtc, old_crtc_state, i)
sde_crtc_complete_commit(crtc, old_crtc_state);
@@ -829,7 +857,7 @@ static int _sde_kms_setup_displays(struct drm_device *dev,
struct sde_kms *sde_kms)
{
static const struct sde_connector_ops dsi_ops = {
- .post_init = dsi_conn_post_init,
+ .set_info_blob = dsi_conn_set_info_blob,
.detect = dsi_conn_detect,
.get_modes = dsi_connector_get_modes,
.put_modes = dsi_connector_put_modes,
@@ -848,6 +876,7 @@ static int _sde_kms_setup_displays(struct drm_device *dev,
};
static const struct sde_connector_ops wb_ops = {
.post_init = sde_wb_connector_post_init,
+ .set_info_blob = sde_wb_connector_set_info_blob,
.detect = sde_wb_connector_detect,
.get_modes = sde_wb_connector_get_modes,
.set_property = sde_wb_connector_set_property,
@@ -864,7 +893,7 @@ static int _sde_kms_setup_displays(struct drm_device *dev,
.mode_valid = dp_connector_mode_valid,
.get_info = dp_connector_get_info,
.get_mode_info = dp_connector_get_mode_info,
- .send_hpd_event = dp_connector_send_hpd_event,
+ .post_open = dp_connector_post_open,
.check_status = NULL,
.pre_kickoff = dp_connector_pre_kickoff,
};
@@ -1974,7 +2003,7 @@ static int sde_kms_check_secure_transition(struct msm_kms *kms,
} else if (global_crtc && (global_crtc != cur_crtc)) {
SDE_ERROR(
"crtc%d-sec%d not allowed during crtc%d-sec%d\n",
- cur_crtc->base.id, sec_session,
+ cur_crtc ? cur_crtc->base.id : -1, sec_session,
global_crtc->base.id, global_sec_session);
return -EPERM;
}
@@ -2071,8 +2100,8 @@ static void _sde_kms_post_open(struct msm_kms *kms, struct drm_file *file)
sde_conn = to_sde_connector(connector);
- if (sde_conn->ops.send_hpd_event)
- sde_conn->ops.send_hpd_event(sde_conn->display);
+ if (sde_conn->ops.post_open)
+ sde_conn->ops.post_open(sde_conn->display);
}
mutex_unlock(&dev->mode_config.mutex);
@@ -2084,7 +2113,6 @@ static int _sde_kms_gen_drm_mode(struct sde_kms *sde_kms,
{
struct dsi_display_mode *modes = NULL;
u32 count = 0;
- u32 size = 0;
int rc = 0;
rc = dsi_display_get_mode_count(display, &count);
@@ -2094,18 +2122,12 @@ static int _sde_kms_gen_drm_mode(struct sde_kms *sde_kms,
}
SDE_DEBUG("num of modes = %d\n", count);
- size = count * sizeof(*modes);
- modes = kzalloc(size, GFP_KERNEL);
- if (!modes) {
- count = 0;
- goto end;
- }
- rc = dsi_display_get_modes(display, modes);
+ rc = dsi_display_get_modes(display, &modes);
if (rc) {
SDE_ERROR("failed to get modes, rc=%d\n", rc);
count = 0;
- goto error;
+ return rc;
}
/* TODO; currently consider modes[0] as the preferred mode */
@@ -2115,9 +2137,6 @@ static int _sde_kms_gen_drm_mode(struct sde_kms *sde_kms,
drm_mode->hdisplay, drm_mode->vdisplay);
drm_mode_set_name(drm_mode);
drm_mode_set_crtcinfo(drm_mode, 0);
-error:
- kfree(modes);
-end:
return rc;
}
@@ -2470,41 +2489,6 @@ static int _sde_kms_mmu_init(struct sde_kms *sde_kms)
return ret;
}
-static void _sde_kms_pm_qos_add_request(struct sde_kms *sde_kms)
-{
- struct pm_qos_request *req;
- u32 cpu_mask;
- u32 cpu_dma_latency;
- int cpu;
-
- if (!sde_kms || !sde_kms->catalog)
- return;
-
- cpu_mask = sde_kms->catalog->perf.cpu_mask;
- cpu_dma_latency = sde_kms->catalog->perf.cpu_dma_latency;
- if (!cpu_mask)
- return;
-
- req = &sde_kms->pm_qos_cpu_req;
- req->type = PM_QOS_REQ_AFFINE_CORES;
- cpumask_empty(&req->cpus_affine);
- for_each_possible_cpu(cpu) {
- if ((1 << cpu) & cpu_mask)
- cpumask_set_cpu(cpu, &req->cpus_affine);
- }
- pm_qos_add_request(req, PM_QOS_CPU_DMA_LATENCY, cpu_dma_latency);
-
- SDE_EVT32_VERBOSE(cpu_mask, cpu_dma_latency);
-}
-
-static void _sde_kms_pm_qos_remove_request(struct sde_kms *sde_kms)
-{
- if (!sde_kms || !sde_kms->catalog || !sde_kms->catalog->perf.cpu_mask)
- return;
-
- pm_qos_remove_request(&sde_kms->pm_qos_cpu_req);
-}
-
/* the caller api needs to turn on clock before calling this function */
static int _sde_kms_cont_splash_res_init(struct sde_kms *sde_kms)
{
@@ -2582,9 +2566,7 @@ static void sde_kms_handle_power_event(u32 event_type, void *usr)
if (event_type == SDE_POWER_EVENT_POST_ENABLE) {
sde_irq_update(msm_kms, true);
sde_vbif_init_memtypes(sde_kms);
- _sde_kms_pm_qos_add_request(sde_kms);
} else if (event_type == SDE_POWER_EVENT_PRE_DISABLE) {
- _sde_kms_pm_qos_remove_request(sde_kms);
sde_irq_update(msm_kms, false);
}
}
@@ -2681,6 +2663,7 @@ static int sde_kms_hw_init(struct msm_kms *kms)
struct sde_kms *sde_kms;
struct drm_device *dev;
struct msm_drm_private *priv;
+ bool splash_mem_found = false;
int i, rc = -EINVAL;
if (!kms) {
@@ -2775,8 +2758,10 @@ static int sde_kms_hw_init(struct msm_kms *kms)
rc = _sde_kms_get_splash_data(&sde_kms->splash_data);
if (rc) {
- SDE_ERROR("sde splash data fetch failed: %d\n", rc);
- goto error;
+ SDE_DEBUG("sde splash data fetch failed: %d\n", rc);
+ splash_mem_found = false;
+ } else {
+ splash_mem_found = true;
}
rc = sde_power_resource_enable(&priv->phandle, sde_kms->core_client,
@@ -2802,7 +2787,12 @@ static int sde_kms_hw_init(struct msm_kms *kms)
sde_dbg_init_dbg_buses(sde_kms->core_rev);
- _sde_kms_cont_splash_res_init(sde_kms);
+ /*
+ * Attempt continuous splash handoff only if reserved
+ * splash memory is found.
+ */
+ if (splash_mem_found)
+ _sde_kms_cont_splash_res_init(sde_kms);
/* Initialize reg dma block which is a singleton */
rc = sde_reg_dma_init(sde_kms->reg_dma, sde_kms->catalog,
diff --git a/drivers/gpu/drm/msm/sde/sde_kms.h b/drivers/gpu/drm/msm/sde/sde_kms.h
index 26c45e2..501797b 100644
--- a/drivers/gpu/drm/msm/sde/sde_kms.h
+++ b/drivers/gpu/drm/msm/sde/sde_kms.h
@@ -244,6 +244,23 @@ struct vsync_info {
bool sde_is_custom_client(void);
/**
+ * sde_kms_power_resource_is_enabled - whether or not power resource is enabled
+ * @dev: Pointer to drm device
+ * Return: true if power resource is enabled; false otherwise
+ */
+static inline bool sde_kms_power_resource_is_enabled(struct drm_device *dev)
+{
+ struct msm_drm_private *priv;
+
+ if (!dev || !dev->dev_private)
+ return false;
+
+ priv = dev->dev_private;
+
+ return sde_power_resource_is_enabled(&priv->phandle);
+}
+
+/**
* sde_kms_is_suspend_state - whether or not the system is pm suspended
* @dev: Pointer to drm device
* Return: Suspend status
diff --git a/drivers/gpu/drm/msm/sde/sde_plane.c b/drivers/gpu/drm/msm/sde/sde_plane.c
index e477462..9f27286 100644
--- a/drivers/gpu/drm/msm/sde/sde_plane.c
+++ b/drivers/gpu/drm/msm/sde/sde_plane.c
@@ -58,6 +58,9 @@
#define SDE_PLANE_COLOR_FILL_FLAG BIT(31)
+#define TIME_MULTIPLEX_RECT(r0, r1, buffer_lines) \
+ ((r0).y >= ((r1).y + (r1).h + buffer_lines))
+
/* multirect rect index */
enum {
R0,
@@ -515,6 +518,7 @@ int sde_plane_danger_signal_ctrl(struct drm_plane *plane, bool enable)
struct sde_plane *psde;
struct msm_drm_private *priv;
struct sde_kms *sde_kms;
+ int rc;
if (!plane || !plane->dev) {
SDE_ERROR("invalid arguments\n");
@@ -533,7 +537,13 @@ int sde_plane_danger_signal_ctrl(struct drm_plane *plane, bool enable)
if (!psde->is_rt_pipe)
goto end;
- sde_power_resource_enable(&priv->phandle, sde_kms->core_client, true);
+ rc = sde_power_resource_enable(&priv->phandle, sde_kms->core_client,
+ true);
+ if (rc) {
+ SDE_ERROR("failed to enable power resource %d\n", rc);
+ SDE_EVT32(rc, SDE_EVTLOG_ERROR);
+ return rc;
+ }
_sde_plane_set_qos_ctrl(plane, enable, SDE_PLANE_QOS_PANIC_CTRL);
@@ -1451,6 +1461,7 @@ static int _sde_plane_color_fill(struct sde_plane *psde,
const struct sde_format *fmt;
const struct drm_plane *plane;
struct sde_plane_state *pstate;
+ bool blend_enable = true;
if (!psde || !psde->base.state) {
SDE_ERROR("invalid plane\n");
@@ -1473,6 +1484,9 @@ static int _sde_plane_color_fill(struct sde_plane *psde,
*/
fmt = sde_get_sde_format(DRM_FORMAT_ABGR8888);
+ blend_enable = (SDE_DRM_BLEND_OP_OPAQUE !=
+ sde_plane_get_property(pstate, PLANE_PROP_BLEND_OP));
+
/* update sspp */
if (fmt && psde->pipe_hw->ops.setup_solidfill) {
psde->pipe_hw->ops.setup_solidfill(psde->pipe_hw,
@@ -1488,7 +1502,7 @@ static int _sde_plane_color_fill(struct sde_plane *psde,
if (psde->pipe_hw->ops.setup_format)
psde->pipe_hw->ops.setup_format(psde->pipe_hw,
- fmt, SDE_SSPP_SOLID_FILL,
+ fmt, blend_enable, SDE_SSPP_SOLID_FILL,
pstate->multirect_index);
if (psde->pipe_hw->ops.setup_rects)
@@ -2776,6 +2790,12 @@ void sde_plane_clear_multirect(const struct drm_plane_state *drm_state)
pstate->multirect_mode = SDE_SSPP_MULTIRECT_NONE;
}
+/*
+ * The multirect validate API only validates the R0 and R1 rectangles
+ * passed for each plane. Callers must not pass multiple planes that do
+ * not share the same XIN client; such calls fail even when the caller
+ * passes an otherwise valid multirect configuration.
+ */
int sde_plane_validate_multirect_v2(struct sde_multirect_plane_states *plane)
{
struct sde_plane_state *pstate[R_MAX];
@@ -2783,37 +2803,44 @@ int sde_plane_validate_multirect_v2(struct sde_multirect_plane_states *plane)
struct sde_rect src[R_MAX], dst[R_MAX];
struct sde_plane *sde_plane[R_MAX];
const struct sde_format *fmt[R_MAX];
+ int xin_id[R_MAX];
bool q16_data = true;
- int i, buffer_lines;
+ int i, j, buffer_lines, width_threshold[R_MAX];
unsigned int max_tile_height = 1;
bool parallel_fetch_qualified = true;
- bool has_tiled_rect = false;
+ enum sde_sspp_multirect_mode mode = SDE_SSPP_MULTIRECT_NONE;
+ const struct msm_format *msm_fmt;
for (i = 0; i < R_MAX; i++) {
- const struct msm_format *msm_fmt;
-
drm_state[i] = i ? plane->r1 : plane->r0;
- msm_fmt = msm_framebuffer_format(drm_state[i]->fb);
- fmt[i] = to_sde_format(msm_fmt);
-
- if (SDE_FORMAT_IS_UBWC(fmt[i])) {
- has_tiled_rect = true;
- if (fmt[i]->tile_height > max_tile_height)
- max_tile_height = fmt[i]->tile_height;
+ if (!drm_state[i]) {
+ SDE_ERROR("drm plane state is NULL\n");
+ return -EINVAL;
}
- }
-
- for (i = 0; i < R_MAX; i++) {
- int width_threshold;
pstate[i] = to_sde_plane_state(drm_state[i]);
sde_plane[i] = to_sde_plane(drm_state[i]->plane);
+ xin_id[i] = sde_plane[i]->pipe_hw->cap->xin_id;
- if (pstate[i] == NULL) {
- SDE_ERROR("SDE plane state of plane id %d is NULL\n",
- drm_state[i]->plane->base.id);
+ for (j = 0; j < i; j++) {
+ if (xin_id[i] != xin_id[j]) {
+ SDE_ERROR_PLANE(sde_plane[i],
+ "invalid multirect validate call base:%d xin_id:%d curr:%d xin:%d\n",
+ j, xin_id[j], i, xin_id[i]);
+ return -EINVAL;
+ }
+ }
+
+ msm_fmt = msm_framebuffer_format(drm_state[i]->fb);
+ if (!msm_fmt) {
+ SDE_ERROR_PLANE(sde_plane[i], "null fb\n");
return -EINVAL;
}
+ fmt[i] = to_sde_format(msm_fmt);
+
+ if (SDE_FORMAT_IS_UBWC(fmt[i]) &&
+ (fmt[i]->tile_height > max_tile_height))
+ max_tile_height = fmt[i]->tile_height;
POPULATE_RECT(&src[i], drm_state[i]->src_x, drm_state[i]->src_y,
drm_state[i]->src_w, drm_state[i]->src_h, q16_data);
@@ -2840,41 +2867,81 @@ int sde_plane_validate_multirect_v2(struct sde_multirect_plane_states *plane)
* So we cannot support more than half of the supported SSPP
* width for tiled formats.
*/
- width_threshold = sde_plane[i]->pipe_sblk->maxlinewidth;
- if (has_tiled_rect)
- width_threshold /= 2;
+ width_threshold[i] = sde_plane[i]->pipe_sblk->maxlinewidth;
+ if (SDE_FORMAT_IS_UBWC(fmt[i]))
+ width_threshold[i] /= 2;
- if (parallel_fetch_qualified && src[i].w > width_threshold)
+ if (parallel_fetch_qualified && src[i].w > width_threshold[i])
parallel_fetch_qualified = false;
+ if (sde_plane[i]->is_virtual)
+ mode = sde_plane_get_property(pstate[i],
+ PLANE_PROP_MULTIRECT_MODE);
}
- /* Validate RECT's and set the mode */
-
- /* Prefer PARALLEL FETCH Mode over TIME_MX Mode */
- if (parallel_fetch_qualified) {
- pstate[R0]->multirect_mode = SDE_SSPP_MULTIRECT_PARALLEL;
- pstate[R1]->multirect_mode = SDE_SSPP_MULTIRECT_PARALLEL;
-
- goto done;
- }
-
- /* TIME_MX Mode */
buffer_lines = 2 * max_tile_height;
- if ((dst[R1].y >= dst[R0].y + dst[R0].h + buffer_lines) ||
- (dst[R0].y >= dst[R1].y + dst[R1].h + buffer_lines)) {
- pstate[R0]->multirect_mode = SDE_SSPP_MULTIRECT_TIME_MX;
- pstate[R1]->multirect_mode = SDE_SSPP_MULTIRECT_TIME_MX;
- } else {
- SDE_ERROR(
- "No multirect mode possible for the planes (%d - %d)\n",
- drm_state[R0]->plane->base.id,
- drm_state[R1]->plane->base.id);
- return -EINVAL;
+	/*
+	 * Fall back to the driver's mode-selection logic when the client
+	 * uses a multirect plane without setting the property; otherwise,
+	 * validate the requested multirect mode against the rectangles.
+	 */
+ switch (mode) {
+ case SDE_SSPP_MULTIRECT_NONE:
+ if (parallel_fetch_qualified)
+ mode = SDE_SSPP_MULTIRECT_PARALLEL;
+ else if (TIME_MULTIPLEX_RECT(dst[R1], dst[R0], buffer_lines) ||
+ TIME_MULTIPLEX_RECT(dst[R0], dst[R1], buffer_lines))
+ mode = SDE_SSPP_MULTIRECT_TIME_MX;
+ else
+ SDE_ERROR(
+ "planes(%d - %d) multirect mode selection fail\n",
+ drm_state[R0]->plane->base.id,
+ drm_state[R1]->plane->base.id);
+ break;
+
+ case SDE_SSPP_MULTIRECT_PARALLEL:
+ if (!parallel_fetch_qualified) {
+ SDE_ERROR("R0 plane:%d width_threshold:%d src_w:%d\n",
+ drm_state[R0]->plane->base.id,
+ width_threshold[R0], src[R0].w);
+ SDE_ERROR("R1 plane:%d width_threshold:%d src_w:%d\n",
+ drm_state[R1]->plane->base.id,
+ width_threshold[R1], src[R1].w);
+ SDE_ERROR("parallel fetch not qualified\n");
+ mode = SDE_SSPP_MULTIRECT_NONE;
+ }
+ break;
+
+ case SDE_SSPP_MULTIRECT_TIME_MX:
+ if (!TIME_MULTIPLEX_RECT(dst[R1], dst[R0], buffer_lines) &&
+ !TIME_MULTIPLEX_RECT(dst[R0], dst[R1], buffer_lines)) {
+ SDE_ERROR(
+ "buffer_lines:%d R0 plane:%d dst_y:%d dst_h:%d\n",
+ buffer_lines, drm_state[R0]->plane->base.id,
+ dst[R0].y, dst[R0].h);
+ SDE_ERROR(
+ "buffer_lines:%d R1 plane:%d dst_y:%d dst_h:%d\n",
+ buffer_lines, drm_state[R1]->plane->base.id,
+ dst[R1].y, dst[R1].h);
+ SDE_ERROR("time multiplexed fetch not qualified\n");
+ mode = SDE_SSPP_MULTIRECT_NONE;
+ }
+ break;
+
+ default:
+ SDE_ERROR("bad mode:%d selection\n", mode);
+ mode = SDE_SSPP_MULTIRECT_NONE;
+ break;
}
-done:
+ for (i = 0; i < R_MAX; i++)
+ pstate[i]->multirect_mode = mode;
+
+ if (mode == SDE_SSPP_MULTIRECT_NONE)
+ return -EINVAL;
+
if (sde_plane[R0]->is_virtual) {
pstate[R0]->multirect_index = SDE_SSPP_RECT_1;
pstate[R1]->multirect_index = SDE_SSPP_RECT_0;
@@ -2887,6 +2954,51 @@ int sde_plane_validate_multirect_v2(struct sde_multirect_plane_states *plane)
pstate[R0]->multirect_mode, pstate[R0]->multirect_index);
SDE_DEBUG_PLANE(sde_plane[R1], "R1: %d - %d\n",
pstate[R1]->multirect_mode, pstate[R1]->multirect_index);
+
+ return 0;
+}
+
+int sde_plane_confirm_hw_rsvps(struct drm_plane *plane,
+ const struct drm_plane_state *state)
+{
+ struct drm_crtc_state *cstate;
+ struct sde_plane_state *pstate;
+ struct sde_plane_rot_state *rstate;
+ struct sde_hw_blk *hw_blk;
+
+ if (!plane || !state) {
+ SDE_ERROR("invalid plane/state\n");
+ return -EINVAL;
+ }
+
+ pstate = to_sde_plane_state(state);
+ rstate = &pstate->rot;
+
+ /* cstate will be null if crtc is disconnected from plane */
+ cstate = _sde_plane_get_crtc_state((struct drm_plane_state *)state);
+ if (IS_ERR_OR_NULL(cstate)) {
+ SDE_ERROR("invalid crtc state\n");
+ return -EINVAL;
+ }
+
+ if (sde_plane_enabled((struct drm_plane_state *)state) &&
+ rstate->out_sbuf) {
+ SDE_DEBUG("plane%d.%d acquire rotator, fb %d\n",
+ plane->base.id, rstate->sequence_id,
+ state->fb ? state->fb->base.id : -1);
+
+ hw_blk = sde_crtc_res_get(cstate, SDE_HW_BLK_ROT,
+ (u64) state->fb);
+ if (!hw_blk) {
+ SDE_ERROR("plane%d.%d no available rotator, fb %d\n",
+ plane->base.id, rstate->sequence_id,
+ state->fb ? state->fb->base.id : -1);
+ SDE_EVT32(DRMID(plane), rstate->sequence_id,
+ state->fb ? state->fb->base.id : -1,
+ SDE_EVTLOG_ERROR);
+ return -EINVAL;
+ }
+ }
return 0;
}
@@ -3525,8 +3637,8 @@ static int sde_plane_sspp_atomic_update(struct drm_plane *plane,
struct drm_crtc *crtc;
struct drm_framebuffer *fb;
struct sde_rect src, dst;
- const struct sde_rect *crtc_roi;
bool q16_data = true;
+ bool blend_enabled = true;
int idx;
if (!plane) {
@@ -3597,6 +3709,7 @@ static int sde_plane_sspp_atomic_update(struct drm_plane *plane,
case PLANE_PROP_CSC_V1:
pstate->dirty |= SDE_PLANE_DIRTY_FORMAT;
break;
+ case PLANE_PROP_MULTIRECT_MODE:
case PLANE_PROP_COLOR_FILL:
/* potentially need to refresh everything */
pstate->dirty = SDE_PLANE_DIRTY_ALL;
@@ -3642,9 +3755,8 @@ static int sde_plane_sspp_atomic_update(struct drm_plane *plane,
_sde_plane_sspp_atomic_check_mode_changed(psde, state,
old_state);
- /* re-program the output rects always in the case of partial update */
- sde_crtc_get_crtc_roi(crtc->state, &crtc_roi);
- if (!sde_kms_rect_is_null(crtc_roi))
+ /* re-program the output rects always if partial update roi changed */
+ if (sde_crtc_is_crtc_roi_dirty(crtc->state))
pstate->dirty |= SDE_PLANE_DIRTY_RECTS;
if (pstate->dirty & SDE_PLANE_DIRTY_RECTS)
@@ -3677,6 +3789,8 @@ static int sde_plane_sspp_atomic_update(struct drm_plane *plane,
/* update roi config */
if (pstate->dirty & SDE_PLANE_DIRTY_RECTS) {
+ const struct sde_rect *crtc_roi;
+
POPULATE_RECT(&src, rstate->out_src_x, rstate->out_src_y,
rstate->out_src_w, rstate->out_src_h, q16_data);
POPULATE_RECT(&dst, state->crtc_x, state->crtc_y,
@@ -3703,6 +3817,7 @@ static int sde_plane_sspp_atomic_update(struct drm_plane *plane,
* adjust layer mixer position of the sspp in the presence
* of a partial update to the active lm origin
*/
+ sde_crtc_get_crtc_roi(crtc->state, &crtc_roi);
dst.x -= crtc_roi->x;
dst.y -= crtc_roi->y;
@@ -3761,8 +3876,12 @@ static int sde_plane_sspp_atomic_update(struct drm_plane *plane,
if (rstate->out_rotation & DRM_REFLECT_Y)
src_flags |= SDE_SSPP_FLIP_UD;
+ blend_enabled = (SDE_DRM_BLEND_OP_OPAQUE !=
+ sde_plane_get_property(pstate, PLANE_PROP_BLEND_OP));
+
/* update format */
- psde->pipe_hw->ops.setup_format(psde->pipe_hw, fmt, src_flags,
+ psde->pipe_hw->ops.setup_format(psde->pipe_hw, fmt,
+ blend_enabled, src_flags,
pstate->multirect_index);
if (psde->pipe_hw->ops.setup_cdp) {
@@ -4013,6 +4132,11 @@ static void _sde_plane_install_properties(struct drm_plane *plane,
{SDE_DRM_FB_NON_SEC_DIR_TRANS, "non_sec_direct_translation"},
{SDE_DRM_FB_SEC_DIR_TRANS, "sec_direct_translation"},
};
+ static const struct drm_prop_enum_list e_multirect_mode[] = {
+ {SDE_SSPP_MULTIRECT_NONE, "none"},
+ {SDE_SSPP_MULTIRECT_PARALLEL, "parallel"},
+ {SDE_SSPP_MULTIRECT_TIME_MX, "serial"},
+ };
const struct sde_format_extended *format_list;
struct sde_kms_info *info;
struct sde_plane *psde = to_sde_plane(plane);
@@ -4162,6 +4286,10 @@ static void _sde_plane_install_properties(struct drm_plane *plane,
format_list = psde->pipe_sblk->virt_format_list;
sde_kms_info_add_keyint(info, "primary_smart_plane_id",
master_plane_id);
+ msm_property_install_enum(&psde->property_info,
+ "multirect_mode", 0x0, 0, e_multirect_mode,
+ ARRAY_SIZE(e_multirect_mode),
+ PLANE_PROP_MULTIRECT_MODE);
}
if (format_list) {
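The multirect validation rework in sde_plane.c above prefers parallel fetch and falls back to time multiplexing, where TIME_MULTIPLEX_RECT requires one destination rect to begin a guard band of buffer_lines after the other ends. A standalone sketch of that fallback policy (types and names are simplified stand-ins for the driver's, not its real API):

```c
#include <assert.h>
#include <stdbool.h>

enum mr_mode { MR_NONE, MR_PARALLEL, MR_TIME_MX };

struct rect { int y, h; };

/* mirrors TIME_MULTIPLEX_RECT(r0, r1, buffer_lines) from the patch:
 * r0 must start after r1 ends, plus a guard band of buffer_lines */
static bool time_mx_ok(struct rect r0, struct rect r1, int buffer_lines)
{
    return r0.y >= r1.y + r1.h + buffer_lines;
}

/* driver fallback policy when no mode property is set: parallel fetch
 * if both rects fit the width threshold, otherwise time multiplex if
 * the rects are vertically separated (buffer_lines is derived from
 * 2 * max_tile_height in the patch) */
static enum mr_mode select_mode(struct rect d0, struct rect d1,
                                bool parallel_qualified, int buffer_lines)
{
    if (parallel_qualified)
        return MR_PARALLEL;
    if (time_mx_ok(d0, d1, buffer_lines) || time_mx_ok(d1, d0, buffer_lines))
        return MR_TIME_MX;
    return MR_NONE;
}
```

When a client explicitly requests a mode via the new multirect_mode property, the patch instead validates that request against the same two predicates and rejects the commit if neither holds.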
diff --git a/drivers/gpu/drm/msm/sde/sde_plane.h b/drivers/gpu/drm/msm/sde/sde_plane.h
index 5c1fff1..d1eb399 100644
--- a/drivers/gpu/drm/msm/sde/sde_plane.h
+++ b/drivers/gpu/drm/msm/sde/sde_plane.h
@@ -199,6 +199,15 @@ enum sde_sspp sde_plane_pipe(struct drm_plane *plane);
bool is_sde_plane_virtual(struct drm_plane *plane);
/**
+ * sde_plane_confirm_hw_rsvps - reserve an sbuf resource, if needed
+ * @plane: Pointer to DRM plane object
+ * @state: Pointer to plane state
+ * Returns: Zero on success
+ */
+int sde_plane_confirm_hw_rsvps(struct drm_plane *plane,
+ const struct drm_plane_state *state);
+
+/**
* sde_plane_get_ctl_flush - get control flush mask
* @plane: Pointer to DRM plane object
* @ctl: Pointer to control hardware
diff --git a/drivers/gpu/drm/msm/sde/sde_rm.c b/drivers/gpu/drm/msm/sde/sde_rm.c
index e78bd94..c2c1f75 100644
--- a/drivers/gpu/drm/msm/sde/sde_rm.c
+++ b/drivers/gpu/drm/msm/sde/sde_rm.c
@@ -175,6 +175,18 @@ void sde_rm_init_hw_iter(
iter->type = type;
}
+enum sde_rm_topology_name sde_rm_get_topology_name(
+ struct msm_display_topology topology)
+{
+ int i;
+
+ for (i = 0; i < SDE_RM_TOPOLOGY_MAX; i++)
+ if (RM_IS_TOPOLOGY_MATCH(g_top_table[i], topology))
+ return g_top_table[i].top_name;
+
+ return SDE_RM_TOPOLOGY_NONE;
+}
+
static bool _sde_rm_get_hw_locked(struct sde_rm *rm, struct sde_rm_hw_iter *i)
{
struct list_head *blk_list;
diff --git a/drivers/gpu/drm/msm/sde/sde_rm.h b/drivers/gpu/drm/msm/sde/sde_rm.h
index 3b9b82f..0545609 100644
--- a/drivers/gpu/drm/msm/sde/sde_rm.h
+++ b/drivers/gpu/drm/msm/sde/sde_rm.h
@@ -107,6 +107,15 @@ struct sde_rm_hw_iter {
};
/**
+ * sde_rm_get_topology_name - get the name of the given topology config
+ * @topology: msm_display_topology topology config
+ * Return: name of the given topology
+ */
+enum sde_rm_topology_name sde_rm_get_topology_name(
+ struct msm_display_topology topology);
+
+/**
* sde_rm_init - Read hardware catalog and create reservation tracking objects
* for all HW blocks.
* @rm: SDE Resource Manager handle
diff --git a/drivers/gpu/drm/msm/sde/sde_trace.h b/drivers/gpu/drm/msm/sde/sde_trace.h
index 19cda3e..ef4cdeb 100644
--- a/drivers/gpu/drm/msm/sde/sde_trace.h
+++ b/drivers/gpu/drm/msm/sde/sde_trace.h
@@ -125,6 +125,22 @@ TRACE_EVENT(sde_cmd_release_bw,
TP_printk("crtc:%d", __entry->crtc_id)
);
+TRACE_EVENT(sde_encoder_underrun,
+ TP_PROTO(u32 enc_id, u32 underrun_cnt),
+ TP_ARGS(enc_id, underrun_cnt),
+ TP_STRUCT__entry(
+ __field(u32, enc_id)
+ __field(u32, underrun_cnt)
+ ),
+ TP_fast_assign(
+ __entry->enc_id = enc_id;
+ __entry->underrun_cnt = underrun_cnt;
+ ),
+ TP_printk("enc:%d underrun_cnt:%d", __entry->enc_id,
+ __entry->underrun_cnt)
+);
+
TRACE_EVENT(tracing_mark_write,
TP_PROTO(int pid, const char *name, bool trace_begin),
TP_ARGS(pid, name, trace_begin),
diff --git a/drivers/gpu/drm/msm/sde/sde_vbif.c b/drivers/gpu/drm/msm/sde/sde_vbif.c
index 522f7f9..0dbc027 100644
--- a/drivers/gpu/drm/msm/sde/sde_vbif.c
+++ b/drivers/gpu/drm/msm/sde/sde_vbif.c
@@ -102,15 +102,6 @@ int sde_vbif_halt_plane_xin(struct sde_kms *sde_kms, u32 xin_id, u32 clk_ctrl)
"wait failed for pipe halt:xin_id %u, clk_ctrl %u, rc %u\n",
xin_id, clk_ctrl, rc);
SDE_EVT32(xin_id, clk_ctrl, rc, SDE_EVTLOG_ERROR);
- return rc;
- }
-
- status = vbif->ops.get_halt_ctrl(vbif, xin_id);
- if (status == 0) {
- SDE_ERROR("halt failed for pipe xin_id %u halt clk_ctrl %u\n",
- xin_id, clk_ctrl);
- SDE_EVT32(xin_id, clk_ctrl, SDE_EVTLOG_ERROR);
- return -ETIMEDOUT;
}
/* open xin client to enable transactions */
@@ -118,7 +109,7 @@ int sde_vbif_halt_plane_xin(struct sde_kms *sde_kms, u32 xin_id, u32 clk_ctrl)
if (forced_on)
mdp->ops.setup_clk_force_ctrl(mdp, clk_ctrl, false);
- return 0;
+ return rc;
}
/**
diff --git a/drivers/gpu/drm/msm/sde/sde_wb.c b/drivers/gpu/drm/msm/sde/sde_wb.c
index a4c8518..71c8b63 100644
--- a/drivers/gpu/drm/msm/sde/sde_wb.c
+++ b/drivers/gpu/drm/msm/sde/sde_wb.c
@@ -352,48 +352,20 @@ int sde_wb_get_mode_info(const struct drm_display_mode *drm_mode,
return 0;
}
-int sde_wb_connector_post_init(struct drm_connector *connector,
+int sde_wb_connector_set_info_blob(struct drm_connector *connector,
void *info, void *display, struct msm_mode_info *mode_info)
{
- struct sde_connector *c_conn;
struct sde_wb_device *wb_dev = display;
const struct sde_format_extended *format_list;
- static const struct drm_prop_enum_list e_fb_translation_mode[] = {
- {SDE_DRM_FB_NON_SEC, "non_sec"},
- {SDE_DRM_FB_SEC, "sec"},
- };
if (!connector || !info || !display || !wb_dev->wb_cfg) {
SDE_ERROR("invalid params\n");
return -EINVAL;
}
- c_conn = to_sde_connector(connector);
- wb_dev->connector = connector;
- wb_dev->detect_status = connector_status_connected;
format_list = wb_dev->wb_cfg->format_list;
/*
- * Add extra connector properties
- */
- msm_property_install_range(&c_conn->property_info, "FB_ID",
- 0x0, 0, ~0, 0, CONNECTOR_PROP_OUT_FB);
- msm_property_install_range(&c_conn->property_info, "DST_X",
- 0x0, 0, UINT_MAX, 0, CONNECTOR_PROP_DST_X);
- msm_property_install_range(&c_conn->property_info, "DST_Y",
- 0x0, 0, UINT_MAX, 0, CONNECTOR_PROP_DST_Y);
- msm_property_install_range(&c_conn->property_info, "DST_W",
- 0x0, 0, UINT_MAX, 0, CONNECTOR_PROP_DST_W);
- msm_property_install_range(&c_conn->property_info, "DST_H",
- 0x0, 0, UINT_MAX, 0, CONNECTOR_PROP_DST_H);
- msm_property_install_enum(&c_conn->property_info,
- "fb_translation_mode",
- 0x0,
- 0, e_fb_translation_mode,
- ARRAY_SIZE(e_fb_translation_mode),
- CONNECTOR_PROP_FB_TRANSLATION_MODE);
-
- /*
* Populate info buffer
*/
if (format_list) {
@@ -423,6 +395,47 @@ int sde_wb_connector_post_init(struct drm_connector *connector,
return 0;
}
+int sde_wb_connector_post_init(struct drm_connector *connector, void *display)
+{
+ struct sde_connector *c_conn;
+ struct sde_wb_device *wb_dev = display;
+ static const struct drm_prop_enum_list e_fb_translation_mode[] = {
+ {SDE_DRM_FB_NON_SEC, "non_sec"},
+ {SDE_DRM_FB_SEC, "sec"},
+ };
+
+ if (!connector || !display || !wb_dev->wb_cfg) {
+ SDE_ERROR("invalid params\n");
+ return -EINVAL;
+ }
+
+ c_conn = to_sde_connector(connector);
+ wb_dev->connector = connector;
+ wb_dev->detect_status = connector_status_connected;
+
+ /*
+ * Add extra connector properties
+ */
+ msm_property_install_range(&c_conn->property_info, "FB_ID",
+ 0x0, 0, ~0, 0, CONNECTOR_PROP_OUT_FB);
+ msm_property_install_range(&c_conn->property_info, "DST_X",
+ 0x0, 0, UINT_MAX, 0, CONNECTOR_PROP_DST_X);
+ msm_property_install_range(&c_conn->property_info, "DST_Y",
+ 0x0, 0, UINT_MAX, 0, CONNECTOR_PROP_DST_Y);
+ msm_property_install_range(&c_conn->property_info, "DST_W",
+ 0x0, 0, UINT_MAX, 0, CONNECTOR_PROP_DST_W);
+ msm_property_install_range(&c_conn->property_info, "DST_H",
+ 0x0, 0, UINT_MAX, 0, CONNECTOR_PROP_DST_H);
+ msm_property_install_enum(&c_conn->property_info,
+ "fb_translation_mode",
+ 0x0,
+ 0, e_fb_translation_mode,
+ ARRAY_SIZE(e_fb_translation_mode),
+ CONNECTOR_PROP_FB_TRANSLATION_MODE);
+
+ return 0;
+}
+
struct drm_framebuffer *sde_wb_get_output_fb(struct sde_wb_device *wb_dev)
{
struct drm_framebuffer *fb;
diff --git a/drivers/gpu/drm/msm/sde/sde_wb.h b/drivers/gpu/drm/msm/sde/sde_wb.h
index 5e31664..d414bd0 100644
--- a/drivers/gpu/drm/msm/sde/sde_wb.h
+++ b/drivers/gpu/drm/msm/sde/sde_wb.h
@@ -131,12 +131,20 @@ int sde_wb_config(struct drm_device *drm_dev, void *data,
/**
* sde_wb_connector_post_init - perform writeback specific initialization
* @connector: Pointer to drm connector structure
+ * @display: Pointer to private display structure
+ * Returns: Zero on success
+ */
+int sde_wb_connector_post_init(struct drm_connector *connector, void *display);
+
+/**
+ * sde_wb_connector_set_info_blob - perform writeback info blob initialization
+ * @connector: Pointer to drm connector structure
* @info: Pointer to connector info
* @display: Pointer to private display structure
* @mode_info: Pointer to the mode info structure
* Returns: Zero on success
*/
-int sde_wb_connector_post_init(struct drm_connector *connector,
+int sde_wb_connector_set_info_blob(struct drm_connector *connector,
void *info,
void *display,
struct msm_mode_info *mode_info);
diff --git a/drivers/gpu/drm/msm/sde_dbg.c b/drivers/gpu/drm/msm/sde_dbg.c
index 295e841..c34b198 100644
--- a/drivers/gpu/drm/msm/sde_dbg.c
+++ b/drivers/gpu/drm/msm/sde_dbg.c
@@ -68,6 +68,11 @@
#define REG_DUMP_ALIGN 16
#define RSC_DEBUG_MUX_SEL_SDM845 9
+
+#define DBG_CTRL_STOP_FTRACE BIT(0)
+#define DBG_CTRL_PANIC_UNDERRUN BIT(1)
+#define DBG_CTRL_MAX BIT(2)
+
/**
* struct sde_dbg_reg_offset - tracking for start and end of region
* @start: start offset
@@ -198,6 +203,7 @@ static struct sde_dbg_base {
struct sde_dbg_vbif_debug_bus dbgbus_vbif_rt;
bool dump_all;
bool dsi_dbg_bus;
+ u32 debugfs_ctrl;
} sde_dbg_base;
/* sde_dbg_base_evtlog - global pointer to main sde event log for macro use */
@@ -2034,12 +2040,13 @@ static struct vbif_debug_bus_entry vbif_dbg_bus_msm8998[] = {
/**
* _sde_dbg_enable_power - use callback to turn power on for hw register access
* @enable: whether to turn power on or off
+ * Return: zero on success; error code otherwise
*/
-static inline void _sde_dbg_enable_power(int enable)
+static inline int _sde_dbg_enable_power(int enable)
{
if (!sde_dbg_base.power_ctrl.enable_fn)
- return;
- sde_dbg_base.power_ctrl.enable_fn(
+ return -EINVAL;
+ return sde_dbg_base.power_ctrl.enable_fn(
sde_dbg_base.power_ctrl.handle,
sde_dbg_base.power_ctrl.client,
enable);
@@ -2063,6 +2070,7 @@ static void _sde_dump_reg(const char *dump_name, u32 reg_dump_flag,
u32 *dump_addr = NULL;
char *end_addr;
int i;
+ int rc;
if (!len_bytes)
return;
@@ -2103,8 +2111,13 @@ static void _sde_dump_reg(const char *dump_name, u32 reg_dump_flag,
}
}
- if (!from_isr)
- _sde_dbg_enable_power(true);
+ if (!from_isr) {
+ rc = _sde_dbg_enable_power(true);
+ if (rc) {
+ pr_err("failed to enable power %d\n", rc);
+ return;
+ }
+ }
for (i = 0; i < len_align; i++) {
u32 x0, x4, x8, xc;
@@ -2288,6 +2301,7 @@ static void _sde_dbg_dump_sde_dbg_bus(struct sde_dbg_sde_debug_bus *bus)
u32 offset;
void __iomem *mem_base = NULL;
struct sde_dbg_reg_base *reg_base;
+ int rc;
if (!bus || !bus->cmn.entries_size)
return;
@@ -2333,7 +2347,12 @@ static void _sde_dbg_dump_sde_dbg_bus(struct sde_dbg_sde_debug_bus *bus)
}
}
- _sde_dbg_enable_power(true);
+ rc = _sde_dbg_enable_power(true);
+ if (rc) {
+ pr_err("failed to enable power %d\n", rc);
+ return;
+ }
+
for (i = 0; i < bus->cmn.entries_size; i++) {
head = bus->entries + i;
writel_relaxed(TEST_MASK(head->block_id, head->test_id),
@@ -2427,6 +2446,7 @@ static void _sde_dbg_dump_vbif_dbg_bus(struct sde_dbg_vbif_debug_bus *bus)
struct vbif_debug_bus_entry *dbg_bus;
u32 bus_size;
struct sde_dbg_reg_base *reg_base;
+ int rc;
if (!bus || !bus->cmn.entries_size)
return;
@@ -2484,7 +2504,11 @@ static void _sde_dbg_dump_vbif_dbg_bus(struct sde_dbg_vbif_debug_bus *bus)
}
}
- _sde_dbg_enable_power(true);
+ rc = _sde_dbg_enable_power(true);
+ if (rc) {
+ pr_err("failed to enable power %d\n", rc);
+ return;
+ }
value = readl_relaxed(mem_base + MMSS_VBIF_CLKON);
writel_relaxed(value | BIT(1), mem_base + MMSS_VBIF_CLKON);
@@ -2679,6 +2703,46 @@ void sde_dbg_dump(bool queue_work, const char *name, ...)
}
}
+void sde_dbg_ctrl(const char *name, ...)
+{
+ int i = 0;
+ va_list args;
+ char *blk_name = NULL;
+
+ /* no debugfs controlled events are enabled, just return */
+ if (!sde_dbg_base.debugfs_ctrl)
+ return;
+
+ va_start(args, name);
+
+ while ((blk_name = va_arg(args, char*))) {
+ if (i++ >= SDE_EVTLOG_MAX_DATA) {
+ pr_err("could not parse all dbg arguments\n");
+ break;
+ }
+
+ if (IS_ERR_OR_NULL(blk_name))
+ break;
+
+ if (!strcmp(blk_name, "stop_ftrace") &&
+ sde_dbg_base.debugfs_ctrl &
+ DBG_CTRL_STOP_FTRACE) {
+ pr_debug("tracing off\n");
+ tracing_off();
+ }
+
+ if (!strcmp(blk_name, "panic_underrun") &&
+ sde_dbg_base.debugfs_ctrl &
+ DBG_CTRL_PANIC_UNDERRUN) {
+ pr_debug("panic underrun\n");
+ panic("underrun");
+ }
+ }
+}
+
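The sde_dbg_ctrl() loop above walks a NULL-terminated variadic list of action-name strings, bailing out after a fixed maximum. A minimal userspace sketch of the same parsing pattern — count_matching_actions() and MAX_DBG_ARGS are hypothetical stand-ins, not driver APIs:

```c
#include <stdarg.h>
#include <stddef.h>
#include <string.h>

#define MAX_DBG_ARGS 15

/* Walk a NULL-terminated list of block-name strings and count how
 * many match the given action name, refusing unbounded lists. */
static int count_matching_actions(const char *action, ...)
{
	va_list args;
	const char *blk_name;
	int i = 0, matches = 0;

	va_start(args, action);
	while ((blk_name = va_arg(args, const char *)) != NULL) {
		if (i++ >= MAX_DBG_ARGS)
			break;	/* mirror the "could not parse all" guard */
		if (strcmp(blk_name, action) == 0)
			matches++;
	}
	va_end(args);
	return matches;
}
```

As in the driver, the caller must terminate the argument list with a NULL sentinel (SDE_DBG_DUMP_DATA_LIMITER plays that role in the macro wrappers).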
/*
* sde_dbg_debugfs_open - debugfs open handler for evtlog dump
* @inode: debugfs inode
@@ -2742,6 +2806,82 @@ static const struct file_operations sde_evtlog_fops = {
.write = sde_evtlog_dump_write,
};
+/**
+ * sde_dbg_ctrl_read - debugfs read handler for debug ctrl read
+ * @file: file handler
+ * @buff: user buffer content for debugfs
+ * @count: size of user buffer
+ * @ppos: position offset of user buffer
+ */
+static ssize_t sde_dbg_ctrl_read(struct file *file, char __user *buff,
+ size_t count, loff_t *ppos)
+{
+ ssize_t len = 0;
+ char buf[24] = {'\0'};
+
+ if (!buff || !ppos)
+ return -EINVAL;
+
+ if (*ppos)
+ return 0; /* the end */
+
+ len = snprintf(buf, sizeof(buf), "0x%x\n", sde_dbg_base.debugfs_ctrl);
+ pr_debug("%s: ctrl:0x%x len:0x%zx\n",
+ __func__, sde_dbg_base.debugfs_ctrl, len);
+
+ if ((count < sizeof(buf)) || copy_to_user(buff, buf, len)) {
+ pr_err("error copying the buffer! count:0x%zx\n", count);
+ return -EFAULT;
+ }
+
+ *ppos += len; /* increase offset */
+ return len;
+}
+
+/**
+ * sde_dbg_ctrl_write - debugfs write handler for debug ctrl
+ * @file: file handler
+ * @user_buf: user buffer content from debugfs
+ * @count: size of user buffer
+ * @ppos: position offset of user buffer
+ */
+static ssize_t sde_dbg_ctrl_write(struct file *file,
+ const char __user *user_buf, size_t count, loff_t *ppos)
+{
+ u32 dbg_ctrl = 0;
+ char buf[24];
+
+ if (!file) {
+ pr_err("%s: invalid file handle\n", __func__);
+ return -EINVAL;
+ }
+
+ if (count >= sizeof(buf))
+ return -EFAULT;
+
+ if (copy_from_user(buf, user_buf, count))
+ return -EFAULT;
+
+ buf[count] = 0; /* end of string */
+
+ if (kstrtouint(buf, 0, &dbg_ctrl)) {
+ pr_err("%s: failed to parse input as an unsigned int\n", __func__);
+ return -EINVAL;
+ }
+
+ pr_debug("dbg_ctrl:0x%x\n", dbg_ctrl);
+ sde_dbg_base.debugfs_ctrl = dbg_ctrl;
+
+ return count;
+}
+
+static const struct file_operations sde_dbg_ctrl_fops = {
+ .open = sde_dbg_debugfs_open,
+ .read = sde_dbg_ctrl_read,
+ .write = sde_dbg_ctrl_write,
+};
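The write handler above follows a fixed pattern: reject input larger than the local buffer, copy it in, NUL-terminate, then parse with kstrtouint(buf, 0, ...) so hex, octal, and decimal are all accepted. A userspace sketch of that bounded parse, using strtoul() in place of kstrtouint() (parse_dbg_ctrl() is an illustrative name, not a driver function):

```c
#include <errno.h>
#include <stdlib.h>
#include <string.h>

/* Copy at most sizeof(buf)-1 bytes, NUL-terminate, then parse as an
 * unsigned integer in any base (base 0: "0x..", "0..", or decimal).
 * Returns 0 on success, -1 on oversized or unparsable input. */
static int parse_dbg_ctrl(const char *user_buf, size_t count, unsigned int *out)
{
	char buf[24];
	char *end;
	unsigned long val;

	if (count >= sizeof(buf))
		return -1;	/* input too large for the buffer */

	memcpy(buf, user_buf, count);
	buf[count] = '\0';	/* end of string */

	errno = 0;
	val = strtoul(buf, &end, 0);
	if (errno || end == buf)
		return -1;

	*out = (unsigned int)val;
	return 0;
}
```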
+
/*
* sde_evtlog_filter_show - read callback for evtlog filter
* @s: pointer to seq_file object
@@ -2969,6 +3109,7 @@ static ssize_t sde_dbg_reg_base_reg_write(struct file *file,
size_t off;
u32 data, cnt;
char buf[24];
+ int rc;
if (!file)
return -EINVAL;
@@ -2999,7 +3140,12 @@ static ssize_t sde_dbg_reg_base_reg_write(struct file *file,
return -EFAULT;
}
- _sde_dbg_enable_power(true);
+ rc = _sde_dbg_enable_power(true);
+ if (rc) {
+ mutex_unlock(&sde_dbg_base.mutex);
+ pr_err("failed to enable power %d\n", rc);
+ return rc;
+ }
writel_relaxed(data, dbg->base + off);
@@ -3024,6 +3170,7 @@ static ssize_t sde_dbg_reg_base_reg_read(struct file *file,
{
struct sde_dbg_reg_base *dbg;
size_t len;
+ int rc;
if (!file)
return -EINVAL;
@@ -3060,7 +3207,12 @@ static ssize_t sde_dbg_reg_base_reg_read(struct file *file,
ptr = dbg->base + dbg->off;
tot = 0;
- _sde_dbg_enable_power(true);
+ rc = _sde_dbg_enable_power(true);
+ if (rc) {
+ mutex_unlock(&sde_dbg_base.mutex);
+ pr_err("failed to enable power %d\n", rc);
+ return rc;
+ }
for (cnt = dbg->cnt; cnt > 0; cnt -= ROW_BYTES) {
hex_dump_to_buffer(ptr, min(cnt, ROW_BYTES),
@@ -3124,6 +3276,8 @@ int sde_dbg_debugfs_register(struct dentry *debugfs_root)
if (!debugfs_root)
return -EINVAL;
+ debugfs_create_file("dbg_ctrl", 0600, debugfs_root, NULL,
+ &sde_dbg_ctrl_fops);
debugfs_create_file("dump", 0600, debugfs_root, NULL,
&sde_evtlog_fops);
debugfs_create_u32("enable", 0600, debugfs_root,
diff --git a/drivers/gpu/drm/msm/sde_dbg.h b/drivers/gpu/drm/msm/sde_dbg.h
index 7b1b4c6..9efb893 100644
--- a/drivers/gpu/drm/msm/sde_dbg.h
+++ b/drivers/gpu/drm/msm/sde_dbg.h
@@ -151,6 +151,13 @@ extern struct sde_dbg_evtlog *sde_dbg_base_evtlog;
#define SDE_DBG_DUMP_WQ(...) sde_dbg_dump(true, __func__, ##__VA_ARGS__, \
SDE_DBG_DUMP_DATA_LIMITER)
+/**
+ * SDE_DBG_CTRL - trigger different driver events for debugging
+ * event: event that triggers different behavior in the driver
+ */
+#define SDE_DBG_CTRL(...) sde_dbg_ctrl(__func__, ##__VA_ARGS__, \
+ SDE_DBG_DUMP_DATA_LIMITER)
+
#if defined(CONFIG_DEBUG_FS)
/**
@@ -249,6 +256,15 @@ void sde_dbg_destroy(void);
void sde_dbg_dump(bool queue_work, const char *name, ...);
/**
+ * sde_dbg_ctrl - trigger specific driver actions for debugging purposes.
+ * Each action must first be enabled through the dbg_ctrl debugfs entry;
+ * only then is it executed at the corresponding call sites.
+ * @name: name of the calling function
+ * @va_args: list of actions to trigger
+ * Returns: none
+ */
+void sde_dbg_ctrl(const char *name, ...);
+
+/**
* sde_dbg_reg_register_base - register a hw register address section for later
* dumping. call this before calling sde_dbg_reg_register_dump_range
* to be able to specify sub-ranges within the base hw range.
@@ -386,6 +402,10 @@ static inline void sde_dbg_dump(bool queue_work, const char *name, ...)
{
}
+static inline void sde_dbg_ctrl(const char *name, ...)
+{
+}
+
static inline int sde_dbg_reg_register_base(const char *name,
void __iomem *base, size_t max_offset)
{
diff --git a/drivers/gpu/drm/msm/sde_dbg_evtlog.c b/drivers/gpu/drm/msm/sde_dbg_evtlog.c
index 9a75179..f157b11 100644
--- a/drivers/gpu/drm/msm/sde_dbg_evtlog.c
+++ b/drivers/gpu/drm/msm/sde_dbg_evtlog.c
@@ -34,19 +34,21 @@ static bool _sde_evtlog_is_filtered_no_lock(
struct sde_dbg_evtlog *evtlog, const char *str)
{
struct sde_evtlog_filter *filter_node;
+ size_t len;
bool rc;
if (!str)
return true;
+ len = strlen(str);
+
/*
* Filter the incoming string IFF the list is not empty AND
* a matching entry is not in the list.
*/
rc = !list_empty(&evtlog->filter_list);
list_for_each_entry(filter_node, &evtlog->filter_list, list)
- if (strnstr(str, filter_node->filter,
- SDE_EVTLOG_FILTER_STRSIZE - 1)) {
+ if (strnstr(str, filter_node->filter, len)) {
rc = false;
break;
}
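The evtlog hunk above changes the filter to bound strnstr() by the length of the incoming string rather than the fixed filter-entry size: a string passes IFF the filter list is non-empty and no entry is a substring of it. A sketch of that rule with a NULL-terminated array standing in for the kernel list_head, and strstr() standing in for the length-bounded strnstr():

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Return true (filter out) IFF the filter list is non-empty AND no
 * entry matches a substring of str. NULL strings are always filtered. */
static bool is_filtered(const char *str, const char *const *filters)
{
	bool rc;
	size_t i;

	if (!str)
		return true;

	rc = filters && filters[0] != NULL;	/* non-empty list filters by default */
	for (i = 0; filters && filters[i]; i++) {
		if (strstr(str, filters[i])) {	/* driver bounds this with strnstr() */
			rc = false;
			break;
		}
	}
	return rc;
}
```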
diff --git a/drivers/gpu/drm/msm/sde_hdcp.h b/drivers/gpu/drm/msm/sde_hdcp.h
index 05d290b..6c44260 100644
--- a/drivers/gpu/drm/msm/sde_hdcp.h
+++ b/drivers/gpu/drm/msm/sde_hdcp.h
@@ -42,6 +42,10 @@ enum sde_hdcp_states {
struct sde_hdcp_init_data {
struct dss_io_data *core_io;
+ struct dss_io_data *dp_ahb;
+ struct dss_io_data *dp_aux;
+ struct dss_io_data *dp_link;
+ struct dss_io_data *dp_p0;
struct dss_io_data *qfprom_io;
struct dss_io_data *hdcp_io;
struct drm_dp_aux *drm_aux;
diff --git a/drivers/gpu/drm/msm/sde_hdcp_1x.c b/drivers/gpu/drm/msm/sde_hdcp_1x.c
index 3673d125..c012f9d 100644
--- a/drivers/gpu/drm/msm/sde_hdcp_1x.c
+++ b/drivers/gpu/drm/msm/sde_hdcp_1x.c
@@ -256,12 +256,15 @@ static int sde_hdcp_1x_load_keys(void *input)
u32 ksv_lsb_addr, ksv_msb_addr;
u32 aksv_lsb, aksv_msb;
u8 aksv[5];
- struct dss_io_data *io;
+ struct dss_io_data *dp_ahb;
+ struct dss_io_data *dp_aux;
+ struct dss_io_data *dp_link;
struct dss_io_data *qfprom_io;
struct sde_hdcp_1x *hdcp = input;
struct sde_hdcp_reg_set *reg_set;
- if (!hdcp || !hdcp->init_data.core_io ||
+ if (!hdcp || !hdcp->init_data.dp_ahb ||
+ !hdcp->init_data.dp_aux ||
!hdcp->init_data.qfprom_io) {
pr_err("invalid input\n");
rc = -EINVAL;
@@ -276,7 +279,9 @@ static int sde_hdcp_1x_load_keys(void *input)
goto end;
}
- io = hdcp->init_data.core_io;
+ dp_ahb = hdcp->init_data.dp_ahb;
+ dp_aux = hdcp->init_data.dp_aux;
+ dp_link = hdcp->init_data.dp_link;
qfprom_io = hdcp->init_data.qfprom_io;
reg_set = &hdcp->reg_set;
@@ -327,18 +332,18 @@ static int sde_hdcp_1x_load_keys(void *input)
goto end;
}
- DSS_REG_W(io, reg_set->aksv_lsb, aksv_lsb);
- DSS_REG_W(io, reg_set->aksv_msb, aksv_msb);
+ DSS_REG_W(dp_aux, reg_set->aksv_lsb, aksv_lsb);
+ DSS_REG_W(dp_aux, reg_set->aksv_msb, aksv_msb);
/* Setup seed values for random number An */
- DSS_REG_W(io, reg_set->entropy_ctrl0, 0xB1FFB0FF);
- DSS_REG_W(io, reg_set->entropy_ctrl1, 0xF00DFACE);
+ DSS_REG_W(dp_link, reg_set->entropy_ctrl0, 0xB1FFB0FF);
+ DSS_REG_W(dp_link, reg_set->entropy_ctrl1, 0xF00DFACE);
/* make sure hw is programmed */
wmb();
/* enable hdcp engine */
- DSS_REG_W(io, reg_set->ctrl, 0x1);
+ DSS_REG_W(dp_ahb, reg_set->ctrl, 0x1);
hdcp->hdcp_state = HDCP_STATE_AUTHENTICATING;
end:
@@ -415,7 +420,7 @@ static void sde_hdcp_1x_enable_interrupts(struct sde_hdcp_1x *hdcp)
struct dss_io_data *io;
struct sde_hdcp_int_set *isr;
- io = hdcp->init_data.core_io;
+ io = hdcp->init_data.dp_ahb;
isr = &hdcp->int_set;
intr_reg = DSS_REG_R(io, isr->int_reg);
@@ -462,7 +467,8 @@ static int sde_hdcp_1x_wait_for_hw_ready(struct sde_hdcp_1x *hdcp)
int rc;
u32 link0_status;
struct sde_hdcp_reg_set *reg_set = &hdcp->reg_set;
- struct dss_io_data *io = hdcp->init_data.core_io;
+ struct dss_io_data *dp_ahb = hdcp->init_data.dp_ahb;
+ struct dss_io_data *dp_aux = hdcp->init_data.dp_aux;
if (!sde_hdcp_1x_state(HDCP_STATE_AUTHENTICATING)) {
pr_err("invalid state\n");
@@ -470,7 +476,7 @@ static int sde_hdcp_1x_wait_for_hw_ready(struct sde_hdcp_1x *hdcp)
}
/* Wait for HDCP keys to be checked and validated */
- rc = readl_poll_timeout(io->base + reg_set->status, link0_status,
+ rc = readl_poll_timeout(dp_ahb->base + reg_set->status, link0_status,
((link0_status >> reg_set->keys_offset) & 0x7)
== HDCP_KEYS_STATE_VALID ||
!sde_hdcp_1x_state(HDCP_STATE_AUTHENTICATING),
@@ -484,10 +490,10 @@ static int sde_hdcp_1x_wait_for_hw_ready(struct sde_hdcp_1x *hdcp)
* 1.1_Features turned off by default.
* No need to write AInfo since 1.1_Features is disabled.
*/
- DSS_REG_W(io, reg_set->data4, 0);
+ DSS_REG_W(dp_aux, reg_set->data4, 0);
/* Wait for An0 and An1 bit to be ready */
- rc = readl_poll_timeout(io->base + reg_set->status, link0_status,
+ rc = readl_poll_timeout(dp_ahb->base + reg_set->status, link0_status,
(link0_status & (BIT(8) | BIT(9))) ||
!sde_hdcp_1x_state(HDCP_STATE_AUTHENTICATING),
HDCP_POLL_SLEEP_US, HDCP_POLL_TIMEOUT_US);
@@ -554,7 +560,8 @@ static int sde_hdcp_1x_send_an_aksv_to_sink(struct sde_hdcp_1x *hdcp)
static int sde_hdcp_1x_read_an_aksv_from_hw(struct sde_hdcp_1x *hdcp)
{
- struct dss_io_data *io = hdcp->init_data.core_io;
+ struct dss_io_data *dp_ahb = hdcp->init_data.dp_ahb;
+ struct dss_io_data *dp_aux = hdcp->init_data.dp_aux;
struct sde_hdcp_reg_set *reg_set = &hdcp->reg_set;
if (!sde_hdcp_1x_state(HDCP_STATE_AUTHENTICATING)) {
@@ -562,21 +569,21 @@ static int sde_hdcp_1x_read_an_aksv_from_hw(struct sde_hdcp_1x *hdcp)
return -EINVAL;
}
- hdcp->an_0 = DSS_REG_R(io, reg_set->data5);
+ hdcp->an_0 = DSS_REG_R(dp_ahb, reg_set->data5);
if (hdcp->init_data.client_id == HDCP_CLIENT_DP) {
udelay(1);
- hdcp->an_0 = DSS_REG_R(io, reg_set->data5);
+ hdcp->an_0 = DSS_REG_R(dp_ahb, reg_set->data5);
}
- hdcp->an_1 = DSS_REG_R(io, reg_set->data6);
+ hdcp->an_1 = DSS_REG_R(dp_ahb, reg_set->data6);
if (hdcp->init_data.client_id == HDCP_CLIENT_DP) {
udelay(1);
- hdcp->an_1 = DSS_REG_R(io, reg_set->data6);
+ hdcp->an_1 = DSS_REG_R(dp_ahb, reg_set->data6);
}
/* Read AKSV */
- hdcp->aksv_0 = DSS_REG_R(io, reg_set->data3);
- hdcp->aksv_1 = DSS_REG_R(io, reg_set->data4);
+ hdcp->aksv_0 = DSS_REG_R(dp_aux, reg_set->data3);
+ hdcp->aksv_1 = DSS_REG_R(dp_aux, reg_set->data4);
return 0;
}
@@ -649,7 +656,7 @@ static int sde_hdcp_1x_verify_r0(struct sde_hdcp_1x *hdcp)
u32 const r0_read_delay_us = 1;
u32 const r0_read_timeout_us = r0_read_delay_us * 10;
struct sde_hdcp_reg_set *reg_set = &hdcp->reg_set;
- struct dss_io_data *io = hdcp->init_data.core_io;
+ struct dss_io_data *io = hdcp->init_data.dp_ahb;
if (!sde_hdcp_1x_state(HDCP_STATE_AUTHENTICATING)) {
pr_err("invalid state\n");
@@ -910,7 +917,7 @@ static int sde_hdcp_1x_write_ksv_fifo(struct sde_hdcp_1x *hdcp)
int i, rc = 0;
u8 *ksv_fifo = hdcp->current_tp.ksv_list;
u32 ksv_bytes = hdcp->sink_addr.ksv_fifo.len;
- struct dss_io_data *io = hdcp->init_data.core_io;
+ struct dss_io_data *io = hdcp->init_data.dp_ahb;
struct dss_io_data *sec_io = hdcp->init_data.hdcp_io;
struct sde_hdcp_reg_set *reg_set = &hdcp->reg_set;
u32 sha_status = 0, status;
@@ -1087,7 +1094,8 @@ static int sde_hdcp_1x_authentication_part2(struct sde_hdcp_1x *hdcp)
static void sde_hdcp_1x_cache_topology(struct sde_hdcp_1x *hdcp)
{
- if (!hdcp || !hdcp->init_data.core_io) {
+ if (!hdcp || !hdcp->init_data.dp_ahb || !hdcp->init_data.dp_aux ||
+ !hdcp->init_data.dp_link || !hdcp->init_data.dp_p0) {
pr_err("invalid input\n");
return;
}
@@ -1146,6 +1154,7 @@ static void sde_hdcp_1x_auth_work(struct work_struct *work)
DSS_REG_W_ND(io, REG_HDMI_DDC_ARBITRATION, DSS_REG_R(io,
REG_HDMI_DDC_ARBITRATION) & ~(BIT(4)));
else if (hdcp->init_data.client_id == HDCP_CLIENT_DP) {
+ io = hdcp->init_data.dp_aux;
DSS_REG_W(io, DP_DP_HPD_REFTIMER, 0x10013);
}
@@ -1224,12 +1233,12 @@ static int sde_hdcp_1x_reauthenticate(void *input)
struct sde_hdcp_int_set *isr;
u32 ret = 0, reg;
- if (!hdcp || !hdcp->init_data.core_io) {
+ if (!hdcp || !hdcp->init_data.dp_ahb) {
pr_err("invalid input\n");
return -EINVAL;
}
- io = hdcp->init_data.core_io;
+ io = hdcp->init_data.dp_ahb;
reg_set = &hdcp->reg_set;
isr = &hdcp->int_set;
@@ -1264,12 +1273,12 @@ static void sde_hdcp_1x_off(void *input)
int rc = 0;
u32 reg;
- if (!hdcp || !hdcp->init_data.core_io) {
+ if (!hdcp || !hdcp->init_data.dp_ahb) {
pr_err("invalid input\n");
return;
}
- io = hdcp->init_data.core_io;
+ io = hdcp->init_data.dp_ahb;
reg_set = &hdcp->reg_set;
isr = &hdcp->int_set;
@@ -1327,13 +1336,13 @@ static int sde_hdcp_1x_isr(void *input)
struct sde_hdcp_reg_set *reg_set;
struct sde_hdcp_int_set *isr;
- if (!hdcp || !hdcp->init_data.core_io) {
+ if (!hdcp || !hdcp->init_data.dp_ahb) {
pr_err("invalid input\n");
rc = -EINVAL;
goto error;
}
- io = hdcp->init_data.core_io;
+ io = hdcp->init_data.dp_ahb;
reg_set = &hdcp->reg_set;
isr = &hdcp->int_set;
@@ -1531,8 +1540,7 @@ void *sde_hdcp_1x_init(struct sde_hdcp_init_data *init_data)
.off = sde_hdcp_1x_off
};
- if (!init_data || !init_data->core_io || !init_data->qfprom_io ||
- !init_data->mutex || !init_data->notify_status ||
+ if (!init_data || !init_data->mutex || !init_data->notify_status ||
!init_data->workq || !init_data->cb_data) {
pr_err("invalid input\n");
goto error;
diff --git a/drivers/gpu/drm/msm/sde_power_handle.c b/drivers/gpu/drm/msm/sde_power_handle.c
index 43fcf0d..34a826d 100644
--- a/drivers/gpu/drm/msm/sde_power_handle.c
+++ b/drivers/gpu/drm/msm/sde_power_handle.c
@@ -983,6 +983,16 @@ int sde_power_resource_enable(struct sde_power_handle *phandle,
return rc;
}
+int sde_power_resource_is_enabled(struct sde_power_handle *phandle)
+{
+ if (!phandle) {
+ pr_err("invalid input argument\n");
+ return false;
+ }
+
+ return phandle->current_usecase_ndx != VOTE_INDEX_DISABLE;
+}
+
int sde_power_clk_set_rate(struct sde_power_handle *phandle, char *clock_name,
u64 rate)
{
diff --git a/drivers/gpu/drm/msm/sde_power_handle.h b/drivers/gpu/drm/msm/sde_power_handle.h
index 9cc78aa..72975e7 100644
--- a/drivers/gpu/drm/msm/sde_power_handle.h
+++ b/drivers/gpu/drm/msm/sde_power_handle.h
@@ -225,6 +225,14 @@ int sde_power_resource_enable(struct sde_power_handle *pdata,
struct sde_power_client *pclient, bool enable);
/**
+ * sde_power_resource_is_enabled() - return true if power resource is enabled
+ * @pdata: power handle containing the resources
+ *
+ * Return: true if enabled; false otherwise
+ */
+int sde_power_resource_is_enabled(struct sde_power_handle *pdata);
+
+/**
* sde_power_data_bus_state_update() - update data bus state
* @pdata: power handle containing the resources
* @enable: take enable vs disable path
diff --git a/drivers/gpu/drm/msm/sde_rsc_hw.c b/drivers/gpu/drm/msm/sde_rsc_hw.c
index 654a2ad..a0d1245 100644
--- a/drivers/gpu/drm/msm/sde_rsc_hw.c
+++ b/drivers/gpu/drm/msm/sde_rsc_hw.c
@@ -204,17 +204,17 @@ static int rsc_hw_seq_memory_init(struct sde_rsc_priv *rsc)
/* tcs sleep & wake sequence */
dss_reg_w(&rsc->drv_io, SDE_RSCC_SEQ_MEM_0_DRV0 + 0x2c,
- 0x2089e6a6, rsc->debug_mode);
+ 0x89e686a6, rsc->debug_mode);
dss_reg_w(&rsc->drv_io, SDE_RSCC_SEQ_MEM_0_DRV0 + 0x30,
- 0xe7a7e9a9, rsc->debug_mode);
+ 0xa7e9a920, rsc->debug_mode);
dss_reg_w(&rsc->drv_io, SDE_RSCC_SEQ_MEM_0_DRV0 + 0x34,
- 0x00002089, rsc->debug_mode);
+ 0x2089e787, rsc->debug_mode);
/* branch address */
dss_reg_w(&rsc->drv_io, SDE_RSCC_SEQ_CFG_BR_ADDR_0_DRV0,
0x2a, rsc->debug_mode);
dss_reg_w(&rsc->drv_io, SDE_RSCC_SEQ_CFG_BR_ADDR_1_DRV0,
- 0x30, rsc->debug_mode);
+ 0x31, rsc->debug_mode);
return 0;
}
diff --git a/drivers/gpu/drm/omapdrm/displays/panel-sony-acx565akm.c b/drivers/gpu/drm/omapdrm/displays/panel-sony-acx565akm.c
index 3557a4c..270a623 100644
--- a/drivers/gpu/drm/omapdrm/displays/panel-sony-acx565akm.c
+++ b/drivers/gpu/drm/omapdrm/displays/panel-sony-acx565akm.c
@@ -912,6 +912,7 @@ static struct spi_driver acx565akm_driver = {
module_spi_driver(acx565akm_driver);
+MODULE_ALIAS("spi:sony,acx565akm");
MODULE_AUTHOR("Nokia Corporation");
MODULE_DESCRIPTION("acx565akm LCD Driver");
MODULE_LICENSE("GPL");
diff --git a/drivers/gpu/drm/sti/sti_vtg.c b/drivers/gpu/drm/sti/sti_vtg.c
index a8882bd..c3d9c8a 100644
--- a/drivers/gpu/drm/sti/sti_vtg.c
+++ b/drivers/gpu/drm/sti/sti_vtg.c
@@ -429,6 +429,10 @@ static int vtg_probe(struct platform_device *pdev)
return -ENOMEM;
}
vtg->regs = devm_ioremap_nocache(dev, res->start, resource_size(res));
+ if (!vtg->regs) {
+ DRM_ERROR("failed to remap I/O memory\n");
+ return -ENOMEM;
+ }
np = of_parse_phandle(pdev->dev.of_node, "st,slave", 0);
if (np) {
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
index 36005bd..29abd28 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
@@ -721,7 +721,7 @@ static int vmw_driver_load(struct drm_device *dev, unsigned long chipset)
* allocation taken by fbdev
*/
if (!(dev_priv->capabilities & SVGA_CAP_3D))
- mem_size *= 2;
+ mem_size *= 3;
dev_priv->max_mob_pages = mem_size * 1024 / PAGE_SIZE;
dev_priv->prim_bb_mem =
diff --git a/drivers/gpu/msm/a6xx_reg.h b/drivers/gpu/msm/a6xx_reg.h
index 728e897..5991cd5 100644
--- a/drivers/gpu/msm/a6xx_reg.h
+++ b/drivers/gpu/msm/a6xx_reg.h
@@ -68,6 +68,7 @@
#define A6XX_CP_MEM_POOL_SIZE 0x8C3
#define A6XX_CP_CHICKEN_DBG 0x841
#define A6XX_CP_ADDR_MODE_CNTL 0x842
+#define A6XX_CP_DBG_ECO_CNTL 0x843
#define A6XX_CP_PROTECT_CNTL 0x84F
#define A6XX_CP_PROTECT_REG 0x850
#define A6XX_CP_CONTEXT_SWITCH_CNTL 0x8A0
@@ -532,6 +533,8 @@
#define A6XX_DBGC_CFG_DBGBUS_CNTLT_SEGT_SHIFT 0x1C
#define A6XX_DBGC_CFG_DBGBUS_CNTLM 0x605
#define A6XX_DBGC_CFG_DBGBUS_CTLTM_ENABLE_SHIFT 0x18
+#define A6XX_DBGC_CFG_DBGBUS_OPL 0x606
+#define A6XX_DBGC_CFG_DBGBUS_OPE 0x607
#define A6XX_DBGC_CFG_DBGBUS_IVTL_0 0x608
#define A6XX_DBGC_CFG_DBGBUS_IVTL_1 0x609
#define A6XX_DBGC_CFG_DBGBUS_IVTL_2 0x60a
@@ -558,8 +561,40 @@
#define A6XX_DBGC_CFG_DBGBUS_BYTEL13_SHIFT 0x14
#define A6XX_DBGC_CFG_DBGBUS_BYTEL14_SHIFT 0x18
#define A6XX_DBGC_CFG_DBGBUS_BYTEL15_SHIFT 0x1C
+#define A6XX_DBGC_CFG_DBGBUS_IVTE_0 0x612
+#define A6XX_DBGC_CFG_DBGBUS_IVTE_1 0x613
+#define A6XX_DBGC_CFG_DBGBUS_IVTE_2 0x614
+#define A6XX_DBGC_CFG_DBGBUS_IVTE_3 0x615
+#define A6XX_DBGC_CFG_DBGBUS_MASKE_0 0x616
+#define A6XX_DBGC_CFG_DBGBUS_MASKE_1 0x617
+#define A6XX_DBGC_CFG_DBGBUS_MASKE_2 0x618
+#define A6XX_DBGC_CFG_DBGBUS_MASKE_3 0x619
+#define A6XX_DBGC_CFG_DBGBUS_NIBBLEE 0x61a
+#define A6XX_DBGC_CFG_DBGBUS_PTRC0 0x61b
+#define A6XX_DBGC_CFG_DBGBUS_PTRC1 0x61c
+#define A6XX_DBGC_CFG_DBGBUS_LOADREG 0x61d
+#define A6XX_DBGC_CFG_DBGBUS_IDX 0x61e
+#define A6XX_DBGC_CFG_DBGBUS_CLRC 0x61f
+#define A6XX_DBGC_CFG_DBGBUS_LOADIVT 0x620
+#define A6XX_DBGC_VBIF_DBG_CNTL 0x621
+#define A6XX_DBGC_DBG_LO_HI_GPIO 0x622
+#define A6XX_DBGC_EXT_TRACE_BUS_CNTL 0x623
+#define A6XX_DBGC_READ_AHB_THROUGH_DBG 0x624
#define A6XX_DBGC_CFG_DBGBUS_TRACE_BUF1 0x62f
#define A6XX_DBGC_CFG_DBGBUS_TRACE_BUF2 0x630
+#define A6XX_DBGC_EVT_CFG 0x640
+#define A6XX_DBGC_EVT_INTF_SEL_0 0x641
+#define A6XX_DBGC_EVT_INTF_SEL_1 0x642
+#define A6XX_DBGC_PERF_ATB_CFG 0x643
+#define A6XX_DBGC_PERF_ATB_COUNTER_SEL_0 0x644
+#define A6XX_DBGC_PERF_ATB_COUNTER_SEL_1 0x645
+#define A6XX_DBGC_PERF_ATB_COUNTER_SEL_2 0x646
+#define A6XX_DBGC_PERF_ATB_COUNTER_SEL_3 0x647
+#define A6XX_DBGC_PERF_ATB_TRIG_INTF_SEL_0 0x648
+#define A6XX_DBGC_PERF_ATB_TRIG_INTF_SEL_1 0x649
+#define A6XX_DBGC_PERF_ATB_DRAIN_CMD 0x64a
+#define A6XX_DBGC_ECO_CNTL 0x650
+#define A6XX_DBGC_AHB_DBG_CNTL 0x651
/* VSC registers */
#define A6XX_VSC_PERFCTR_VSC_SEL_0 0xCD8
@@ -676,6 +711,7 @@
#define A6XX_UCHE_PERFCTR_UCHE_SEL_9 0xE25
#define A6XX_UCHE_PERFCTR_UCHE_SEL_10 0xE26
#define A6XX_UCHE_PERFCTR_UCHE_SEL_11 0xE27
+#define A6XX_UCHE_GBIF_GX_CONFIG 0xE3A
/* SP registers */
#define A6XX_SP_ADDR_MODE_CNTL 0xAE01
@@ -764,6 +800,12 @@
#define A6XX_VBIF_PERF_PWR_CNT_HIGH1 0x3119
#define A6XX_VBIF_PERF_PWR_CNT_HIGH2 0x311a
+/* GBIF countables */
+#define GBIF_AXI0_READ_DATA_TOTAL_BEATS 34
+#define GBIF_AXI1_READ_DATA_TOTAL_BEATS 35
+#define GBIF_AXI0_WRITE_DATA_TOTAL_BEATS 46
+#define GBIF_AXI1_WRITE_DATA_TOTAL_BEATS 47
+
/* GBIF registers */
#define A6XX_GBIF_HALT 0x3c45
#define A6XX_GBIF_HALT_ACK 0x3c46
@@ -792,12 +834,16 @@
#define A6XX_CX_DBGC_CFG_DBGBUS_SEL_B 0x18401
#define A6XX_CX_DBGC_CFG_DBGBUS_SEL_C 0x18402
#define A6XX_CX_DBGC_CFG_DBGBUS_SEL_D 0x18403
+#define A6XX_CX_DBGC_CFG_DBGBUS_SEL_PING_INDEX_SHIFT 0x0
+#define A6XX_CX_DBGC_CFG_DBGBUS_SEL_PING_BLK_SEL_SHIFT 0x8
#define A6XX_CX_DBGC_CFG_DBGBUS_CNTLT 0x18404
#define A6XX_CX_DBGC_CFG_DBGBUS_CNTLT_TRACEEN_SHIFT 0x0
#define A6XX_CX_DBGC_CFG_DBGBUS_CNTLT_GRANU_SHIFT 0xC
#define A6XX_CX_DBGC_CFG_DBGBUS_CNTLT_SEGT_SHIFT 0x1C
#define A6XX_CX_DBGC_CFG_DBGBUS_CNTLM 0x18405
#define A6XX_CX_DBGC_CFG_DBGBUS_CNTLM_ENABLE_SHIFT 0x18
+#define A6XX_CX_DBGC_CFG_DBGBUS_OPL 0x18406
+#define A6XX_CX_DBGC_CFG_DBGBUS_OPE 0x18407
#define A6XX_CX_DBGC_CFG_DBGBUS_IVTL_0 0x18408
#define A6XX_CX_DBGC_CFG_DBGBUS_IVTL_1 0x18409
#define A6XX_CX_DBGC_CFG_DBGBUS_IVTL_2 0x1840A
@@ -824,10 +870,40 @@
#define A6XX_CX_DBGC_CFG_DBGBUS_BYTEL13_SHIFT 0x14
#define A6XX_CX_DBGC_CFG_DBGBUS_BYTEL14_SHIFT 0x18
#define A6XX_CX_DBGC_CFG_DBGBUS_BYTEL15_SHIFT 0x1C
+#define A6XX_CX_DBGC_CFG_DBGBUS_IVTE_0 0x18412
+#define A6XX_CX_DBGC_CFG_DBGBUS_IVTE_1 0x18413
+#define A6XX_CX_DBGC_CFG_DBGBUS_IVTE_2 0x18414
+#define A6XX_CX_DBGC_CFG_DBGBUS_IVTE_3 0x18415
+#define A6XX_CX_DBGC_CFG_DBGBUS_MASKE_0 0x18416
+#define A6XX_CX_DBGC_CFG_DBGBUS_MASKE_1 0x18417
+#define A6XX_CX_DBGC_CFG_DBGBUS_MASKE_2 0x18418
+#define A6XX_CX_DBGC_CFG_DBGBUS_MASKE_3 0x18419
+#define A6XX_CX_DBGC_CFG_DBGBUS_NIBBLEE 0x1841A
+#define A6XX_CX_DBGC_CFG_DBGBUS_PTRC0 0x1841B
+#define A6XX_CX_DBGC_CFG_DBGBUS_PTRC1 0x1841C
+#define A6XX_CX_DBGC_CFG_DBGBUS_LOADREG 0x1841D
+#define A6XX_CX_DBGC_CFG_DBGBUS_IDX 0x1841E
+#define A6XX_CX_DBGC_CFG_DBGBUS_CLRC 0x1841F
+#define A6XX_CX_DBGC_CFG_DBGBUS_LOADIVT 0x18420
+#define A6XX_CX_DBGC_VBIF_DBG_CNTL 0x18421
+#define A6XX_CX_DBGC_DBG_LO_HI_GPIO 0x18422
+#define A6XX_CX_DBGC_EXT_TRACE_BUS_CNTL 0x18423
+#define A6XX_CX_DBGC_READ_AHB_THROUGH_DBG 0x18424
#define A6XX_CX_DBGC_CFG_DBGBUS_TRACE_BUF1 0x1842F
#define A6XX_CX_DBGC_CFG_DBGBUS_TRACE_BUF2 0x18430
-#define A6XX_CX_DBGC_CFG_DBGBUS_SEL_PING_INDEX_SHIFT 0x0
-#define A6XX_CX_DBGC_CFG_DBGBUS_SEL_PING_BLK_SEL_SHIFT 0x8
+#define A6XX_CX_DBGC_EVT_CFG 0x18440
+#define A6XX_CX_DBGC_EVT_INTF_SEL_0 0x18441
+#define A6XX_CX_DBGC_EVT_INTF_SEL_1 0x18442
+#define A6XX_CX_DBGC_PERF_ATB_CFG 0x18443
+#define A6XX_CX_DBGC_PERF_ATB_COUNTER_SEL_0 0x18444
+#define A6XX_CX_DBGC_PERF_ATB_COUNTER_SEL_1 0x18445
+#define A6XX_CX_DBGC_PERF_ATB_COUNTER_SEL_2 0x18446
+#define A6XX_CX_DBGC_PERF_ATB_COUNTER_SEL_3 0x18447
+#define A6XX_CX_DBGC_PERF_ATB_TRIG_INTF_SEL_0 0x18448
+#define A6XX_CX_DBGC_PERF_ATB_TRIG_INTF_SEL_1 0x18449
+#define A6XX_CX_DBGC_PERF_ATB_DRAIN_CMD 0x1844A
+#define A6XX_CX_DBGC_ECO_CNTL 0x18450
+#define A6XX_CX_DBGC_AHB_DBG_CNTL 0x18451
/* GMU control registers */
#define A6XX_GPU_GMU_GX_SPTPRAC_CLOCK_CONTROL 0x1A880
diff --git a/drivers/gpu/msm/adreno-gpulist.h b/drivers/gpu/msm/adreno-gpulist.h
index d0e6d73..770cf3b 100644
--- a/drivers/gpu/msm/adreno-gpulist.h
+++ b/drivers/gpu/msm/adreno-gpulist.h
@@ -347,7 +347,8 @@ static const struct adreno_gpu_core adreno_gpulist[] = {
.minor = 0,
.patchid = ANY_ID,
.features = ADRENO_64BIT | ADRENO_RPMH | ADRENO_IFPC |
- ADRENO_GPMU | ADRENO_CONTENT_PROTECTION | ADRENO_LM,
+ ADRENO_GPMU | ADRENO_CONTENT_PROTECTION | ADRENO_LM |
+ ADRENO_IOCOHERENT,
.sqefw_name = "a630_sqe.fw",
.zap_name = "a630_zap",
.gpudev = &adreno_a6xx_gpudev,
@@ -375,7 +376,7 @@ static const struct adreno_gpu_core adreno_gpulist[] = {
.num_protected_regs = 0x20,
.busy_mask = 0xFFFFFFFE,
.gpmufw_name = "a630_gmu.bin",
- .gpmu_major = 0x0,
- .gpmu_minor = 0x005,
+ .gpmu_major = 0x1,
+ .gpmu_minor = 0x001,
},
};
diff --git a/drivers/gpu/msm/adreno.c b/drivers/gpu/msm/adreno.c
index c66e027..13fe0a7 100644
--- a/drivers/gpu/msm/adreno.c
+++ b/drivers/gpu/msm/adreno.c
@@ -37,6 +37,7 @@
#include "adreno_trace.h"
#include "a3xx_reg.h"
+#include "a6xx_reg.h"
#include "adreno_snapshot.h"
/* Include the master list of GPU cores that are supported */
@@ -118,6 +119,7 @@ static struct adreno_device device_3d0 = {
.skipsaverestore = 1,
.usesgmem = 1,
},
+ .priv = BIT(ADRENO_DEVICE_PREEMPTION_EXECUTION),
};
/* Ptr to array for the current set of fault detect registers */
@@ -612,6 +614,7 @@ static irqreturn_t adreno_irq_handler(struct kgsl_device *device)
struct adreno_irq *irq_params = gpudev->irq;
irqreturn_t ret = IRQ_NONE;
unsigned int status = 0, fence = 0, fence_retries = 0, tmp, int_bit;
+ unsigned int status_retries = 0;
int i;
atomic_inc(&adreno_dev->pending_irq_refcnt);
@@ -651,6 +654,32 @@ static irqreturn_t adreno_irq_handler(struct kgsl_device *device)
adreno_readreg(adreno_dev, ADRENO_REG_RBBM_INT_0_STATUS, &status);
/*
+ * Read status again to make sure the bits aren't transitory.
+ * Transitory bits are spurious interrupts observed while preemption
+ * is ongoing. Empirical experiments have shown they are a timing
+ * artifact and disappear within the small window spanned by two or
+ * three consecutive reads. If they persist, log a message and return.
+ */
+ while (status_retries < STATUS_RETRY_MAX) {
+ unsigned int new_status;
+
+ adreno_readreg(adreno_dev, ADRENO_REG_RBBM_INT_0_STATUS,
+ &new_status);
+
+ if (status == new_status)
+ break;
+
+ status = new_status;
+ status_retries++;
+ }
+
+ if (status_retries == STATUS_RETRY_MAX) {
+ KGSL_DRV_CRIT_RATELIMIT(device, "STATUS bits are not stable\n");
+ return ret;
+ }
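The retry loop added above debounces INT_0_STATUS by re-reading until two consecutive reads agree, up to STATUS_RETRY_MAX attempts. A sketch of the same logic with an array of successive register samples replacing the hardware reads (debounce_status() is an illustrative helper, not a kernel function):

```c
#define STATUS_RETRY_MAX 3

/* samples[] models successive reads of the status register; samples[0]
 * is the initial read. Returns 0 with the stable value in *status, or
 * -1 if the bits never settled within STATUS_RETRY_MAX re-reads. */
static int debounce_status(const unsigned int *samples, unsigned int *status)
{
	unsigned int cur = samples[0];
	int retries = 0;

	while (retries < STATUS_RETRY_MAX) {
		unsigned int next = samples[retries + 1];

		if (cur == next)
			break;		/* two consecutive reads agree */
		cur = next;
		retries++;
	}

	if (retries == STATUS_RETRY_MAX)
		return -1;		/* bits never stabilized: treat as spurious */

	*status = cur;
	return 0;
}
```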
+
+ /*
* Clear all the interrupt bits but ADRENO_INT_RBBM_AHB_ERROR. Because
* even if we clear it here, it will stay high until it is cleared
* in its respective handler. Otherwise, the interrupt handler will
@@ -1120,6 +1149,9 @@ static int adreno_probe(struct platform_device *pdev)
if (!ADRENO_FEATURE(adreno_dev, ADRENO_CONTENT_PROTECTION))
device->mmu.secured = false;
+ if (ADRENO_FEATURE(adreno_dev, ADRENO_IOCOHERENT))
+ device->mmu.features |= KGSL_MMU_IO_COHERENT;
+
status = adreno_ringbuffer_probe(adreno_dev, nopreempt);
if (status)
goto out;
@@ -1639,29 +1671,91 @@ static int _adreno_start(struct adreno_device *adreno_dev)
adreno_dev->starved_ram_lo_ch1 = 0;
}
}
- }
- /* VBIF DDR cycles */
- if (adreno_dev->ram_cycles_lo == 0) {
- ret = adreno_perfcounter_get(adreno_dev,
- KGSL_PERFCOUNTER_GROUP_VBIF,
- VBIF_AXI_TOTAL_BEATS,
- &adreno_dev->ram_cycles_lo, NULL,
- PERFCOUNTER_FLAG_KERNEL);
+ if (adreno_dev->ram_cycles_lo == 0) {
+ ret = adreno_perfcounter_get(adreno_dev,
+ KGSL_PERFCOUNTER_GROUP_VBIF,
+ GBIF_AXI0_READ_DATA_TOTAL_BEATS,
+ &adreno_dev->ram_cycles_lo, NULL,
+ PERFCOUNTER_FLAG_KERNEL);
- if (ret) {
- KGSL_DRV_ERR(device,
- "Unable to get perf counters for bus DCVS\n");
- adreno_dev->ram_cycles_lo = 0;
+ if (ret) {
+ KGSL_DRV_ERR(device,
+ "Unable to get perf counters for bus DCVS\n");
+ adreno_dev->ram_cycles_lo = 0;
+ }
+ }
+
+ if (adreno_dev->ram_cycles_lo_ch1_read == 0) {
+ ret = adreno_perfcounter_get(adreno_dev,
+ KGSL_PERFCOUNTER_GROUP_VBIF,
+ GBIF_AXI1_READ_DATA_TOTAL_BEATS,
+ &adreno_dev->ram_cycles_lo_ch1_read,
+ NULL,
+ PERFCOUNTER_FLAG_KERNEL);
+
+ if (ret) {
+ KGSL_DRV_ERR(device,
+ "Unable to get perf counters for bus DCVS\n");
+ adreno_dev->ram_cycles_lo_ch1_read = 0;
+ }
+ }
+
+ if (adreno_dev->ram_cycles_lo_ch0_write == 0) {
+ ret = adreno_perfcounter_get(adreno_dev,
+ KGSL_PERFCOUNTER_GROUP_VBIF,
+ GBIF_AXI0_WRITE_DATA_TOTAL_BEATS,
+ &adreno_dev->ram_cycles_lo_ch0_write,
+ NULL,
+ PERFCOUNTER_FLAG_KERNEL);
+
+ if (ret) {
+ KGSL_DRV_ERR(device,
+ "Unable to get perf counters for bus DCVS\n");
+ adreno_dev->ram_cycles_lo_ch0_write = 0;
+ }
+ }
+
+ if (adreno_dev->ram_cycles_lo_ch1_write == 0) {
+ ret = adreno_perfcounter_get(adreno_dev,
+ KGSL_PERFCOUNTER_GROUP_VBIF,
+ GBIF_AXI1_WRITE_DATA_TOTAL_BEATS,
+ &adreno_dev->ram_cycles_lo_ch1_write,
+ NULL,
+ PERFCOUNTER_FLAG_KERNEL);
+
+ if (ret) {
+ KGSL_DRV_ERR(device,
+ "Unable to get perf counters for bus DCVS\n");
+ adreno_dev->ram_cycles_lo_ch1_write = 0;
+ }
+ }
+ } else {
+ /* VBIF DDR cycles */
+ if (adreno_dev->ram_cycles_lo == 0) {
+ ret = adreno_perfcounter_get(adreno_dev,
+ KGSL_PERFCOUNTER_GROUP_VBIF,
+ VBIF_AXI_TOTAL_BEATS,
+ &adreno_dev->ram_cycles_lo, NULL,
+ PERFCOUNTER_FLAG_KERNEL);
+
+ if (ret) {
+ KGSL_DRV_ERR(device,
+ "Unable to get perf counters for bus DCVS\n");
+ adreno_dev->ram_cycles_lo = 0;
+ }
}
}
}
/* Clear the busy_data stats - we're starting over from scratch */
adreno_dev->busy_data.gpu_busy = 0;
- adreno_dev->busy_data.vbif_ram_cycles = 0;
- adreno_dev->busy_data.vbif_starved_ram = 0;
- adreno_dev->busy_data.vbif_starved_ram_ch1 = 0;
+ adreno_dev->busy_data.bif_ram_cycles = 0;
+ adreno_dev->busy_data.bif_ram_cycles_read_ch1 = 0;
+ adreno_dev->busy_data.bif_ram_cycles_write_ch0 = 0;
+ adreno_dev->busy_data.bif_ram_cycles_write_ch1 = 0;
+ adreno_dev->busy_data.bif_starved_ram = 0;
+ adreno_dev->busy_data.bif_starved_ram_ch1 = 0;
/* Restore performance counter registers with saved values */
adreno_perfcounter_restore(adreno_dev);
@@ -2495,9 +2589,12 @@ int adreno_soft_reset(struct kgsl_device *device)
/* Clear the busy_data stats - we're starting over from scratch */
adreno_dev->busy_data.gpu_busy = 0;
- adreno_dev->busy_data.vbif_ram_cycles = 0;
- adreno_dev->busy_data.vbif_starved_ram = 0;
- adreno_dev->busy_data.vbif_starved_ram_ch1 = 0;
+ adreno_dev->busy_data.bif_ram_cycles = 0;
+ adreno_dev->busy_data.bif_ram_cycles_read_ch1 = 0;
+ adreno_dev->busy_data.bif_ram_cycles_write_ch0 = 0;
+ adreno_dev->busy_data.bif_ram_cycles_write_ch1 = 0;
+ adreno_dev->busy_data.bif_starved_ram = 0;
+ adreno_dev->busy_data.bif_starved_ram_ch1 = 0;
/* Set the page table back to the default page table */
adreno_ringbuffer_set_global(adreno_dev, 0);
@@ -3078,18 +3175,35 @@ static void adreno_power_stats(struct kgsl_device *device,
if (adreno_dev->ram_cycles_lo != 0)
ram_cycles = counter_delta(device,
adreno_dev->ram_cycles_lo,
- &busy->vbif_ram_cycles);
+ &busy->bif_ram_cycles);
+
+ if (adreno_has_gbif(adreno_dev)) {
+ if (adreno_dev->ram_cycles_lo_ch1_read != 0)
+ ram_cycles += counter_delta(device,
+ adreno_dev->ram_cycles_lo_ch1_read,
+ &busy->bif_ram_cycles_read_ch1);
+
+ if (adreno_dev->ram_cycles_lo_ch0_write != 0)
+ ram_cycles += counter_delta(device,
+ adreno_dev->ram_cycles_lo_ch0_write,
+ &busy->bif_ram_cycles_write_ch0);
+
+ if (adreno_dev->ram_cycles_lo_ch1_write != 0)
+ ram_cycles += counter_delta(device,
+ adreno_dev->ram_cycles_lo_ch1_write,
+ &busy->bif_ram_cycles_write_ch1);
+ }
if (adreno_dev->starved_ram_lo != 0)
starved_ram = counter_delta(device,
adreno_dev->starved_ram_lo,
- &busy->vbif_starved_ram);
+ &busy->bif_starved_ram);
if (adreno_has_gbif(adreno_dev)) {
if (adreno_dev->starved_ram_lo_ch1 != 0)
starved_ram += counter_delta(device,
adreno_dev->starved_ram_lo_ch1,
- &busy->vbif_starved_ram_ch1);
+ &busy->bif_starved_ram_ch1);
}
stats->ram_time = ram_cycles;
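The power-stats hunk above accumulates DDR beats per channel via counter_delta(), which must cope with free-running 32-bit hardware counters wrapping between samples. A sketch of that pattern — counter_delta32() is a hypothetical stand-in for the driver's helper, which additionally reads the register itself:

```c
#include <stdint.h>

/* Delta since the last sample of a free-running 32-bit counter.
 * Unsigned subtraction yields the correct delta across a single
 * wraparound; *prev caches the reading (e.g. busy->bif_ram_cycles). */
static uint32_t counter_delta32(uint32_t now, uint32_t *prev)
{
	uint32_t delta = now - *prev;

	*prev = now;
	return delta;
}
```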
diff --git a/drivers/gpu/msm/adreno.h b/drivers/gpu/msm/adreno.h
index c3a9868..b77f6e1 100644
--- a/drivers/gpu/msm/adreno.h
+++ b/drivers/gpu/msm/adreno.h
@@ -121,6 +121,8 @@
#define ADRENO_HW_NAP BIT(14)
/* The GMU supports min voltage*/
#define ADRENO_MIN_VOLT BIT(15)
+/* The core supports IO-coherent memory */
+#define ADRENO_IOCOHERENT BIT(16)
/*
* Adreno GPU quirks - control bits for various workarounds
@@ -167,6 +169,9 @@
/* Number of times to poll the AHB fence in ISR */
#define FENCE_RETRY_MAX 100
+/* Number of times to poll INT_0_STATUS for a change */
+#define STATUS_RETRY_MAX 3
+
/* One cannot wait forever for the core to idle, so set an upper limit to the
* amount of time to wait for the core to go idle
*/
@@ -267,6 +272,7 @@ enum adreno_preempt_states {
* preempt_level: The level of preemption (for 6XX)
* skipsaverestore: To skip saverestore during L1 preemption (for 6XX)
* usesgmem: enable GMEM save/restore across preemption (for 6XX)
+ * count: Track the number of preemptions triggered
*/
struct adreno_preemption {
atomic_t state;
@@ -277,14 +283,18 @@ struct adreno_preemption {
unsigned int preempt_level;
bool skipsaverestore;
bool usesgmem;
+ unsigned int count;
};
struct adreno_busy_data {
unsigned int gpu_busy;
- unsigned int vbif_ram_cycles;
- unsigned int vbif_starved_ram;
- unsigned int vbif_starved_ram_ch1;
+ unsigned int bif_ram_cycles;
+ unsigned int bif_ram_cycles_read_ch1;
+ unsigned int bif_ram_cycles_write_ch0;
+ unsigned int bif_ram_cycles_write_ch1;
+ unsigned int bif_starved_ram;
+ unsigned int bif_starved_ram_ch1;
unsigned int throttle_cycles[ADRENO_GPMU_THROTTLE_COUNTERS];
};
@@ -367,6 +377,13 @@ struct adreno_gpu_core {
unsigned int max_power;
};
+
+enum gpu_coresight_sources {
+ GPU_CORESIGHT_GX = 0,
+ GPU_CORESIGHT_CX = 1,
+ GPU_CORESIGHT_MAX,
+};
+
/**
* struct adreno_device - The mothership structure for all adreno related info
* @dev: Reference to struct kgsl_device
@@ -401,7 +418,14 @@ struct adreno_gpu_core {
* @pwron_fixup_dwords: Number of dwords in the command buffer
* @input_work: Work struct for turning on the GPU after a touch event
* @busy_data: Struct holding GPU VBIF busy stats
- * @ram_cycles_lo: Number of DDR clock cycles for the monitor session
+ * @ram_cycles_lo: Number of DDR clock cycles for the monitor session (Only
+ * DDR channel 0 read cycles in case of GBIF)
+ * @ram_cycles_lo_ch1_read: Number of DDR channel 1 Read clock cycles for
+ * the monitor session
+ * @ram_cycles_lo_ch0_write: Number of DDR channel 0 Write clock cycles for
+ * the monitor session
+ * @ram_cycles_lo_ch1_write: Number of DDR channel 1 Write clock cycles for
+ * the monitor session
* @starved_ram_lo: Number of cycles VBIF/GBIF is stalled by DDR (Only channel 0
* stall cycles in case of GBIF)
* @starved_ram_lo_ch1: Number of cycles GBIF is stalled by DDR channel 1
@@ -465,6 +489,9 @@ struct adreno_device {
struct work_struct input_work;
struct adreno_busy_data busy_data;
unsigned int ram_cycles_lo;
+ unsigned int ram_cycles_lo_ch1_read;
+ unsigned int ram_cycles_lo_ch0_write;
+ unsigned int ram_cycles_lo_ch1_write;
unsigned int starved_ram_lo;
unsigned int starved_ram_lo_ch1;
unsigned int perfctr_pwr_lo;
@@ -491,7 +518,7 @@ struct adreno_device {
unsigned int speed_bin;
unsigned int quirks;
- struct coresight_device *csdev;
+ struct coresight_device *csdev[GPU_CORESIGHT_MAX];
uint32_t gpmu_throttle_counters[ADRENO_GPMU_THROTTLE_COUNTERS];
struct work_struct irq_storm_work;
@@ -545,6 +572,7 @@ enum adreno_device_flags {
ADRENO_DEVICE_CACHE_FLUSH_TS_SUSPENDED = 13,
ADRENO_DEVICE_HARD_RESET = 14,
ADRENO_DEVICE_PREEMPTION_EXECUTION = 15,
+ ADRENO_DEVICE_CORESIGHT_CX = 16,
};
/**
@@ -612,6 +640,12 @@ enum adreno_regs {
ADRENO_REG_CP_PROTECT_REG_0,
ADRENO_REG_CP_CONTEXT_SWITCH_SMMU_INFO_LO,
ADRENO_REG_CP_CONTEXT_SWITCH_SMMU_INFO_HI,
+ ADRENO_REG_CP_CONTEXT_SWITCH_PRIV_NON_SECURE_RESTORE_ADDR_LO,
+ ADRENO_REG_CP_CONTEXT_SWITCH_PRIV_NON_SECURE_RESTORE_ADDR_HI,
+ ADRENO_REG_CP_CONTEXT_SWITCH_PRIV_SECURE_RESTORE_ADDR_LO,
+ ADRENO_REG_CP_CONTEXT_SWITCH_PRIV_SECURE_RESTORE_ADDR_HI,
+ ADRENO_REG_CP_CONTEXT_SWITCH_NON_PRIV_RESTORE_ADDR_LO,
+ ADRENO_REG_CP_CONTEXT_SWITCH_NON_PRIV_RESTORE_ADDR_HI,
ADRENO_REG_RBBM_STATUS,
ADRENO_REG_RBBM_STATUS3,
ADRENO_REG_RBBM_PERFCTR_CTL,
@@ -831,6 +865,13 @@ struct adreno_snapshot_data {
struct adreno_snapshot_sizes *sect_sizes;
};
+enum adreno_cp_marker_type {
+ IFPC_DISABLE,
+ IFPC_ENABLE,
+ IB1LIST_START,
+ IB1LIST_END,
+};
+
struct adreno_gpudev {
/*
* These registers are in a different location on different devices,
@@ -845,7 +886,7 @@ struct adreno_gpudev {
const struct adreno_invalid_countables *invalid_countables;
struct adreno_snapshot_data *snapshot_data;
- struct adreno_coresight *coresight;
+ struct adreno_coresight *coresight[GPU_CORESIGHT_MAX];
struct adreno_irq *irq;
int num_prio_levels;
@@ -878,7 +919,8 @@ struct adreno_gpudev {
unsigned int *cmds,
struct kgsl_context *context);
int (*preemption_yield_enable)(unsigned int *);
- unsigned int (*set_marker)(unsigned int *cmds, int start);
+ unsigned int (*set_marker)(unsigned int *cmds,
+ enum adreno_cp_marker_type type);
unsigned int (*preemption_post_ibsubmit)(
struct adreno_device *adreno_dev,
unsigned int *cmds);
@@ -913,6 +955,9 @@ struct adreno_gpudev {
bool (*sptprac_is_on)(struct adreno_device *);
unsigned int (*ccu_invalidate)(struct adreno_device *adreno_dev,
unsigned int *cmds);
+ int (*perfcounter_update)(struct adreno_device *adreno_dev,
+ struct adreno_perfcount_register *reg,
+ bool update_reg);
};
/**
@@ -1898,4 +1943,7 @@ static inline int adreno_vbif_clear_pending_transactions(
return ret;
}
+void adreno_gmu_fenced_write(struct adreno_device *adreno_dev,
+ enum adreno_regs offset, unsigned int val,
+ unsigned int fence_mask);
#endif /*__ADRENO_H */
diff --git a/drivers/gpu/msm/adreno_a3xx.c b/drivers/gpu/msm/adreno_a3xx.c
index b2cdf56..e5c8222 100644
--- a/drivers/gpu/msm/adreno_a3xx.c
+++ b/drivers/gpu/msm/adreno_a3xx.c
@@ -1923,5 +1923,5 @@ struct adreno_gpudev adreno_a3xx_gpudev = {
.perfcounter_close = a3xx_perfcounter_close,
.start = a3xx_start,
.snapshot = a3xx_snapshot,
- .coresight = &a3xx_coresight,
+ .coresight = {&a3xx_coresight},
};
diff --git a/drivers/gpu/msm/adreno_a4xx.c b/drivers/gpu/msm/adreno_a4xx.c
index 80ceabd..771d035 100644
--- a/drivers/gpu/msm/adreno_a4xx.c
+++ b/drivers/gpu/msm/adreno_a4xx.c
@@ -1790,7 +1790,7 @@ struct adreno_gpudev adreno_a4xx_gpudev = {
.rb_start = a4xx_rb_start,
.init = a4xx_init,
.microcode_read = a3xx_microcode_read,
- .coresight = &a4xx_coresight,
+ .coresight = {&a4xx_coresight},
.start = a4xx_start,
.snapshot = a4xx_snapshot,
.is_sptp_idle = a4xx_is_sptp_idle,
diff --git a/drivers/gpu/msm/adreno_a5xx.c b/drivers/gpu/msm/adreno_a5xx.c
index f3e8650..baf366e 100644
--- a/drivers/gpu/msm/adreno_a5xx.c
+++ b/drivers/gpu/msm/adreno_a5xx.c
@@ -193,6 +193,8 @@ static void a5xx_critical_packet_destroy(struct adreno_device *adreno_dev)
kgsl_free_global(&adreno_dev->dev, &crit_pkts_refbuf2);
kgsl_free_global(&adreno_dev->dev, &crit_pkts_refbuf3);
+ kgsl_iommu_unmap_global_secure_pt_entry(KGSL_DEVICE(adreno_dev),
+ &crit_pkts_refbuf0);
kgsl_sharedmem_free(&crit_pkts_refbuf0);
}
@@ -231,8 +233,10 @@ static int a5xx_critical_packet_construct(struct adreno_device *adreno_dev)
if (ret)
return ret;
- kgsl_add_global_secure_entry(&adreno_dev->dev,
+ ret = kgsl_iommu_map_global_secure_pt_entry(&adreno_dev->dev,
&crit_pkts_refbuf0);
+ if (ret)
+ return ret;
ret = kgsl_allocate_global(&adreno_dev->dev,
&crit_pkts_refbuf1,
@@ -293,8 +297,13 @@ static void a5xx_init(struct adreno_device *adreno_dev)
INIT_WORK(&adreno_dev->irq_storm_work, a5xx_irq_storm_worker);
- if (ADRENO_QUIRK(adreno_dev, ADRENO_QUIRK_CRITICAL_PACKETS))
- a5xx_critical_packet_construct(adreno_dev);
+ if (ADRENO_QUIRK(adreno_dev, ADRENO_QUIRK_CRITICAL_PACKETS)) {
+ int ret;
+
+ ret = a5xx_critical_packet_construct(adreno_dev);
+ if (ret)
+ a5xx_critical_packet_destroy(adreno_dev);
+ }
a5xx_crashdump_init(adreno_dev);
}
@@ -3588,7 +3597,7 @@ struct adreno_gpudev adreno_a5xx_gpudev = {
.int_bits = a5xx_int_bits,
.ft_perf_counters = a5xx_ft_perf_counters,
.ft_perf_counters_count = ARRAY_SIZE(a5xx_ft_perf_counters),
- .coresight = &a5xx_coresight,
+ .coresight = {&a5xx_coresight},
.start = a5xx_start,
.snapshot = a5xx_snapshot,
.irq = &a5xx_irq,
diff --git a/drivers/gpu/msm/adreno_a5xx_snapshot.c b/drivers/gpu/msm/adreno_a5xx_snapshot.c
index 6dc62866..d1a6005 100644
--- a/drivers/gpu/msm/adreno_a5xx_snapshot.c
+++ b/drivers/gpu/msm/adreno_a5xx_snapshot.c
@@ -621,7 +621,8 @@ static size_t a5xx_snapshot_shader_memory(struct kgsl_device *device,
header->index = info->bank;
header->size = block->sz;
- memcpy(data, registers.hostptr + info->offset, block->sz);
+ memcpy(data, registers.hostptr + info->offset,
+ block->sz * sizeof(unsigned int));
return SHADER_SECTION_SZ(block->sz);
}
diff --git a/drivers/gpu/msm/adreno_a6xx.c b/drivers/gpu/msm/adreno_a6xx.c
index fa6762a..83dd3fb 100644
--- a/drivers/gpu/msm/adreno_a6xx.c
+++ b/drivers/gpu/msm/adreno_a6xx.c
@@ -13,6 +13,7 @@
#include <linux/firmware.h>
#include <soc/qcom/subsystem_restart.h>
#include <linux/pm_opp.h>
+#include <linux/jiffies.h>
#include "adreno.h"
#include "a6xx_reg.h"
@@ -52,6 +53,7 @@ static const struct adreno_vbif_data a630_vbif[] = {
static const struct adreno_vbif_data a615_gbif[] = {
{A6XX_RBBM_VBIF_CLIENT_QOS_CNTL, 0x3},
+ {A6XX_UCHE_GBIF_GX_CONFIG, 0x10200F9},
{0, 0},
};
@@ -173,12 +175,12 @@ static const struct kgsl_hwcg_reg a630_hwcg_regs[] = {
};
static const struct kgsl_hwcg_reg a615_hwcg_regs[] = {
- {A6XX_RBBM_CLOCK_CNTL_SP0, 0x22222222},
+ {A6XX_RBBM_CLOCK_CNTL_SP0, 0x02222222},
{A6XX_RBBM_CLOCK_CNTL2_SP0, 0x02222220},
- {A6XX_RBBM_CLOCK_DELAY_SP0, 0x00000081},
+ {A6XX_RBBM_CLOCK_DELAY_SP0, 0x00000080},
{A6XX_RBBM_CLOCK_HYST_SP0, 0x0000F3CF},
- {A6XX_RBBM_CLOCK_CNTL_TP0, 0x22222222},
- {A6XX_RBBM_CLOCK_CNTL_TP1, 0x22222222},
+ {A6XX_RBBM_CLOCK_CNTL_TP0, 0x02222222},
+ {A6XX_RBBM_CLOCK_CNTL_TP1, 0x02222222},
{A6XX_RBBM_CLOCK_CNTL2_TP0, 0x22222222},
{A6XX_RBBM_CLOCK_CNTL2_TP1, 0x22222222},
{A6XX_RBBM_CLOCK_CNTL3_TP0, 0x22222222},
@@ -222,7 +224,7 @@ static const struct kgsl_hwcg_reg a615_hwcg_regs[] = {
{A6XX_RBBM_CLOCK_DELAY_RAC, 0x00000011},
{A6XX_RBBM_CLOCK_HYST_RAC, 0x00445044},
{A6XX_RBBM_CLOCK_CNTL_TSE_RAS_RBBM, 0x04222222},
- {A6XX_RBBM_CLOCK_MODE_GPC, 0x02222222},
+ {A6XX_RBBM_CLOCK_MODE_GPC, 0x00222222},
{A6XX_RBBM_CLOCK_MODE_VFD, 0x00002222},
{A6XX_RBBM_CLOCK_HYST_TSE_RAS_RBBM, 0x00000000},
{A6XX_RBBM_CLOCK_HYST_GPC, 0x04104004},
@@ -266,7 +268,8 @@ static struct a6xx_protected_regs {
{ 0x0, 0x4F9, 0 },
{ 0x501, 0xA, 0 },
{ 0x511, 0x44, 0 },
- { 0xE00, 0xE, 1 },
+ { 0xE00, 0x1, 1 },
+ { 0xE03, 0xB, 1 },
{ 0x8E00, 0x0, 1 },
{ 0x8E50, 0xF, 1 },
{ 0xBE02, 0x0, 1 },
@@ -281,6 +284,7 @@ static struct a6xx_protected_regs {
{ 0xA630, 0x0, 1 },
};
+/* IFPC & Preemption static powerup restore list */
static struct reg_list_pair {
uint32_t offset;
uint32_t val;
@@ -315,6 +319,48 @@ static struct reg_list_pair {
{ A6XX_RB_CONTEXT_SWITCH_GMEM_SAVE_RESTORE, 0x0 },
};
+/* IFPC only static powerup restore list */
+static struct reg_list_pair a6xx_ifpc_pwrup_reglist[] = {
+ { A6XX_RBBM_VBIF_CLIENT_QOS_CNTL, 0x0 },
+ { A6XX_CP_CHICKEN_DBG, 0x0 },
+ { A6XX_CP_ADDR_MODE_CNTL, 0x0 },
+ { A6XX_CP_DBG_ECO_CNTL, 0x0 },
+ { A6XX_CP_PROTECT_CNTL, 0x0 },
+ { A6XX_CP_PROTECT_REG, 0x0 },
+ { A6XX_CP_PROTECT_REG+1, 0x0 },
+ { A6XX_CP_PROTECT_REG+2, 0x0 },
+ { A6XX_CP_PROTECT_REG+3, 0x0 },
+ { A6XX_CP_PROTECT_REG+4, 0x0 },
+ { A6XX_CP_PROTECT_REG+5, 0x0 },
+ { A6XX_CP_PROTECT_REG+6, 0x0 },
+ { A6XX_CP_PROTECT_REG+7, 0x0 },
+ { A6XX_CP_PROTECT_REG+8, 0x0 },
+ { A6XX_CP_PROTECT_REG+9, 0x0 },
+ { A6XX_CP_PROTECT_REG+10, 0x0 },
+ { A6XX_CP_PROTECT_REG+11, 0x0 },
+ { A6XX_CP_PROTECT_REG+12, 0x0 },
+ { A6XX_CP_PROTECT_REG+13, 0x0 },
+ { A6XX_CP_PROTECT_REG+14, 0x0 },
+ { A6XX_CP_PROTECT_REG+15, 0x0 },
+ { A6XX_CP_PROTECT_REG+16, 0x0 },
+ { A6XX_CP_PROTECT_REG+17, 0x0 },
+ { A6XX_CP_PROTECT_REG+18, 0x0 },
+ { A6XX_CP_PROTECT_REG+19, 0x0 },
+ { A6XX_CP_PROTECT_REG+20, 0x0 },
+ { A6XX_CP_PROTECT_REG+21, 0x0 },
+ { A6XX_CP_PROTECT_REG+22, 0x0 },
+ { A6XX_CP_PROTECT_REG+23, 0x0 },
+ { A6XX_CP_PROTECT_REG+24, 0x0 },
+ { A6XX_CP_PROTECT_REG+25, 0x0 },
+ { A6XX_CP_PROTECT_REG+26, 0x0 },
+ { A6XX_CP_PROTECT_REG+27, 0x0 },
+ { A6XX_CP_PROTECT_REG+28, 0x0 },
+ { A6XX_CP_PROTECT_REG+29, 0x0 },
+ { A6XX_CP_PROTECT_REG+30, 0x0 },
+ { A6XX_CP_PROTECT_REG+31, 0x0 },
+ { A6XX_CP_AHB_CNTL, 0x0 },
+};
+
static void _update_always_on_regs(struct adreno_device *adreno_dev)
{
struct adreno_gpudev *gpudev = ADRENO_GPU_DEVICE(adreno_dev);
@@ -331,7 +377,7 @@ static void a6xx_pwrup_reglist_init(struct adreno_device *adreno_dev)
struct kgsl_device *device = KGSL_DEVICE(adreno_dev);
if (kgsl_allocate_global(device, &adreno_dev->pwrup_reglist,
- PAGE_SIZE, KGSL_MEMFLAGS_GPUREADONLY, 0,
+ PAGE_SIZE, 0, KGSL_MEMDESC_CONTIG | KGSL_MEMDESC_PRIVILEGED,
"powerup_register_list")) {
adreno_dev->pwrup_reglist.gpuaddr = 0;
return;
@@ -428,7 +474,41 @@ static void a6xx_enable_64bit(struct adreno_device *adreno_dev)
kgsl_regwrite(device, A6XX_RBBM_SECVID_TSB_ADDR_MODE_CNTL, 0x1);
}
-#define RBBM_CLOCK_CNTL_ON 0x8AA8AA02
+static inline unsigned int
+__get_rbbm_clock_cntl_on(struct adreno_device *adreno_dev)
+{
+ if (adreno_is_a615(adreno_dev))
+ return 0x8AA8AA82;
+ else
+ return 0x8AA8AA02;
+}
+
+static inline unsigned int
+__get_gmu_ao_cgc_mode_cntl(struct adreno_device *adreno_dev)
+{
+ if (adreno_is_a615(adreno_dev))
+ return 0x00000222;
+ else
+ return 0x00020222;
+}
+
+static inline unsigned int
+__get_gmu_ao_cgc_delay_cntl(struct adreno_device *adreno_dev)
+{
+ if (adreno_is_a615(adreno_dev))
+ return 0x00000111;
+ else
+ return 0x00010111;
+}
+
+static inline unsigned int
+__get_gmu_ao_cgc_hyst_cntl(struct adreno_device *adreno_dev)
+{
+ if (adreno_is_a615(adreno_dev))
+ return 0x00000555;
+ else
+ return 0x00005555;
+}
static void a6xx_hwcg_set(struct adreno_device *adreno_dev, bool on)
{
@@ -442,16 +522,16 @@ static void a6xx_hwcg_set(struct adreno_device *adreno_dev, bool on)
if (kgsl_gmu_isenabled(device)) {
kgsl_gmu_regwrite(device, A6XX_GPU_GMU_AO_GMU_CGC_MODE_CNTL,
- on ? 0x00020222 : 0);
+ on ? __get_gmu_ao_cgc_mode_cntl(adreno_dev) : 0);
kgsl_gmu_regwrite(device, A6XX_GPU_GMU_AO_GMU_CGC_DELAY_CNTL,
- on ? 0x00010111 : 0);
+ on ? __get_gmu_ao_cgc_delay_cntl(adreno_dev) : 0);
kgsl_gmu_regwrite(device, A6XX_GPU_GMU_AO_GMU_CGC_HYST_CNTL,
- on ? 0x00050555 : 0);
+ on ? __get_gmu_ao_cgc_hyst_cntl(adreno_dev) : 0);
}
kgsl_regread(device, A6XX_RBBM_CLOCK_CNTL, &value);
- if (value == RBBM_CLOCK_CNTL_ON && on)
+ if (value == __get_rbbm_clock_cntl_on(adreno_dev) && on)
return;
if (value == 0 && !on)
@@ -478,7 +558,7 @@ static void a6xx_hwcg_set(struct adreno_device *adreno_dev, bool on)
/* enable top level HWCG */
kgsl_regwrite(device, A6XX_RBBM_CLOCK_CNTL,
- on ? RBBM_CLOCK_CNTL_ON : 0);
+ on ? __get_rbbm_clock_cntl_on(adreno_dev) : 0);
}
#define LM_DEFAULT_LIMIT 6000
@@ -500,17 +580,46 @@ static uint32_t lm_limit(struct adreno_device *adreno_dev)
static void a6xx_patch_pwrup_reglist(struct adreno_device *adreno_dev)
{
uint32_t i;
+ struct cpu_gpu_lock *lock;
+ struct reg_list_pair *r;
/* Set up the register values */
- for (i = 0; i < ARRAY_SIZE(a6xx_pwrup_reglist); i++) {
- struct reg_list_pair *r = &a6xx_pwrup_reglist[i];
-
+ for (i = 0; i < ARRAY_SIZE(a6xx_ifpc_pwrup_reglist); i++) {
+ r = &a6xx_ifpc_pwrup_reglist[i];
kgsl_regread(KGSL_DEVICE(adreno_dev), r->offset, &r->val);
}
- /* Copy Preemption register/data pairs */
- memcpy(adreno_dev->pwrup_reglist.hostptr, &a6xx_pwrup_reglist,
- sizeof(a6xx_pwrup_reglist));
+ for (i = 0; i < ARRAY_SIZE(a6xx_pwrup_reglist); i++) {
+ r = &a6xx_pwrup_reglist[i];
+ kgsl_regread(KGSL_DEVICE(adreno_dev), r->offset, &r->val);
+ }
+
+ lock = (struct cpu_gpu_lock *) adreno_dev->pwrup_reglist.hostptr;
+ lock->flag_ucode = 0;
+ lock->flag_kmd = 0;
+ lock->turn = 0;
+
+ /*
+ * The overall register list is composed of
+ * 1. Static IFPC-only registers
+ * 2. Static IFPC + preemption registers
+ * 3. Dynamic IFPC + preemption registers (e.g. perfcounter selects)
+ *
+ * The CP views the second and third entries as one dynamic list
+ * starting from list_offset. Thus, list_length should be the sum
+ * of all three lists above (of which the third list will start off
+ * empty). And list_offset should be specified as the size in dwords
+ * of the static IFPC-only register list.
+ */
+ lock->list_length = (sizeof(a6xx_ifpc_pwrup_reglist) +
+ sizeof(a6xx_pwrup_reglist)) >> 2;
+ lock->list_offset = sizeof(a6xx_ifpc_pwrup_reglist) >> 2;
+
+ memcpy(adreno_dev->pwrup_reglist.hostptr + sizeof(*lock),
+ a6xx_ifpc_pwrup_reglist, sizeof(a6xx_ifpc_pwrup_reglist));
+ memcpy(adreno_dev->pwrup_reglist.hostptr + sizeof(*lock)
+ + sizeof(a6xx_ifpc_pwrup_reglist),
+ a6xx_pwrup_reglist, sizeof(a6xx_pwrup_reglist));
}
/*
@@ -717,13 +826,16 @@ static int a6xx_microcode_load(struct adreno_device *adreno_dev)
/* Register initialization list */
#define CP_INIT_REGISTER_INIT_LIST BIT(7)
+/* Register initialization list with spinlock */
+#define CP_INIT_REGISTER_INIT_LIST_WITH_SPINLOCK BIT(8)
+
#define CP_INIT_MASK (CP_INIT_MAX_CONTEXT | \
CP_INIT_ERROR_DETECTION_CONTROL | \
CP_INIT_HEADER_DUMP | \
CP_INIT_DEFAULT_RESET_STATE | \
CP_INIT_UCODE_WORKAROUND_MASK | \
CP_INIT_OPERATION_MODE_MASK | \
- CP_INIT_REGISTER_INIT_LIST)
+ CP_INIT_REGISTER_INIT_LIST_WITH_SPINLOCK)
static void _set_ordinals(struct adreno_device *adreno_dev,
unsigned int *cmds, unsigned int count)
@@ -759,13 +871,21 @@ static void _set_ordinals(struct adreno_device *adreno_dev,
if (CP_INIT_MASK & CP_INIT_OPERATION_MODE_MASK)
*cmds++ = 0x00000002;
- if (CP_INIT_MASK & CP_INIT_REGISTER_INIT_LIST) {
+ if (CP_INIT_MASK & CP_INIT_REGISTER_INIT_LIST_WITH_SPINLOCK) {
+ uint64_t gpuaddr = adreno_dev->pwrup_reglist.gpuaddr;
+
+ *cmds++ = lower_32_bits(gpuaddr);
+ *cmds++ = upper_32_bits(gpuaddr);
+ *cmds++ = 0;
+
+ } else if (CP_INIT_MASK & CP_INIT_REGISTER_INIT_LIST) {
uint64_t gpuaddr = adreno_dev->pwrup_reglist.gpuaddr;
*cmds++ = lower_32_bits(gpuaddr);
*cmds++ = upper_32_bits(gpuaddr);
/* Size is in dwords */
- *cmds++ = sizeof(a6xx_pwrup_reglist) >> 2;
+ *cmds++ = (sizeof(a6xx_ifpc_pwrup_reglist) +
+ sizeof(a6xx_pwrup_reglist)) >> 2;
}
/* Pad rest of the cmds with 0's */
@@ -822,7 +942,8 @@ static int _preemption_init(struct adreno_device *adreno_dev,
rb->preemption_desc.gpuaddr);
*cmds++ = 2;
- cmds += cp_gpuaddr(adreno_dev, cmds, 0);
+ cmds += cp_gpuaddr(adreno_dev, cmds,
+ rb->secure_preemption_desc.gpuaddr);
/* Turn CP protection ON */
*cmds++ = cp_type7_packet(CP_SET_PROTECTED_MODE, 1);
@@ -913,6 +1034,38 @@ static int a6xx_rb_start(struct adreno_device *adreno_dev,
return a6xx_post_start(adreno_dev);
}
+unsigned int a6xx_set_marker(
+ unsigned int *cmds, enum adreno_cp_marker_type type)
+{
+ unsigned int cmd = 0;
+
+ *cmds++ = cp_type7_packet(CP_SET_MARKER, 1);
+
+ /*
+ * Indicate the beginning and end of the IB1 list with a SET_MARKER.
+ * Among other things, this will implicitly enable and disable
+ * preemption respectively. IFPC can also be disabled and enabled
+ * with a SET_MARKER. Bit 8 tells the CP the marker is for IFPC.
+ */
+ switch (type) {
+ case IFPC_DISABLE:
+ cmd = 0x101;
+ break;
+ case IFPC_ENABLE:
+ cmd = 0x100;
+ break;
+ case IB1LIST_START:
+ cmd = 0xD;
+ break;
+ case IB1LIST_END:
+ cmd = 0xE;
+ break;
+ }
+
+ *cmds++ = cmd;
+ return 2;
+}
+
static int _load_firmware(struct kgsl_device *device, const char *fwfile,
struct adreno_firmware *firmware)
{
@@ -1439,8 +1592,8 @@ static int a6xx_notify_slumber(struct kgsl_device *device)
kgsl_gmu_regwrite(device, A6XX_GMU_BOOT_SLUMBER_OPTION,
OOB_SLUMBER_OPTION);
- kgsl_gmu_regwrite(device, A6XX_GMU_GX_VOTE_IDX, bus_level);
- kgsl_gmu_regwrite(device, A6XX_GMU_MX_VOTE_IDX, perf_idx);
+ kgsl_gmu_regwrite(device, A6XX_GMU_GX_VOTE_IDX, perf_idx);
+ kgsl_gmu_regwrite(device, A6XX_GMU_MX_VOTE_IDX, bus_level);
ret = a6xx_oob_set(adreno_dev, OOB_BOOT_SLUMBER_SET_MASK,
OOB_BOOT_SLUMBER_CHECK_MASK,
@@ -2509,6 +2662,420 @@ static struct adreno_snapshot_data a6xx_snapshot_data = {
.sect_sizes = &a6xx_snap_sizes,
};
+static struct adreno_coresight_register a6xx_coresight_regs[] = {
+ { A6XX_DBGC_CFG_DBGBUS_SEL_A },
+ { A6XX_DBGC_CFG_DBGBUS_SEL_B },
+ { A6XX_DBGC_CFG_DBGBUS_SEL_C },
+ { A6XX_DBGC_CFG_DBGBUS_SEL_D },
+ { A6XX_DBGC_CFG_DBGBUS_CNTLT },
+ { A6XX_DBGC_CFG_DBGBUS_CNTLM },
+ { A6XX_DBGC_CFG_DBGBUS_OPL },
+ { A6XX_DBGC_CFG_DBGBUS_OPE },
+ { A6XX_DBGC_CFG_DBGBUS_IVTL_0 },
+ { A6XX_DBGC_CFG_DBGBUS_IVTL_1 },
+ { A6XX_DBGC_CFG_DBGBUS_IVTL_2 },
+ { A6XX_DBGC_CFG_DBGBUS_IVTL_3 },
+ { A6XX_DBGC_CFG_DBGBUS_MASKL_0 },
+ { A6XX_DBGC_CFG_DBGBUS_MASKL_1 },
+ { A6XX_DBGC_CFG_DBGBUS_MASKL_2 },
+ { A6XX_DBGC_CFG_DBGBUS_MASKL_3 },
+ { A6XX_DBGC_CFG_DBGBUS_BYTEL_0 },
+ { A6XX_DBGC_CFG_DBGBUS_BYTEL_1 },
+ { A6XX_DBGC_CFG_DBGBUS_IVTE_0 },
+ { A6XX_DBGC_CFG_DBGBUS_IVTE_1 },
+ { A6XX_DBGC_CFG_DBGBUS_IVTE_2 },
+ { A6XX_DBGC_CFG_DBGBUS_IVTE_3 },
+ { A6XX_DBGC_CFG_DBGBUS_MASKE_0 },
+ { A6XX_DBGC_CFG_DBGBUS_MASKE_1 },
+ { A6XX_DBGC_CFG_DBGBUS_MASKE_2 },
+ { A6XX_DBGC_CFG_DBGBUS_MASKE_3 },
+ { A6XX_DBGC_CFG_DBGBUS_NIBBLEE },
+ { A6XX_DBGC_CFG_DBGBUS_PTRC0 },
+ { A6XX_DBGC_CFG_DBGBUS_PTRC1 },
+ { A6XX_DBGC_CFG_DBGBUS_LOADREG },
+ { A6XX_DBGC_CFG_DBGBUS_IDX },
+ { A6XX_DBGC_CFG_DBGBUS_CLRC },
+ { A6XX_DBGC_CFG_DBGBUS_LOADIVT },
+ { A6XX_DBGC_VBIF_DBG_CNTL },
+ { A6XX_DBGC_DBG_LO_HI_GPIO },
+ { A6XX_DBGC_EXT_TRACE_BUS_CNTL },
+ { A6XX_DBGC_READ_AHB_THROUGH_DBG },
+ { A6XX_DBGC_CFG_DBGBUS_TRACE_BUF1 },
+ { A6XX_DBGC_CFG_DBGBUS_TRACE_BUF2 },
+ { A6XX_DBGC_EVT_CFG },
+ { A6XX_DBGC_EVT_INTF_SEL_0 },
+ { A6XX_DBGC_EVT_INTF_SEL_1 },
+ { A6XX_DBGC_PERF_ATB_CFG },
+ { A6XX_DBGC_PERF_ATB_COUNTER_SEL_0 },
+ { A6XX_DBGC_PERF_ATB_COUNTER_SEL_1 },
+ { A6XX_DBGC_PERF_ATB_COUNTER_SEL_2 },
+ { A6XX_DBGC_PERF_ATB_COUNTER_SEL_3 },
+ { A6XX_DBGC_PERF_ATB_TRIG_INTF_SEL_0 },
+ { A6XX_DBGC_PERF_ATB_TRIG_INTF_SEL_1 },
+ { A6XX_DBGC_PERF_ATB_DRAIN_CMD },
+ { A6XX_DBGC_ECO_CNTL },
+ { A6XX_DBGC_AHB_DBG_CNTL },
+};
+
+static struct adreno_coresight_register a6xx_coresight_regs_cx[] = {
+ { A6XX_CX_DBGC_CFG_DBGBUS_SEL_A },
+ { A6XX_CX_DBGC_CFG_DBGBUS_SEL_B },
+ { A6XX_CX_DBGC_CFG_DBGBUS_SEL_C },
+ { A6XX_CX_DBGC_CFG_DBGBUS_SEL_D },
+ { A6XX_CX_DBGC_CFG_DBGBUS_CNTLT },
+ { A6XX_CX_DBGC_CFG_DBGBUS_CNTLM },
+ { A6XX_CX_DBGC_CFG_DBGBUS_OPL },
+ { A6XX_CX_DBGC_CFG_DBGBUS_OPE },
+ { A6XX_CX_DBGC_CFG_DBGBUS_IVTL_0 },
+ { A6XX_CX_DBGC_CFG_DBGBUS_IVTL_1 },
+ { A6XX_CX_DBGC_CFG_DBGBUS_IVTL_2 },
+ { A6XX_CX_DBGC_CFG_DBGBUS_IVTL_3 },
+ { A6XX_CX_DBGC_CFG_DBGBUS_MASKL_0 },
+ { A6XX_CX_DBGC_CFG_DBGBUS_MASKL_1 },
+ { A6XX_CX_DBGC_CFG_DBGBUS_MASKL_2 },
+ { A6XX_CX_DBGC_CFG_DBGBUS_MASKL_3 },
+ { A6XX_CX_DBGC_CFG_DBGBUS_BYTEL_0 },
+ { A6XX_CX_DBGC_CFG_DBGBUS_BYTEL_1 },
+ { A6XX_CX_DBGC_CFG_DBGBUS_IVTE_0 },
+ { A6XX_CX_DBGC_CFG_DBGBUS_IVTE_1 },
+ { A6XX_CX_DBGC_CFG_DBGBUS_IVTE_2 },
+ { A6XX_CX_DBGC_CFG_DBGBUS_IVTE_3 },
+ { A6XX_CX_DBGC_CFG_DBGBUS_MASKE_0 },
+ { A6XX_CX_DBGC_CFG_DBGBUS_MASKE_1 },
+ { A6XX_CX_DBGC_CFG_DBGBUS_MASKE_2 },
+ { A6XX_CX_DBGC_CFG_DBGBUS_MASKE_3 },
+ { A6XX_CX_DBGC_CFG_DBGBUS_NIBBLEE },
+ { A6XX_CX_DBGC_CFG_DBGBUS_PTRC0 },
+ { A6XX_CX_DBGC_CFG_DBGBUS_PTRC1 },
+ { A6XX_CX_DBGC_CFG_DBGBUS_LOADREG },
+ { A6XX_CX_DBGC_CFG_DBGBUS_IDX },
+ { A6XX_CX_DBGC_CFG_DBGBUS_CLRC },
+ { A6XX_CX_DBGC_CFG_DBGBUS_LOADIVT },
+ { A6XX_CX_DBGC_VBIF_DBG_CNTL },
+ { A6XX_CX_DBGC_DBG_LO_HI_GPIO },
+ { A6XX_CX_DBGC_EXT_TRACE_BUS_CNTL },
+ { A6XX_CX_DBGC_READ_AHB_THROUGH_DBG },
+ { A6XX_CX_DBGC_CFG_DBGBUS_TRACE_BUF1 },
+ { A6XX_CX_DBGC_CFG_DBGBUS_TRACE_BUF2 },
+ { A6XX_CX_DBGC_EVT_CFG },
+ { A6XX_CX_DBGC_EVT_INTF_SEL_0 },
+ { A6XX_CX_DBGC_EVT_INTF_SEL_1 },
+ { A6XX_CX_DBGC_PERF_ATB_CFG },
+ { A6XX_CX_DBGC_PERF_ATB_COUNTER_SEL_0 },
+ { A6XX_CX_DBGC_PERF_ATB_COUNTER_SEL_1 },
+ { A6XX_CX_DBGC_PERF_ATB_COUNTER_SEL_2 },
+ { A6XX_CX_DBGC_PERF_ATB_COUNTER_SEL_3 },
+ { A6XX_CX_DBGC_PERF_ATB_TRIG_INTF_SEL_0 },
+ { A6XX_CX_DBGC_PERF_ATB_TRIG_INTF_SEL_1 },
+ { A6XX_CX_DBGC_PERF_ATB_DRAIN_CMD },
+ { A6XX_CX_DBGC_ECO_CNTL },
+ { A6XX_CX_DBGC_AHB_DBG_CNTL },
+};
+
+static ADRENO_CORESIGHT_ATTR(cfg_dbgbus_sel_a, &a6xx_coresight_regs[0]);
+static ADRENO_CORESIGHT_ATTR(cfg_dbgbus_sel_b, &a6xx_coresight_regs[1]);
+static ADRENO_CORESIGHT_ATTR(cfg_dbgbus_sel_c, &a6xx_coresight_regs[2]);
+static ADRENO_CORESIGHT_ATTR(cfg_dbgbus_sel_d, &a6xx_coresight_regs[3]);
+static ADRENO_CORESIGHT_ATTR(cfg_dbgbus_cntlt, &a6xx_coresight_regs[4]);
+static ADRENO_CORESIGHT_ATTR(cfg_dbgbus_cntlm, &a6xx_coresight_regs[5]);
+static ADRENO_CORESIGHT_ATTR(cfg_dbgbus_opl, &a6xx_coresight_regs[6]);
+static ADRENO_CORESIGHT_ATTR(cfg_dbgbus_ope, &a6xx_coresight_regs[7]);
+static ADRENO_CORESIGHT_ATTR(cfg_dbgbus_ivtl_0, &a6xx_coresight_regs[8]);
+static ADRENO_CORESIGHT_ATTR(cfg_dbgbus_ivtl_1, &a6xx_coresight_regs[9]);
+static ADRENO_CORESIGHT_ATTR(cfg_dbgbus_ivtl_2, &a6xx_coresight_regs[10]);
+static ADRENO_CORESIGHT_ATTR(cfg_dbgbus_ivtl_3, &a6xx_coresight_regs[11]);
+static ADRENO_CORESIGHT_ATTR(cfg_dbgbus_maskl_0, &a6xx_coresight_regs[12]);
+static ADRENO_CORESIGHT_ATTR(cfg_dbgbus_maskl_1, &a6xx_coresight_regs[13]);
+static ADRENO_CORESIGHT_ATTR(cfg_dbgbus_maskl_2, &a6xx_coresight_regs[14]);
+static ADRENO_CORESIGHT_ATTR(cfg_dbgbus_maskl_3, &a6xx_coresight_regs[15]);
+static ADRENO_CORESIGHT_ATTR(cfg_dbgbus_bytel_0, &a6xx_coresight_regs[16]);
+static ADRENO_CORESIGHT_ATTR(cfg_dbgbus_bytel_1, &a6xx_coresight_regs[17]);
+static ADRENO_CORESIGHT_ATTR(cfg_dbgbus_ivte_0, &a6xx_coresight_regs[18]);
+static ADRENO_CORESIGHT_ATTR(cfg_dbgbus_ivte_1, &a6xx_coresight_regs[19]);
+static ADRENO_CORESIGHT_ATTR(cfg_dbgbus_ivte_2, &a6xx_coresight_regs[20]);
+static ADRENO_CORESIGHT_ATTR(cfg_dbgbus_ivte_3, &a6xx_coresight_regs[21]);
+static ADRENO_CORESIGHT_ATTR(cfg_dbgbus_maske_0, &a6xx_coresight_regs[22]);
+static ADRENO_CORESIGHT_ATTR(cfg_dbgbus_maske_1, &a6xx_coresight_regs[23]);
+static ADRENO_CORESIGHT_ATTR(cfg_dbgbus_maske_2, &a6xx_coresight_regs[24]);
+static ADRENO_CORESIGHT_ATTR(cfg_dbgbus_maske_3, &a6xx_coresight_regs[25]);
+static ADRENO_CORESIGHT_ATTR(cfg_dbgbus_nibblee, &a6xx_coresight_regs[26]);
+static ADRENO_CORESIGHT_ATTR(cfg_dbgbus_ptrc0, &a6xx_coresight_regs[27]);
+static ADRENO_CORESIGHT_ATTR(cfg_dbgbus_ptrc1, &a6xx_coresight_regs[28]);
+static ADRENO_CORESIGHT_ATTR(cfg_dbgbus_loadreg, &a6xx_coresight_regs[29]);
+static ADRENO_CORESIGHT_ATTR(cfg_dbgbus_idx, &a6xx_coresight_regs[30]);
+static ADRENO_CORESIGHT_ATTR(cfg_dbgbus_clrc, &a6xx_coresight_regs[31]);
+static ADRENO_CORESIGHT_ATTR(cfg_dbgbus_loadivt, &a6xx_coresight_regs[32]);
+static ADRENO_CORESIGHT_ATTR(vbif_dbg_cntl, &a6xx_coresight_regs[33]);
+static ADRENO_CORESIGHT_ATTR(dbg_lo_hi_gpio, &a6xx_coresight_regs[34]);
+static ADRENO_CORESIGHT_ATTR(ext_trace_bus_cntl, &a6xx_coresight_regs[35]);
+static ADRENO_CORESIGHT_ATTR(read_ahb_through_dbg, &a6xx_coresight_regs[36]);
+static ADRENO_CORESIGHT_ATTR(cfg_dbgbus_trace_buf1, &a6xx_coresight_regs[37]);
+static ADRENO_CORESIGHT_ATTR(cfg_dbgbus_trace_buf2, &a6xx_coresight_regs[38]);
+static ADRENO_CORESIGHT_ATTR(evt_cfg, &a6xx_coresight_regs[39]);
+static ADRENO_CORESIGHT_ATTR(evt_intf_sel_0, &a6xx_coresight_regs[40]);
+static ADRENO_CORESIGHT_ATTR(evt_intf_sel_1, &a6xx_coresight_regs[41]);
+static ADRENO_CORESIGHT_ATTR(perf_atb_cfg, &a6xx_coresight_regs[42]);
+static ADRENO_CORESIGHT_ATTR(perf_atb_counter_sel_0, &a6xx_coresight_regs[43]);
+static ADRENO_CORESIGHT_ATTR(perf_atb_counter_sel_1, &a6xx_coresight_regs[44]);
+static ADRENO_CORESIGHT_ATTR(perf_atb_counter_sel_2, &a6xx_coresight_regs[45]);
+static ADRENO_CORESIGHT_ATTR(perf_atb_counter_sel_3, &a6xx_coresight_regs[46]);
+static ADRENO_CORESIGHT_ATTR(perf_atb_trig_intf_sel_0,
+ &a6xx_coresight_regs[47]);
+static ADRENO_CORESIGHT_ATTR(perf_atb_trig_intf_sel_1,
+ &a6xx_coresight_regs[48]);
+static ADRENO_CORESIGHT_ATTR(perf_atb_drain_cmd, &a6xx_coresight_regs[49]);
+static ADRENO_CORESIGHT_ATTR(eco_cntl, &a6xx_coresight_regs[50]);
+static ADRENO_CORESIGHT_ATTR(ahb_dbg_cntl, &a6xx_coresight_regs[51]);
+
+/* CX debug registers */
+static ADRENO_CORESIGHT_ATTR(cx_cfg_dbgbus_sel_a,
+ &a6xx_coresight_regs_cx[0]);
+static ADRENO_CORESIGHT_ATTR(cx_cfg_dbgbus_sel_b,
+ &a6xx_coresight_regs_cx[1]);
+static ADRENO_CORESIGHT_ATTR(cx_cfg_dbgbus_sel_c,
+ &a6xx_coresight_regs_cx[2]);
+static ADRENO_CORESIGHT_ATTR(cx_cfg_dbgbus_sel_d,
+ &a6xx_coresight_regs_cx[3]);
+static ADRENO_CORESIGHT_ATTR(cx_cfg_dbgbus_cntlt,
+ &a6xx_coresight_regs_cx[4]);
+static ADRENO_CORESIGHT_ATTR(cx_cfg_dbgbus_cntlm,
+ &a6xx_coresight_regs_cx[5]);
+static ADRENO_CORESIGHT_ATTR(cx_cfg_dbgbus_opl,
+ &a6xx_coresight_regs_cx[6]);
+static ADRENO_CORESIGHT_ATTR(cx_cfg_dbgbus_ope,
+ &a6xx_coresight_regs_cx[7]);
+static ADRENO_CORESIGHT_ATTR(cx_cfg_dbgbus_ivtl_0,
+ &a6xx_coresight_regs_cx[8]);
+static ADRENO_CORESIGHT_ATTR(cx_cfg_dbgbus_ivtl_1,
+ &a6xx_coresight_regs_cx[9]);
+static ADRENO_CORESIGHT_ATTR(cx_cfg_dbgbus_ivtl_2,
+ &a6xx_coresight_regs_cx[10]);
+static ADRENO_CORESIGHT_ATTR(cx_cfg_dbgbus_ivtl_3,
+ &a6xx_coresight_regs_cx[11]);
+static ADRENO_CORESIGHT_ATTR(cx_cfg_dbgbus_maskl_0,
+ &a6xx_coresight_regs_cx[12]);
+static ADRENO_CORESIGHT_ATTR(cx_cfg_dbgbus_maskl_1,
+ &a6xx_coresight_regs_cx[13]);
+static ADRENO_CORESIGHT_ATTR(cx_cfg_dbgbus_maskl_2,
+ &a6xx_coresight_regs_cx[14]);
+static ADRENO_CORESIGHT_ATTR(cx_cfg_dbgbus_maskl_3,
+ &a6xx_coresight_regs_cx[15]);
+static ADRENO_CORESIGHT_ATTR(cx_cfg_dbgbus_bytel_0,
+ &a6xx_coresight_regs_cx[16]);
+static ADRENO_CORESIGHT_ATTR(cx_cfg_dbgbus_bytel_1,
+ &a6xx_coresight_regs_cx[17]);
+static ADRENO_CORESIGHT_ATTR(cx_cfg_dbgbus_ivte_0,
+ &a6xx_coresight_regs_cx[18]);
+static ADRENO_CORESIGHT_ATTR(cx_cfg_dbgbus_ivte_1,
+ &a6xx_coresight_regs_cx[19]);
+static ADRENO_CORESIGHT_ATTR(cx_cfg_dbgbus_ivte_2,
+ &a6xx_coresight_regs_cx[20]);
+static ADRENO_CORESIGHT_ATTR(cx_cfg_dbgbus_ivte_3,
+ &a6xx_coresight_regs_cx[21]);
+static ADRENO_CORESIGHT_ATTR(cx_cfg_dbgbus_maske_0,
+ &a6xx_coresight_regs_cx[22]);
+static ADRENO_CORESIGHT_ATTR(cx_cfg_dbgbus_maske_1,
+ &a6xx_coresight_regs_cx[23]);
+static ADRENO_CORESIGHT_ATTR(cx_cfg_dbgbus_maske_2,
+ &a6xx_coresight_regs_cx[24]);
+static ADRENO_CORESIGHT_ATTR(cx_cfg_dbgbus_maske_3,
+ &a6xx_coresight_regs_cx[25]);
+static ADRENO_CORESIGHT_ATTR(cx_cfg_dbgbus_nibblee,
+ &a6xx_coresight_regs_cx[26]);
+static ADRENO_CORESIGHT_ATTR(cx_cfg_dbgbus_ptrc0,
+ &a6xx_coresight_regs_cx[27]);
+static ADRENO_CORESIGHT_ATTR(cx_cfg_dbgbus_ptrc1,
+ &a6xx_coresight_regs_cx[28]);
+static ADRENO_CORESIGHT_ATTR(cx_cfg_dbgbus_loadreg,
+ &a6xx_coresight_regs_cx[29]);
+static ADRENO_CORESIGHT_ATTR(cx_cfg_dbgbus_idx,
+ &a6xx_coresight_regs_cx[30]);
+static ADRENO_CORESIGHT_ATTR(cx_cfg_dbgbus_clrc,
+ &a6xx_coresight_regs_cx[31]);
+static ADRENO_CORESIGHT_ATTR(cx_cfg_dbgbus_loadivt,
+ &a6xx_coresight_regs_cx[32]);
+static ADRENO_CORESIGHT_ATTR(cx_vbif_dbg_cntl,
+ &a6xx_coresight_regs_cx[33]);
+static ADRENO_CORESIGHT_ATTR(cx_dbg_lo_hi_gpio,
+ &a6xx_coresight_regs_cx[34]);
+static ADRENO_CORESIGHT_ATTR(cx_ext_trace_bus_cntl,
+ &a6xx_coresight_regs_cx[35]);
+static ADRENO_CORESIGHT_ATTR(cx_read_ahb_through_dbg,
+ &a6xx_coresight_regs_cx[36]);
+static ADRENO_CORESIGHT_ATTR(cx_cfg_dbgbus_trace_buf1,
+ &a6xx_coresight_regs_cx[37]);
+static ADRENO_CORESIGHT_ATTR(cx_cfg_dbgbus_trace_buf2,
+ &a6xx_coresight_regs_cx[38]);
+static ADRENO_CORESIGHT_ATTR(cx_evt_cfg,
+ &a6xx_coresight_regs_cx[39]);
+static ADRENO_CORESIGHT_ATTR(cx_evt_intf_sel_0,
+ &a6xx_coresight_regs_cx[40]);
+static ADRENO_CORESIGHT_ATTR(cx_evt_intf_sel_1,
+ &a6xx_coresight_regs_cx[41]);
+static ADRENO_CORESIGHT_ATTR(cx_perf_atb_cfg,
+ &a6xx_coresight_regs_cx[42]);
+static ADRENO_CORESIGHT_ATTR(cx_perf_atb_counter_sel_0,
+ &a6xx_coresight_regs_cx[43]);
+static ADRENO_CORESIGHT_ATTR(cx_perf_atb_counter_sel_1,
+ &a6xx_coresight_regs_cx[44]);
+static ADRENO_CORESIGHT_ATTR(cx_perf_atb_counter_sel_2,
+ &a6xx_coresight_regs_cx[45]);
+static ADRENO_CORESIGHT_ATTR(cx_perf_atb_counter_sel_3,
+ &a6xx_coresight_regs_cx[46]);
+static ADRENO_CORESIGHT_ATTR(cx_perf_atb_trig_intf_sel_0,
+ &a6xx_coresight_regs_cx[47]);
+static ADRENO_CORESIGHT_ATTR(cx_perf_atb_trig_intf_sel_1,
+ &a6xx_coresight_regs_cx[48]);
+static ADRENO_CORESIGHT_ATTR(cx_perf_atb_drain_cmd,
+ &a6xx_coresight_regs_cx[49]);
+static ADRENO_CORESIGHT_ATTR(cx_eco_cntl,
+ &a6xx_coresight_regs_cx[50]);
+static ADRENO_CORESIGHT_ATTR(cx_ahb_dbg_cntl,
+ &a6xx_coresight_regs_cx[51]);
+
+static struct attribute *a6xx_coresight_attrs[] = {
+ &coresight_attr_cfg_dbgbus_sel_a.attr.attr,
+ &coresight_attr_cfg_dbgbus_sel_b.attr.attr,
+ &coresight_attr_cfg_dbgbus_sel_c.attr.attr,
+ &coresight_attr_cfg_dbgbus_sel_d.attr.attr,
+ &coresight_attr_cfg_dbgbus_cntlt.attr.attr,
+ &coresight_attr_cfg_dbgbus_cntlm.attr.attr,
+ &coresight_attr_cfg_dbgbus_opl.attr.attr,
+ &coresight_attr_cfg_dbgbus_ope.attr.attr,
+ &coresight_attr_cfg_dbgbus_ivtl_0.attr.attr,
+ &coresight_attr_cfg_dbgbus_ivtl_1.attr.attr,
+ &coresight_attr_cfg_dbgbus_ivtl_2.attr.attr,
+ &coresight_attr_cfg_dbgbus_ivtl_3.attr.attr,
+ &coresight_attr_cfg_dbgbus_maskl_0.attr.attr,
+ &coresight_attr_cfg_dbgbus_maskl_1.attr.attr,
+ &coresight_attr_cfg_dbgbus_maskl_2.attr.attr,
+ &coresight_attr_cfg_dbgbus_maskl_3.attr.attr,
+ &coresight_attr_cfg_dbgbus_bytel_0.attr.attr,
+ &coresight_attr_cfg_dbgbus_bytel_1.attr.attr,
+ &coresight_attr_cfg_dbgbus_ivte_0.attr.attr,
+ &coresight_attr_cfg_dbgbus_ivte_1.attr.attr,
+ &coresight_attr_cfg_dbgbus_ivte_2.attr.attr,
+ &coresight_attr_cfg_dbgbus_ivte_3.attr.attr,
+ &coresight_attr_cfg_dbgbus_maske_0.attr.attr,
+ &coresight_attr_cfg_dbgbus_maske_1.attr.attr,
+ &coresight_attr_cfg_dbgbus_maske_2.attr.attr,
+ &coresight_attr_cfg_dbgbus_maske_3.attr.attr,
+ &coresight_attr_cfg_dbgbus_nibblee.attr.attr,
+ &coresight_attr_cfg_dbgbus_ptrc0.attr.attr,
+ &coresight_attr_cfg_dbgbus_ptrc1.attr.attr,
+ &coresight_attr_cfg_dbgbus_loadreg.attr.attr,
+ &coresight_attr_cfg_dbgbus_idx.attr.attr,
+ &coresight_attr_cfg_dbgbus_clrc.attr.attr,
+ &coresight_attr_cfg_dbgbus_loadivt.attr.attr,
+ &coresight_attr_vbif_dbg_cntl.attr.attr,
+ &coresight_attr_dbg_lo_hi_gpio.attr.attr,
+ &coresight_attr_ext_trace_bus_cntl.attr.attr,
+ &coresight_attr_read_ahb_through_dbg.attr.attr,
+ &coresight_attr_cfg_dbgbus_trace_buf1.attr.attr,
+ &coresight_attr_cfg_dbgbus_trace_buf2.attr.attr,
+ &coresight_attr_evt_cfg.attr.attr,
+ &coresight_attr_evt_intf_sel_0.attr.attr,
+ &coresight_attr_evt_intf_sel_1.attr.attr,
+ &coresight_attr_perf_atb_cfg.attr.attr,
+ &coresight_attr_perf_atb_counter_sel_0.attr.attr,
+ &coresight_attr_perf_atb_counter_sel_1.attr.attr,
+ &coresight_attr_perf_atb_counter_sel_2.attr.attr,
+ &coresight_attr_perf_atb_counter_sel_3.attr.attr,
+ &coresight_attr_perf_atb_trig_intf_sel_0.attr.attr,
+ &coresight_attr_perf_atb_trig_intf_sel_1.attr.attr,
+ &coresight_attr_perf_atb_drain_cmd.attr.attr,
+ &coresight_attr_eco_cntl.attr.attr,
+ &coresight_attr_ahb_dbg_cntl.attr.attr,
+ NULL,
+};
+
+/* CX */
+static struct attribute *a6xx_coresight_attrs_cx[] = {
+ &coresight_attr_cx_cfg_dbgbus_sel_a.attr.attr,
+ &coresight_attr_cx_cfg_dbgbus_sel_b.attr.attr,
+ &coresight_attr_cx_cfg_dbgbus_sel_c.attr.attr,
+ &coresight_attr_cx_cfg_dbgbus_sel_d.attr.attr,
+ &coresight_attr_cx_cfg_dbgbus_cntlt.attr.attr,
+ &coresight_attr_cx_cfg_dbgbus_cntlm.attr.attr,
+ &coresight_attr_cx_cfg_dbgbus_opl.attr.attr,
+ &coresight_attr_cx_cfg_dbgbus_ope.attr.attr,
+ &coresight_attr_cx_cfg_dbgbus_ivtl_0.attr.attr,
+ &coresight_attr_cx_cfg_dbgbus_ivtl_1.attr.attr,
+ &coresight_attr_cx_cfg_dbgbus_ivtl_2.attr.attr,
+ &coresight_attr_cx_cfg_dbgbus_ivtl_3.attr.attr,
+ &coresight_attr_cx_cfg_dbgbus_maskl_0.attr.attr,
+ &coresight_attr_cx_cfg_dbgbus_maskl_1.attr.attr,
+ &coresight_attr_cx_cfg_dbgbus_maskl_2.attr.attr,
+ &coresight_attr_cx_cfg_dbgbus_maskl_3.attr.attr,
+ &coresight_attr_cx_cfg_dbgbus_bytel_0.attr.attr,
+ &coresight_attr_cx_cfg_dbgbus_bytel_1.attr.attr,
+ &coresight_attr_cx_cfg_dbgbus_ivte_0.attr.attr,
+ &coresight_attr_cx_cfg_dbgbus_ivte_1.attr.attr,
+ &coresight_attr_cx_cfg_dbgbus_ivte_2.attr.attr,
+ &coresight_attr_cx_cfg_dbgbus_ivte_3.attr.attr,
+ &coresight_attr_cx_cfg_dbgbus_maske_0.attr.attr,
+ &coresight_attr_cx_cfg_dbgbus_maske_1.attr.attr,
+ &coresight_attr_cx_cfg_dbgbus_maske_2.attr.attr,
+ &coresight_attr_cx_cfg_dbgbus_maske_3.attr.attr,
+ &coresight_attr_cx_cfg_dbgbus_nibblee.attr.attr,
+ &coresight_attr_cx_cfg_dbgbus_ptrc0.attr.attr,
+ &coresight_attr_cx_cfg_dbgbus_ptrc1.attr.attr,
+ &coresight_attr_cx_cfg_dbgbus_loadreg.attr.attr,
+ &coresight_attr_cx_cfg_dbgbus_idx.attr.attr,
+ &coresight_attr_cx_cfg_dbgbus_clrc.attr.attr,
+ &coresight_attr_cx_cfg_dbgbus_loadivt.attr.attr,
+ &coresight_attr_cx_vbif_dbg_cntl.attr.attr,
+ &coresight_attr_cx_dbg_lo_hi_gpio.attr.attr,
+ &coresight_attr_cx_ext_trace_bus_cntl.attr.attr,
+ &coresight_attr_cx_read_ahb_through_dbg.attr.attr,
+ &coresight_attr_cx_cfg_dbgbus_trace_buf1.attr.attr,
+ &coresight_attr_cx_cfg_dbgbus_trace_buf2.attr.attr,
+ &coresight_attr_cx_evt_cfg.attr.attr,
+ &coresight_attr_cx_evt_intf_sel_0.attr.attr,
+ &coresight_attr_cx_evt_intf_sel_1.attr.attr,
+ &coresight_attr_cx_perf_atb_cfg.attr.attr,
+ &coresight_attr_cx_perf_atb_counter_sel_0.attr.attr,
+ &coresight_attr_cx_perf_atb_counter_sel_1.attr.attr,
+ &coresight_attr_cx_perf_atb_counter_sel_2.attr.attr,
+ &coresight_attr_cx_perf_atb_counter_sel_3.attr.attr,
+ &coresight_attr_cx_perf_atb_trig_intf_sel_0.attr.attr,
+ &coresight_attr_cx_perf_atb_trig_intf_sel_1.attr.attr,
+ &coresight_attr_cx_perf_atb_drain_cmd.attr.attr,
+ &coresight_attr_cx_eco_cntl.attr.attr,
+ &coresight_attr_cx_ahb_dbg_cntl.attr.attr,
+ NULL,
+};
+
+static const struct attribute_group a6xx_coresight_group = {
+ .attrs = a6xx_coresight_attrs,
+};
+
+static const struct attribute_group *a6xx_coresight_groups[] = {
+ &a6xx_coresight_group,
+ NULL,
+};
+
+static const struct attribute_group a6xx_coresight_group_cx = {
+ .attrs = a6xx_coresight_attrs_cx,
+};
+
+static const struct attribute_group *a6xx_coresight_groups_cx[] = {
+ &a6xx_coresight_group_cx,
+ NULL,
+};
+
+static struct adreno_coresight a6xx_coresight = {
+ .registers = a6xx_coresight_regs,
+ .count = ARRAY_SIZE(a6xx_coresight_regs),
+ .groups = a6xx_coresight_groups,
+};
+
+static struct adreno_coresight a6xx_coresight_cx = {
+ .registers = a6xx_coresight_regs_cx,
+ .count = ARRAY_SIZE(a6xx_coresight_regs_cx),
+ .groups = a6xx_coresight_groups_cx,
+};
+
static struct adreno_perfcount_register a6xx_perfcounters_cp[] = {
{ KGSL_PERFCOUNTER_NOT_USED, 0, 0, A6XX_RBBM_PERFCTR_CP_0_LO,
A6XX_RBBM_PERFCTR_CP_0_HI, 0, A6XX_CP_PERFCTR_CP_SEL_0 },
@@ -2894,8 +3461,16 @@ static struct adreno_perfcount_register a6xx_pwrcounters_gpmu[] = {
A6XX_GMU_CX_GMU_POWER_COUNTER_SELECT_1, },
};
+/*
+ * The ADRENO_PERFCOUNTER_GROUP_RESTORE flag is enabled by default
+ * because most perfcounter groups need to be restored as part of
+ * preemption and IFPC. Perfcounter groups that are not restored
+ * as part of preemption and IFPC should be defined using the
+ * A6XX_PERFCOUNTER_GROUP_FLAGS macro.
+ */
#define A6XX_PERFCOUNTER_GROUP(offset, name) \
- ADRENO_PERFCOUNTER_GROUP(a6xx, offset, name)
+ ADRENO_PERFCOUNTER_GROUP_FLAGS(a6xx, offset, name, \
+ ADRENO_PERFCOUNTER_GROUP_RESTORE)
#define A6XX_PERFCOUNTER_GROUP_FLAGS(offset, name, flags) \
ADRENO_PERFCOUNTER_GROUP_FLAGS(a6xx, offset, name, flags)
@@ -2906,7 +3481,7 @@ static struct adreno_perfcount_register a6xx_pwrcounters_gpmu[] = {
static struct adreno_perfcount_group a6xx_perfcounter_groups
[KGSL_PERFCOUNTER_GROUP_MAX] = {
A6XX_PERFCOUNTER_GROUP(CP, cp),
- A6XX_PERFCOUNTER_GROUP(RBBM, rbbm),
+ A6XX_PERFCOUNTER_GROUP_FLAGS(RBBM, rbbm, 0),
A6XX_PERFCOUNTER_GROUP(PC, pc),
A6XX_PERFCOUNTER_GROUP(VFD, vfd),
A6XX_PERFCOUNTER_GROUP(HLSQ, hlsq),
@@ -2921,7 +3496,7 @@ static struct adreno_perfcount_group a6xx_perfcounter_groups
A6XX_PERFCOUNTER_GROUP(SP, sp),
A6XX_PERFCOUNTER_GROUP(RB, rb),
A6XX_PERFCOUNTER_GROUP(VSC, vsc),
- A6XX_PERFCOUNTER_GROUP(VBIF, vbif),
+ A6XX_PERFCOUNTER_GROUP_FLAGS(VBIF, vbif, 0),
A6XX_PERFCOUNTER_GROUP_FLAGS(VBIF_PWR, vbif_pwr,
ADRENO_PERFCOUNTER_GROUP_FIXED),
A6XX_PERFCOUNTER_GROUP_FLAGS(PWR, pwr,
@@ -3013,7 +3588,8 @@ static void a6xx_platform_setup(struct adreno_device *adreno_dev)
a6xx_perfcounter_groups[KGSL_PERFCOUNTER_GROUP_VBIF_PWR].regs =
a6xx_perfcounters_gbif_pwr;
- a6xx_perfcounter_groups[KGSL_PERFCOUNTER_GROUP_VBIF].reg_count
+ a6xx_perfcounter_groups[
+ KGSL_PERFCOUNTER_GROUP_VBIF_PWR].reg_count
= ARRAY_SIZE(a6xx_perfcounters_gbif_pwr);
gpudev->vbif_xin_halt_ctrl0_mask =
@@ -3069,6 +3645,22 @@ static unsigned int a6xx_register_offsets[ADRENO_REG_REGISTER_MAX] = {
A6XX_CP_CONTEXT_SWITCH_SMMU_INFO_LO),
ADRENO_REG_DEFINE(ADRENO_REG_CP_CONTEXT_SWITCH_SMMU_INFO_HI,
A6XX_CP_CONTEXT_SWITCH_SMMU_INFO_HI),
+ ADRENO_REG_DEFINE(
+ ADRENO_REG_CP_CONTEXT_SWITCH_PRIV_NON_SECURE_RESTORE_ADDR_LO,
+ A6XX_CP_CONTEXT_SWITCH_PRIV_NON_SECURE_RESTORE_ADDR_LO),
+ ADRENO_REG_DEFINE(
+ ADRENO_REG_CP_CONTEXT_SWITCH_PRIV_NON_SECURE_RESTORE_ADDR_HI,
+ A6XX_CP_CONTEXT_SWITCH_PRIV_NON_SECURE_RESTORE_ADDR_HI),
+ ADRENO_REG_DEFINE(
+ ADRENO_REG_CP_CONTEXT_SWITCH_PRIV_SECURE_RESTORE_ADDR_LO,
+ A6XX_CP_CONTEXT_SWITCH_PRIV_SECURE_RESTORE_ADDR_LO),
+ ADRENO_REG_DEFINE(
+ ADRENO_REG_CP_CONTEXT_SWITCH_PRIV_SECURE_RESTORE_ADDR_HI,
+ A6XX_CP_CONTEXT_SWITCH_PRIV_SECURE_RESTORE_ADDR_HI),
+ ADRENO_REG_DEFINE(ADRENO_REG_CP_CONTEXT_SWITCH_NON_PRIV_RESTORE_ADDR_LO,
+ A6XX_CP_CONTEXT_SWITCH_NON_PRIV_RESTORE_ADDR_LO),
+ ADRENO_REG_DEFINE(ADRENO_REG_CP_CONTEXT_SWITCH_NON_PRIV_RESTORE_ADDR_HI,
+ A6XX_CP_CONTEXT_SWITCH_NON_PRIV_RESTORE_ADDR_HI),
ADRENO_REG_DEFINE(ADRENO_REG_RBBM_STATUS, A6XX_RBBM_STATUS),
ADRENO_REG_DEFINE(ADRENO_REG_RBBM_STATUS3, A6XX_RBBM_STATUS3),
ADRENO_REG_DEFINE(ADRENO_REG_RBBM_PERFCTR_CTL, A6XX_RBBM_PERFCTR_CNTL),
@@ -3161,6 +3753,69 @@ static const struct adreno_reg_offsets a6xx_reg_offsets = {
.offset_0 = ADRENO_REG_REGISTER_MAX,
};
+static int a6xx_perfcounter_update(struct adreno_device *adreno_dev,
+ struct adreno_perfcount_register *reg, bool update_reg)
+{
+ struct kgsl_device *device = KGSL_DEVICE(adreno_dev);
+ struct cpu_gpu_lock *lock = adreno_dev->pwrup_reglist.hostptr;
+ struct reg_list_pair *reg_pair = (struct reg_list_pair *)(lock + 1);
+ unsigned int i;
+ unsigned long timeout = jiffies + msecs_to_jiffies(1000);
+ int ret = 0;
+
+ lock->flag_kmd = 1;
+ /* Write flag_kmd before turn */
+ wmb();
+ lock->turn = 0;
+ /* Write these fields before looping */
+ mb();
+
+ /*
+ * Spin here while the GPU ucode holds the lock; lock->flag_ucode
+ * will be set to 0 once the ucode releases it. The wait is capped
+ * at 1 second, which should be enough for the GPU to release the
+ * lock.
+ */
+ while (lock->flag_ucode == 1 && lock->turn == 0) {
+ cpu_relax();
+ /* Get the latest updates from GPU */
+ rmb();
+ /*
+ * Wait up to 1 second for the lock; if it is still not
+ * available after that, return an error.
+ */
+ if (time_after(jiffies, timeout) &&
+ (lock->flag_ucode == 1 && lock->turn == 0)) {
+ ret = -EBUSY;
+ goto unlock;
+ }
+ }
+
+ /* Read flag_ucode and turn before list_length */
+ rmb();
+ /*
+ * If the perfcounter select register is already present in reglist
+ * update it, otherwise append the <select register, value> pair to
+ * the end of the list.
+ */
+ for (i = 0; i < lock->list_length >> 1; i++)
+ if (reg_pair[i].offset == reg->select)
+ break;
+
+ reg_pair[i].offset = reg->select;
+ reg_pair[i].val = reg->countable;
+ if (i == lock->list_length >> 1)
+ lock->list_length += 2;
+
+ if (update_reg)
+ kgsl_regwrite(device, reg->select, reg->countable);
+
+unlock:
+ /* All writes done before releasing the lock */
+ wmb();
+ lock->flag_kmd = 0;
+ return ret;
+}
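
The flag/turn handshake in a6xx_perfcounter_update() above is a two-party mutual-exclusion protocol in the style of Peterson's algorithm: each side raises its flag, yields the tie-break via `turn`, and spins while the other side is contending. Below is a minimal single-threaded C model of the KMD side's conditions; the struct layout mirrors `struct cpu_gpu_lock`, but the helper names are illustrative only (not driver API), and the real driver's memory barriers are omitted since this model is not concurrent.

```c
#include <stdint.h>

/* Simplified model of the CP/KMD lock word layout. */
struct model_lock {
	uint32_t flag_ucode;	/* set by GPU ucode (CP) */
	uint32_t flag_kmd;	/* set by the kernel driver */
	uint32_t turn;		/* tie-breaker between the two parties */
};

/* Announce KMD's intent to take the lock and yield the tie-break. */
static void kmd_lock_intent(struct model_lock *lock)
{
	lock->flag_kmd = 1;
	lock->turn = 0;
}

/* Mirror of the KMD spin condition: may we enter right now? */
static int kmd_can_enter(const struct model_lock *lock)
{
	return !(lock->flag_ucode == 1 && lock->turn == 0);
}

/* Release: clear our flag so the other side can proceed. */
static void kmd_unlock(struct model_lock *lock)
{
	lock->flag_kmd = 0;
}
```

In the real driver, `wmb()`/`mb()` order the flag and turn writes before the spin loop, and `rmb()` pulls in the ucode's latest updates on each iteration.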
+
struct adreno_gpudev adreno_a6xx_gpudev = {
.reg_offsets = &a6xx_reg_offsets,
.start = a6xx_start,
@@ -3203,4 +3858,6 @@ struct adreno_gpudev adreno_a6xx_gpudev = {
.gx_is_on = a6xx_gx_is_on,
.sptprac_is_on = a6xx_sptprac_is_on,
.ccu_invalidate = a6xx_ccu_invalidate,
+ .perfcounter_update = a6xx_perfcounter_update,
+ .coresight = {&a6xx_coresight, &a6xx_coresight_cx},
};
diff --git a/drivers/gpu/msm/adreno_a6xx.h b/drivers/gpu/msm/adreno_a6xx.h
index dd8af80..bf1111c 100644
--- a/drivers/gpu/msm/adreno_a6xx.h
+++ b/drivers/gpu/msm/adreno_a6xx.h
@@ -75,6 +75,24 @@ struct a6xx_cp_smmu_info {
#define A6XX_CP_SMMU_INFO_MAGIC_REF 0x241350D5UL
+/**
+ * struct cpu_gpu_lock - CP spinlock structure for the power up list
+ * @flag_ucode: flag value set by CP
+ * @flag_kmd: flag value set by KMD
+ * @turn: turn variable set by both CP and KMD
+ * @list_length: this tells CP the last dword in the list:
+ * 16 + (4 * (List_Length - 1))
+ * @list_offset: this tells CP the start of preemption only list:
+ * 16 + (4 * List_Offset)
+ */
+struct cpu_gpu_lock {
+ uint32_t flag_ucode;
+ uint32_t flag_kmd;
+ uint32_t turn;
+ uint16_t list_length;
+ uint16_t list_offset;
+};
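+
The offset formulas documented on `struct cpu_gpu_lock` above can be written out directly: the lock header occupies the first 16 bytes, and each list entry is a 4-byte dword. The helpers below are a sketch of that arithmetic with illustrative names (they are not part of the driver):

```c
#include <stdint.h>

/* Offset of the last dword in the list: 16 + (4 * (list_length - 1)). */
static uint32_t last_dword_offset(uint16_t list_length)
{
	return 16 + 4 * (uint32_t)(list_length - 1);
}

/* Start of the preemption-only list: 16 + (4 * list_offset). */
static uint32_t preempt_list_offset(uint16_t list_offset)
{
	return 16 + 4 * (uint32_t)list_offset;
}
```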
+
#define A6XX_CP_CTXRECORD_MAGIC_REF 0xAE399D6EUL
/* Size of each CP preemption record */
#define A6XX_CP_CTXRECORD_SIZE_IN_BYTES (2112 * 1024)
@@ -100,7 +118,8 @@ unsigned int a6xx_preemption_pre_ibsubmit(struct adreno_device *adreno_dev,
struct adreno_ringbuffer *rb,
unsigned int *cmds, struct kgsl_context *context);
-unsigned int a6xx_set_marker(unsigned int *cmds, int start);
+unsigned int a6xx_set_marker(unsigned int *cmds,
+ enum adreno_cp_marker_type type);
void a6xx_preemption_callback(struct adreno_device *adreno_dev, int bit);
diff --git a/drivers/gpu/msm/adreno_a6xx_preempt.c b/drivers/gpu/msm/adreno_a6xx_preempt.c
index 1eec381..d92d1e0 100644
--- a/drivers/gpu/msm/adreno_a6xx_preempt.c
+++ b/drivers/gpu/msm/adreno_a6xx_preempt.c
@@ -35,6 +35,25 @@ static void _update_wptr(struct adreno_device *adreno_dev, bool reset_timer)
struct adreno_ringbuffer *rb = adreno_dev->cur_rb;
unsigned int wptr;
unsigned long flags;
+ struct adreno_gpudev *gpudev = ADRENO_GPU_DEVICE(adreno_dev);
+
+ /*
+ * Make sure the GPU is up before we read the WPTR, since the
+ * fence does not wake the GPU on a read operation.
+ */
+ if (in_interrupt() == 0) {
+ int status;
+
+ if (gpudev->oob_set) {
+ status = gpudev->oob_set(adreno_dev,
+ OOB_PREEMPTION_SET_MASK,
+ OOB_PREEMPTION_CHECK_MASK,
+ OOB_PREEMPTION_CLEAR_MASK);
+ if (status)
+ return;
+ }
+ }
+
spin_lock_irqsave(&rb->preempt_lock, flags);
@@ -55,6 +74,12 @@ static void _update_wptr(struct adreno_device *adreno_dev, bool reset_timer)
msecs_to_jiffies(adreno_drawobj_timeout);
spin_unlock_irqrestore(&rb->preempt_lock, flags);
+
+ if (in_interrupt() == 0) {
+ if (gpudev->oob_clear)
+ gpudev->oob_clear(adreno_dev,
+ OOB_PREEMPTION_CLEAR_MASK);
+ }
}
static inline bool adreno_move_preempt_state(struct adreno_device *adreno_dev,
@@ -204,7 +229,7 @@ void a6xx_preemption_trigger(struct adreno_device *adreno_dev)
struct kgsl_device *device = KGSL_DEVICE(adreno_dev);
struct kgsl_iommu *iommu = KGSL_IOMMU_PRIV(device);
struct adreno_ringbuffer *next;
- uint64_t ttbr0;
+ uint64_t ttbr0, gpuaddr;
unsigned int contextidr;
unsigned long flags;
uint32_t preempt_level, usesgmem, skipsaverestore;
@@ -267,6 +292,8 @@ void a6xx_preemption_trigger(struct adreno_device *adreno_dev)
kgsl_sharedmem_writel(device, &next->preemption_desc,
PREEMPT_RECORD(wptr), next->wptr);
+ preempt->count++;
+
spin_unlock_irqrestore(&next->preempt_lock, flags);
/* And write it to the smmu info */
@@ -275,24 +302,57 @@ void a6xx_preemption_trigger(struct adreno_device *adreno_dev)
kgsl_sharedmem_writel(device, &iommu->smmu_info,
PREEMPT_SMMU_RECORD(context_idr), contextidr);
- kgsl_regwrite(device,
- A6XX_CP_CONTEXT_SWITCH_PRIV_NON_SECURE_RESTORE_ADDR_LO,
- lower_32_bits(next->preemption_desc.gpuaddr));
- kgsl_regwrite(device,
- A6XX_CP_CONTEXT_SWITCH_PRIV_NON_SECURE_RESTORE_ADDR_HI,
- upper_32_bits(next->preemption_desc.gpuaddr));
+ kgsl_sharedmem_readq(&device->scratch, &gpuaddr,
+ SCRATCH_PREEMPTION_CTXT_RESTORE_ADDR_OFFSET(next->id));
- if (next->drawctxt_active) {
- struct kgsl_context *context = &next->drawctxt_active->base;
- uint64_t gpuaddr = context->user_ctxt_record->memdesc.gpuaddr;
+ /*
+ * Set a keepalive bit before the first preemption register write.
+ * This is required since while each individual write to the context
+ * switch registers will wake the GPU from collapse, it will not in
+ * itself cause GPU activity. Thus, the GPU could technically be
+ * re-collapsed between subsequent register writes leading to a
+ * prolonged preemption sequence. The keepalive bit prevents any
+ * further power collapse while it is set.
+ * It is more efficient to use a keepalive+wake-on-fence approach here
+ * rather than an OOB. Both keepalive and the fence are effectively
+ * free when the GPU is already powered on, whereas an OOB requires an
+ * unconditional handshake with the GMU.
+ */
+ kgsl_gmu_regrmw(device, A6XX_GMU_AO_SPARE_CNTL, 0x0, 0x2);
- kgsl_regwrite(device,
- A6XX_CP_CONTEXT_SWITCH_NON_PRIV_RESTORE_ADDR_LO,
- lower_32_bits(gpuaddr));
- kgsl_regwrite(device,
- A6XX_CP_CONTEXT_SWITCH_NON_PRIV_RESTORE_ADDR_HI,
- upper_32_bits(gpuaddr));
- }
+ /*
+ * Fenced writes on this path will make sure the GPU is woken up
+ * in case it was power collapsed by the GMU.
+ */
+ adreno_gmu_fenced_write(adreno_dev,
+ ADRENO_REG_CP_CONTEXT_SWITCH_PRIV_NON_SECURE_RESTORE_ADDR_LO,
+ lower_32_bits(next->preemption_desc.gpuaddr),
+ FENCE_STATUS_WRITEDROPPED1_MASK);
+
+ adreno_gmu_fenced_write(adreno_dev,
+ ADRENO_REG_CP_CONTEXT_SWITCH_PRIV_NON_SECURE_RESTORE_ADDR_HI,
+ upper_32_bits(next->preemption_desc.gpuaddr),
+ FENCE_STATUS_WRITEDROPPED1_MASK);
+
+ adreno_gmu_fenced_write(adreno_dev,
+ ADRENO_REG_CP_CONTEXT_SWITCH_PRIV_SECURE_RESTORE_ADDR_LO,
+ lower_32_bits(next->secure_preemption_desc.gpuaddr),
+ FENCE_STATUS_WRITEDROPPED1_MASK);
+
+ adreno_gmu_fenced_write(adreno_dev,
+ ADRENO_REG_CP_CONTEXT_SWITCH_PRIV_SECURE_RESTORE_ADDR_HI,
+ upper_32_bits(next->secure_preemption_desc.gpuaddr),
+ FENCE_STATUS_WRITEDROPPED1_MASK);
+
+ adreno_gmu_fenced_write(adreno_dev,
+ ADRENO_REG_CP_CONTEXT_SWITCH_NON_PRIV_RESTORE_ADDR_LO,
+ lower_32_bits(gpuaddr),
+ FENCE_STATUS_WRITEDROPPED1_MASK);
+
+ adreno_gmu_fenced_write(adreno_dev,
+ ADRENO_REG_CP_CONTEXT_SWITCH_NON_PRIV_RESTORE_ADDR_HI,
+ upper_32_bits(gpuaddr),
+ FENCE_STATUS_WRITEDROPPED1_MASK);
adreno_dev->next_rb = next;
@@ -305,10 +365,20 @@ void a6xx_preemption_trigger(struct adreno_device *adreno_dev)
adreno_set_preempt_state(adreno_dev, ADRENO_PREEMPT_TRIGGERED);
/* Trigger the preemption */
- adreno_writereg(adreno_dev, ADRENO_REG_CP_PREEMPT,
- ((preempt_level << 6) & 0xC0) |
- ((skipsaverestore << 9) & 0x200) |
- ((usesgmem << 8) & 0x100) | 0x1);
+ adreno_gmu_fenced_write(adreno_dev,
+ ADRENO_REG_CP_PREEMPT,
+ (((preempt_level << 6) & 0xC0) |
+ ((skipsaverestore << 9) & 0x200) |
+ ((usesgmem << 8) & 0x100) | 0x1),
+ FENCE_STATUS_WRITEDROPPED1_MASK);
+
+ /*
+ * Once preemption has been requested with the final register write,
+ * the preemption process starts and the GPU is considered busy.
+ * We can now safely clear the preemption keepalive bit, allowing
+ * power collapse to resume its regular activity.
+ */
+ kgsl_gmu_regrmw(device, A6XX_GMU_AO_SPARE_CNTL, 0x2, 0x0);
}
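
The trigger value written to CP_PREEMPT above packs three fields plus the trigger bit. A small helper makes the layout explicit; the expression is taken verbatim from the driver code, but the function name is illustrative:

```c
#include <stdint.h>

/* Assemble the CP_PREEMPT trigger value used by a6xx_preemption_trigger(). */
static uint32_t cp_preempt_value(uint32_t preempt_level,
		uint32_t usesgmem, uint32_t skipsaverestore)
{
	return ((preempt_level << 6) & 0xC0) |		/* level, bits [7:6] */
		((skipsaverestore << 9) & 0x200) |	/* skip save/restore, bit 9 */
		((usesgmem << 8) & 0x100) |		/* uses GMEM, bit 8 */
		0x1;					/* trigger bit */
}
```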
void a6xx_preemption_callback(struct adreno_device *adreno_dev, int bit)
@@ -374,34 +444,20 @@ void a6xx_preemption_schedule(struct adreno_device *adreno_dev)
mutex_unlock(&device->mutex);
}
-unsigned int a6xx_set_marker(unsigned int *cmds, int start)
-{
- *cmds++ = cp_type7_packet(CP_SET_MARKER, 1);
-
- /*
- * Indicate the beginning and end of the IB1 list with a SET_MARKER.
- * Among other things, this will implicitly enable and disable
- * preemption respectively.
- */
- if (start)
- *cmds++ = 0xD;
- else
- *cmds++ = 0xE;
-
- return 2;
-}
-
unsigned int a6xx_preemption_pre_ibsubmit(
struct adreno_device *adreno_dev,
struct adreno_ringbuffer *rb,
unsigned int *cmds, struct kgsl_context *context)
{
unsigned int *cmds_orig = cmds;
+ uint64_t gpuaddr = 0;
- if (context)
+ if (context) {
+ gpuaddr = context->user_ctxt_record->memdesc.gpuaddr;
*cmds++ = cp_type7_packet(CP_SET_PSEUDO_REGISTER, 15);
- else
+ } else {
*cmds++ = cp_type7_packet(CP_SET_PSEUDO_REGISTER, 12);
+ }
/* NULL SMMU_INFO buffer - we track in KMD */
*cmds++ = SET_PSEUDO_REGISTER_SAVE_REGISTER_SMMU_INFO;
@@ -411,10 +467,10 @@ unsigned int a6xx_preemption_pre_ibsubmit(
cmds += cp_gpuaddr(adreno_dev, cmds, rb->preemption_desc.gpuaddr);
*cmds++ = SET_PSEUDO_REGISTER_SAVE_REGISTER_PRIV_SECURE_SAVE_ADDR;
- cmds += cp_gpuaddr(adreno_dev, cmds, 0);
+ cmds += cp_gpuaddr(adreno_dev, cmds,
+ rb->secure_preemption_desc.gpuaddr);
if (context) {
- uint64_t gpuaddr = context->user_ctxt_record->memdesc.gpuaddr;
*cmds++ = SET_PSEUDO_REGISTER_SAVE_REGISTER_NON_PRIV_SAVE_ADDR;
cmds += cp_gpuaddr(adreno_dev, cmds, gpuaddr);
@@ -431,6 +487,20 @@ unsigned int a6xx_preemption_pre_ibsubmit(
cmds += cp_gpuaddr(adreno_dev, cmds,
rb->perfcounter_save_restore_desc.gpuaddr);
+ if (context) {
+ struct kgsl_device *device = KGSL_DEVICE(adreno_dev);
+ struct adreno_context *drawctxt = ADRENO_CONTEXT(context);
+ struct adreno_ringbuffer *rb = drawctxt->rb;
+ uint64_t dest =
+ SCRATCH_PREEMPTION_CTXT_RESTORE_GPU_ADDR(device,
+ rb->id);
+
+ *cmds++ = cp_mem_packet(adreno_dev, CP_MEM_WRITE, 2, 2);
+ cmds += cp_gpuaddr(adreno_dev, cmds, dest);
+ *cmds++ = lower_32_bits(gpuaddr);
+ *cmds++ = upper_32_bits(gpuaddr);
+ }
+
return (unsigned int) (cmds - cmds_orig);
}
@@ -438,6 +508,18 @@ unsigned int a6xx_preemption_post_ibsubmit(struct adreno_device *adreno_dev,
unsigned int *cmds)
{
unsigned int *cmds_orig = cmds;
+ struct adreno_ringbuffer *rb = adreno_dev->cur_rb;
+
+ if (rb) {
+ struct kgsl_device *device = KGSL_DEVICE(adreno_dev);
+ uint64_t dest = SCRATCH_PREEMPTION_CTXT_RESTORE_GPU_ADDR(device,
+ rb->id);
+
+ *cmds++ = cp_mem_packet(adreno_dev, CP_MEM_WRITE, 2, 2);
+ cmds += cp_gpuaddr(adreno_dev, cmds, dest);
+ *cmds++ = 0;
+ *cmds++ = 0;
+ }
*cmds++ = cp_type7_packet(CP_CONTEXT_SWITCH_YIELD, 4);
cmds += cp_gpuaddr(adreno_dev, cmds, 0x0);
@@ -505,6 +587,17 @@ static int a6xx_preemption_ringbuffer_init(struct adreno_device *adreno_dev,
if (ret)
return ret;
+ ret = kgsl_allocate_user(device, &rb->secure_preemption_desc,
+ A6XX_CP_CTXRECORD_SIZE_IN_BYTES,
+ KGSL_MEMFLAGS_SECURE | KGSL_MEMDESC_PRIVILEGED);
+ if (ret)
+ return ret;
+
+ ret = kgsl_iommu_map_global_secure_pt_entry(device,
+ &rb->secure_preemption_desc);
+ if (ret)
+ return ret;
+
ret = kgsl_allocate_global(device, &rb->perfcounter_save_restore_desc,
A6XX_CP_PERFCOUNTER_SAVE_RESTORE_SIZE, 0,
KGSL_MEMDESC_PRIVILEGED, "perfcounter_save_restore_desc");
@@ -578,6 +671,9 @@ static void a6xx_preemption_close(struct kgsl_device *device)
FOR_EACH_RINGBUFFER(adreno_dev, rb, i) {
kgsl_free_global(device, &rb->preemption_desc);
kgsl_free_global(device, &rb->perfcounter_save_restore_desc);
+ kgsl_iommu_unmap_global_secure_pt_entry(device,
+ &rb->secure_preemption_desc);
+ kgsl_sharedmem_free(&rb->secure_preemption_desc);
}
}
@@ -645,16 +741,20 @@ int a6xx_preemption_context_init(struct kgsl_context *context)
{
struct kgsl_device *device = context->device;
struct adreno_device *adreno_dev = ADRENO_DEVICE(device);
+ uint64_t flags = 0;
if (!adreno_is_preemption_setup_enabled(adreno_dev))
return 0;
+ if (context->flags & KGSL_CONTEXT_SECURE)
+ flags |= KGSL_MEMFLAGS_SECURE;
+
/*
* gpumem_alloc_entry takes an extra refcount. Put it only when
* destroying the context to keep the context record valid
*/
context->user_ctxt_record = gpumem_alloc_entry(context->dev_priv,
- A6XX_CP_CTXRECORD_USER_RESTORE_SIZE, 0);
+ A6XX_CP_CTXRECORD_USER_RESTORE_SIZE, flags);
if (IS_ERR(context->user_ctxt_record)) {
int ret = PTR_ERR(context->user_ctxt_record);
diff --git a/drivers/gpu/msm/adreno_a6xx_snapshot.c b/drivers/gpu/msm/adreno_a6xx_snapshot.c
index c1a76bc..3f92f75 100644
--- a/drivers/gpu/msm/adreno_a6xx_snapshot.c
+++ b/drivers/gpu/msm/adreno_a6xx_snapshot.c
@@ -210,6 +210,11 @@ static const unsigned int a6xx_vbif_ver_20xxxxxx_registers[] = {
0x3410, 0x3410, 0x3800, 0x3801,
};
+static const unsigned int a6xx_gbif_registers[] = {
+ /* GBIF */
+ 0x3C00, 0X3C0B, 0X3C40, 0X3C47, 0X3CC0, 0X3CD1,
+};
+
static const unsigned int a6xx_gmu_gx_registers[] = {
/* GMU GX */
0x1A800, 0x1A800, 0x1A810, 0x1A813, 0x1A816, 0x1A816, 0x1A818, 0x1A81B,
@@ -640,7 +645,7 @@ static size_t a6xx_snapshot_shader_memory(struct kgsl_device *device,
header->size = block->sz;
memcpy(data, a6xx_crashdump_registers.hostptr + info->offset,
- block->sz);
+ block->sz * sizeof(unsigned int));
return SHADER_SECTION_SZ(block->sz);
}
@@ -1274,13 +1279,21 @@ static void a6xx_snapshot_debugbus(struct kgsl_device *device,
snapshot, a6xx_snapshot_dbgc_debugbus_block,
(void *) &a6xx_dbgc_debugbus_blocks[i]);
}
-
- /* Skip if GPU has GBIF */
- if (!adreno_has_gbif(adreno_dev))
+ /*
+ * GBIF has the same debugbus as the other GPU blocks, so fall
+ * back to the default path if the GPU uses GBIF.
+ * GBIF uses exactly the same ID as VBIF, so use it as is.
+ */
+ if (adreno_has_gbif(adreno_dev))
kgsl_snapshot_add_section(device,
- KGSL_SNAPSHOT_SECTION_DEBUGBUS,
- snapshot, a6xx_snapshot_vbif_debugbus_block,
- (void *) &a6xx_vbif_debugbus_blocks);
+ KGSL_SNAPSHOT_SECTION_DEBUGBUS,
+ snapshot, a6xx_snapshot_dbgc_debugbus_block,
+ (void *) &a6xx_vbif_debugbus_blocks);
+ else
+ kgsl_snapshot_add_section(device,
+ KGSL_SNAPSHOT_SECTION_DEBUGBUS,
+ snapshot, a6xx_snapshot_vbif_debugbus_block,
+ (void *) &a6xx_vbif_debugbus_blocks);
/* Dump the CX debugbus data if the block exists */
if (adreno_is_cx_dbgc_register(device, A6XX_CX_DBGC_CFG_DBGBUS_SEL_A)) {
@@ -1289,6 +1302,17 @@ static void a6xx_snapshot_debugbus(struct kgsl_device *device,
KGSL_SNAPSHOT_SECTION_DEBUGBUS,
snapshot, a6xx_snapshot_cx_dbgc_debugbus_block,
(void *) &a6xx_cx_dbgc_debugbus_blocks[i]);
+ /*
+ * Dump the debugbus for the GBIF CX part if the GPU has a
+ * GBIF block. GBIF uses exactly the same ID as VBIF, so
+ * use it as is.
+ */
+ if (adreno_has_gbif(adreno_dev))
+ kgsl_snapshot_add_section(device,
+ KGSL_SNAPSHOT_SECTION_DEBUGBUS,
+ snapshot,
+ a6xx_snapshot_cx_dbgc_debugbus_block,
+ (void *) &a6xx_vbif_debugbus_blocks);
}
}
}
@@ -1429,6 +1453,10 @@ void a6xx_snapshot(struct adreno_device *adreno_dev,
adreno_snapshot_vbif_registers(device, snapshot,
a6xx_vbif_snapshot_registers,
ARRAY_SIZE(a6xx_vbif_snapshot_registers));
+ else
+ adreno_snapshot_registers(device, snapshot,
+ a6xx_gbif_registers,
+ ARRAY_SIZE(a6xx_gbif_registers) / 2);
/* Try to run the crash dumper */
if (sptprac_on)
diff --git a/drivers/gpu/msm/adreno_coresight.c b/drivers/gpu/msm/adreno_coresight.c
index d792d4e..ef482a2 100644
--- a/drivers/gpu/msm/adreno_coresight.c
+++ b/drivers/gpu/msm/adreno_coresight.c
@@ -14,10 +14,19 @@
#include <linux/coresight.h>
#include "adreno.h"
-
#define TO_ADRENO_CORESIGHT_ATTR(_attr) \
container_of(_attr, struct adreno_coresight_attr, attr)
+static int adreno_coresight_identify(const char *name)
+{
+ if (!strcmp(name, "coresight-gfx"))
+ return GPU_CORESIGHT_GX;
+ else if (!strcmp(name, "coresight-gfx-cx"))
+ return GPU_CORESIGHT_CX;
+ else
+ return -EINVAL;
+}
+
ssize_t adreno_coresight_show_register(struct device *dev,
struct device_attribute *attr, char *buf)
{
@@ -25,6 +34,7 @@ ssize_t adreno_coresight_show_register(struct device *dev,
struct kgsl_device *device = dev_get_drvdata(dev->parent);
struct adreno_device *adreno_dev;
struct adreno_coresight_attr *cattr = TO_ADRENO_CORESIGHT_ATTR(attr);
+ bool is_cx;
if (device == NULL)
return -EINVAL;
@@ -34,14 +44,16 @@ ssize_t adreno_coresight_show_register(struct device *dev,
if (cattr->reg == NULL)
return -EINVAL;
+ is_cx = adreno_is_cx_dbgc_register(device, cattr->reg->offset);
/*
* Return the current value of the register if coresight is enabled,
* otherwise report 0
*/
mutex_lock(&device->mutex);
- if (test_bit(ADRENO_DEVICE_CORESIGHT, &adreno_dev->priv)) {
-
+ if ((is_cx && test_bit(ADRENO_DEVICE_CORESIGHT_CX, &adreno_dev->priv))
+ || (!is_cx && test_bit(ADRENO_DEVICE_CORESIGHT,
+ &adreno_dev->priv))) {
/*
* If the device isn't power collapsed read the actual value
* from the hardware - otherwise return the cached value
@@ -50,8 +62,13 @@ ssize_t adreno_coresight_show_register(struct device *dev,
if (device->state == KGSL_STATE_ACTIVE ||
device->state == KGSL_STATE_NAP) {
if (!kgsl_active_count_get(device)) {
- kgsl_regread(device, cattr->reg->offset,
- &cattr->reg->value);
+ if (!is_cx)
+ kgsl_regread(device, cattr->reg->offset,
+ &cattr->reg->value);
+ else
+ adreno_cx_dbgc_regread(device,
+ cattr->reg->offset,
+ &cattr->reg->value);
kgsl_active_count_put(device);
}
}
@@ -70,7 +87,7 @@ ssize_t adreno_coresight_store_register(struct device *dev,
struct adreno_device *adreno_dev;
struct adreno_coresight_attr *cattr = TO_ADRENO_CORESIGHT_ATTR(attr);
unsigned long val;
- int ret;
+ int ret, is_cx;
if (device == NULL)
return -EINVAL;
@@ -80,6 +97,8 @@ ssize_t adreno_coresight_store_register(struct device *dev,
if (cattr->reg == NULL)
return -EINVAL;
+ is_cx = adreno_is_cx_dbgc_register(device, cattr->reg->offset);
+
ret = kstrtoul(buf, 0, &val);
if (ret)
return ret;
@@ -87,7 +106,9 @@ ssize_t adreno_coresight_store_register(struct device *dev,
mutex_lock(&device->mutex);
/* Ignore writes while coresight is off */
- if (!test_bit(ADRENO_DEVICE_CORESIGHT, &adreno_dev->priv))
+ if (!((is_cx && test_bit(ADRENO_DEVICE_CORESIGHT_CX, &adreno_dev->priv))
+ || (!is_cx && test_bit(ADRENO_DEVICE_CORESIGHT,
+ &adreno_dev->priv))))
goto out;
cattr->reg->value = val;
@@ -96,8 +117,14 @@ ssize_t adreno_coresight_store_register(struct device *dev,
if (device->state == KGSL_STATE_ACTIVE ||
device->state == KGSL_STATE_NAP) {
if (!kgsl_active_count_get(device)) {
- kgsl_regwrite(device, cattr->reg->offset,
+ if (!is_cx)
+ kgsl_regwrite(device, cattr->reg->offset,
cattr->reg->value);
+ else
+ adreno_cx_dbgc_regwrite(device,
+ cattr->reg->offset,
+ cattr->reg->value);
+
kgsl_active_count_put(device);
}
}
@@ -127,7 +154,7 @@ static void adreno_coresight_disable(struct coresight_device *csdev,
struct adreno_device *adreno_dev;
struct adreno_gpudev *gpudev;
struct adreno_coresight *coresight;
- int i;
+ int i, cs_id;
if (device == NULL)
return;
@@ -135,7 +162,12 @@ static void adreno_coresight_disable(struct coresight_device *csdev,
adreno_dev = ADRENO_DEVICE(device);
gpudev = ADRENO_GPU_DEVICE(adreno_dev);
- coresight = gpudev->coresight;
+ cs_id = adreno_coresight_identify(dev_name(&csdev->dev));
+
+ if (cs_id < 0)
+ return;
+
+ coresight = gpudev->coresight[cs_id];
if (coresight == NULL)
return;
@@ -143,9 +175,14 @@ static void adreno_coresight_disable(struct coresight_device *csdev,
mutex_lock(&device->mutex);
if (!kgsl_active_count_get(device)) {
- for (i = 0; i < coresight->count; i++)
- kgsl_regwrite(device, coresight->registers[i].offset,
- 0);
+ if (cs_id == GPU_CORESIGHT_GX)
+ for (i = 0; i < coresight->count; i++)
+ kgsl_regwrite(device,
+ coresight->registers[i].offset, 0);
+ else if (cs_id == GPU_CORESIGHT_CX)
+ for (i = 0; i < coresight->count; i++)
+ adreno_cx_dbgc_regwrite(device,
+ coresight->registers[i].offset, 0);
kgsl_active_count_put(device);
}
@@ -161,12 +198,13 @@ static void adreno_coresight_disable(struct coresight_device *csdev,
* has the effect of disabling coresight.
* @adreno_dev: Pointer to adreno device struct
*/
-static int _adreno_coresight_get_and_clear(struct adreno_device *adreno_dev)
+static int _adreno_coresight_get_and_clear(struct adreno_device *adreno_dev,
+ int cs_id)
{
+ int i;
struct adreno_gpudev *gpudev = ADRENO_GPU_DEVICE(adreno_dev);
struct kgsl_device *device = KGSL_DEVICE(adreno_dev);
- struct adreno_coresight *coresight = gpudev->coresight;
- int i;
+ struct adreno_coresight *coresight = gpudev->coresight[cs_id];
if (coresight == NULL)
return -ENODEV;
@@ -176,33 +214,46 @@ static int _adreno_coresight_get_and_clear(struct adreno_device *adreno_dev)
* Save the current value of each coresight register
* and then clear each register
*/
- for (i = 0; i < coresight->count; i++) {
- kgsl_regread(device, coresight->registers[i].offset,
- &coresight->registers[i].value);
- kgsl_regwrite(device, coresight->registers[i].offset,
- 0);
+ if (cs_id == GPU_CORESIGHT_GX) {
+ for (i = 0; i < coresight->count; i++) {
+ kgsl_regread(device, coresight->registers[i].offset,
+ &coresight->registers[i].value);
+ kgsl_regwrite(device, coresight->registers[i].offset,
+ 0);
+ }
+ } else if (cs_id == GPU_CORESIGHT_CX) {
+ for (i = 0; i < coresight->count; i++) {
+ adreno_cx_dbgc_regread(device,
+ coresight->registers[i].offset,
+ &coresight->registers[i].value);
+ adreno_cx_dbgc_regwrite(device,
+ coresight->registers[i].offset, 0);
+ }
}
return 0;
}
-static int _adreno_coresight_set(struct adreno_device *adreno_dev)
+static int _adreno_coresight_set(struct adreno_device *adreno_dev, int cs_id)
{
struct adreno_gpudev *gpudev = ADRENO_GPU_DEVICE(adreno_dev);
struct kgsl_device *device = KGSL_DEVICE(adreno_dev);
- struct adreno_coresight *coresight = gpudev->coresight;
+ struct adreno_coresight *coresight = gpudev->coresight[cs_id];
int i;
if (coresight == NULL)
return -ENODEV;
- for (i = 0; i < coresight->count; i++)
- kgsl_regwrite(device, coresight->registers[i].offset,
- coresight->registers[i].value);
-
- kgsl_property_read_u32(device, "coresight-atid",
- (unsigned int *)&(coresight->atid));
-
+ if (cs_id == GPU_CORESIGHT_GX) {
+ for (i = 0; i < coresight->count; i++)
+ kgsl_regwrite(device, coresight->registers[i].offset,
+ coresight->registers[i].value);
+ } else if (cs_id == GPU_CORESIGHT_CX) {
+ for (i = 0; i < coresight->count; i++)
+ adreno_cx_dbgc_regwrite(device,
+ coresight->registers[i].offset,
+ coresight->registers[i].value);
+ }
return 0;
}
/**
@@ -223,7 +274,7 @@ static int adreno_coresight_enable(struct coresight_device *csdev,
struct adreno_device *adreno_dev;
struct adreno_gpudev *gpudev;
struct adreno_coresight *coresight;
- int ret = 0;
+ int ret = 0, adreno_dev_flag = -EINVAL, cs_id;
if (device == NULL)
return -ENODEV;
@@ -231,13 +282,25 @@ static int adreno_coresight_enable(struct coresight_device *csdev,
adreno_dev = ADRENO_DEVICE(device);
gpudev = ADRENO_GPU_DEVICE(adreno_dev);
- coresight = gpudev->coresight;
+ cs_id = adreno_coresight_identify(dev_name(&csdev->dev));
+
+ if (cs_id < 0)
+ return -ENODEV;
+
+ coresight = gpudev->coresight[cs_id];
if (coresight == NULL)
return -ENODEV;
+ if (cs_id == GPU_CORESIGHT_GX)
+ adreno_dev_flag = ADRENO_DEVICE_CORESIGHT;
+ else if (cs_id == GPU_CORESIGHT_CX)
+ adreno_dev_flag = ADRENO_DEVICE_CORESIGHT_CX;
+ else
+ return -ENODEV;
+
mutex_lock(&device->mutex);
- if (!test_and_set_bit(ADRENO_DEVICE_CORESIGHT, &adreno_dev->priv)) {
+ if (!test_and_set_bit(adreno_dev_flag, &adreno_dev->priv)) {
int i;
/* Reset all the debug registers to their default values */
@@ -249,7 +312,7 @@ static int adreno_coresight_enable(struct coresight_device *csdev,
if (kgsl_state_is_awake(device)) {
ret = kgsl_active_count_get(device);
if (!ret) {
- ret = _adreno_coresight_set(adreno_dev);
+ ret = _adreno_coresight_set(adreno_dev, cs_id);
kgsl_active_count_put(device);
}
}
@@ -269,8 +332,19 @@ static int adreno_coresight_enable(struct coresight_device *csdev,
*/
void adreno_coresight_stop(struct adreno_device *adreno_dev)
{
- if (test_bit(ADRENO_DEVICE_CORESIGHT, &adreno_dev->priv))
- _adreno_coresight_get_and_clear(adreno_dev);
+ int i, adreno_dev_flag = -EINVAL;
+
+ for (i = 0; i < GPU_CORESIGHT_MAX; ++i) {
+ if (i == GPU_CORESIGHT_GX)
+ adreno_dev_flag = ADRENO_DEVICE_CORESIGHT;
+ else if (i == GPU_CORESIGHT_CX)
+ adreno_dev_flag = ADRENO_DEVICE_CORESIGHT_CX;
+ else
+ return;
+
+ if (test_bit(adreno_dev_flag, &adreno_dev->priv))
+ _adreno_coresight_get_and_clear(adreno_dev, i);
+ }
}
/**
@@ -281,16 +355,33 @@ void adreno_coresight_stop(struct adreno_device *adreno_dev)
*/
void adreno_coresight_start(struct adreno_device *adreno_dev)
{
- if (test_bit(ADRENO_DEVICE_CORESIGHT, &adreno_dev->priv))
- _adreno_coresight_set(adreno_dev);
+ int i, adreno_dev_flag = -EINVAL;
+
+ for (i = 0; i < GPU_CORESIGHT_MAX; ++i) {
+ if (i == GPU_CORESIGHT_GX)
+ adreno_dev_flag = ADRENO_DEVICE_CORESIGHT;
+ else if (i == GPU_CORESIGHT_CX)
+ adreno_dev_flag = ADRENO_DEVICE_CORESIGHT_CX;
+ else
+ return;
+
+ if (test_bit(adreno_dev_flag, &adreno_dev->priv))
+ _adreno_coresight_set(adreno_dev, i);
+ }
}
static int adreno_coresight_trace_id(struct coresight_device *csdev)
{
struct kgsl_device *device = dev_get_drvdata(csdev->dev.parent);
struct adreno_gpudev *gpudev = ADRENO_GPU_DEVICE(ADRENO_DEVICE(device));
+ int cs_id;
- return gpudev->coresight->atid;
+ cs_id = adreno_coresight_identify(dev_name(&csdev->dev));
+
+ if (cs_id < 0)
+ return -ENODEV;
+
+ return gpudev->coresight[cs_id]->atid;
}
static const struct coresight_ops_source adreno_coresight_source_ops = {
@@ -305,8 +396,21 @@ static const struct coresight_ops adreno_coresight_ops = {
void adreno_coresight_remove(struct adreno_device *adreno_dev)
{
- coresight_unregister(adreno_dev->csdev);
- adreno_dev->csdev = NULL;
+ int i, adreno_dev_flag = -EINVAL;
+
+ for (i = 0; i < GPU_CORESIGHT_MAX; ++i) {
+ if (i == GPU_CORESIGHT_GX)
+ adreno_dev_flag = ADRENO_DEVICE_CORESIGHT;
+ else if (i == GPU_CORESIGHT_CX)
+ adreno_dev_flag = ADRENO_DEVICE_CORESIGHT_CX;
+ else
+ return;
+
+ if (test_bit(adreno_dev_flag, &adreno_dev->priv)) {
+ coresight_unregister(adreno_dev->csdev[i]);
+ adreno_dev->csdev[i] = NULL;
+ }
+ }
}
int adreno_coresight_init(struct adreno_device *adreno_dev)
@@ -315,31 +419,37 @@ int adreno_coresight_init(struct adreno_device *adreno_dev)
struct adreno_gpudev *gpudev = ADRENO_GPU_DEVICE(adreno_dev);
struct kgsl_device *device = KGSL_DEVICE(adreno_dev);
struct coresight_desc desc;
+ int i = 0;
+ struct device_node *node, *child;
- if (gpudev->coresight == NULL)
- return -ENODEV;
+ node = of_find_compatible_node(device->pdev->dev.of_node,
+ NULL, "qcom,gpu-coresight");
- if (!IS_ERR_OR_NULL(adreno_dev->csdev))
- return 0;
+ for_each_child_of_node(node, child) {
+ memset(&desc, 0, sizeof(desc));
+ desc.pdata = of_get_coresight_platform_data(&device->pdev->dev,
+ child);
+ if (IS_ERR_OR_NULL(desc.pdata))
+ return (desc.pdata == NULL) ? -ENODEV :
+ PTR_ERR(desc.pdata);
+ if (gpudev->coresight[i] == NULL)
+ return -ENODEV;
- memset(&desc, 0, sizeof(desc));
+ desc.type = CORESIGHT_DEV_TYPE_SOURCE;
+ desc.subtype.source_subtype =
+ CORESIGHT_DEV_SUBTYPE_SOURCE_SOFTWARE;
+ desc.ops = &adreno_coresight_ops;
+ desc.dev = &device->pdev->dev;
+ desc.groups = gpudev->coresight[i]->groups;
- desc.pdata = of_get_coresight_platform_data(&device->pdev->dev,
- device->pdev->dev.of_node);
- if (IS_ERR_OR_NULL(desc.pdata))
- return (desc.pdata == NULL) ? -ENODEV :
- PTR_ERR(desc.pdata);
-
- desc.type = CORESIGHT_DEV_TYPE_SOURCE;
- desc.subtype.source_subtype = CORESIGHT_DEV_SUBTYPE_SOURCE_BUS;
- desc.ops = &adreno_coresight_ops;
- desc.dev = &device->pdev->dev;
- desc.groups = gpudev->coresight->groups;
-
- adreno_dev->csdev = coresight_register(&desc);
-
- if (IS_ERR(adreno_dev->csdev))
- ret = PTR_ERR(adreno_dev->csdev);
+ adreno_dev->csdev[i] = coresight_register(&desc);
+ if (IS_ERR(adreno_dev->csdev[i]))
+ ret = PTR_ERR(adreno_dev->csdev[i]);
+ if (of_property_read_u32(child, "coresight-atid",
+ &gpudev->coresight[i]->atid))
+ return -EINVAL;
+ i++;
+ }
return ret;
}
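The coresight changes above repeat one small mapping, from a coresight source id (`cs_id`) to the per-device priv flag bit, in the enable, start, stop and remove paths. A standalone user-space sketch of that dispatch follows; the enum and flag values here are placeholders for illustration, not the kernel's actual values:

```c
#include <assert.h>
#include <errno.h>

/* Hypothetical stand-ins for the kernel enums used by this patch. */
enum { GPU_CORESIGHT_GX = 0, GPU_CORESIGHT_CX = 1, GPU_CORESIGHT_MAX = 2 };
enum { ADRENO_DEVICE_CORESIGHT = 10, ADRENO_DEVICE_CORESIGHT_CX = 11 };

/*
 * Mirror of the cs_id -> priv-flag mapping that the patch repeats in
 * adreno_coresight_enable/start/stop/remove: GX and CX each track their
 * enabled state in a separate bit, anything else is rejected.
 */
static int coresight_flag_for_id(int cs_id)
{
	if (cs_id == GPU_CORESIGHT_GX)
		return ADRENO_DEVICE_CORESIGHT;
	if (cs_id == GPU_CORESIGHT_CX)
		return ADRENO_DEVICE_CORESIGHT_CX;
	return -EINVAL;
}
```

Centralizing this in a helper like the one sketched would avoid the four copies of the if/else ladder the patch adds.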
diff --git a/drivers/gpu/msm/adreno_perfcounter.c b/drivers/gpu/msm/adreno_perfcounter.c
index 03db16d..94fdbc2 100644
--- a/drivers/gpu/msm/adreno_perfcounter.c
+++ b/drivers/gpu/msm/adreno_perfcounter.c
@@ -768,6 +768,21 @@ static void _power_counter_enable_default(struct adreno_device *adreno_dev,
reg->value = 0;
}
+static inline bool _perfcounter_inline_update(
+ struct adreno_device *adreno_dev, unsigned int group)
+{
+ if (adreno_is_a6xx(adreno_dev)) {
+ if ((group == KGSL_PERFCOUNTER_GROUP_HLSQ) ||
+ (group == KGSL_PERFCOUNTER_GROUP_SP) ||
+ (group == KGSL_PERFCOUNTER_GROUP_TP))
+ return true;
+ else
+ return false;
+ }
+
+ return true;
+}
+
static int _perfcounter_enable_default(struct adreno_device *adreno_dev,
struct adreno_perfcounters *counters, unsigned int group,
unsigned int counter, unsigned int countable)
@@ -775,6 +790,7 @@ static int _perfcounter_enable_default(struct adreno_device *adreno_dev,
struct kgsl_device *device = KGSL_DEVICE(adreno_dev);
struct adreno_gpudev *gpudev = ADRENO_GPU_DEVICE(adreno_dev);
struct adreno_perfcount_register *reg;
+ struct adreno_perfcount_group *grp;
int i;
int ret = 0;
@@ -789,15 +805,20 @@ static int _perfcounter_enable_default(struct adreno_device *adreno_dev,
if (countable == invalid_countable.countables[i])
return -EACCES;
}
- reg = &(counters->groups[group].regs[counter]);
+ grp = &(counters->groups[group]);
+ reg = &(grp->regs[counter]);
- if (!adreno_is_a6xx(adreno_dev) &&
- test_bit(ADRENO_DEVICE_STARTED, &adreno_dev->priv)) {
+ if (_perfcounter_inline_update(adreno_dev, group) &&
+ test_bit(ADRENO_DEVICE_STARTED, &adreno_dev->priv)) {
struct adreno_ringbuffer *rb = &adreno_dev->ringbuffers[0];
unsigned int buf[4];
unsigned int *cmds = buf;
int ret;
+ if (gpudev->perfcounter_update && (grp->flags &
+ ADRENO_PERFCOUNTER_GROUP_RESTORE))
+ gpudev->perfcounter_update(adreno_dev, reg, false);
+
cmds += cp_wait_for_idle(adreno_dev, cmds);
*cmds++ = cp_register(adreno_dev, reg->select, 1);
*cmds++ = countable;
@@ -834,12 +855,16 @@ static int _perfcounter_enable_default(struct adreno_device *adreno_dev,
}
} else {
/* Select the desired perfcounter */
- kgsl_regwrite(device, reg->select, countable);
+ if (gpudev->perfcounter_update && (grp->flags &
+ ADRENO_PERFCOUNTER_GROUP_RESTORE))
+ ret = gpudev->perfcounter_update(adreno_dev, reg, true);
+ else
+ kgsl_regwrite(device, reg->select, countable);
}
if (!ret)
reg->value = 0;
- return 0;
+ return ret;
}
/**
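The `_perfcounter_inline_update()` helper added above decides whether a perfcounter select register may be programmed inline from the ringbuffer. A minimal sketch of that predicate, with placeholder group ids standing in for the kernel's `KGSL_PERFCOUNTER_GROUP_*` constants:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical group ids; the real KGSL_PERFCOUNTER_GROUP_* values differ. */
enum { GROUP_HLSQ = 1, GROUP_SP = 2, GROUP_TP = 3, GROUP_CP = 4 };

/*
 * Mirror of _perfcounter_inline_update(): on a6xx only the HLSQ, SP and
 * TP groups take the inline (ringbuffer) update path; other a6xx groups
 * fall through to a direct register write. On earlier GPUs every group
 * may be updated inline.
 */
static bool perfcounter_inline_update(bool is_a6xx, int group)
{
	if (is_a6xx)
		return group == GROUP_HLSQ || group == GROUP_SP ||
		       group == GROUP_TP;
	return true;
}
```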
diff --git a/drivers/gpu/msm/adreno_perfcounter.h b/drivers/gpu/msm/adreno_perfcounter.h
index 8c4db38..bcbc738 100644
--- a/drivers/gpu/msm/adreno_perfcounter.h
+++ b/drivers/gpu/msm/adreno_perfcounter.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2008-2015, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2008-2015, 2017 The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -70,6 +70,13 @@ struct adreno_perfcount_group {
#define ADRENO_PERFCOUNTER_GROUP_FIXED BIT(0)
+/*
+ * ADRENO_PERFCOUNTER_GROUP_RESTORE indicates CP needs to restore the select
+ * registers of this perfcounter group as part of preemption and IFPC
+ */
+#define ADRENO_PERFCOUNTER_GROUP_RESTORE BIT(1)
+
+
/**
* adreno_perfcounts: all available perfcounter groups
* @groups: available groups for this device
diff --git a/drivers/gpu/msm/adreno_ringbuffer.c b/drivers/gpu/msm/adreno_ringbuffer.c
index 70043db..01d9f71 100644
--- a/drivers/gpu/msm/adreno_ringbuffer.c
+++ b/drivers/gpu/msm/adreno_ringbuffer.c
@@ -80,44 +80,6 @@ static void adreno_get_submit_time(struct adreno_device *adreno_dev,
local_irq_restore(flags);
}
-/*
- * Wait time before trying to write the register again.
- * Hopefully the GMU has finished waking up during this delay.
- * This delay must be less than the IFPC main hysteresis or
- * the GMU will start shutting down before we try again.
- */
-#define GMU_WAKEUP_DELAY 10
-/* Max amount of tries to wake up the GMU. */
-#define GMU_WAKEUP_RETRY_MAX 60
-
-/*
- * Check the WRITEDROPPED0 bit in the
- * FENCE_STATUS regsiter to check if the write went
- * through. If it didn't then we retry the write.
- */
-static inline void _gmu_wptr_update_if_dropped(struct adreno_device *adreno_dev,
- struct adreno_ringbuffer *rb)
-{
- unsigned int val, i;
-
- for (i = 0; i < GMU_WAKEUP_RETRY_MAX; i++) {
- adreno_read_gmureg(adreno_dev, ADRENO_REG_GMU_AHB_FENCE_STATUS,
- &val);
-
- /* If !writedropped, then wptr update was successful */
- if (!(val & 0x1))
- return;
-
- /* Wait a small amount of time before trying again */
- udelay(GMU_WAKEUP_DELAY);
-
- /* Try to write WPTR again */
- adreno_writereg(adreno_dev, ADRENO_REG_CP_RB_WPTR, rb->_wptr);
- }
-
- dev_err(adreno_dev->dev.dev, "GMU WPTR update timed out\n");
-}
-
static void adreno_ringbuffer_wptr(struct adreno_device *adreno_dev,
struct adreno_ringbuffer *rb)
{
@@ -132,15 +94,14 @@ static void adreno_ringbuffer_wptr(struct adreno_device *adreno_dev,
* been submitted.
*/
kgsl_pwrscale_busy(KGSL_DEVICE(adreno_dev));
- adreno_writereg(adreno_dev, ADRENO_REG_CP_RB_WPTR,
- rb->_wptr);
/*
- * If GMU, ensure the write posted after a possible
+ * Ensure the write posted after a possible
* GMU wakeup (write could have dropped during wakeup)
*/
- if (kgsl_gmu_isenabled(KGSL_DEVICE(adreno_dev)))
- _gmu_wptr_update_if_dropped(adreno_dev, rb);
+ adreno_gmu_fenced_write(adreno_dev,
+ ADRENO_REG_CP_RB_WPTR, rb->_wptr,
+ FENCE_STATUS_WRITEDROPPED0_MASK);
}
}
@@ -425,6 +386,7 @@ adreno_ringbuffer_addcmds(struct adreno_ringbuffer *rb,
struct kgsl_context *context = NULL;
bool secured_ctxt = false;
static unsigned int _seq_cnt;
+ struct adreno_firmware *fw = ADRENO_FW(adreno_dev, ADRENO_FW_SQE);
if (drawctxt != NULL && kgsl_context_detached(&drawctxt->base) &&
!(flags & KGSL_CMD_FLAGS_INTERNAL_ISSUE))
@@ -494,11 +456,11 @@ adreno_ringbuffer_addcmds(struct adreno_ringbuffer *rb,
if (gpudev->preemption_pre_ibsubmit &&
adreno_is_preemption_execution_enabled(adreno_dev))
- total_sizedwords += 22;
+ total_sizedwords += 27;
if (gpudev->preemption_post_ibsubmit &&
adreno_is_preemption_execution_enabled(adreno_dev))
- total_sizedwords += 5;
+ total_sizedwords += 10;
/*
* a5xx uses 64 bit memory address. pm4 commands that involve read/write
@@ -559,8 +521,13 @@ adreno_ringbuffer_addcmds(struct adreno_ringbuffer *rb,
*ringcmds++ = KGSL_CMD_INTERNAL_IDENTIFIER;
}
- if (gpudev->set_marker)
- ringcmds += gpudev->set_marker(ringcmds, 1);
+ if (gpudev->set_marker) {
+ /* Firmware versions before 1.49 do not support IFPC markers */
+ if (adreno_is_a6xx(adreno_dev) && (fw->version & 0xFFF) < 0x149)
+ ringcmds += gpudev->set_marker(ringcmds, IB1LIST_START);
+ else
+ ringcmds += gpudev->set_marker(ringcmds, IFPC_DISABLE);
+ }
if (flags & KGSL_CMD_FLAGS_PWRON_FIXUP) {
/* Disable protected mode for the fixup */
@@ -680,8 +647,12 @@ adreno_ringbuffer_addcmds(struct adreno_ringbuffer *rb,
*ringcmds++ = timestamp;
}
- if (gpudev->set_marker)
- ringcmds += gpudev->set_marker(ringcmds, 0);
+ if (gpudev->set_marker) {
+ if (adreno_is_a6xx(adreno_dev) && (fw->version & 0xFFF) < 0x149)
+ ringcmds += gpudev->set_marker(ringcmds, IB1LIST_END);
+ else
+ ringcmds += gpudev->set_marker(ringcmds, IFPC_ENABLE);
+ }
if (adreno_is_a3xx(adreno_dev)) {
/* Dummy set-constant to trigger context rollover */
@@ -796,8 +767,9 @@ int adreno_ringbuffer_submitcmd(struct adreno_device *adreno_dev,
struct kgsl_drawobj_profiling_buffer *profile_buffer = NULL;
unsigned int dwords = 0;
struct adreno_submit_time local;
-
struct kgsl_mem_entry *entry = cmdobj->profiling_buf_entry;
+ struct adreno_firmware *fw = ADRENO_FW(adreno_dev, ADRENO_FW_SQE);
+ bool set_ib1list_marker = false;
if (entry)
profile_buffer = kgsl_gpuaddr_to_vaddr(&entry->memdesc,
@@ -907,6 +879,17 @@ int adreno_ringbuffer_submitcmd(struct adreno_device *adreno_dev,
dwords += 8;
}
+ /*
+ * Prior to SQE FW version 1.49, there was only one marker for
+ * both preemption and IFPC. Only include the IB1LIST markers if
+ * we are using a firmware that supports them.
+ */
+ if (gpudev->set_marker && numibs && adreno_is_a6xx(adreno_dev) &&
+ ((fw->version & 0xFFF) >= 0x149)) {
+ set_ib1list_marker = true;
+ dwords += 4;
+ }
+
if (gpudev->ccu_invalidate)
dwords += 4;
@@ -940,6 +923,9 @@ int adreno_ringbuffer_submitcmd(struct adreno_device *adreno_dev,
}
if (numibs) {
+ if (set_ib1list_marker)
+ cmds += gpudev->set_marker(cmds, IB1LIST_START);
+
list_for_each_entry(ib, &cmdobj->cmdlist, node) {
/*
* Skip 0 sized IBs - these are presumed to have been
@@ -958,6 +944,9 @@ int adreno_ringbuffer_submitcmd(struct adreno_device *adreno_dev,
/* preamble is required on only for first command */
use_preamble = false;
}
+
+ if (set_ib1list_marker)
+ cmds += gpudev->set_marker(cmds, IB1LIST_END);
}
if (gpudev->ccu_invalidate)
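The ringbuffer changes above gate the new IB1LIST markers on SQE firmware 1.49 by masking the low 12 bits of `fw->version`. A sketch of that version check; how the rest of the version word is packed is an assumption beyond the masked comparison the patch actually performs:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Mirror of the patch's gate: before SQE firmware 1.49 there was a single
 * combined preemption/IFPC marker, so IB1LIST_* markers are emitted from
 * addcmds only on older firmware and from submitcmd only on 1.49+.
 * Only the low 12 bits of the version word take part in the comparison.
 */
static bool sqe_supports_ifpc_markers(unsigned int fw_version)
{
	return (fw_version & 0xFFF) >= 0x149;
}
```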
diff --git a/drivers/gpu/msm/adreno_ringbuffer.h b/drivers/gpu/msm/adreno_ringbuffer.h
index 72fc5bf..fbee627 100644
--- a/drivers/gpu/msm/adreno_ringbuffer.h
+++ b/drivers/gpu/msm/adreno_ringbuffer.h
@@ -92,6 +92,8 @@ struct adreno_ringbuffer_pagetable_info {
* @drawctxt_active: The last pagetable that this ringbuffer is set to
* @preemption_desc: The memory descriptor containing
* preemption info written/read by CP
+ * @secure_preemption_desc: The memory descriptor containing
+ * preemption info written/read by CP for secure contexts
* @perfcounter_save_restore_desc: Used by CP to save/restore the perfcounter
* values across preemption
* @pagetable_desc: Memory to hold information about the pagetables being used
@@ -120,6 +122,7 @@ struct adreno_ringbuffer {
struct kgsl_event_group events;
struct adreno_context *drawctxt_active;
struct kgsl_memdesc preemption_desc;
+ struct kgsl_memdesc secure_preemption_desc;
struct kgsl_memdesc perfcounter_save_restore_desc;
struct kgsl_memdesc pagetable_desc;
struct adreno_dispatcher_drawqueue dispatch_q;
diff --git a/drivers/gpu/msm/adreno_sysfs.c b/drivers/gpu/msm/adreno_sysfs.c
index fcf0417..e309ab0 100644
--- a/drivers/gpu/msm/adreno_sysfs.c
+++ b/drivers/gpu/msm/adreno_sysfs.c
@@ -29,6 +29,13 @@ struct adreno_sysfs_attribute adreno_attr_##_name = { \
.store = _ ## _name ## _store, \
}
+#define _ADRENO_SYSFS_ATTR_RO(_name, __show) \
+struct adreno_sysfs_attribute adreno_attr_##_name = { \
+ .attr = __ATTR(_name, 0644, __show, NULL), \
+ .show = _ ## _name ## _show, \
+ .store = NULL, \
+}
+
#define ADRENO_SYSFS_ATTR(_a) \
container_of((_a), struct adreno_sysfs_attribute, attr)
@@ -331,6 +338,13 @@ static unsigned int _ifpc_show(struct adreno_device *adreno_dev)
return kgsl_gmu_isenabled(device) && gmu->idle_level >= GPU_HW_IFPC;
}
+static unsigned int _preempt_count_show(struct adreno_device *adreno_dev)
+{
+ struct adreno_preemption *preempt = &adreno_dev->preempt;
+
+ return preempt->count;
+}
+
static ssize_t _sysfs_store_u32(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
@@ -411,9 +425,13 @@ static ssize_t _sysfs_show_bool(struct device *dev,
#define ADRENO_SYSFS_U32(_name) \
_ADRENO_SYSFS_ATTR(_name, _sysfs_show_u32, _sysfs_store_u32)
+#define ADRENO_SYSFS_RO_U32(_name) \
+ _ADRENO_SYSFS_ATTR_RO(_name, _sysfs_show_u32)
+
static ADRENO_SYSFS_U32(ft_policy);
static ADRENO_SYSFS_U32(ft_pagefault_policy);
static ADRENO_SYSFS_U32(preempt_level);
+static ADRENO_SYSFS_RO_U32(preempt_count);
static ADRENO_SYSFS_BOOL(usesgmem);
static ADRENO_SYSFS_BOOL(skipsaverestore);
static ADRENO_SYSFS_BOOL(ft_long_ib_detect);
@@ -451,6 +469,7 @@ static const struct device_attribute *_attr_list[] = {
&adreno_attr_usesgmem.attr,
&adreno_attr_skipsaverestore.attr,
&adreno_attr_ifpc.attr,
+ &adreno_attr_preempt_count.attr,
NULL,
};
diff --git a/drivers/gpu/msm/kgsl.c b/drivers/gpu/msm/kgsl.c
index 2e1ceea..5d07380 100644
--- a/drivers/gpu/msm/kgsl.c
+++ b/drivers/gpu/msm/kgsl.c
@@ -1805,18 +1805,15 @@ long kgsl_ioctl_drawctxt_destroy(struct kgsl_device_private *dev_priv,
long gpumem_free_entry(struct kgsl_mem_entry *entry)
{
- pid_t ptname = 0;
-
if (!kgsl_mem_entry_set_pend(entry))
return -EBUSY;
trace_kgsl_mem_free(entry);
-
- if (entry->memdesc.pagetable != NULL)
- ptname = entry->memdesc.pagetable->name;
-
- kgsl_memfree_add(entry->priv->pid, ptname, entry->memdesc.gpuaddr,
- entry->memdesc.size, entry->memdesc.flags);
+ kgsl_memfree_add(entry->priv->pid,
+ entry->memdesc.pagetable ?
+ entry->memdesc.pagetable->name : 0,
+ entry->memdesc.gpuaddr, entry->memdesc.size,
+ entry->memdesc.flags);
kgsl_mem_entry_put(entry);
@@ -1835,6 +1832,12 @@ static void gpumem_free_func(struct kgsl_device *device,
/* Free the memory for all event types */
trace_kgsl_mem_timestamp_free(device, entry, KGSL_CONTEXT_ID(context),
timestamp, 0);
+ kgsl_memfree_add(entry->priv->pid,
+ entry->memdesc.pagetable ?
+ entry->memdesc.pagetable->name : 0,
+ entry->memdesc.gpuaddr, entry->memdesc.size,
+ entry->memdesc.flags);
+
kgsl_mem_entry_put(entry);
}
@@ -1928,6 +1931,13 @@ static bool gpuobj_free_fence_func(void *priv)
{
struct kgsl_mem_entry *entry = priv;
+ trace_kgsl_mem_free(entry);
+ kgsl_memfree_add(entry->priv->pid,
+ entry->memdesc.pagetable ?
+ entry->memdesc.pagetable->name : 0,
+ entry->memdesc.gpuaddr, entry->memdesc.size,
+ entry->memdesc.flags);
+
INIT_WORK(&entry->work, _deferred_put);
queue_work(kgsl_driver.mem_workqueue, &entry->work);
return true;
@@ -1960,15 +1970,15 @@ static long gpuobj_free_on_fence(struct kgsl_device_private *dev_priv,
handle = kgsl_sync_fence_async_wait(event.fd,
gpuobj_free_fence_func, entry, NULL, 0);
- /* if handle is NULL the fence has already signaled */
- if (handle == NULL)
- return gpumem_free_entry(entry);
-
if (IS_ERR(handle)) {
kgsl_mem_entry_unset_pend(entry);
return PTR_ERR(handle);
}
+ /* if handle is NULL the fence has already signaled */
+ if (handle == NULL)
+ gpuobj_free_fence_func(entry);
+
return 0;
}
@@ -2284,7 +2294,8 @@ static long _gpuobj_map_useraddr(struct kgsl_device *device,
param->flags &= KGSL_MEMFLAGS_GPUREADONLY
| KGSL_CACHEMODE_MASK
| KGSL_MEMTYPE_MASK
- | KGSL_MEMFLAGS_FORCE_32BIT;
+ | KGSL_MEMFLAGS_FORCE_32BIT
+ | KGSL_MEMFLAGS_IOCOHERENT;
/* Specifying SECURE is an explicit error */
if (param->flags & KGSL_MEMFLAGS_SECURE)
@@ -2378,7 +2389,12 @@ long kgsl_ioctl_gpuobj_import(struct kgsl_device_private *dev_priv,
| KGSL_MEMALIGN_MASK
| KGSL_MEMFLAGS_USE_CPU_MAP
| KGSL_MEMFLAGS_SECURE
- | KGSL_MEMFLAGS_FORCE_32BIT;
+ | KGSL_MEMFLAGS_FORCE_32BIT
+ | KGSL_MEMFLAGS_IOCOHERENT;
+
+ /* Disable IO coherence if it is not supported on the chip */
+ if (!MMU_FEATURE(mmu, KGSL_MMU_IO_COHERENT))
+ param->flags &= ~((uint64_t)KGSL_MEMFLAGS_IOCOHERENT);
entry->memdesc.flags = param->flags;
@@ -2663,7 +2679,13 @@ long kgsl_ioctl_map_user_mem(struct kgsl_device_private *dev_priv,
| KGSL_MEMTYPE_MASK
| KGSL_MEMALIGN_MASK
| KGSL_MEMFLAGS_USE_CPU_MAP
- | KGSL_MEMFLAGS_SECURE;
+ | KGSL_MEMFLAGS_SECURE
+ | KGSL_MEMFLAGS_IOCOHERENT;
+
+ /* Disable IO coherence if it is not supported on the chip */
+ if (!MMU_FEATURE(mmu, KGSL_MMU_IO_COHERENT))
+ param->flags &= ~((uint64_t)KGSL_MEMFLAGS_IOCOHERENT);
+
entry->memdesc.flags = ((uint64_t) param->flags)
| KGSL_MEMFLAGS_FORCE_32BIT;
@@ -3062,6 +3084,7 @@ struct kgsl_mem_entry *gpumem_alloc_entry(
int ret;
struct kgsl_process_private *private = dev_priv->process_priv;
struct kgsl_mem_entry *entry;
+ struct kgsl_mmu *mmu = &dev_priv->device->mmu;
unsigned int align;
flags &= KGSL_MEMFLAGS_GPUREADONLY
@@ -3070,14 +3093,15 @@ struct kgsl_mem_entry *gpumem_alloc_entry(
| KGSL_MEMALIGN_MASK
| KGSL_MEMFLAGS_USE_CPU_MAP
| KGSL_MEMFLAGS_SECURE
- | KGSL_MEMFLAGS_FORCE_32BIT;
+ | KGSL_MEMFLAGS_FORCE_32BIT
+ | KGSL_MEMFLAGS_IOCOHERENT;
/* Turn off SVM if the system doesn't support it */
- if (!kgsl_mmu_use_cpu_map(&dev_priv->device->mmu))
+ if (!kgsl_mmu_use_cpu_map(mmu))
flags &= ~((uint64_t) KGSL_MEMFLAGS_USE_CPU_MAP);
/* Return not supported error if secure memory isn't enabled */
- if (!kgsl_mmu_is_secured(&dev_priv->device->mmu) &&
+ if (!kgsl_mmu_is_secured(mmu) &&
(flags & KGSL_MEMFLAGS_SECURE)) {
dev_WARN_ONCE(dev_priv->device->dev, 1,
"Secure memory not supported");
@@ -3106,11 +3130,15 @@ struct kgsl_mem_entry *gpumem_alloc_entry(
flags = kgsl_filter_cachemode(flags);
+ /* Disable IO coherence if it is not supported on the chip */
+ if (!MMU_FEATURE(mmu, KGSL_MMU_IO_COHERENT))
+ flags &= ~((uint64_t)KGSL_MEMFLAGS_IOCOHERENT);
+
entry = kgsl_mem_entry_create();
if (entry == NULL)
return ERR_PTR(-ENOMEM);
- if (MMU_FEATURE(&dev_priv->device->mmu, KGSL_MMU_NEED_GUARD_PAGE))
+ if (MMU_FEATURE(mmu, KGSL_MMU_NEED_GUARD_PAGE))
entry->memdesc.priv |= KGSL_MEMDESC_GUARD_PAGE;
if (flags & KGSL_MEMFLAGS_SECURE)
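The kgsl.c hunks above repeat one pattern in three ioctl paths: accept `KGSL_MEMFLAGS_IOCOHERENT` from user space, then strip it when the MMU lacks the feature. A sketch of that filter with a placeholder flag value; note why the patch casts the flag to `uint64_t` before negating it:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Placeholder bit; the real KGSL_MEMFLAGS_IOCOHERENT value differs. */
#define MEMFLAGS_IOCOHERENT ((uint64_t)1 << 31)

/*
 * Mirror of the repeated pattern in gpumem_alloc_entry() and the import
 * ioctls. The (uint64_t) cast in the patch matters: negating a 32-bit
 * flag constant and then widening it would zero-extend and wipe the top
 * 32 bits of a 64-bit flags word.
 */
static uint64_t filter_iocoherent(uint64_t flags, bool mmu_has_io_coherent)
{
	if (!mmu_has_io_coherent)
		flags &= ~MEMFLAGS_IOCOHERENT;
	return flags;
}
```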
diff --git a/drivers/gpu/msm/kgsl.h b/drivers/gpu/msm/kgsl.h
index f80da79..023e63e 100644
--- a/drivers/gpu/msm/kgsl.h
+++ b/drivers/gpu/msm/kgsl.h
@@ -75,6 +75,7 @@
* Used Data:
* Offset: Length(bytes): What
* 0x0: 4 * KGSL_PRIORITY_MAX_RB_LEVELS: RB0 RPTR
+ * 0x10: 8 * KGSL_PRIORITY_MAX_RB_LEVELS: RB0 CTXT RESTORE ADDR
*/
/* Shadow global helpers */
@@ -82,6 +83,13 @@
#define SCRATCH_RPTR_GPU_ADDR(dev, id) \
((dev)->scratch.gpuaddr + SCRATCH_RPTR_OFFSET(id))
+#define SCRATCH_PREEMPTION_CTXT_RESTORE_ADDR_OFFSET(id) \
+ (SCRATCH_RPTR_OFFSET(KGSL_PRIORITY_MAX_RB_LEVELS) + \
+ ((id) * sizeof(uint64_t)))
+#define SCRATCH_PREEMPTION_CTXT_RESTORE_GPU_ADDR(dev, id) \
+ ((dev)->scratch.gpuaddr + \
+ SCRATCH_PREEMPTION_CTXT_RESTORE_ADDR_OFFSET(id))
+
/* Timestamp window used to detect rollovers (half of integer range) */
#define KGSL_TIMESTAMP_WINDOW 0x80000000
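The scratch-layout comment above places the new per-ringbuffer context-restore slots at 0x10, directly after the 32-bit RPTR slots. A self-contained check of that arithmetic, assuming `KGSL_PRIORITY_MAX_RB_LEVELS` is 4 (which is what the 0x10 offset in the layout comment implies) and that `SCRATCH_RPTR_OFFSET(id)` is `id * sizeof(uint32_t)`:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed value, consistent with the 0x10 offset in the layout comment. */
#define MAX_RB_LEVELS 4

/* Assumed shape of the existing RPTR macro: one 32-bit slot per RB. */
#define SCRATCH_RPTR_OFFSET(id) ((id) * sizeof(uint32_t))

/* Mirror of the new macro: 64-bit restore-address slots after the RPTRs. */
#define SCRATCH_PREEMPTION_CTXT_RESTORE_ADDR_OFFSET(id) \
	(SCRATCH_RPTR_OFFSET(MAX_RB_LEVELS) + ((id) * sizeof(uint64_t)))
```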
diff --git a/drivers/gpu/msm/kgsl_debugfs.c b/drivers/gpu/msm/kgsl_debugfs.c
index e339a08..834706a 100644
--- a/drivers/gpu/msm/kgsl_debugfs.c
+++ b/drivers/gpu/msm/kgsl_debugfs.c
@@ -303,6 +303,7 @@ static int print_sparse_mem_entry(int id, void *ptr, void *data)
if (!(m->flags & KGSL_MEMFLAGS_SPARSE_VIRT))
return 0;
+ spin_lock(&entry->bind_lock);
node = rb_first(&entry->bind_tree);
while (node != NULL) {
@@ -313,6 +314,7 @@ static int print_sparse_mem_entry(int id, void *ptr, void *data)
obj->v_off, obj->size, obj->p_off);
node = rb_next(node);
}
+ spin_unlock(&entry->bind_lock);
seq_putc(s, '\n');
diff --git a/drivers/gpu/msm/kgsl_gmu.c b/drivers/gpu/msm/kgsl_gmu.c
index 56496f7..0a7424a 100644
--- a/drivers/gpu/msm/kgsl_gmu.c
+++ b/drivers/gpu/msm/kgsl_gmu.c
@@ -1620,3 +1620,46 @@ void gmu_remove(struct kgsl_device *device)
device->gmu.pdev = NULL;
}
+
+/*
+ * adreno_gmu_fenced_write() - Write a fenced register, retrying if dropped
+ * @adreno_dev: Pointer to the Adreno device that owns the GMU
+ * @offset: 32bit register enum that is to be written
+ * @val: The value to be written to the register
+ * @fence_mask: The value to poll the fence status register
+ *
+ * Check the WRITEDROPPED0/1 bit in the FENCE_STATUS register to check if
+ * the write to the fenced register went through. If it didn't then we retry
+ * the write until it goes through or we time out.
+ */
+void adreno_gmu_fenced_write(struct adreno_device *adreno_dev,
+ enum adreno_regs offset, unsigned int val,
+ unsigned int fence_mask)
+{
+ unsigned int status, i;
+
+ adreno_writereg(adreno_dev, offset, val);
+
+ if (!kgsl_gmu_isenabled(KGSL_DEVICE(adreno_dev)))
+ return;
+
+ for (i = 0; i < GMU_WAKEUP_RETRY_MAX; i++) {
+ adreno_read_gmureg(adreno_dev, ADRENO_REG_GMU_AHB_FENCE_STATUS,
+ &status);
+
+ /*
+ * If !writedropped0/1, then the write to fenced register
+ * was successful
+ */
+ if (!(status & fence_mask))
+ return;
+ /* Wait a small amount of time before trying again */
+ udelay(GMU_WAKEUP_DELAY_US);
+
+ /* Try to write the fenced register again */
+ adreno_writereg(adreno_dev, offset, val);
+ }
+
+ dev_err(adreno_dev->dev.dev,
+ "GMU fenced register write timed out: reg %x\n", offset);
+}
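The write/poll/rewrite loop in `adreno_gmu_fenced_write()` above is the generalization of the removed `_gmu_wptr_update_if_dropped()`. A user-space model of the same retry pattern, with function pointers standing in for `adreno_writereg`/`adreno_read_gmureg` so it can be exercised without hardware (the kernel version also `udelay()`s between retries and skips the loop entirely when no GMU is enabled):

```c
#include <assert.h>
#include <stdbool.h>

#define WAKEUP_RETRY_MAX 60		/* mirrors GMU_WAKEUP_RETRY_MAX */
#define WRITEDROPPED0_MASK 0x1		/* mirrors FENCE_STATUS_WRITEDROPPED0_MASK */

/*
 * Model of the fenced write: issue the write, poll the fence status, and
 * reissue the write until the fenced register accepts it or retries run out.
 */
static bool fenced_write(void (*write_reg)(unsigned int),
			 unsigned int (*read_status)(void),
			 unsigned int val, unsigned int fence_mask)
{
	int i;

	write_reg(val);
	for (i = 0; i < WAKEUP_RETRY_MAX; i++) {
		if (!(read_status() & fence_mask))
			return true;	/* write was not dropped */
		write_reg(val);		/* retry the dropped write */
	}
	return false;			/* caller logs a timeout */
}

/* Fakes for a self-test: report "dropped" for a few polls, then clear. */
static int drops_left;
static unsigned int last_written;
static void fake_write(unsigned int v) { last_written = v; }
static unsigned int fake_status(void) { return drops_left-- > 0 ? 1 : 0; }
```

The bounded retry count matters because each retry burns `GMU_WAKEUP_DELAY_US`, and the comment in kgsl_gmu.h notes that delay must stay below the IFPC hysteresis.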
diff --git a/drivers/gpu/msm/kgsl_gmu.h b/drivers/gpu/msm/kgsl_gmu.h
index 60d9cf8..90e87e4 100644
--- a/drivers/gpu/msm/kgsl_gmu.h
+++ b/drivers/gpu/msm/kgsl_gmu.h
@@ -30,8 +30,11 @@
GMU_INT_HOST_AHB_BUS_ERR | \
GMU_INT_FENCE_ERR)
-#define MAX_GMUFW_SIZE 0x2000 /* in dwords */
-#define FENCE_RANGE_MASK ((0x1 << 31) | (0x0A << 18) | (0x8A0))
+#define MAX_GMUFW_SIZE 0x2000 /* in bytes */
+#define FENCE_RANGE_MASK ((0x1 << 31) | ((0xA << 2) << 18) | (0x8A0))
+
+#define FENCE_STATUS_WRITEDROPPED0_MASK 0x1
+#define FENCE_STATUS_WRITEDROPPED1_MASK 0x2
/* Bitmask for GPU low power mode enabling and hysterisis*/
#define SPTP_ENABLE_MASK (BIT(2) | BIT(0))
@@ -78,6 +81,19 @@
#define OOB_PERFCNTR_SET_MASK BIT(17)
#define OOB_PERFCNTR_CHECK_MASK BIT(25)
#define OOB_PERFCNTR_CLEAR_MASK BIT(25)
+#define OOB_PREEMPTION_SET_MASK BIT(18)
+#define OOB_PREEMPTION_CHECK_MASK BIT(26)
+#define OOB_PREEMPTION_CLEAR_MASK BIT(26)
+
+/*
+ * Wait time before trying to write the register again.
+ * Hopefully the GMU has finished waking up during this delay.
+ * This delay must be less than the IFPC main hysteresis or
+ * the GMU will start shutting down before we try again.
+ */
+#define GMU_WAKEUP_DELAY_US 10
+/* Max amount of tries to wake up the GMU. */
+#define GMU_WAKEUP_RETRY_MAX 60
/* Bits for the flags field in the gmu structure */
enum gmu_flags {
diff --git a/drivers/gpu/msm/kgsl_hfi.c b/drivers/gpu/msm/kgsl_hfi.c
index eef5f45..3a5b489 100644
--- a/drivers/gpu/msm/kgsl_hfi.c
+++ b/drivers/gpu/msm/kgsl_hfi.c
@@ -183,7 +183,7 @@ static void receive_ack_msg(struct gmu_device *gmu, struct hfi_msg_rsp *rsp)
rsp->ret_hdr.size,
rsp->ret_hdr.seqnum);
- spin_lock(&hfi->msglock);
+ spin_lock_bh(&hfi->msglock);
list_for_each_entry_safe(msg, next, &hfi->msglist, node) {
if (msg->msg_id == rsp->ret_hdr.id &&
msg->seqnum == rsp->ret_hdr.seqnum) {
@@ -193,7 +193,7 @@ static void receive_ack_msg(struct gmu_device *gmu, struct hfi_msg_rsp *rsp)
}
if (in_queue == false) {
- spin_unlock(&hfi->msglock);
+ spin_unlock_bh(&hfi->msglock);
dev_err(&gmu->pdev->dev,
"Cannot find receiver of ack msg with id=%d\n",
rsp->ret_hdr.id);
@@ -202,7 +202,7 @@ static void receive_ack_msg(struct gmu_device *gmu, struct hfi_msg_rsp *rsp)
memcpy(&msg->results, (void *) rsp, rsp->hdr.size << 2);
complete(&msg->msg_complete);
- spin_unlock(&hfi->msglock);
+ spin_unlock_bh(&hfi->msglock);
}
static void receive_err_msg(struct gmu_device *gmu, struct hfi_msg_rsp *rsp)
@@ -231,9 +231,9 @@ static int hfi_send_msg(struct gmu_device *gmu, struct hfi_msg_hdr *msg,
ret_msg->msg_id = msg->id;
ret_msg->seqnum = msg->seqnum;
- spin_lock(&hfi->msglock);
+ spin_lock_bh(&hfi->msglock);
list_add_tail(&ret_msg->node, &hfi->msglist);
- spin_unlock(&hfi->msglock);
+ spin_unlock_bh(&hfi->msglock);
if (hfi_cmdq_write(gmu, HFI_CMD_QUEUE, msg) != size) {
rc = -EINVAL;
@@ -253,9 +253,9 @@ static int hfi_send_msg(struct gmu_device *gmu, struct hfi_msg_hdr *msg,
/* If we got here we succeeded */
rc = 0;
done:
- spin_lock(&hfi->msglock);
+ spin_lock_bh(&hfi->msglock);
list_del(&ret_msg->node);
- spin_unlock(&hfi->msglock);
+ spin_unlock_bh(&hfi->msglock);
return rc;
}
diff --git a/drivers/gpu/msm/kgsl_iommu.c b/drivers/gpu/msm/kgsl_iommu.c
index dc0e733..ab3ab31 100644
--- a/drivers/gpu/msm/kgsl_iommu.c
+++ b/drivers/gpu/msm/kgsl_iommu.c
@@ -110,7 +110,7 @@ struct global_pt_entry {
};
static struct global_pt_entry global_pt_entries[GLOBAL_PT_ENTRIES];
-static struct kgsl_memdesc *kgsl_global_secure_pt_entry;
+static int secure_global_size;
static int global_pt_count;
uint64_t global_pt_alloc;
static struct kgsl_memdesc gpu_qdss_desc;
@@ -162,24 +162,33 @@ static int kgsl_iommu_map_globals(struct kgsl_pagetable *pagetable)
return 0;
}
-static void kgsl_iommu_unmap_global_secure_pt_entry(struct kgsl_pagetable
- *pagetable)
+void kgsl_iommu_unmap_global_secure_pt_entry(struct kgsl_device *device,
+ struct kgsl_memdesc *entry)
{
- struct kgsl_memdesc *entry = kgsl_global_secure_pt_entry;
+ if (!kgsl_mmu_is_secured(&device->mmu))
+ return;
- if (entry != NULL)
- kgsl_mmu_unmap(pagetable, entry);
+ if (entry != NULL && entry->pagetable->name == KGSL_MMU_SECURE_PT)
+ kgsl_mmu_unmap(entry->pagetable, entry);
}
-static int kgsl_map_global_secure_pt_entry(struct kgsl_pagetable *pagetable)
+int kgsl_iommu_map_global_secure_pt_entry(struct kgsl_device *device,
+ struct kgsl_memdesc *entry)
{
int ret = 0;
- struct kgsl_memdesc *entry = kgsl_global_secure_pt_entry;
+
+ if (!kgsl_mmu_is_secured(&device->mmu))
+ return -ENOTSUPP;
if (entry != NULL) {
+ struct kgsl_pagetable *pagetable = device->mmu.securepagetable;
entry->pagetable = pagetable;
+ entry->gpuaddr = KGSL_IOMMU_SECURE_BASE + secure_global_size;
+
ret = kgsl_mmu_map(pagetable, entry);
+ if (ret == 0)
+ secure_global_size += entry->size;
}
return ret;
}
@@ -224,13 +233,6 @@ static void kgsl_iommu_add_global(struct kgsl_mmu *mmu,
global_pt_count++;
}
-void kgsl_add_global_secure_entry(struct kgsl_device *device,
- struct kgsl_memdesc *memdesc)
-{
- memdesc->gpuaddr = KGSL_IOMMU_SECURE_BASE;
- kgsl_global_secure_pt_entry = memdesc;
-}
-
struct kgsl_memdesc *kgsl_iommu_get_qdss_global_entry(void)
{
return &gpu_qdss_desc;
@@ -1068,7 +1070,6 @@ static void kgsl_iommu_destroy_pagetable(struct kgsl_pagetable *pt)
if (pt->name == KGSL_MMU_SECURE_PT) {
ctx = &iommu->ctx[KGSL_IOMMU_CONTEXT_SECURE];
- kgsl_iommu_unmap_global_secure_pt_entry(pt);
} else {
ctx = &iommu->ctx[KGSL_IOMMU_CONTEXT_USER];
kgsl_iommu_unmap_globals(pt);
@@ -1089,13 +1090,10 @@ static void setup_64bit_pagetable(struct kgsl_mmu *mmu,
struct kgsl_pagetable *pagetable,
struct kgsl_iommu_pt *pt)
{
- unsigned int secure_global_size = kgsl_global_secure_pt_entry != NULL ?
- kgsl_global_secure_pt_entry->size : 0;
if (mmu->secured && pagetable->name == KGSL_MMU_SECURE_PT) {
- pt->compat_va_start = KGSL_IOMMU_SECURE_BASE +
- secure_global_size;
+ pt->compat_va_start = KGSL_IOMMU_SECURE_BASE;
pt->compat_va_end = KGSL_IOMMU_SECURE_END;
- pt->va_start = KGSL_IOMMU_SECURE_BASE + secure_global_size;
+ pt->va_start = KGSL_IOMMU_SECURE_BASE;
pt->va_end = KGSL_IOMMU_SECURE_END;
} else {
pt->compat_va_start = KGSL_IOMMU_SVM_BASE32;
@@ -1120,20 +1118,15 @@ static void setup_32bit_pagetable(struct kgsl_mmu *mmu,
struct kgsl_pagetable *pagetable,
struct kgsl_iommu_pt *pt)
{
- unsigned int secure_global_size = kgsl_global_secure_pt_entry != NULL ?
- kgsl_global_secure_pt_entry->size : 0;
if (mmu->secured) {
if (pagetable->name == KGSL_MMU_SECURE_PT) {
- pt->compat_va_start = KGSL_IOMMU_SECURE_BASE +
- secure_global_size;
+ pt->compat_va_start = KGSL_IOMMU_SECURE_BASE;
pt->compat_va_end = KGSL_IOMMU_SECURE_END;
- pt->va_start = KGSL_IOMMU_SECURE_BASE +
- secure_global_size;
+ pt->va_start = KGSL_IOMMU_SECURE_BASE;
pt->va_end = KGSL_IOMMU_SECURE_END;
} else {
pt->va_start = KGSL_IOMMU_SVM_BASE32;
- pt->va_end = KGSL_IOMMU_SECURE_BASE +
- secure_global_size;
+ pt->va_end = KGSL_IOMMU_SECURE_BASE;
pt->compat_va_start = pt->va_start;
pt->compat_va_end = pt->va_end;
}
@@ -1363,8 +1356,6 @@ static int _init_secure_pt(struct kgsl_mmu *mmu, struct kgsl_pagetable *pt)
ctx->regbase = iommu->regbase + KGSL_IOMMU_CB0_OFFSET
+ (cb_num << KGSL_IOMMU_CB_SHIFT);
- ret = kgsl_map_global_secure_pt_entry(pt);
-
done:
if (ret)
_free_pt(ctx, pt);
@@ -1608,6 +1599,18 @@ static int kgsl_iommu_init(struct kgsl_mmu *mmu)
kgsl_setup_qdss_desc(device);
kgsl_setup_qtimer_desc(device);
+ if (!mmu->secured)
+ goto done;
+
+ mmu->securepagetable = kgsl_mmu_getpagetable(mmu,
+ KGSL_MMU_SECURE_PT);
+ if (IS_ERR(mmu->securepagetable)) {
+ status = PTR_ERR(mmu->securepagetable);
+ mmu->securepagetable = NULL;
+ } else if (mmu->securepagetable == NULL) {
+ status = -ENOMEM;
+ }
+
done:
if (status)
kgsl_iommu_close(mmu);
@@ -1689,17 +1692,9 @@ static int _setup_secure_context(struct kgsl_mmu *mmu)
if (ctx->dev == NULL || !mmu->secured)
return 0;
- if (mmu->securepagetable == NULL) {
- mmu->securepagetable = kgsl_mmu_getpagetable(mmu,
- KGSL_MMU_SECURE_PT);
- if (IS_ERR(mmu->securepagetable)) {
- ret = PTR_ERR(mmu->securepagetable);
- mmu->securepagetable = NULL;
- return ret;
- } else if (mmu->securepagetable == NULL) {
- return -ENOMEM;
- }
- }
+ if (mmu->securepagetable == NULL)
+ return -ENOMEM;
+
iommu_pt = mmu->securepagetable->priv;
ret = _attach_pt(iommu_pt, ctx);
@@ -1840,6 +1835,9 @@ static unsigned int _get_protection_flags(struct kgsl_memdesc *memdesc)
if (memdesc->priv & KGSL_MEMDESC_PRIVILEGED)
flags |= IOMMU_PRIV;
+ if (memdesc->flags & KGSL_MEMFLAGS_IOCOHERENT)
+ flags |= IOMMU_CACHE;
+
return flags;
}
@@ -2502,6 +2500,13 @@ static int kgsl_iommu_get_gpuaddr(struct kgsl_pagetable *pagetable,
end = pt->va_end;
}
+ /*
+ * When mapping secure buffers, adjust the start of the va range
+ * to the end of secure global buffers.
+ */
+ if (kgsl_memdesc_is_secured(memdesc))
+ start += secure_global_size;
+
spin_lock(&pagetable->lock);
addr = _get_unmapped_area(pagetable, start, end, size, align);
diff --git a/drivers/gpu/msm/kgsl_mmu.h b/drivers/gpu/msm/kgsl_mmu.h
index 7a8ab74..430a140 100644
--- a/drivers/gpu/msm/kgsl_mmu.h
+++ b/drivers/gpu/msm/kgsl_mmu.h
@@ -138,6 +138,8 @@ struct kgsl_mmu_pt_ops {
#define KGSL_MMU_PAGED BIT(8)
/* The device requires a guard page */
#define KGSL_MMU_NEED_GUARD_PAGE BIT(9)
+/* The device supports IO coherency */
+#define KGSL_MMU_IO_COHERENT BIT(10)
/**
* struct kgsl_mmu - Master definition for KGSL MMU devices
@@ -174,7 +176,9 @@ int kgsl_mmu_start(struct kgsl_device *device);
struct kgsl_pagetable *kgsl_mmu_getpagetable_ptbase(struct kgsl_mmu *mmu,
u64 ptbase);
-void kgsl_add_global_secure_entry(struct kgsl_device *device,
+int kgsl_iommu_map_global_secure_pt_entry(struct kgsl_device *device,
+ struct kgsl_memdesc *memdesc);
+void kgsl_iommu_unmap_global_secure_pt_entry(struct kgsl_device *device,
struct kgsl_memdesc *memdesc);
void kgsl_print_global_pt_entries(struct seq_file *s);
void kgsl_mmu_putpagetable(struct kgsl_pagetable *pagetable);
diff --git a/drivers/gpu/msm/kgsl_pwrscale.c b/drivers/gpu/msm/kgsl_pwrscale.c
index 20590ea..6825c2b 100644
--- a/drivers/gpu/msm/kgsl_pwrscale.c
+++ b/drivers/gpu/msm/kgsl_pwrscale.c
@@ -372,7 +372,7 @@ static bool popp_stable(struct kgsl_device *device)
}
if (nap_time && go_time) {
percent_nap = 100 * nap_time;
- do_div(percent_nap, nap_time + go_time);
+ percent_nap = div64_s64(percent_nap, nap_time + go_time);
}
trace_kgsl_popp_nap(device, (int)nap_time / 1000, nap,
percent_nap);
@@ -843,13 +843,17 @@ int kgsl_busmon_target(struct device *dev, unsigned long *freq, u32 flags)
}
b = pwr->bus_mod;
- if (_check_fast_hint(bus_flag) &&
- ((pwr_level->bus_freq + pwr->bus_mod) < pwr_level->bus_max))
+ if (_check_fast_hint(bus_flag))
pwr->bus_mod++;
- else if (_check_slow_hint(bus_flag) &&
- ((pwr_level->bus_freq + pwr->bus_mod) > pwr_level->bus_min))
+ else if (_check_slow_hint(bus_flag))
pwr->bus_mod--;
+ /* trim calculated change to fit range */
+ if (pwr_level->bus_freq + pwr->bus_mod < pwr_level->bus_min)
+ pwr->bus_mod = -(pwr_level->bus_freq - pwr_level->bus_min);
+ else if (pwr_level->bus_freq + pwr->bus_mod > pwr_level->bus_max)
+ pwr->bus_mod = pwr_level->bus_max - pwr_level->bus_freq;
+
/* Update bus vote if AB or IB is modified */
if ((pwr->bus_mod != b) || (pwr->bus_ab_mbytes != ab_mbytes)) {
pwr->bus_percent_ab = device->pwrscale.bus_profile.percent_ab;
diff --git a/drivers/gpu/msm/kgsl_sharedmem.c b/drivers/gpu/msm/kgsl_sharedmem.c
index de5df54..a9c2252 100644
--- a/drivers/gpu/msm/kgsl_sharedmem.c
+++ b/drivers/gpu/msm/kgsl_sharedmem.c
@@ -126,12 +126,10 @@ static ssize_t mem_entry_sysfs_show(struct kobject *kobj,
ssize_t ret;
/*
- * 1. sysfs_remove_file waits for reads to complete before the node
- * is deleted.
- * 2. kgsl_process_init_sysfs takes a refcount to the process_private,
- * which is put at the end of kgsl_process_uninit_sysfs.
- * These two conditions imply that priv will not be freed until this
- * function completes, and no further locking is needed.
+ * kgsl_process_init_sysfs takes a refcount to the process_private,
+ * which is put when the kobj is released. This implies that priv will
+ * not be freed until this function completes, and no further locking
+ * is needed.
*/
priv = kobj ? container_of(kobj, struct kgsl_process_private, kobj) :
NULL;
@@ -144,12 +142,22 @@ static ssize_t mem_entry_sysfs_show(struct kobject *kobj,
return ret;
}
+static void mem_entry_release(struct kobject *kobj)
+{
+ struct kgsl_process_private *priv;
+
+ priv = container_of(kobj, struct kgsl_process_private, kobj);
+ /* Put the refcount we got in kgsl_process_init_sysfs */
+ kgsl_process_private_put(priv);
+}
+
static const struct sysfs_ops mem_entry_sysfs_ops = {
.show = mem_entry_sysfs_show,
};
static struct kobj_type ktype_mem_entry = {
.sysfs_ops = &mem_entry_sysfs_ops,
+ .release = &mem_entry_release,
};
static struct mem_entry_stats mem_stats[] = {
@@ -172,8 +180,6 @@ kgsl_process_uninit_sysfs(struct kgsl_process_private *private)
}
kobject_put(&private->kobj);
- /* Put the refcount we got in kgsl_process_init_sysfs */
- kgsl_process_private_put(private);
}
/**
diff --git a/drivers/gpu/msm/kgsl_sync.h b/drivers/gpu/msm/kgsl_sync.h
index d58859d..7c9f334e 100644
--- a/drivers/gpu/msm/kgsl_sync.h
+++ b/drivers/gpu/msm/kgsl_sync.h
@@ -13,7 +13,7 @@
#ifndef __KGSL_SYNC_H
#define __KGSL_SYNC_H
-#include "sync_file.h"
+#include <linux/sync_file.h>
#include "kgsl_device.h"
#define KGSL_TIMELINE_NAME_LEN 32
diff --git a/drivers/hwtracing/coresight/coresight-ost.c b/drivers/hwtracing/coresight/coresight-ost.c
index 3399c27..a5075ba 100644
--- a/drivers/hwtracing/coresight/coresight-ost.c
+++ b/drivers/hwtracing/coresight/coresight-ost.c
@@ -123,13 +123,14 @@ static int stm_trace_ost_header(void __iomem *ch_addr, uint32_t flags,
static int stm_trace_data_header(void __iomem *addr)
{
- char hdr[16];
+ char hdr[24];
int len = 0;
- *(uint16_t *)(hdr) = STM_MAKE_VERSION(0, 1);
+ *(uint16_t *)(hdr) = STM_MAKE_VERSION(0, 2);
*(uint16_t *)(hdr + 2) = STM_HEADER_MAGIC;
*(uint32_t *)(hdr + 4) = raw_smp_processor_id();
*(uint64_t *)(hdr + 8) = sched_clock();
+ *(uint64_t *)(hdr + 16) = task_tgid_nr(get_current());
len += stm_ost_send(addr, hdr, sizeof(hdr));
len += stm_ost_send(addr, current->comm, TASK_COMM_LEN);
diff --git a/drivers/i2c/busses/i2c-riic.c b/drivers/i2c/busses/i2c-riic.c
index 6263ea8..8f11d34 100644
--- a/drivers/i2c/busses/i2c-riic.c
+++ b/drivers/i2c/busses/i2c-riic.c
@@ -80,6 +80,7 @@
#define ICIER_TEIE 0x40
#define ICIER_RIE 0x20
#define ICIER_NAKIE 0x10
+#define ICIER_SPIE 0x08
#define ICSR2_NACKF 0x10
@@ -216,11 +217,10 @@ static irqreturn_t riic_tend_isr(int irq, void *data)
return IRQ_NONE;
}
- if (riic->is_last || riic->err)
+ if (riic->is_last || riic->err) {
+ riic_clear_set_bit(riic, 0, ICIER_SPIE, RIIC_ICIER);
writeb(ICCR2_SP, riic->base + RIIC_ICCR2);
-
- writeb(0, riic->base + RIIC_ICIER);
- complete(&riic->msg_done);
+ }
return IRQ_HANDLED;
}
@@ -240,13 +240,13 @@ static irqreturn_t riic_rdrf_isr(int irq, void *data)
if (riic->bytes_left == 1) {
/* STOP must come before we set ACKBT! */
- if (riic->is_last)
+ if (riic->is_last) {
+ riic_clear_set_bit(riic, 0, ICIER_SPIE, RIIC_ICIER);
writeb(ICCR2_SP, riic->base + RIIC_ICCR2);
+ }
riic_clear_set_bit(riic, 0, ICMR3_ACKBT, RIIC_ICMR3);
- writeb(0, riic->base + RIIC_ICIER);
- complete(&riic->msg_done);
} else {
riic_clear_set_bit(riic, ICMR3_ACKBT, 0, RIIC_ICMR3);
}
@@ -259,6 +259,21 @@ static irqreturn_t riic_rdrf_isr(int irq, void *data)
return IRQ_HANDLED;
}
+static irqreturn_t riic_stop_isr(int irq, void *data)
+{
+ struct riic_dev *riic = data;
+
+ /* read back registers to confirm writes have fully propagated */
+ writeb(0, riic->base + RIIC_ICSR2);
+ readb(riic->base + RIIC_ICSR2);
+ writeb(0, riic->base + RIIC_ICIER);
+ readb(riic->base + RIIC_ICIER);
+
+ complete(&riic->msg_done);
+
+ return IRQ_HANDLED;
+}
+
static u32 riic_func(struct i2c_adapter *adap)
{
return I2C_FUNC_I2C | I2C_FUNC_SMBUS_EMUL;
@@ -326,6 +341,7 @@ static struct riic_irq_desc riic_irqs[] = {
{ .res_num = 0, .isr = riic_tend_isr, .name = "riic-tend" },
{ .res_num = 1, .isr = riic_rdrf_isr, .name = "riic-rdrf" },
{ .res_num = 2, .isr = riic_tdre_isr, .name = "riic-tdre" },
+ { .res_num = 3, .isr = riic_stop_isr, .name = "riic-stop" },
{ .res_num = 5, .isr = riic_tend_isr, .name = "riic-nack" },
};
diff --git a/drivers/iio/magnetometer/mag3110.c b/drivers/iio/magnetometer/mag3110.c
index f2b3bd7..b4f643f 100644
--- a/drivers/iio/magnetometer/mag3110.c
+++ b/drivers/iio/magnetometer/mag3110.c
@@ -222,29 +222,39 @@ static int mag3110_write_raw(struct iio_dev *indio_dev,
int val, int val2, long mask)
{
struct mag3110_data *data = iio_priv(indio_dev);
- int rate;
+ int rate, ret;
- if (iio_buffer_enabled(indio_dev))
- return -EBUSY;
+ ret = iio_device_claim_direct_mode(indio_dev);
+ if (ret)
+ return ret;
switch (mask) {
case IIO_CHAN_INFO_SAMP_FREQ:
rate = mag3110_get_samp_freq_index(data, val, val2);
- if (rate < 0)
- return -EINVAL;
+ if (rate < 0) {
+ ret = -EINVAL;
+ break;
+ }
data->ctrl_reg1 &= ~MAG3110_CTRL_DR_MASK;
data->ctrl_reg1 |= rate << MAG3110_CTRL_DR_SHIFT;
- return i2c_smbus_write_byte_data(data->client,
+ ret = i2c_smbus_write_byte_data(data->client,
MAG3110_CTRL_REG1, data->ctrl_reg1);
+ break;
case IIO_CHAN_INFO_CALIBBIAS:
- if (val < -10000 || val > 10000)
- return -EINVAL;
- return i2c_smbus_write_word_swapped(data->client,
+ if (val < -10000 || val > 10000) {
+ ret = -EINVAL;
+ break;
+ }
+ ret = i2c_smbus_write_word_swapped(data->client,
MAG3110_OFF_X + 2 * chan->scan_index, val << 1);
+ break;
default:
- return -EINVAL;
+ ret = -EINVAL;
+ break;
}
+ iio_device_release_direct_mode(indio_dev);
+ return ret;
}
static irqreturn_t mag3110_trigger_handler(int irq, void *p)
diff --git a/drivers/iio/pressure/ms5611_core.c b/drivers/iio/pressure/ms5611_core.c
index a74ed1f..8cc7156 100644
--- a/drivers/iio/pressure/ms5611_core.c
+++ b/drivers/iio/pressure/ms5611_core.c
@@ -308,6 +308,7 @@ static int ms5611_write_raw(struct iio_dev *indio_dev,
{
struct ms5611_state *st = iio_priv(indio_dev);
const struct ms5611_osr *osr = NULL;
+ int ret;
if (mask != IIO_CHAN_INFO_OVERSAMPLING_RATIO)
return -EINVAL;
@@ -321,12 +322,11 @@ static int ms5611_write_raw(struct iio_dev *indio_dev,
if (!osr)
return -EINVAL;
- mutex_lock(&st->lock);
+ ret = iio_device_claim_direct_mode(indio_dev);
+ if (ret)
+ return ret;
- if (iio_buffer_enabled(indio_dev)) {
- mutex_unlock(&st->lock);
- return -EBUSY;
- }
+ mutex_lock(&st->lock);
if (chan->type == IIO_TEMP)
st->temp_osr = osr;
@@ -334,6 +334,8 @@ static int ms5611_write_raw(struct iio_dev *indio_dev,
st->pressure_osr = osr;
mutex_unlock(&st->lock);
+ iio_device_release_direct_mode(indio_dev);
+
return 0;
}
diff --git a/drivers/iio/proximity/sx9500.c b/drivers/iio/proximity/sx9500.c
index 1f06282..9ea147f 100644
--- a/drivers/iio/proximity/sx9500.c
+++ b/drivers/iio/proximity/sx9500.c
@@ -387,14 +387,18 @@ static int sx9500_read_raw(struct iio_dev *indio_dev,
int *val, int *val2, long mask)
{
struct sx9500_data *data = iio_priv(indio_dev);
+ int ret;
switch (chan->type) {
case IIO_PROXIMITY:
switch (mask) {
case IIO_CHAN_INFO_RAW:
- if (iio_buffer_enabled(indio_dev))
- return -EBUSY;
- return sx9500_read_proximity(data, chan, val);
+ ret = iio_device_claim_direct_mode(indio_dev);
+ if (ret)
+ return ret;
+ ret = sx9500_read_proximity(data, chan, val);
+ iio_device_release_direct_mode(indio_dev);
+ return ret;
case IIO_CHAN_INFO_SAMP_FREQ:
return sx9500_read_samp_freq(data, val, val2);
default:
diff --git a/drivers/iio/trigger/iio-trig-interrupt.c b/drivers/iio/trigger/iio-trig-interrupt.c
index 572bc6f..e18f12b 100644
--- a/drivers/iio/trigger/iio-trig-interrupt.c
+++ b/drivers/iio/trigger/iio-trig-interrupt.c
@@ -58,7 +58,7 @@ static int iio_interrupt_trigger_probe(struct platform_device *pdev)
trig_info = kzalloc(sizeof(*trig_info), GFP_KERNEL);
if (!trig_info) {
ret = -ENOMEM;
- goto error_put_trigger;
+ goto error_free_trigger;
}
iio_trigger_set_drvdata(trig, trig_info);
trig_info->irq = irq;
@@ -83,8 +83,8 @@ static int iio_interrupt_trigger_probe(struct platform_device *pdev)
free_irq(irq, trig);
error_free_trig_info:
kfree(trig_info);
-error_put_trigger:
- iio_trigger_put(trig);
+error_free_trigger:
+ iio_trigger_free(trig);
error_ret:
return ret;
}
@@ -99,7 +99,7 @@ static int iio_interrupt_trigger_remove(struct platform_device *pdev)
iio_trigger_unregister(trig);
free_irq(trig_info->irq, trig);
kfree(trig_info);
- iio_trigger_put(trig);
+ iio_trigger_free(trig);
return 0;
}
diff --git a/drivers/iio/trigger/iio-trig-sysfs.c b/drivers/iio/trigger/iio-trig-sysfs.c
index 3dfab2b..202e8b8 100644
--- a/drivers/iio/trigger/iio-trig-sysfs.c
+++ b/drivers/iio/trigger/iio-trig-sysfs.c
@@ -174,7 +174,7 @@ static int iio_sysfs_trigger_probe(int id)
return 0;
out2:
- iio_trigger_put(t->trig);
+ iio_trigger_free(t->trig);
free_t:
kfree(t);
out1:
diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index 282c9fb..786f640 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -325,6 +325,27 @@ __be16 mlx5_get_roce_udp_sport(struct mlx5_ib_dev *dev, u8 port_num,
return cpu_to_be16(MLX5_CAP_ROCE(dev->mdev, r_roce_min_src_udp_port));
}
+int mlx5_get_roce_gid_type(struct mlx5_ib_dev *dev, u8 port_num,
+ int index, enum ib_gid_type *gid_type)
+{
+ struct ib_gid_attr attr;
+ union ib_gid gid;
+ int ret;
+
+ ret = ib_get_cached_gid(&dev->ib_dev, port_num, index, &gid, &attr);
+ if (ret)
+ return ret;
+
+ if (!attr.ndev)
+ return -ENODEV;
+
+ dev_put(attr.ndev);
+
+ *gid_type = attr.gid_type;
+
+ return 0;
+}
+
static int mlx5_use_mad_ifc(struct mlx5_ib_dev *dev)
{
if (MLX5_CAP_GEN(dev->mdev, port_type) == MLX5_CAP_PORT_TYPE_IB)
diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
index 7d68990..86e1e08 100644
--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
+++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
@@ -892,6 +892,8 @@ int mlx5_ib_set_vf_guid(struct ib_device *device, int vf, u8 port,
__be16 mlx5_get_roce_udp_sport(struct mlx5_ib_dev *dev, u8 port_num,
int index);
+int mlx5_get_roce_gid_type(struct mlx5_ib_dev *dev, u8 port_num,
+ int index, enum ib_gid_type *gid_type);
/* GSI QP helper functions */
struct ib_qp *mlx5_ib_gsi_create_qp(struct ib_pd *pd,
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index aee3942..2665414 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -2226,6 +2226,7 @@ static int mlx5_set_path(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
{
enum rdma_link_layer ll = rdma_port_get_link_layer(&dev->ib_dev, port);
int err;
+ enum ib_gid_type gid_type;
if (attr_mask & IB_QP_PKEY_INDEX)
path->pkey_index = cpu_to_be16(alt ? attr->alt_pkey_index :
@@ -2244,10 +2245,16 @@ static int mlx5_set_path(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
if (ll == IB_LINK_LAYER_ETHERNET) {
if (!(ah->ah_flags & IB_AH_GRH))
return -EINVAL;
+ err = mlx5_get_roce_gid_type(dev, port, ah->grh.sgid_index,
+ &gid_type);
+ if (err)
+ return err;
memcpy(path->rmac, ah->dmac, sizeof(ah->dmac));
path->udp_sport = mlx5_get_roce_udp_sport(dev, port,
ah->grh.sgid_index);
path->dci_cfi_prio_sl = (ah->sl & 0x7) << 4;
+ if (gid_type == IB_GID_TYPE_ROCE_UDP_ENCAP)
+ path->ecn_dscp = (ah->grh.traffic_class >> 2) & 0x3f;
} else {
path->fl_free_ar = (path_flags & MLX5_PATH_FLAG_FL) ? 0x80 : 0;
path->fl_free_ar |=
diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index 9f46be5..9d08478 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -633,6 +633,7 @@ int rxe_requester(void *arg)
goto exit;
}
rmr->state = RXE_MEM_STATE_FREE;
+ rxe_drop_ref(rmr);
wqe->state = wqe_state_done;
wqe->status = IB_WC_SUCCESS;
} else if (wqe->wr.opcode == IB_WR_REG_MR) {
diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index 8f9aba7..39101b1 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -893,6 +893,7 @@ static enum resp_states do_complete(struct rxe_qp *qp,
return RESPST_ERROR;
}
rmr->state = RXE_MEM_STATE_FREE;
+ rxe_drop_ref(rmr);
}
wc->qp = &qp->ibqp;
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_cm.c b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
index 0616a65..7576166 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_cm.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
@@ -1392,7 +1392,7 @@ static void ipoib_cm_tx_reap(struct work_struct *work)
while (!list_empty(&priv->cm.reap_list)) {
p = list_entry(priv->cm.reap_list.next, typeof(*p), list);
- list_del(&p->list);
+ list_del_init(&p->list);
spin_unlock_irqrestore(&priv->lock, flags);
netif_tx_unlock_bh(dev);
ipoib_cm_tx_destroy(p);
diff --git a/drivers/input/keyboard/mpr121_touchkey.c b/drivers/input/keyboard/mpr121_touchkey.c
index 0fd612d..aaf43be 100644
--- a/drivers/input/keyboard/mpr121_touchkey.c
+++ b/drivers/input/keyboard/mpr121_touchkey.c
@@ -87,7 +87,8 @@ static irqreturn_t mpr_touchkey_interrupt(int irq, void *dev_id)
struct mpr121_touchkey *mpr121 = dev_id;
struct i2c_client *client = mpr121->client;
struct input_dev *input = mpr121->input_dev;
- unsigned int key_num, key_val, pressed;
+ unsigned long bit_changed;
+ unsigned int key_num;
int reg;
reg = i2c_smbus_read_byte_data(client, ELE_TOUCH_STATUS_1_ADDR);
@@ -105,19 +106,23 @@ static irqreturn_t mpr_touchkey_interrupt(int irq, void *dev_id)
reg &= TOUCH_STATUS_MASK;
/* use old press bit to figure out which bit changed */
- key_num = ffs(reg ^ mpr121->statusbits) - 1;
- pressed = reg & (1 << key_num);
+ bit_changed = reg ^ mpr121->statusbits;
mpr121->statusbits = reg;
+ for_each_set_bit(key_num, &bit_changed, mpr121->keycount) {
+ unsigned int key_val, pressed;
- key_val = mpr121->keycodes[key_num];
+ pressed = reg & BIT(key_num);
+ key_val = mpr121->keycodes[key_num];
- input_event(input, EV_MSC, MSC_SCAN, key_num);
- input_report_key(input, key_val, pressed);
+ input_event(input, EV_MSC, MSC_SCAN, key_num);
+ input_report_key(input, key_val, pressed);
+
+ dev_dbg(&client->dev, "key %d %d %s\n", key_num, key_val,
+ pressed ? "pressed" : "released");
+
+ }
input_sync(input);
- dev_dbg(&client->dev, "key %d %d %s\n", key_num, key_val,
- pressed ? "pressed" : "released");
-
out:
return IRQ_HANDLED;
}
@@ -231,6 +236,7 @@ static int mpr_touchkey_probe(struct i2c_client *client,
input_dev->id.bustype = BUS_I2C;
input_dev->dev.parent = &client->dev;
input_dev->evbit[0] = BIT_MASK(EV_KEY) | BIT_MASK(EV_REP);
+ input_set_capability(input_dev, EV_MSC, MSC_SCAN);
input_dev->keycode = mpr121->keycodes;
input_dev->keycodesize = sizeof(mpr121->keycodes[0]);
diff --git a/drivers/input/misc/ims-pcu.c b/drivers/input/misc/ims-pcu.c
index f4e8fbe..b5304e2 100644
--- a/drivers/input/misc/ims-pcu.c
+++ b/drivers/input/misc/ims-pcu.c
@@ -1635,13 +1635,25 @@ ims_pcu_get_cdc_union_desc(struct usb_interface *intf)
return NULL;
}
- while (buflen > 0) {
+ while (buflen >= sizeof(*union_desc)) {
union_desc = (struct usb_cdc_union_desc *)buf;
+ if (union_desc->bLength > buflen) {
+ dev_err(&intf->dev, "Too large descriptor\n");
+ return NULL;
+ }
+
if (union_desc->bDescriptorType == USB_DT_CS_INTERFACE &&
union_desc->bDescriptorSubType == USB_CDC_UNION_TYPE) {
dev_dbg(&intf->dev, "Found union header\n");
- return union_desc;
+
+ if (union_desc->bLength >= sizeof(*union_desc))
+ return union_desc;
+
+ dev_err(&intf->dev,
+ "Union descriptor too short (%d vs %zd)\n",
+ union_desc->bLength, sizeof(*union_desc));
+ return NULL;
}
buflen -= union_desc->bLength;
diff --git a/drivers/input/mouse/elan_i2c_core.c b/drivers/input/mouse/elan_i2c_core.c
index b8c50d8..c9d491b 100644
--- a/drivers/input/mouse/elan_i2c_core.c
+++ b/drivers/input/mouse/elan_i2c_core.c
@@ -1240,6 +1240,7 @@ static const struct acpi_device_id elan_acpi_id[] = {
{ "ELAN0605", 0 },
{ "ELAN0609", 0 },
{ "ELAN060B", 0 },
+ { "ELAN060C", 0 },
{ "ELAN0611", 0 },
{ "ELAN1000", 0 },
{ }
diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index e6f9b2d..d3d975a 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -1040,13 +1040,8 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_device *smmu, u32 sid,
}
}
- /* Nuke the existing Config, as we're going to rewrite it */
- val &= ~(STRTAB_STE_0_CFG_MASK << STRTAB_STE_0_CFG_SHIFT);
-
- if (ste->valid)
- val |= STRTAB_STE_0_V;
- else
- val &= ~STRTAB_STE_0_V;
+ /* Nuke the existing STE_0 value, as we're going to rewrite it */
+ val = ste->valid ? STRTAB_STE_0_V : 0;
if (ste->bypass) {
val |= disable_bypass ? STRTAB_STE_0_CFG_ABORT
@@ -1081,7 +1076,6 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_device *smmu, u32 sid,
val |= (ste->s1_cfg->cdptr_dma & STRTAB_STE_0_S1CTXPTR_MASK
<< STRTAB_STE_0_S1CTXPTR_SHIFT) |
STRTAB_STE_0_CFG_S1_TRANS;
-
}
if (ste->s2_cfg) {
diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
index 462b433..86438a9 100644
--- a/drivers/iommu/arm-smmu.c
+++ b/drivers/iommu/arm-smmu.c
@@ -1100,8 +1100,10 @@ static void arm_smmu_tlb_sync_cb(struct arm_smmu_device *smmu,
writel_relaxed(0, base + ARM_SMMU_CB_TLBSYNC);
if (readl_poll_timeout_atomic(base + ARM_SMMU_CB_TLBSTATUS, val,
!(val & TLBSTATUS_SACTIVE),
- 0, TLB_LOOP_TIMEOUT))
+ 0, TLB_LOOP_TIMEOUT)) {
+ trace_tlbsync_timeout(smmu->dev, 0);
dev_err(smmu->dev, "TLBSYNC timeout!\n");
+ }
}
static void __arm_smmu_tlb_sync(struct arm_smmu_device *smmu)
@@ -1132,11 +1134,15 @@ static void arm_smmu_tlb_sync(void *cookie)
static void arm_smmu_tlb_inv_context(void *cookie)
{
struct arm_smmu_domain *smmu_domain = cookie;
+ struct device *dev = smmu_domain->dev;
struct arm_smmu_cfg *cfg = &smmu_domain->cfg;
struct arm_smmu_device *smmu = smmu_domain->smmu;
bool stage1 = cfg->cbar != CBAR_TYPE_S2_TRANS;
void __iomem *base;
bool use_tlbiall = smmu->options & ARM_SMMU_OPT_NO_ASID_RETENTION;
+ ktime_t cur = ktime_get();
+
+ trace_tlbi_start(dev, 0);
if (stage1 && !use_tlbiall) {
base = ARM_SMMU_CB_BASE(smmu) + ARM_SMMU_CB(smmu, cfg->cbndx);
@@ -1153,6 +1159,8 @@ static void arm_smmu_tlb_inv_context(void *cookie)
base + ARM_SMMU_GR0_TLBIVMID);
__arm_smmu_tlb_sync(smmu);
}
+
+ trace_tlbi_end(dev, ktime_us_delta(ktime_get(), cur));
}
static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
@@ -2214,13 +2222,13 @@ static void arm_smmu_detach_dev(struct iommu_domain *domain,
return;
}
- arm_smmu_domain_remove_master(smmu_domain, fwspec);
+ if (atomic_domain)
+ arm_smmu_power_on_atomic(smmu->pwr);
+ else
+ arm_smmu_power_on(smmu->pwr);
- /* Remove additional vote for atomic power */
- if (atomic_domain) {
- WARN_ON(arm_smmu_power_on_atomic(smmu->pwr));
- arm_smmu_power_off(smmu->pwr);
- }
+ arm_smmu_domain_remove_master(smmu_domain, fwspec);
+ arm_smmu_power_off(smmu->pwr);
}
static int arm_smmu_assign_table(struct arm_smmu_domain *smmu_domain)
@@ -3192,65 +3200,6 @@ static void arm_smmu_trigger_fault(struct iommu_domain *domain,
arm_smmu_power_off(smmu->pwr);
}
-static unsigned long arm_smmu_reg_read(struct iommu_domain *domain,
- unsigned long offset)
-{
- struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
- struct arm_smmu_device *smmu;
- struct arm_smmu_cfg *cfg = &smmu_domain->cfg;
- void __iomem *cb_base;
- unsigned long val;
-
- if (offset >= SZ_4K) {
- pr_err("Invalid offset: 0x%lx\n", offset);
- return 0;
- }
-
- smmu = smmu_domain->smmu;
- if (!smmu) {
- WARN(1, "Can't read registers of a detached domain\n");
- val = 0;
- return val;
- }
-
- if (arm_smmu_power_on(smmu->pwr))
- return 0;
-
- cb_base = ARM_SMMU_CB_BASE(smmu) + ARM_SMMU_CB(smmu, cfg->cbndx);
- val = readl_relaxed(cb_base + offset);
-
- arm_smmu_power_off(smmu->pwr);
- return val;
-}
-
-static void arm_smmu_reg_write(struct iommu_domain *domain,
- unsigned long offset, unsigned long val)
-{
- struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
- struct arm_smmu_device *smmu;
- struct arm_smmu_cfg *cfg = &smmu_domain->cfg;
- void __iomem *cb_base;
-
- if (offset >= SZ_4K) {
- pr_err("Invalid offset: 0x%lx\n", offset);
- return;
- }
-
- smmu = smmu_domain->smmu;
- if (!smmu) {
- WARN(1, "Can't read registers of a detached domain\n");
- return;
- }
-
- if (arm_smmu_power_on(smmu->pwr))
- return;
-
- cb_base = ARM_SMMU_CB_BASE(smmu) + ARM_SMMU_CB(smmu, cfg->cbndx);
- writel_relaxed(val, cb_base + offset);
-
- arm_smmu_power_off(smmu->pwr);
-}
-
static void arm_smmu_tlbi_domain(struct iommu_domain *domain)
{
struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
@@ -3292,8 +3241,6 @@ static struct iommu_ops arm_smmu_ops = {
.of_xlate = arm_smmu_of_xlate,
.pgsize_bitmap = -1UL, /* Restricted during device attach */
.trigger_fault = arm_smmu_trigger_fault,
- .reg_read = arm_smmu_reg_read,
- .reg_write = arm_smmu_reg_write,
.tlbi_domain = arm_smmu_tlbi_domain,
.enable_config_clocks = arm_smmu_enable_config_clocks,
.disable_config_clocks = arm_smmu_disable_config_clocks,
@@ -3526,7 +3473,7 @@ static void arm_smmu_device_reset(struct arm_smmu_device *smmu)
/* Force bypass transaction to be Non-Shareable & not io-coherent */
reg &= ~(sCR0_SHCFG_MASK << sCR0_SHCFG_SHIFT);
- reg |= sCR0_SHCFG_NSH;
+ reg |= sCR0_SHCFG_NSH << sCR0_SHCFG_SHIFT;
/* Push the button */
__arm_smmu_tlb_sync(smmu);
@@ -4402,7 +4349,7 @@ IOMMU_OF_DECLARE(cavium_smmuv2, "cavium,smmu-v2", arm_smmu_of_init);
#define DEBUG_PAR_PA_SHIFT 12
#define DEBUG_PAR_FAULT_VAL 0x1
-#define TBU_DBG_TIMEOUT_US 30000
+#define TBU_DBG_TIMEOUT_US 100
#define QSMMUV500_ACTLR_DEEP_PREFETCH_MASK 0x3
#define QSMMUV500_ACTLR_DEEP_PREFETCH_SHIFT 0x8
@@ -4514,18 +4461,18 @@ static void __qsmmuv500_errata1_tlbiall(struct arm_smmu_domain *smmu_domain)
if (readl_poll_timeout_atomic(base + ARM_SMMU_CB_TLBSTATUS, val,
!(val & TLBSTATUS_SACTIVE), 0, 100)) {
cur = ktime_get();
- trace_errata_throttle_start(dev, 0);
+ trace_tlbi_throttle_start(dev, 0);
msm_bus_noc_throttle_wa(true);
if (readl_poll_timeout_atomic(base + ARM_SMMU_CB_TLBSTATUS, val,
!(val & TLBSTATUS_SACTIVE), 0, 10000)) {
dev_err(smmu->dev, "ERRATA1 TLBSYNC timeout");
- trace_errata_failed(dev, 0);
+ trace_tlbsync_timeout(dev, 0);
}
msm_bus_noc_throttle_wa(false);
- trace_errata_throttle_end(
+ trace_tlbi_throttle_end(
dev, ktime_us_delta(ktime_get(), cur));
}
}
@@ -4542,7 +4489,7 @@ static void qsmmuv500_errata1_tlb_inv_context(void *cookie)
bool errata;
cur = ktime_get();
- trace_errata_tlbi_start(dev, 0);
+ trace_tlbi_start(dev, 0);
errata = qsmmuv500_errata1_required(smmu_domain, data);
remote_spin_lock_irqsave(&data->errata1_lock, flags);
@@ -4561,7 +4508,7 @@ static void qsmmuv500_errata1_tlb_inv_context(void *cookie)
}
remote_spin_unlock_irqrestore(&data->errata1_lock, flags);
- trace_errata_tlbi_end(dev, ktime_us_delta(ktime_get(), cur));
+ trace_tlbi_end(dev, ktime_us_delta(ktime_get(), cur));
}
static struct iommu_gather_ops qsmmuv500_errata1_smmu_gather_ops = {
@@ -4570,11 +4517,12 @@ static struct iommu_gather_ops qsmmuv500_errata1_smmu_gather_ops = {
.free_pages_exact = arm_smmu_free_pages_exact,
};
-static int qsmmuv500_tbu_halt(struct qsmmuv500_tbu_device *tbu)
+static int qsmmuv500_tbu_halt(struct qsmmuv500_tbu_device *tbu,
+ struct arm_smmu_domain *smmu_domain)
{
unsigned long flags;
- u32 val;
- void __iomem *base;
+ u32 halt, fsr, sctlr_orig, sctlr, status;
+ void __iomem *base, *cb_base;
spin_lock_irqsave(&tbu->halt_lock, flags);
if (tbu->halt_count) {
@@ -4583,19 +4531,49 @@ static int qsmmuv500_tbu_halt(struct qsmmuv500_tbu_device *tbu)
return 0;
}
+ cb_base = ARM_SMMU_CB_BASE(smmu_domain->smmu) +
+ ARM_SMMU_CB(smmu_domain->smmu, smmu_domain->cfg.cbndx);
base = tbu->base;
- val = readl_relaxed(base + DEBUG_SID_HALT_REG);
- val |= DEBUG_SID_HALT_VAL;
- writel_relaxed(val, base + DEBUG_SID_HALT_REG);
+ halt = readl_relaxed(base + DEBUG_SID_HALT_REG);
+ halt |= DEBUG_SID_HALT_VAL;
+ writel_relaxed(halt, base + DEBUG_SID_HALT_REG);
- if (readl_poll_timeout_atomic(base + DEBUG_SR_HALT_ACK_REG,
- val, (val & DEBUG_SR_HALT_ACK_VAL),
- 0, TBU_DBG_TIMEOUT_US)) {
+ if (!readl_poll_timeout_atomic(base + DEBUG_SR_HALT_ACK_REG, status,
+ (status & DEBUG_SR_HALT_ACK_VAL),
+ 0, TBU_DBG_TIMEOUT_US))
+ goto out;
+
+ fsr = readl_relaxed(cb_base + ARM_SMMU_CB_FSR);
+ if (!(fsr & FSR_FAULT)) {
dev_err(tbu->dev, "Couldn't halt TBU!\n");
spin_unlock_irqrestore(&tbu->halt_lock, flags);
return -ETIMEDOUT;
}
+ /*
+ * We are in a fault; Our request to halt the bus will not complete
+ * until transactions in front of us (such as the fault itself) have
+ * completed. Disable iommu faults and terminate any existing
+ * transactions.
+ */
+ sctlr_orig = readl_relaxed(cb_base + ARM_SMMU_CB_SCTLR);
+ sctlr = sctlr_orig & ~(SCTLR_CFCFG | SCTLR_CFIE);
+ writel_relaxed(sctlr, cb_base + ARM_SMMU_CB_SCTLR);
+
+ writel_relaxed(fsr, cb_base + ARM_SMMU_CB_FSR);
+ writel_relaxed(RESUME_TERMINATE, cb_base + ARM_SMMU_CB_RESUME);
+
+ if (readl_poll_timeout_atomic(base + DEBUG_SR_HALT_ACK_REG, status,
+ (status & DEBUG_SR_HALT_ACK_VAL),
+ 0, TBU_DBG_TIMEOUT_US)) {
+ dev_err(tbu->dev, "Couldn't halt TBU from fault context!\n");
+ writel_relaxed(sctlr_orig, cb_base + ARM_SMMU_CB_SCTLR);
+ spin_unlock_irqrestore(&tbu->halt_lock, flags);
+ return -ETIMEDOUT;
+ }
+
+ writel_relaxed(sctlr_orig, cb_base + ARM_SMMU_CB_SCTLR);
+out:
tbu->halt_count = 1;
spin_unlock_irqrestore(&tbu->halt_lock, flags);
return 0;
@@ -4696,6 +4674,14 @@ static phys_addr_t qsmmuv500_iova_to_phys(
void __iomem *cb_base;
u32 sctlr_orig, sctlr;
int needs_redo = 0;
+ ktime_t timeout;
+
+ /* Only a 36-bit IOVA is supported */
+ if (iova >= (1ULL << 36)) {
+ dev_err_ratelimited(smmu->dev, "ECATS: address too large: %pad\n",
+ &iova);
+ return 0;
+ }
cb_base = ARM_SMMU_CB_BASE(smmu) + ARM_SMMU_CB(smmu, cfg->cbndx);
tbu = qsmmuv500_find_tbu(smmu, sid);
@@ -4706,35 +4692,23 @@ static phys_addr_t qsmmuv500_iova_to_phys(
if (ret)
return 0;
- /*
- * Disable client transactions & wait for existing operations to
- * complete.
- */
- ret = qsmmuv500_tbu_halt(tbu);
+ ret = qsmmuv500_tbu_halt(tbu, smmu_domain);
if (ret)
goto out_power_off;
+ /*
+ * ECATS can trigger the fault interrupt, so disable it temporarily
+ * and check for an interrupt manually.
+ */
+ sctlr_orig = readl_relaxed(cb_base + ARM_SMMU_CB_SCTLR);
+ sctlr = sctlr_orig & ~(SCTLR_CFCFG | SCTLR_CFIE);
+ writel_relaxed(sctlr, cb_base + ARM_SMMU_CB_SCTLR);
+
/* Only one concurrent atos operation */
ret = qsmmuv500_ecats_lock(smmu_domain, tbu, &flags);
if (ret)
goto out_resume;
- /*
- * We can be called from an interrupt handler with FSR already set
- * so terminate the faulting transaction prior to starting ecats.
- * No new racing faults can occur since we in the halted state.
- * ECATS can trigger the fault interrupt, so disable it temporarily
- * and check for an interrupt manually.
- */
- fsr = readl_relaxed(cb_base + ARM_SMMU_CB_FSR);
- if (fsr & FSR_FAULT) {
- writel_relaxed(fsr, cb_base + ARM_SMMU_CB_FSR);
- writel_relaxed(RESUME_TERMINATE, cb_base + ARM_SMMU_CB_RESUME);
- }
- sctlr_orig = readl_relaxed(cb_base + ARM_SMMU_CB_SCTLR);
- sctlr = sctlr_orig & ~(SCTLR_CFCFG | SCTLR_CFIE);
- writel_relaxed(sctlr, cb_base + ARM_SMMU_CB_SCTLR);
-
redo:
/* Set address and stream-id */
val = readq_relaxed(tbu->base + DEBUG_SID_HALT_REG);
@@ -4753,16 +4727,26 @@ static phys_addr_t qsmmuv500_iova_to_phys(
writeq_relaxed(val, tbu->base + DEBUG_TXN_TRIGG_REG);
ret = 0;
- if (readl_poll_timeout_atomic(tbu->base + DEBUG_SR_HALT_ACK_REG,
- val, !(val & DEBUG_SR_ECATS_RUNNING_VAL),
- 0, TBU_DBG_TIMEOUT_US)) {
- dev_err(tbu->dev, "ECATS translation timed out!\n");
+	/* based on readx_poll_timeout_atomic() */
+ timeout = ktime_add_us(ktime_get(), TBU_DBG_TIMEOUT_US);
+ for (;;) {
+ val = readl_relaxed(tbu->base + DEBUG_SR_HALT_ACK_REG);
+ if (!(val & DEBUG_SR_ECATS_RUNNING_VAL))
+ break;
+ val = readl_relaxed(cb_base + ARM_SMMU_CB_FSR);
+ if (val & FSR_FAULT)
+ break;
+ if (ktime_compare(ktime_get(), timeout) > 0) {
+ dev_err(tbu->dev, "ECATS translation timed out!\n");
+ ret = -ETIMEDOUT;
+ break;
+ }
}
fsr = readl_relaxed(cb_base + ARM_SMMU_CB_FSR);
if (fsr & FSR_FAULT) {
-		dev_err(tbu->dev, "ECATS generated a fault interrupt! FSR = %llx\n",
-			val);
+		dev_err(tbu->dev, "ECATS generated a fault interrupt! FSR = 0x%x\n",
+			fsr);
ret = -EINVAL;
writel_relaxed(val, cb_base + ARM_SMMU_CB_FSR);
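The loop above open-codes `readx_poll_timeout_atomic` because it needs a second exit condition: stop polling as soon as FSR reports a fault, not just when ECATS completes. A deterministic user-space sketch of that dual-condition poll (an iteration budget stands in for the ktime deadline; all names are hypothetical):

```c
#include <stdbool.h>
#include <stddef.h>

#ifndef ETIMEDOUT
#define ETIMEDOUT 110
#endif

/* Poll until done() succeeds, fault() fires, or the budget runs out. */
static int poll_done_or_fault(bool (*done)(void *), bool (*fault)(void *),
			      void *ctx, int budget)
{
	while (budget-- > 0) {
		if (done(ctx))
			return 0;	/* translation finished */
		if (fault(ctx))
			return 0;	/* fault: stop, caller inspects status */
		/* a real loop would cpu_relax()/udelay() here */
	}
	return -ETIMEDOUT;
}

/* Toy predicates for exercising the helper. */
struct ctx { int ticks_until_done; };

static bool done_after(void *p)
{
	struct ctx *c = p;
	return c->ticks_until_done-- <= 0;
}

static bool never(void *p)  { (void)p; return false; }
static bool always(void *p) { (void)p; return true; }
```

Returning 0 on the fault path matches the kernel loop: the timeout error is reserved for the case where neither completion nor a fault ever shows up.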
@@ -4889,7 +4873,7 @@ static void qsmmuv500_init_cb(struct arm_smmu_domain *smmu_domain,
* Prefetch only works properly if the start and end of all
* buffers in the page table are aligned to 16 Kb.
*/
- if ((iommudata->actlr >> QSMMUV500_ACTLR_DEEP_PREFETCH_SHIFT) &&
+ if ((iommudata->actlr >> QSMMUV500_ACTLR_DEEP_PREFETCH_SHIFT) &
QSMMUV500_ACTLR_DEEP_PREFETCH_MASK)
smmu_domain->qsmmuv500_errata2_min_align = true;
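The one-character `&&` to `&` fix above matters because logical AND collapses both operands to booleans: any nonzero shifted value "matched" regardless of the mask. A small illustration (the shift/mask values here are hypothetical stand-ins, not the real QSMMUV500 constants):

```c
#include <stdint.h>

#define DEEP_PREFETCH_SHIFT 8	/* hypothetical, for illustration only */
#define DEEP_PREFETCH_MASK  0x3

/* Buggy form: logical AND — the mask is reduced to "true", ignored. */
static int match_logical(uint32_t actlr)
{
	return (actlr >> DEEP_PREFETCH_SHIFT) && DEEP_PREFETCH_MASK;
}

/* Fixed form: bitwise AND — only the bits the mask selects count. */
static int match_bitwise(uint32_t actlr)
{
	return ((actlr >> DEEP_PREFETCH_SHIFT) & DEEP_PREFETCH_MASK) != 0;
}
```

With `actlr = 0x400` the shifted value is `0x4`: outside the mask, so the bitwise form correctly reports no match, while the logical form reports a match.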
diff --git a/drivers/iommu/iommu-debug.c b/drivers/iommu/iommu-debug.c
index 6d79cfb..22a708e 100644
--- a/drivers/iommu/iommu-debug.c
+++ b/drivers/iommu/iommu-debug.c
@@ -165,6 +165,7 @@ static void *test_virt_addr;
struct iommu_debug_device {
struct device *dev;
struct iommu_domain *domain;
+ struct dma_iommu_mapping *mapping;
u64 iova;
u64 phys;
size_t len;
@@ -1251,6 +1252,8 @@ static ssize_t __iommu_debug_dma_attach_write(struct file *file,
if (arm_iommu_attach_device(dev, dma_mapping))
goto out_release_mapping;
+
+ ddev->mapping = dma_mapping;
pr_err("Attached\n");
} else {
if (!dev->archdata.mapping) {
@@ -1264,7 +1267,7 @@ static ssize_t __iommu_debug_dma_attach_write(struct file *file,
goto out;
}
arm_iommu_detach_device(dev);
- arm_iommu_release_mapping(dev->archdata.mapping);
+ arm_iommu_release_mapping(ddev->mapping);
pr_err("Detached\n");
}
retval = count;
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 1ccaa3f..c333a36 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -1677,30 +1677,6 @@ void iommu_trigger_fault(struct iommu_domain *domain, unsigned long flags)
domain->ops->trigger_fault(domain, flags);
}
-/**
- * iommu_reg_read() - read an IOMMU register
- *
- * Reads the IOMMU register at the given offset.
- */
-unsigned long iommu_reg_read(struct iommu_domain *domain, unsigned long offset)
-{
- if (domain->ops->reg_read)
- return domain->ops->reg_read(domain, offset);
- return 0;
-}
-
-/**
- * iommu_reg_write() - write an IOMMU register
- *
- * Writes the given value to the IOMMU register at the given offset.
- */
-void iommu_reg_write(struct iommu_domain *domain, unsigned long offset,
- unsigned long val)
-{
- if (domain->ops->reg_write)
- domain->ops->reg_write(domain, offset, val);
-}
-
void iommu_get_dm_regions(struct device *dev, struct list_head *list)
{
const struct iommu_ops *ops = dev->bus->iommu_ops;
diff --git a/drivers/iommu/msm_dma_iommu_mapping.c b/drivers/iommu/msm_dma_iommu_mapping.c
index 07e5236..3f739a2 100644
--- a/drivers/iommu/msm_dma_iommu_mapping.c
+++ b/drivers/iommu/msm_dma_iommu_mapping.c
@@ -1,4 +1,4 @@
-/* Copyright (c) 2015-2016, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2015-2017, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -27,9 +27,11 @@
* @dev - Device this is mapped to. Used as key
* @sgl - The scatterlist for this mapping
* @nents - Number of entries in sgl
- * @dir - The direction for the unmap.
+ * @dir - The direction for the map.
* @meta - Backpointer to the meta this guy belongs to.
* @ref - for reference counting this mapping
+ * @map_attrs - dma mapping attributes
+ * @buf_start_addr - address of start of buffer
*
* Represents a mapping of one dma_buf buffer to a particular device
* and address range. There may exist other mappings of this buffer in
@@ -44,6 +46,8 @@ struct msm_iommu_map {
enum dma_data_direction dir;
struct msm_iommu_meta *meta;
struct kref ref;
+ unsigned long map_attrs;
+ dma_addr_t buf_start_addr;
};
struct msm_iommu_meta {
@@ -199,21 +203,43 @@ static inline int __msm_dma_map_sg(struct device *dev, struct scatterlist *sg,
iommu_map->sgl.dma_address = sg->dma_address;
iommu_map->sgl.dma_length = sg->dma_length;
iommu_map->dev = dev;
+ iommu_map->dir = dir;
+ iommu_map->nents = nents;
+ iommu_map->map_attrs = attrs;
+ iommu_map->buf_start_addr = sg_phys(sg);
msm_iommu_add(iommu_meta, iommu_map);
} else {
- sg->dma_address = iommu_map->sgl.dma_address;
- sg->dma_length = iommu_map->sgl.dma_length;
+ if (nents == iommu_map->nents &&
+ dir == iommu_map->dir &&
+ attrs == iommu_map->map_attrs &&
+ sg_phys(sg) == iommu_map->buf_start_addr) {
+ sg->dma_address = iommu_map->sgl.dma_address;
+ sg->dma_length = iommu_map->sgl.dma_length;
- kref_get(&iommu_map->ref);
- if (is_device_dma_coherent(dev))
- /*
- * Ensure all outstanding changes for coherent
- * buffers are applied to the cache before any
- * DMA occurs.
- */
- dmb(ish);
- ret = nents;
+ kref_get(&iommu_map->ref);
+ if (is_device_dma_coherent(dev))
+ /*
+ * Ensure all outstanding changes for coherent
+ * buffers are applied to the cache before any
+ * DMA occurs.
+ */
+ dmb(ish);
+ ret = nents;
+ } else {
+ bool start_diff = (sg_phys(sg) !=
+ iommu_map->buf_start_addr);
+
+ dev_err(dev, "lazy map request differs:\n"
+ "req dir:%d, original dir:%d\n"
+ "req nents:%d, original nents:%d\n"
+ "req map attrs:%lu, original map attrs:%lu\n"
+ "req buffer start address differs:%d\n",
+ dir, iommu_map->dir, nents,
+ iommu_map->nents, attrs, iommu_map->map_attrs,
+ start_diff);
+ ret = -EINVAL;
+ }
}
mutex_unlock(&iommu_meta->lock);
return ret;
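The hunk above tightens lazy mapping: a repeat `map_sg` of the same dma_buf may only reuse the cached mapping when direction, nents, attributes, and buffer start address all match the original request; any mismatch is rejected with -EINVAL instead of silently handing back a stale IOVA. A minimal sketch of that validation (struct and function names are hypothetical):

```c
#include <stdint.h>

#ifndef EINVAL
#define EINVAL 22
#endif

struct cached_map {
	int dir, nents;
	unsigned long attrs;
	uint64_t start;		/* physical start of the buffer */
	uint64_t dma_addr;	/* previously programmed IOVA */
};

/* Return the cached IOVA only when the lazy request matches the original
 * mapping exactly; otherwise reject it, as __msm_dma_map_sg now does. */
static int64_t lazy_map_lookup(const struct cached_map *m, int dir, int nents,
			       unsigned long attrs, uint64_t start)
{
	if (dir == m->dir && nents == m->nents &&
	    attrs == m->attrs && start == m->start)
		return (int64_t)m->dma_addr;
	return -EINVAL;
}
```

Storing `map_attrs` and `buf_start_addr` at first-map time (the new fields in `struct msm_iommu_map`) is what makes this comparison possible on the reuse path.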
@@ -321,13 +347,9 @@ void msm_dma_unmap_sg(struct device *dev, struct scatterlist *sgl, int nents,
goto out;
}
- /*
- * Save direction for later use when we actually unmap.
- * Not used right now but in the future if we go to coherent mapping
- * API we might want to call the appropriate API when client asks
- * to unmap
- */
- iommu_map->dir = dir;
+ if (dir != iommu_map->dir)
+ WARN(1, "%s: (%pK) dir:%d differs from original dir:%d\n",
+ __func__, dma_buf, dir, iommu_map->dir);
kref_put(&iommu_map->ref, msm_iommu_map_release);
mutex_unlock(&meta->lock);
diff --git a/drivers/leds/Kconfig b/drivers/leds/Kconfig
index 785d689..b8f30cd 100644
--- a/drivers/leds/Kconfig
+++ b/drivers/leds/Kconfig
@@ -668,6 +668,15 @@
LEDs in both PWM and light pattern generator (LPG) modes. For older
PMICs, it also supports WLEDs and flash LEDs.
+config LEDS_QPNP_FLASH
+ tristate "Support for QPNP Flash LEDs"
+ depends on LEDS_CLASS && MFD_SPMI_PMIC
+ help
+ This driver supports the flash LED functionality of Qualcomm
+ Technologies, Inc. QPNP PMICs. This driver supports PMICs up through
+ PM8994. It can configure the flash LED target current for several
+ independent channels.
+
config LEDS_QPNP_FLASH_V2
tristate "Support for QPNP V2 Flash LEDs"
depends on LEDS_CLASS && MFD_SPMI_PMIC
diff --git a/drivers/leds/Makefile b/drivers/leds/Makefile
index 2ff9a7c..ba9bb8d 100644
--- a/drivers/leds/Makefile
+++ b/drivers/leds/Makefile
@@ -72,6 +72,7 @@
obj-$(CONFIG_LEDS_PM8058) += leds-pm8058.o
obj-$(CONFIG_LEDS_MLXCPLD) += leds-mlxcpld.o
obj-$(CONFIG_LEDS_QPNP) += leds-qpnp.o
+obj-$(CONFIG_LEDS_QPNP_FLASH) += leds-qpnp-flash.o
obj-$(CONFIG_LEDS_QPNP_FLASH_V2) += leds-qpnp-flash-v2.o
obj-$(CONFIG_LEDS_QPNP_WLED) += leds-qpnp-wled.o
obj-$(CONFIG_LEDS_QPNP_HAPTICS) += leds-qpnp-haptics.o
diff --git a/drivers/leds/leds-qpnp-flash.c b/drivers/leds/leds-qpnp-flash.c
new file mode 100644
index 0000000..3b07af8
--- /dev/null
+++ b/drivers/leds/leds-qpnp-flash.c
@@ -0,0 +1,2709 @@
+/* Copyright (c) 2014-2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/regmap.h>
+#include <linux/errno.h>
+#include <linux/leds.h>
+#include <linux/slab.h>
+#include <linux/of_device.h>
+#include <linux/spmi.h>
+#include <linux/platform_device.h>
+#include <linux/err.h>
+#include <linux/delay.h>
+#include <linux/of.h>
+#include <linux/regulator/consumer.h>
+#include <linux/workqueue.h>
+#include <linux/power_supply.h>
+#include <linux/leds-qpnp-flash.h>
+#include <linux/qpnp/qpnp-adc.h>
+#include <linux/qpnp/qpnp-revid.h>
+#include <linux/debugfs.h>
+#include <linux/uaccess.h>
+#include "leds.h"
+
+#define FLASH_LED_PERIPHERAL_SUBTYPE(base) (base + 0x05)
+#define FLASH_SAFETY_TIMER(base) (base + 0x40)
+#define FLASH_MAX_CURRENT(base) (base + 0x41)
+#define FLASH_LED0_CURRENT(base) (base + 0x42)
+#define FLASH_LED1_CURRENT(base) (base + 0x43)
+#define FLASH_CLAMP_CURRENT(base) (base + 0x44)
+#define FLASH_MODULE_ENABLE_CTRL(base) (base + 0x46)
+#define FLASH_LED_STROBE_CTRL(base) (base + 0x47)
+#define FLASH_LED_TMR_CTRL(base) (base + 0x48)
+#define FLASH_HEADROOM(base) (base + 0x4A)
+#define FLASH_STARTUP_DELAY(base) (base + 0x4B)
+#define FLASH_MASK_ENABLE(base) (base + 0x4C)
+#define FLASH_VREG_OK_FORCE(base) (base + 0x4F)
+#define FLASH_FAULT_DETECT(base) (base + 0x51)
+#define FLASH_THERMAL_DRATE(base) (base + 0x52)
+#define FLASH_CURRENT_RAMP(base) (base + 0x54)
+#define FLASH_VPH_PWR_DROOP(base) (base + 0x5A)
+#define FLASH_HDRM_SNS_ENABLE_CTRL0(base) (base + 0x5C)
+#define FLASH_HDRM_SNS_ENABLE_CTRL1(base) (base + 0x5D)
+#define FLASH_LED_UNLOCK_SECURE(base) (base + 0xD0)
+#define FLASH_PERPH_RESET_CTRL(base) (base + 0xDA)
+#define FLASH_TORCH(base) (base + 0xE4)
+
+#define FLASH_STATUS_REG_MASK 0xFF
+#define FLASH_LED_FAULT_STATUS(base) (base + 0x08)
+#define INT_LATCHED_STS(base) (base + 0x18)
+#define IN_POLARITY_HIGH(base) (base + 0x12)
+#define INT_SET_TYPE(base) (base + 0x11)
+#define INT_EN_SET(base) (base + 0x15)
+#define INT_LATCHED_CLR(base) (base + 0x14)
+
+#define FLASH_HEADROOM_MASK 0x03
+#define FLASH_STARTUP_DLY_MASK 0x03
+#define FLASH_VREG_OK_FORCE_MASK 0xC0
+#define FLASH_FAULT_DETECT_MASK 0x80
+#define FLASH_THERMAL_DERATE_MASK 0xBF
+#define FLASH_SECURE_MASK 0xFF
+#define FLASH_TORCH_MASK 0x03
+#define FLASH_CURRENT_MASK 0x7F
+#define FLASH_TMR_MASK 0x03
+#define FLASH_TMR_SAFETY 0x00
+#define FLASH_SAFETY_TIMER_MASK 0x7F
+#define FLASH_MODULE_ENABLE_MASK 0xE0
+#define FLASH_STROBE_MASK 0xC0
+#define FLASH_CURRENT_RAMP_MASK 0xBF
+#define FLASH_VPH_PWR_DROOP_MASK 0xF3
+#define FLASH_LED_HDRM_SNS_ENABLE_MASK 0x81
+#define FLASH_MASK_MODULE_CONTRL_MASK 0xE0
+#define FLASH_FOLLOW_OTST2_RB_MASK 0x08
+
+#define FLASH_LED_TRIGGER_DEFAULT "none"
+#define FLASH_LED_HEADROOM_DEFAULT_MV 500
+#define FLASH_LED_STARTUP_DELAY_DEFAULT_US 128
+#define FLASH_LED_CLAMP_CURRENT_DEFAULT_MA 200
+#define FLASH_LED_THERMAL_DERATE_THRESHOLD_DEFAULT_C 80
+#define FLASH_LED_RAMP_UP_STEP_DEFAULT_US 3
+#define FLASH_LED_RAMP_DN_STEP_DEFAULT_US 3
+#define FLASH_LED_VPH_PWR_DROOP_THRESHOLD_DEFAULT_MV 3200
+#define FLASH_LED_VPH_PWR_DROOP_DEBOUNCE_TIME_DEFAULT_US 10
+#define FLASH_LED_THERMAL_DERATE_RATE_DEFAULT_PERCENT 2
+#define FLASH_RAMP_UP_DELAY_US_MIN 1000
+#define FLASH_RAMP_UP_DELAY_US_MAX 1001
+#define FLASH_RAMP_DN_DELAY_US_MIN 2160
+#define FLASH_RAMP_DN_DELAY_US_MAX 2161
+#define FLASH_BOOST_REGULATOR_PROBE_DELAY_MS 2000
+#define FLASH_TORCH_MAX_LEVEL 0x0F
+#define FLASH_MAX_LEVEL 0x4F
+#define FLASH_LED_FLASH_HW_VREG_OK 0x40
+#define FLASH_LED_FLASH_SW_VREG_OK 0x80
+#define FLASH_LED_STROBE_TYPE_HW 0x04
+#define FLASH_DURATION_DIVIDER 10
+#define FLASH_LED_HEADROOM_DIVIDER 100
+#define FLASH_LED_HEADROOM_OFFSET 2
+#define FLASH_LED_MAX_CURRENT_MA 1000
+#define FLASH_LED_THERMAL_THRESHOLD_MIN 95
+#define FLASH_LED_THERMAL_DEVIDER 10
+#define FLASH_LED_VPH_DROOP_THRESHOLD_MIN_MV 2500
+#define FLASH_LED_VPH_DROOP_THRESHOLD_DIVIDER 100
+#define FLASH_LED_HDRM_SNS_ENABLE 0x81
+#define FLASH_LED_HDRM_SNS_DISABLE 0x01
+#define FLASH_LED_UA_PER_MA 1000
+#define FLASH_LED_MASK_MODULE_MASK2_ENABLE 0x20
+#define FLASH_LED_MASK3_ENABLE_SHIFT 7
+#define FLASH_LED_MODULE_CTRL_DEFAULT 0x60
+#define FLASH_LED_CURRENT_READING_DELAY_MIN 5000
+#define FLASH_LED_CURRENT_READING_DELAY_MAX 5001
+#define FLASH_LED_OPEN_FAULT_DETECTED 0xC
+
+#define FLASH_UNLOCK_SECURE 0xA5
+#define FLASH_LED_TORCH_ENABLE 0x00
+#define FLASH_LED_TORCH_DISABLE 0x03
+#define FLASH_MODULE_ENABLE 0x80
+#define FLASH_LED0_TRIGGER 0x80
+#define FLASH_LED1_TRIGGER 0x40
+#define FLASH_LED0_ENABLEMENT 0x40
+#define FLASH_LED1_ENABLEMENT 0x20
+#define FLASH_LED_DISABLE 0x00
+#define FLASH_LED_MIN_CURRENT_MA 13
+#define FLASH_SUBTYPE_DUAL 0x01
+#define FLASH_SUBTYPE_SINGLE 0x02
+
+/*
+ * ID represents physical LEDs for individual control purpose.
+ */
+enum flash_led_id {
+ FLASH_LED_0 = 0,
+ FLASH_LED_1,
+ FLASH_LED_SWITCH,
+};
+
+enum flash_led_type {
+ FLASH = 0,
+ TORCH,
+ SWITCH,
+};
+
+enum thermal_derate_rate {
+ RATE_1_PERCENT = 0,
+ RATE_1P25_PERCENT,
+ RATE_2_PERCENT,
+ RATE_2P5_PERCENT,
+ RATE_5_PERCENT,
+};
+
+enum current_ramp_steps {
+ RAMP_STEP_0P2_US = 0,
+ RAMP_STEP_0P4_US,
+ RAMP_STEP_0P8_US,
+ RAMP_STEP_1P6_US,
+ RAMP_STEP_3P3_US,
+ RAMP_STEP_6P7_US,
+ RAMP_STEP_13P5_US,
+ RAMP_STEP_27US,
+};
+
+struct flash_regulator_data {
+ struct regulator *regs;
+ const char *reg_name;
+ u32 max_volt_uv;
+};
+
+/*
+ * Configurations for each individual LED
+ */
+struct flash_node_data {
+ struct platform_device *pdev;
+ struct regmap *regmap;
+ struct led_classdev cdev;
+ struct work_struct work;
+ struct flash_regulator_data *reg_data;
+ u16 max_current;
+ u16 prgm_current;
+ u16 prgm_current2;
+ u16 duration;
+ u8 id;
+ u8 type;
+ u8 trigger;
+ u8 enable;
+ u8 num_regulators;
+ bool flash_on;
+};
+
+/*
+ * Flash LED configuration read from device tree
+ */
+struct flash_led_platform_data {
+ unsigned int temp_threshold_num;
+ unsigned int temp_derate_curr_num;
+ unsigned int *die_temp_derate_curr_ma;
+ unsigned int *die_temp_threshold_degc;
+ u16 ramp_up_step;
+ u16 ramp_dn_step;
+ u16 vph_pwr_droop_threshold;
+ u16 headroom;
+ u16 clamp_current;
+ u8 thermal_derate_threshold;
+ u8 vph_pwr_droop_debounce_time;
+ u8 startup_dly;
+ u8 thermal_derate_rate;
+ bool pmic_charger_support;
+ bool self_check_en;
+ bool thermal_derate_en;
+ bool current_ramp_en;
+ bool vph_pwr_droop_en;
+ bool hdrm_sns_ch0_en;
+ bool hdrm_sns_ch1_en;
+ bool power_detect_en;
+ bool mask3_en;
+ bool follow_rb_disable;
+ bool die_current_derate_en;
+};
+
+struct qpnp_flash_led_buffer {
+ struct mutex debugfs_lock; /* Prevent thread concurrency */
+ size_t rpos;
+ size_t wpos;
+ size_t len;
+ struct qpnp_flash_led *led;
+ u32 buffer_cnt;
+ char data[0];
+};
+
+/*
+ * Flash LED data structure containing flash LED attributes
+ */
+struct qpnp_flash_led {
+ struct pmic_revid_data *revid_data;
+ struct platform_device *pdev;
+ struct regmap *regmap;
+ struct flash_led_platform_data *pdata;
+ struct pinctrl *pinctrl;
+ struct pinctrl_state *gpio_state_active;
+ struct pinctrl_state *gpio_state_suspend;
+ struct flash_node_data *flash_node;
+ struct power_supply *battery_psy;
+ struct workqueue_struct *ordered_workq;
+ struct qpnp_vadc_chip *vadc_dev;
+ struct mutex flash_led_lock;
+ struct dentry *dbgfs_root;
+ int num_leds;
+ u16 base;
+ u16 current_addr;
+ u16 current2_addr;
+ u8 peripheral_type;
+ u8 fault_reg;
+ bool gpio_enabled;
+ bool charging_enabled;
+ bool strobe_debug;
+ bool dbg_feature_en;
+ bool open_fault;
+};
+
+static u8 qpnp_flash_led_ctrl_dbg_regs[] = {
+ 0x40, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46, 0x47, 0x48,
+ 0x4A, 0x4B, 0x4C, 0x4F, 0x51, 0x52, 0x54, 0x55, 0x5A, 0x5C, 0x5D,
+};
+
+static int flash_led_dbgfs_file_open(struct qpnp_flash_led *led,
+ struct file *file)
+{
+ struct qpnp_flash_led_buffer *log;
+ size_t logbufsize = SZ_4K;
+
+ log = kzalloc(logbufsize, GFP_KERNEL);
+ if (!log)
+ return -ENOMEM;
+
+ log->rpos = 0;
+ log->wpos = 0;
+ log->len = logbufsize - sizeof(*log);
+ mutex_init(&log->debugfs_lock);
+ log->led = led;
+
+ log->buffer_cnt = 1;
+ file->private_data = log;
+
+ return 0;
+}
+
+static int flash_led_dfs_open(struct inode *inode, struct file *file)
+{
+ struct qpnp_flash_led *led = inode->i_private;
+
+ return flash_led_dbgfs_file_open(led, file);
+}
+
+static int flash_led_dfs_close(struct inode *inode, struct file *file)
+{
+ struct qpnp_flash_led_buffer *log = file->private_data;
+
+ if (log) {
+ file->private_data = NULL;
+ mutex_destroy(&log->debugfs_lock);
+ kfree(log);
+ }
+
+ return 0;
+}
+
+#define MIN_BUFFER_WRITE_LEN 20
+static int print_to_log(struct qpnp_flash_led_buffer *log,
+ const char *fmt, ...)
+{
+ va_list args;
+ int cnt;
+ char *log_buf;
+ size_t size = log->len - log->wpos;
+
+ if (size < MIN_BUFFER_WRITE_LEN)
+ return 0; /* not enough buffer left */
+
+ log_buf = &log->data[log->wpos];
+ va_start(args, fmt);
+ cnt = vscnprintf(log_buf, size, fmt, args);
+ va_end(args);
+
+ log->wpos += cnt;
+ return cnt;
+}
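`print_to_log()` above appends into a fixed buffer at `wpos` and refuses the write outright once fewer than `MIN_BUFFER_WRITE_LEN` bytes remain, so a partial register dump is never emitted. A user-space sketch of the same contract (sizes and names are illustrative, not the driver's):

```c
#include <stdarg.h>
#include <stdio.h>
#include <stddef.h>

#define MIN_WRITE_LEN 20	/* mirrors MIN_BUFFER_WRITE_LEN */

struct log_buf {
	char data[128];
	size_t wpos;
};

/* Append to the buffer; return bytes written, or 0 when too little
 * room is left — the same contract as print_to_log(). */
static int log_append(struct log_buf *log, const char *fmt, ...)
{
	size_t room = sizeof(log->data) - log->wpos;
	va_list args;
	int cnt;

	if (room < MIN_WRITE_LEN)
		return 0;

	va_start(args, fmt);
	cnt = vsnprintf(&log->data[log->wpos], room, fmt, args);
	va_end(args);
	if (cnt < 0)
		return 0;
	if ((size_t)cnt >= room)
		cnt = (int)room - 1;	/* vsnprintf truncated */
	log->wpos += (size_t)cnt;
	return cnt;
}
```

The zero return is what lets the debugfs readers above bail out cleanly with `goto unlock_mutex` instead of emitting a half-formatted line.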
+
+static ssize_t flash_led_dfs_latched_reg_read(struct file *fp, char __user *buf,
+ size_t count, loff_t *ppos) {
+ struct qpnp_flash_led_buffer *log = fp->private_data;
+ struct qpnp_flash_led *led;
+ uint val;
+ int rc = 0;
+ size_t len;
+ size_t ret;
+
+ if (!log) {
+ pr_err("error: file private data is NULL\n");
+ return -EFAULT;
+ }
+ led = log->led;
+
+ mutex_lock(&log->debugfs_lock);
+ if ((log->rpos >= log->wpos && log->buffer_cnt == 0) ||
+ ((log->len - log->wpos) < MIN_BUFFER_WRITE_LEN))
+ goto unlock_mutex;
+
+ rc = regmap_read(led->regmap, INT_LATCHED_STS(led->base), &val);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "Unable to read from address %x, rc(%d)\n",
+ INT_LATCHED_STS(led->base), rc);
+ goto unlock_mutex;
+ }
+ log->buffer_cnt--;
+
+ rc = print_to_log(log, "0x%05X ", INT_LATCHED_STS(led->base));
+ if (rc == 0)
+ goto unlock_mutex;
+
+ rc = print_to_log(log, "0x%02X ", val);
+ if (rc == 0)
+ goto unlock_mutex;
+
+ if (log->wpos > 0 && log->data[log->wpos - 1] == ' ')
+ log->data[log->wpos - 1] = '\n';
+
+ len = min(count, log->wpos - log->rpos);
+
+ ret = copy_to_user(buf, &log->data[log->rpos], len);
+ if (ret) {
+ pr_err("error copy register value to user\n");
+ rc = -EFAULT;
+ goto unlock_mutex;
+ }
+
+ len -= ret;
+ *ppos += len;
+ log->rpos += len;
+
+ rc = len;
+
+unlock_mutex:
+ mutex_unlock(&log->debugfs_lock);
+ return rc;
+}
+
+static ssize_t flash_led_dfs_fault_reg_read(struct file *fp, char __user *buf,
+ size_t count, loff_t *ppos) {
+ struct qpnp_flash_led_buffer *log = fp->private_data;
+ struct qpnp_flash_led *led;
+ int rc = 0;
+ size_t len;
+ size_t ret;
+
+ if (!log) {
+ pr_err("error: file private data is NULL\n");
+ return -EFAULT;
+ }
+ led = log->led;
+
+ mutex_lock(&log->debugfs_lock);
+ if ((log->rpos >= log->wpos && log->buffer_cnt == 0) ||
+ ((log->len - log->wpos) < MIN_BUFFER_WRITE_LEN))
+ goto unlock_mutex;
+
+ log->buffer_cnt--;
+
+ rc = print_to_log(log, "0x%05X ", FLASH_LED_FAULT_STATUS(led->base));
+ if (rc == 0)
+ goto unlock_mutex;
+
+ rc = print_to_log(log, "0x%02X ", led->fault_reg);
+ if (rc == 0)
+ goto unlock_mutex;
+
+ if (log->wpos > 0 && log->data[log->wpos - 1] == ' ')
+ log->data[log->wpos - 1] = '\n';
+
+ len = min(count, log->wpos - log->rpos);
+
+ ret = copy_to_user(buf, &log->data[log->rpos], len);
+ if (ret) {
+ pr_err("error copy register value to user\n");
+ rc = -EFAULT;
+ goto unlock_mutex;
+ }
+
+ len -= ret;
+ *ppos += len;
+ log->rpos += len;
+
+ rc = len;
+
+unlock_mutex:
+ mutex_unlock(&log->debugfs_lock);
+ return rc;
+}
+
+static ssize_t flash_led_dfs_fault_reg_enable(struct file *file,
+ const char __user *buf, size_t count, loff_t *ppos) {
+
+ u8 *val;
+ int pos = 0;
+ int cnt = 0;
+ int data;
+	ssize_t ret = 0;
+
+ struct qpnp_flash_led_buffer *log = file->private_data;
+ struct qpnp_flash_led *led;
+ char *kbuf;
+
+ if (!log) {
+ pr_err("error: file private data is NULL\n");
+ return -EFAULT;
+ }
+ led = log->led;
+
+ mutex_lock(&log->debugfs_lock);
+ kbuf = kmalloc(count + 1, GFP_KERNEL);
+ if (!kbuf) {
+ ret = -ENOMEM;
+ goto unlock_mutex;
+ }
+
+ ret = copy_from_user(kbuf, buf, count);
+	if (ret == count) {
+ pr_err("failed to copy data from user\n");
+ ret = -EFAULT;
+ goto free_buf;
+ }
+
+ count -= ret;
+ *ppos += count;
+ kbuf[count] = '\0';
+ val = kbuf;
+ while (sscanf(kbuf + pos, "%i", &data) == 1) {
+ pos++;
+ val[cnt++] = data & 0xff;
+ }
+
+ if (!cnt)
+ goto free_buf;
+
+ ret = count;
+ if (*val == 1)
+ led->strobe_debug = true;
+ else
+ led->strobe_debug = false;
+
+free_buf:
+ kfree(kbuf);
+unlock_mutex:
+ mutex_unlock(&log->debugfs_lock);
+ return ret;
+}
+
+static ssize_t flash_led_dfs_dbg_enable(struct file *file,
+ const char __user *buf, size_t count, loff_t *ppos) {
+
+ u8 *val;
+ int pos = 0;
+ int cnt = 0;
+ int data;
+	ssize_t ret = 0;
+ struct qpnp_flash_led_buffer *log = file->private_data;
+ struct qpnp_flash_led *led;
+ char *kbuf;
+
+ if (!log) {
+ pr_err("error: file private data is NULL\n");
+ return -EFAULT;
+ }
+ led = log->led;
+
+ mutex_lock(&log->debugfs_lock);
+ kbuf = kmalloc(count + 1, GFP_KERNEL);
+ if (!kbuf) {
+ ret = -ENOMEM;
+ goto unlock_mutex;
+ }
+
+ ret = copy_from_user(kbuf, buf, count);
+ if (ret == count) {
+ pr_err("failed to copy data from user\n");
+ ret = -EFAULT;
+ goto free_buf;
+ }
+ count -= ret;
+ *ppos += count;
+ kbuf[count] = '\0';
+ val = kbuf;
+ while (sscanf(kbuf + pos, "%i", &data) == 1) {
+ pos++;
+ val[cnt++] = data & 0xff;
+ }
+
+ if (!cnt)
+ goto free_buf;
+
+ ret = count;
+ if (*val == 1)
+ led->dbg_feature_en = true;
+ else
+ led->dbg_feature_en = false;
+
+free_buf:
+ kfree(kbuf);
+unlock_mutex:
+ mutex_unlock(&log->debugfs_lock);
+ return ret;
+}
+
+static const struct file_operations flash_led_dfs_latched_reg_fops = {
+ .open = flash_led_dfs_open,
+ .release = flash_led_dfs_close,
+ .read = flash_led_dfs_latched_reg_read,
+};
+
+static const struct file_operations flash_led_dfs_strobe_reg_fops = {
+ .open = flash_led_dfs_open,
+ .release = flash_led_dfs_close,
+ .read = flash_led_dfs_fault_reg_read,
+ .write = flash_led_dfs_fault_reg_enable,
+};
+
+static const struct file_operations flash_led_dfs_dbg_feature_fops = {
+ .open = flash_led_dfs_open,
+ .release = flash_led_dfs_close,
+ .write = flash_led_dfs_dbg_enable,
+};
+
+static int
+qpnp_led_masked_write(struct qpnp_flash_led *led, u16 addr, u8 mask, u8 val)
+{
+ int rc;
+
+ rc = regmap_update_bits(led->regmap, addr, mask, val);
+ if (rc)
+ dev_err(&led->pdev->dev,
+ "Unable to update_bits to addr=%x, rc(%d)\n", addr, rc);
+
+ dev_dbg(&led->pdev->dev, "Write 0x%02X to addr 0x%02X\n", val, addr);
+
+ return rc;
+}
+
+static int qpnp_flash_led_get_allowed_die_temp_curr(struct qpnp_flash_led *led,
+ int64_t die_temp_degc)
+{
+ int die_temp_curr_ma;
+
+ if (die_temp_degc >= led->pdata->die_temp_threshold_degc[0])
+ die_temp_curr_ma = 0;
+ else if (die_temp_degc >= led->pdata->die_temp_threshold_degc[1])
+ die_temp_curr_ma = led->pdata->die_temp_derate_curr_ma[0];
+ else if (die_temp_degc >= led->pdata->die_temp_threshold_degc[2])
+ die_temp_curr_ma = led->pdata->die_temp_derate_curr_ma[1];
+ else if (die_temp_degc >= led->pdata->die_temp_threshold_degc[3])
+ die_temp_curr_ma = led->pdata->die_temp_derate_curr_ma[2];
+ else if (die_temp_degc >= led->pdata->die_temp_threshold_degc[4])
+ die_temp_curr_ma = led->pdata->die_temp_derate_curr_ma[3];
+ else
+ die_temp_curr_ma = led->pdata->die_temp_derate_curr_ma[4];
+
+ return die_temp_curr_ma;
+}
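The derate function above scans descending temperature thresholds: at or above the hottest threshold the allowed current drops to 0 mA, and each cooler bucket permits the next entry of the derate table, falling through to the largest current when the die is below every threshold. The same logic, table-driven instead of chained else-ifs (threshold and current values here are hypothetical, not real DT data):

```c
/* Hypothetical tables, same shape as the qcom DT properties. */
static const int thresholds_degc[5] = { 85, 80, 75, 70, 65 };
static const int derate_curr_ma[5]  = { 200, 400, 600, 800, 1000 };

/* Descending scan: hottest bucket gives 0 mA; below all thresholds,
 * the last (largest) current applies, as in the driver above. */
static int allowed_die_temp_curr(int die_temp_degc)
{
	int i;

	if (die_temp_degc >= thresholds_degc[0])
		return 0;
	for (i = 1; i < 5; i++)
		if (die_temp_degc >= thresholds_degc[i])
			return derate_curr_ma[i - 1];
	return derate_curr_ma[4];
}
```

A table walk also makes the off-by-one pairing explicit: meeting threshold *i* selects current *i - 1*, exactly as the else-if chain does.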
+
+static int64_t qpnp_flash_led_get_die_temp(struct qpnp_flash_led *led)
+{
+ struct qpnp_vadc_result die_temp_result;
+ int rc;
+
+ rc = qpnp_vadc_read(led->vadc_dev, SPARE2, &die_temp_result);
+ if (rc) {
+ pr_err("failed to read the die temp\n");
+ return -EINVAL;
+ }
+
+ return die_temp_result.physical;
+}
+
+static int qpnp_get_pmic_revid(struct qpnp_flash_led *led)
+{
+ struct device_node *revid_dev_node;
+
+ revid_dev_node = of_parse_phandle(led->pdev->dev.of_node,
+ "qcom,pmic-revid", 0);
+ if (!revid_dev_node) {
+ dev_err(&led->pdev->dev,
+ "qcom,pmic-revid property missing\n");
+ return -EINVAL;
+ }
+
+ led->revid_data = get_revid_data(revid_dev_node);
+ if (IS_ERR(led->revid_data)) {
+ pr_err("Couldn't get revid data rc = %ld\n",
+ PTR_ERR(led->revid_data));
+ return PTR_ERR(led->revid_data);
+ }
+
+ return 0;
+}
+
+static int
+qpnp_flash_led_get_max_avail_current(struct flash_node_data *flash_node,
+ struct qpnp_flash_led *led)
+{
+ union power_supply_propval prop;
+ int64_t chg_temp_milidegc, die_temp_degc;
+ int max_curr_avail_ma = 2000;
+ int allowed_die_temp_curr_ma = 2000;
+ int rc;
+
+ if (led->pdata->power_detect_en) {
+ if (!led->battery_psy) {
+ dev_err(&led->pdev->dev,
+ "Failed to query power supply\n");
+ return -EINVAL;
+ }
+
+ /*
+ * When charging is enabled, enforce this new enablement
+ * sequence to reduce fuel gauge reading resolution.
+ */
+ if (led->charging_enabled) {
+ rc = qpnp_led_masked_write(led,
+ FLASH_MODULE_ENABLE_CTRL(led->base),
+ FLASH_MODULE_ENABLE, FLASH_MODULE_ENABLE);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "Module enable reg write failed\n");
+ return -EINVAL;
+ }
+
+ usleep_range(FLASH_LED_CURRENT_READING_DELAY_MIN,
+ FLASH_LED_CURRENT_READING_DELAY_MAX);
+ }
+
+ power_supply_get_property(led->battery_psy,
+ POWER_SUPPLY_PROP_FLASH_CURRENT_MAX, &prop);
+ if (!prop.intval) {
+ dev_err(&led->pdev->dev,
+ "battery too low for flash\n");
+ return -EINVAL;
+ }
+
+ max_curr_avail_ma = (prop.intval / FLASH_LED_UA_PER_MA);
+ }
+
+ /*
+ * When thermal mitigation is available, this logic will execute to
+ * derate current based upon the PMIC die temperature.
+ */
+ if (led->pdata->die_current_derate_en) {
+ chg_temp_milidegc = qpnp_flash_led_get_die_temp(led);
+ if (chg_temp_milidegc < 0)
+ return -EINVAL;
+
+ die_temp_degc = div_s64(chg_temp_milidegc, 1000);
+ allowed_die_temp_curr_ma =
+ qpnp_flash_led_get_allowed_die_temp_curr(led,
+ die_temp_degc);
+ if (allowed_die_temp_curr_ma < 0)
+ return -EINVAL;
+ }
+
+ max_curr_avail_ma = (max_curr_avail_ma >= allowed_die_temp_curr_ma)
+ ? allowed_die_temp_curr_ma : max_curr_avail_ma;
+
+ return max_curr_avail_ma;
+}
+
+static ssize_t qpnp_flash_led_die_temp_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct qpnp_flash_led *led;
+ struct flash_node_data *flash_node;
+ unsigned long val;
+ struct led_classdev *led_cdev = dev_get_drvdata(dev);
+ ssize_t ret;
+
+ ret = kstrtoul(buf, 10, &val);
+ if (ret)
+ return ret;
+
+ flash_node = container_of(led_cdev, struct flash_node_data, cdev);
+ led = dev_get_drvdata(&flash_node->pdev->dev);
+
+	/* '0' disables the die_temp feature; non-zero enables it */
+ if (val == 0)
+ led->pdata->die_current_derate_en = false;
+ else
+ led->pdata->die_current_derate_en = true;
+
+ return count;
+}
+
+static ssize_t qpnp_led_strobe_type_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct flash_node_data *flash_node;
+ unsigned long state;
+ struct led_classdev *led_cdev = dev_get_drvdata(dev);
+ ssize_t ret = -EINVAL;
+
+ ret = kstrtoul(buf, 10, &state);
+ if (ret)
+ return ret;
+
+ flash_node = container_of(led_cdev, struct flash_node_data, cdev);
+
+ /* '0' for sw strobe; '1' for hw strobe */
+ if (state == 1)
+ flash_node->trigger |= FLASH_LED_STROBE_TYPE_HW;
+ else
+ flash_node->trigger &= ~FLASH_LED_STROBE_TYPE_HW;
+
+ return count;
+}
+
+static ssize_t qpnp_flash_led_dump_regs_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct qpnp_flash_led *led;
+ struct flash_node_data *flash_node;
+ struct led_classdev *led_cdev = dev_get_drvdata(dev);
+ int rc, i, count = 0;
+ u16 addr;
+ uint val;
+
+ flash_node = container_of(led_cdev, struct flash_node_data, cdev);
+ led = dev_get_drvdata(&flash_node->pdev->dev);
+ for (i = 0; i < ARRAY_SIZE(qpnp_flash_led_ctrl_dbg_regs); i++) {
+ addr = led->base + qpnp_flash_led_ctrl_dbg_regs[i];
+ rc = regmap_read(led->regmap, addr, &val);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "Unable to read from addr=%x, rc(%d)\n",
+ addr, rc);
+ return -EINVAL;
+ }
+
+ count += snprintf(buf + count, PAGE_SIZE - count,
+ "REG_0x%x = 0x%02x\n", addr, val);
+
+ if (count >= PAGE_SIZE)
+ return PAGE_SIZE - 1;
+ }
+
+ return count;
+}
+
+static ssize_t qpnp_flash_led_current_derate_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct qpnp_flash_led *led;
+ struct flash_node_data *flash_node;
+ unsigned long val;
+ struct led_classdev *led_cdev = dev_get_drvdata(dev);
+ ssize_t ret;
+
+ ret = kstrtoul(buf, 10, &val);
+ if (ret)
+ return ret;
+
+ flash_node = container_of(led_cdev, struct flash_node_data, cdev);
+ led = dev_get_drvdata(&flash_node->pdev->dev);
+
+	/* '0' disables the derate feature; non-zero enables it */
+ if (val == 0)
+ led->pdata->power_detect_en = false;
+ else
+ led->pdata->power_detect_en = true;
+
+ return count;
+}
+
+static ssize_t qpnp_flash_led_max_current_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct qpnp_flash_led *led;
+ struct flash_node_data *flash_node;
+ struct led_classdev *led_cdev = dev_get_drvdata(dev);
+ int max_curr_avail_ma = 0;
+
+ flash_node = container_of(led_cdev, struct flash_node_data, cdev);
+ led = dev_get_drvdata(&flash_node->pdev->dev);
+
+ if (led->flash_node[0].flash_on)
+ max_curr_avail_ma += led->flash_node[0].max_current;
+ if (led->flash_node[1].flash_on)
+ max_curr_avail_ma += led->flash_node[1].max_current;
+
+ if (led->pdata->power_detect_en ||
+ led->pdata->die_current_derate_en) {
+ max_curr_avail_ma =
+ qpnp_flash_led_get_max_avail_current(flash_node, led);
+
+ if (max_curr_avail_ma < 0)
+ return -EINVAL;
+ }
+
+ return snprintf(buf, PAGE_SIZE, "%u\n", max_curr_avail_ma);
+}
+
+static struct device_attribute qpnp_flash_led_attrs[] = {
+ __ATTR(strobe, 0664, NULL, qpnp_led_strobe_type_store),
+ __ATTR(reg_dump, 0664, qpnp_flash_led_dump_regs_show, NULL),
+ __ATTR(enable_current_derate, 0664, NULL,
+ qpnp_flash_led_current_derate_store),
+ __ATTR(max_allowed_current, 0664, qpnp_flash_led_max_current_show,
+ NULL),
+ __ATTR(enable_die_temp_current_derate, 0664, NULL,
+ qpnp_flash_led_die_temp_store),
+};
+
+static int qpnp_flash_led_get_thermal_derate_rate(const char *rate)
+{
+ /*
+ * Return 5% derate as the default if the user specifies
+ * an unsupported value.
+ */
+ if (strcmp(rate, "1_PERCENT") == 0)
+ return RATE_1_PERCENT;
+ else if (strcmp(rate, "1P25_PERCENT") == 0)
+ return RATE_1P25_PERCENT;
+ else if (strcmp(rate, "2_PERCENT") == 0)
+ return RATE_2_PERCENT;
+ else if (strcmp(rate, "2P5_PERCENT") == 0)
+ return RATE_2P5_PERCENT;
+ else if (strcmp(rate, "5_PERCENT") == 0)
+ return RATE_5_PERCENT;
+ else
+ return RATE_5_PERCENT;
+}
+
+static int qpnp_flash_led_get_ramp_step(const char *step)
+{
+ /*
+ * Return the 27 us step as the default if the user specifies
+ * an unsupported value.
+ */
+ if (strcmp(step, "0P2_US") == 0)
+ return RAMP_STEP_0P2_US;
+ else if (strcmp(step, "0P4_US") == 0)
+ return RAMP_STEP_0P4_US;
+ else if (strcmp(step, "0P8_US") == 0)
+ return RAMP_STEP_0P8_US;
+ else if (strcmp(step, "1P6_US") == 0)
+ return RAMP_STEP_1P6_US;
+ else if (strcmp(step, "3P3_US") == 0)
+ return RAMP_STEP_3P3_US;
+ else if (strcmp(step, "6P7_US") == 0)
+ return RAMP_STEP_6P7_US;
+ else if (strcmp(step, "13P5_US") == 0)
+ return RAMP_STEP_13P5_US;
+ else
+ return RAMP_STEP_27US;
+}
+
+static u8 qpnp_flash_led_get_droop_debounce_time(u8 val)
+{
+ /*
+ * Return the 10 us setting as the default if the user specifies
+ * an unsupported value.
+ */
+ switch (val) {
+ case 0:
+ return 0;
+ case 10:
+ return 1;
+ case 32:
+ return 2;
+ case 64:
+ return 3;
+ default:
+ return 1;
+ }
+}
+
+static u8 qpnp_flash_led_get_startup_dly(u8 val)
+{
+ /*
+ * Return the 128 us setting as the default if the user specifies
+ * an unsupported value.
+ */
+ switch (val) {
+ case 10:
+ return 0;
+ case 32:
+ return 1;
+ case 64:
+ return 2;
+ case 128:
+ return 3;
+ default:
+ return 3;
+ }
+}
+
+static int
+qpnp_flash_led_get_peripheral_type(struct qpnp_flash_led *led)
+{
+ int rc;
+ uint val;
+
+ rc = regmap_read(led->regmap,
+ FLASH_LED_PERIPHERAL_SUBTYPE(led->base), &val);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "Unable to read peripheral subtype\n");
+ return -EINVAL;
+ }
+
+ return val;
+}
+
+static int qpnp_flash_led_module_disable(struct qpnp_flash_led *led,
+ struct flash_node_data *flash_node)
+{
+ union power_supply_propval psy_prop;
+ int rc;
+ uint val, tmp;
+
+ rc = regmap_read(led->regmap, FLASH_LED_STROBE_CTRL(led->base), &val);
+ if (rc) {
+ dev_err(&led->pdev->dev, "Unable to read strobe reg\n");
+ return -EINVAL;
+ }
+
+ /* strobe bits still owned by other LEDs; fully disable only if none remain */
+ tmp = (~flash_node->trigger) & val;
+ if (!tmp) {
+ if (flash_node->type == TORCH) {
+ rc = qpnp_led_masked_write(led,
+ FLASH_LED_UNLOCK_SECURE(led->base),
+ FLASH_SECURE_MASK, FLASH_UNLOCK_SECURE);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "Secure reg write failed\n");
+ return -EINVAL;
+ }
+
+ rc = qpnp_led_masked_write(led,
+ FLASH_TORCH(led->base),
+ FLASH_TORCH_MASK, FLASH_LED_TORCH_DISABLE);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "Torch reg write failed\n");
+ return -EINVAL;
+ }
+ }
+
+ if (led->battery_psy &&
+ led->revid_data->pmic_subtype == PMI8996_SUBTYPE &&
+ !led->revid_data->rev3) {
+ psy_prop.intval = false;
+ rc = power_supply_set_property(led->battery_psy,
+ POWER_SUPPLY_PROP_FLASH_TRIGGER,
+ &psy_prop);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "Failed to enable charger input current limit\n");
+ return -EINVAL;
+ }
+ }
+
+ rc = qpnp_led_masked_write(led,
+ FLASH_MODULE_ENABLE_CTRL(led->base),
+ FLASH_MODULE_ENABLE_MASK,
+ FLASH_LED_MODULE_CTRL_DEFAULT);
+ if (rc) {
+ dev_err(&led->pdev->dev, "Module disable failed\n");
+ return -EINVAL;
+ }
+
+ if (led->pinctrl) {
+ rc = pinctrl_select_state(led->pinctrl,
+ led->gpio_state_suspend);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "failed to disable GPIO\n");
+ return -EINVAL;
+ }
+ led->gpio_enabled = false;
+ }
+
+ if (led->battery_psy) {
+ psy_prop.intval = false;
+ rc = power_supply_set_property(led->battery_psy,
+ POWER_SUPPLY_PROP_FLASH_ACTIVE,
+ &psy_prop);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "Failed to setup OTG pulse skip enable\n");
+ return -EINVAL;
+ }
+ }
+ }
+
+ if (flash_node->trigger & FLASH_LED0_TRIGGER) {
+ rc = qpnp_led_masked_write(led,
+ led->current_addr,
+ FLASH_CURRENT_MASK, 0x00);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "current register write failed\n");
+ return -EINVAL;
+ }
+ }
+
+ if (flash_node->trigger & FLASH_LED1_TRIGGER) {
+ rc = qpnp_led_masked_write(led,
+ led->current2_addr,
+ FLASH_CURRENT_MASK, 0x00);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "current register write failed\n");
+ return -EINVAL;
+ }
+ }
+
+ if (flash_node->id == FLASH_LED_SWITCH)
+ flash_node->trigger &= FLASH_LED_STROBE_TYPE_HW;
+
+ return 0;
+}
+
+static enum
+led_brightness qpnp_flash_led_brightness_get(struct led_classdev *led_cdev)
+{
+ return led_cdev->brightness;
+}
+
+static int flash_regulator_parse_dt(struct qpnp_flash_led *led,
+ struct flash_node_data *flash_node)
+{
+ int i = 0, rc;
+ struct device_node *node = flash_node->cdev.dev->of_node;
+ struct device_node *temp = NULL;
+ const char *temp_string;
+ u32 val;
+
+ /* allocate one flash_regulator_data element per regulator */
+ flash_node->reg_data = devm_kcalloc(&led->pdev->dev,
+ flash_node->num_regulators,
+ sizeof(struct flash_regulator_data),
+ GFP_KERNEL);
+ if (!flash_node->reg_data)
+ return -ENOMEM;
+
+ for_each_child_of_node(node, temp) {
+ rc = of_property_read_string(temp, "regulator-name",
+ &temp_string);
+ if (!rc) {
+ flash_node->reg_data[i].reg_name = temp_string;
+ } else {
+ dev_err(&led->pdev->dev,
+ "Unable to read regulator name\n");
+ return rc;
+ }
+
+ rc = of_property_read_u32(temp, "max-voltage", &val);
+ if (!rc) {
+ flash_node->reg_data[i].max_volt_uv = val;
+ } else if (rc != -EINVAL) {
+ dev_err(&led->pdev->dev,
+ "Unable to read max voltage\n");
+ return rc;
+ }
+
+ i++;
+ }
+
+ return 0;
+}
+
+static int flash_regulator_setup(struct qpnp_flash_led *led,
+ struct flash_node_data *flash_node, bool on)
+{
+ int i, rc = 0;
+
+ if (!on) {
+ i = flash_node->num_regulators;
+ goto error_regulator_setup;
+ }
+
+ for (i = 0; i < flash_node->num_regulators; i++) {
+ flash_node->reg_data[i].regs =
+ regulator_get(flash_node->cdev.dev,
+ flash_node->reg_data[i].reg_name);
+ if (IS_ERR(flash_node->reg_data[i].regs)) {
+ rc = PTR_ERR(flash_node->reg_data[i].regs);
+ dev_err(&led->pdev->dev,
+ "Failed to get regulator\n");
+ goto error_regulator_setup;
+ }
+
+ if (regulator_count_voltages(flash_node->reg_data[i].regs)
+ > 0) {
+ rc = regulator_set_voltage(flash_node->reg_data[i].regs,
+ flash_node->reg_data[i].max_volt_uv,
+ flash_node->reg_data[i].max_volt_uv);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "regulator set voltage failed\n");
+ regulator_put(flash_node->reg_data[i].regs);
+ goto error_regulator_setup;
+ }
+ }
+ }
+
+ return rc;
+
+error_regulator_setup:
+ while (i--) {
+ if (regulator_count_voltages(flash_node->reg_data[i].regs)
+ > 0) {
+ regulator_set_voltage(flash_node->reg_data[i].regs,
+ 0, flash_node->reg_data[i].max_volt_uv);
+ }
+
+ regulator_put(flash_node->reg_data[i].regs);
+ }
+
+ return rc;
+}
+
+static int flash_regulator_enable(struct qpnp_flash_led *led,
+ struct flash_node_data *flash_node, bool on)
+{
+ int i, rc = 0;
+
+ if (!on) {
+ i = flash_node->num_regulators;
+ goto error_regulator_enable;
+ }
+
+ for (i = 0; i < flash_node->num_regulators; i++) {
+ rc = regulator_enable(flash_node->reg_data[i].regs);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "regulator enable failed\n");
+ goto error_regulator_enable;
+ }
+ }
+
+ return rc;
+
+error_regulator_enable:
+ while (i--)
+ regulator_disable(flash_node->reg_data[i].regs);
+
+ return rc;
+}
+
+int qpnp_flash_led_prepare(struct led_trigger *trig, int options,
+ int *max_current)
+{
+ struct led_classdev *led_cdev = trigger_to_lcdev(trig);
+ struct flash_node_data *flash_node;
+ struct qpnp_flash_led *led;
+ int rc;
+
+ if (!led_cdev) {
+ pr_err("Invalid led_trigger provided\n");
+ return -EINVAL;
+ }
+
+ flash_node = container_of(led_cdev, struct flash_node_data, cdev);
+ led = dev_get_drvdata(&flash_node->pdev->dev);
+
+ if (!(options & FLASH_LED_PREPARE_OPTIONS_MASK)) {
+ dev_err(&led->pdev->dev, "Invalid options %d\n", options);
+ return -EINVAL;
+ }
+
+ if (options & ENABLE_REGULATOR) {
+ rc = flash_regulator_enable(led, flash_node, true);
+ if (rc < 0) {
+ dev_err(&led->pdev->dev,
+ "enable regulator failed, rc=%d\n", rc);
+ return rc;
+ }
+ }
+
+ if (options & DISABLE_REGULATOR) {
+ rc = flash_regulator_enable(led, flash_node, false);
+ if (rc < 0) {
+ dev_err(&led->pdev->dev,
+ "disable regulator failed, rc=%d\n", rc);
+ return rc;
+ }
+ }
+
+ if (options & QUERY_MAX_CURRENT) {
+ rc = qpnp_flash_led_get_max_avail_current(flash_node, led);
+ if (rc < 0) {
+ dev_err(&led->pdev->dev,
+ "query max current failed, rc=%d\n", rc);
+ return rc;
+ }
+ *max_current = rc;
+ }
+
+ return 0;
+}
+
+static void qpnp_flash_led_work(struct work_struct *work)
+{
+ struct flash_node_data *flash_node = container_of(work,
+ struct flash_node_data, work);
+ struct qpnp_flash_led *led = dev_get_drvdata(&flash_node->pdev->dev);
+ union power_supply_propval psy_prop;
+ int rc, brightness = flash_node->cdev.brightness;
+ int max_curr_avail_ma = 0;
+ int total_curr_ma = 0;
+ int i;
+ u8 val = 0;
+ uint temp;
+
+ mutex_lock(&led->flash_led_lock);
+
+ if (!brightness)
+ goto turn_off;
+
+ if (led->open_fault) {
+ dev_err(&led->pdev->dev, "Open fault detected\n");
+ mutex_unlock(&led->flash_led_lock);
+ return;
+ }
+
+ if (!flash_node->flash_on && flash_node->num_regulators > 0) {
+ rc = flash_regulator_enable(led, flash_node, true);
+ if (rc) {
+ mutex_unlock(&led->flash_led_lock);
+ return;
+ }
+ }
+
+ if (!led->gpio_enabled && led->pinctrl) {
+ rc = pinctrl_select_state(led->pinctrl,
+ led->gpio_state_active);
+ if (rc) {
+ dev_err(&led->pdev->dev, "failed to enable GPIO\n");
+ goto error_enable_gpio;
+ }
+ led->gpio_enabled = true;
+ }
+
+ if (led->dbg_feature_en) {
+ rc = qpnp_led_masked_write(led,
+ INT_SET_TYPE(led->base),
+ FLASH_STATUS_REG_MASK, 0x1F);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "INT_SET_TYPE write failed\n");
+ goto exit_flash_led_work;
+ }
+
+ rc = qpnp_led_masked_write(led,
+ IN_POLARITY_HIGH(led->base),
+ FLASH_STATUS_REG_MASK, 0x1F);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "IN_POLARITY_HIGH write failed\n");
+ goto exit_flash_led_work;
+ }
+
+ rc = qpnp_led_masked_write(led,
+ INT_EN_SET(led->base),
+ FLASH_STATUS_REG_MASK, 0x1F);
+ if (rc) {
+ dev_err(&led->pdev->dev, "INT_EN_SET write failed\n");
+ goto exit_flash_led_work;
+ }
+
+ rc = qpnp_led_masked_write(led,
+ INT_LATCHED_CLR(led->base),
+ FLASH_STATUS_REG_MASK, 0x1F);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "INT_LATCHED_CLR write failed\n");
+ goto exit_flash_led_work;
+ }
+ }
+
+ if (led->flash_node[led->num_leds - 1].id == FLASH_LED_SWITCH &&
+ flash_node->id != FLASH_LED_SWITCH) {
+ /* strobe control bits are MSB-first: LED id n maps to bit (7 - n) */
+ led->flash_node[led->num_leds - 1].trigger |=
+ (0x80 >> flash_node->id);
+ if (flash_node->id == FLASH_LED_0)
+ led->flash_node[led->num_leds - 1].prgm_current =
+ flash_node->prgm_current;
+ else if (flash_node->id == FLASH_LED_1)
+ led->flash_node[led->num_leds - 1].prgm_current2 =
+ flash_node->prgm_current;
+ }
+
+ if (flash_node->type == TORCH) {
+ rc = qpnp_led_masked_write(led,
+ FLASH_LED_UNLOCK_SECURE(led->base),
+ FLASH_SECURE_MASK, FLASH_UNLOCK_SECURE);
+ if (rc) {
+ dev_err(&led->pdev->dev, "Secure reg write failed\n");
+ goto exit_flash_led_work;
+ }
+
+ rc = qpnp_led_masked_write(led,
+ FLASH_TORCH(led->base),
+ FLASH_TORCH_MASK, FLASH_LED_TORCH_ENABLE);
+ if (rc) {
+ dev_err(&led->pdev->dev, "Torch reg write failed\n");
+ goto exit_flash_led_work;
+ }
+
+ if (flash_node->id == FLASH_LED_SWITCH) {
+ val = (u8)(flash_node->prgm_current *
+ FLASH_TORCH_MAX_LEVEL
+ / flash_node->max_current);
+ rc = qpnp_led_masked_write(led,
+ led->current_addr,
+ FLASH_CURRENT_MASK, val);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "Torch reg write failed\n");
+ goto exit_flash_led_work;
+ }
+
+ val = (u8)(flash_node->prgm_current2 *
+ FLASH_TORCH_MAX_LEVEL
+ / flash_node->max_current);
+ rc = qpnp_led_masked_write(led,
+ led->current2_addr,
+ FLASH_CURRENT_MASK, val);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "Torch reg write failed\n");
+ goto exit_flash_led_work;
+ }
+ } else {
+ val = (u8)(flash_node->prgm_current *
+ FLASH_TORCH_MAX_LEVEL /
+ flash_node->max_current);
+ if (flash_node->id == FLASH_LED_0) {
+ rc = qpnp_led_masked_write(led,
+ led->current_addr,
+ FLASH_CURRENT_MASK, val);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "current reg write failed\n");
+ goto exit_flash_led_work;
+ }
+ } else {
+ rc = qpnp_led_masked_write(led,
+ led->current2_addr,
+ FLASH_CURRENT_MASK, val);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "current reg write failed\n");
+ goto exit_flash_led_work;
+ }
+ }
+ }
+
+ rc = qpnp_led_masked_write(led,
+ FLASH_MAX_CURRENT(led->base),
+ FLASH_CURRENT_MASK, FLASH_TORCH_MAX_LEVEL);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "Max current reg write failed\n");
+ goto exit_flash_led_work;
+ }
+
+ rc = qpnp_led_masked_write(led,
+ FLASH_MODULE_ENABLE_CTRL(led->base),
+ FLASH_MODULE_ENABLE_MASK, FLASH_MODULE_ENABLE);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "Module enable reg write failed\n");
+ goto exit_flash_led_work;
+ }
+
+ if (led->pdata->hdrm_sns_ch0_en ||
+ led->pdata->hdrm_sns_ch1_en) {
+ if (flash_node->id == FLASH_LED_SWITCH) {
+ rc = qpnp_led_masked_write(led,
+ FLASH_HDRM_SNS_ENABLE_CTRL0(led->base),
+ FLASH_LED_HDRM_SNS_ENABLE_MASK,
+ flash_node->trigger &
+ FLASH_LED0_TRIGGER ?
+ FLASH_LED_HDRM_SNS_ENABLE :
+ FLASH_LED_HDRM_SNS_DISABLE);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "Headroom sense enable failed\n");
+ goto exit_flash_led_work;
+ }
+
+ rc = qpnp_led_masked_write(led,
+ FLASH_HDRM_SNS_ENABLE_CTRL1(led->base),
+ FLASH_LED_HDRM_SNS_ENABLE_MASK,
+ flash_node->trigger &
+ FLASH_LED1_TRIGGER ?
+ FLASH_LED_HDRM_SNS_ENABLE :
+ FLASH_LED_HDRM_SNS_DISABLE);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "Headroom sense enable failed\n");
+ goto exit_flash_led_work;
+ }
+ } else if (flash_node->id == FLASH_LED_0) {
+ rc = qpnp_led_masked_write(led,
+ FLASH_HDRM_SNS_ENABLE_CTRL0(led->base),
+ FLASH_LED_HDRM_SNS_ENABLE_MASK,
+ FLASH_LED_HDRM_SNS_ENABLE);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "Headroom sense enable failed\n");
+ goto exit_flash_led_work;
+ }
+ } else if (flash_node->id == FLASH_LED_1) {
+ rc = qpnp_led_masked_write(led,
+ FLASH_HDRM_SNS_ENABLE_CTRL1(led->base),
+ FLASH_LED_HDRM_SNS_ENABLE_MASK,
+ FLASH_LED_HDRM_SNS_ENABLE);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "Headroom sense enable failed\n");
+ goto exit_flash_led_work;
+ }
+ }
+ }
+
+ rc = qpnp_led_masked_write(led,
+ FLASH_LED_STROBE_CTRL(led->base),
+ (flash_node->id == FLASH_LED_SWITCH ? FLASH_STROBE_MASK
+ | FLASH_LED_STROBE_TYPE_HW
+ : flash_node->trigger |
+ FLASH_LED_STROBE_TYPE_HW),
+ flash_node->trigger);
+ if (rc) {
+ dev_err(&led->pdev->dev, "Strobe reg write failed\n");
+ goto exit_flash_led_work;
+ }
+ } else if (flash_node->type == FLASH) {
+ if (flash_node->trigger & FLASH_LED0_TRIGGER)
+ max_curr_avail_ma += flash_node->max_current;
+ if (flash_node->trigger & FLASH_LED1_TRIGGER)
+ max_curr_avail_ma += flash_node->max_current;
+
+ psy_prop.intval = true;
+ rc = power_supply_set_property(led->battery_psy,
+ POWER_SUPPLY_PROP_FLASH_ACTIVE,
+ &psy_prop);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "Failed to setup OTG pulse skip enable\n");
+ goto exit_flash_led_work;
+ }
+
+ if (led->pdata->power_detect_en ||
+ led->pdata->die_current_derate_en) {
+ if (led->battery_psy) {
+ power_supply_get_property(led->battery_psy,
+ POWER_SUPPLY_PROP_STATUS,
+ &psy_prop);
+ if (psy_prop.intval < 0) {
+ dev_err(&led->pdev->dev,
+ "Invalid battery status\n");
+ goto exit_flash_led_work;
+ }
+
+ if (psy_prop.intval ==
+ POWER_SUPPLY_STATUS_CHARGING)
+ led->charging_enabled = true;
+ else if (psy_prop.intval ==
+ POWER_SUPPLY_STATUS_DISCHARGING
+ || psy_prop.intval ==
+ POWER_SUPPLY_STATUS_NOT_CHARGING)
+ led->charging_enabled = false;
+ }
+ max_curr_avail_ma =
+ qpnp_flash_led_get_max_avail_current
+ (flash_node, led);
+ if (max_curr_avail_ma < 0) {
+ dev_err(&led->pdev->dev,
+ "Failed to get max avail curr\n");
+ goto exit_flash_led_work;
+ }
+ }
+
+ if (flash_node->id == FLASH_LED_SWITCH) {
+ if (flash_node->trigger & FLASH_LED0_TRIGGER)
+ total_curr_ma += flash_node->prgm_current;
+ if (flash_node->trigger & FLASH_LED1_TRIGGER)
+ total_curr_ma += flash_node->prgm_current2;
+
+ if (max_curr_avail_ma < total_curr_ma) {
+ flash_node->prgm_current =
+ (flash_node->prgm_current *
+ max_curr_avail_ma) / total_curr_ma;
+ flash_node->prgm_current2 =
+ (flash_node->prgm_current2 *
+ max_curr_avail_ma) / total_curr_ma;
+ }
+
+ val = (u8)(flash_node->prgm_current *
+ FLASH_MAX_LEVEL / flash_node->max_current);
+ rc = qpnp_led_masked_write(led,
+ led->current_addr, FLASH_CURRENT_MASK, val);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "Current register write failed\n");
+ goto exit_flash_led_work;
+ }
+
+ val = (u8)(flash_node->prgm_current2 *
+ FLASH_MAX_LEVEL / flash_node->max_current);
+ rc = qpnp_led_masked_write(led,
+ led->current2_addr, FLASH_CURRENT_MASK, val);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "Current register write failed\n");
+ goto exit_flash_led_work;
+ }
+ } else {
+ if (max_curr_avail_ma < flash_node->prgm_current) {
+ dev_err(&led->pdev->dev,
+ "battery only supports %d mA\n",
+ max_curr_avail_ma);
+ flash_node->prgm_current =
+ (u16)max_curr_avail_ma;
+ }
+
+ val = (u8)(flash_node->prgm_current *
+ FLASH_MAX_LEVEL
+ / flash_node->max_current);
+ if (flash_node->id == FLASH_LED_0) {
+ rc = qpnp_led_masked_write(
+ led,
+ led->current_addr,
+ FLASH_CURRENT_MASK, val);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "current reg write failed\n");
+ goto exit_flash_led_work;
+ }
+ } else if (flash_node->id == FLASH_LED_1) {
+ rc = qpnp_led_masked_write(
+ led,
+ led->current2_addr,
+ FLASH_CURRENT_MASK, val);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "current reg write failed\n");
+ goto exit_flash_led_work;
+ }
+ }
+ }
+
+ val = (u8)((flash_node->duration - FLASH_DURATION_DIVIDER)
+ / FLASH_DURATION_DIVIDER);
+ rc = qpnp_led_masked_write(led,
+ FLASH_SAFETY_TIMER(led->base),
+ FLASH_SAFETY_TIMER_MASK, val);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "Safety timer reg write failed\n");
+ goto exit_flash_led_work;
+ }
+
+ rc = qpnp_led_masked_write(led,
+ FLASH_MAX_CURRENT(led->base),
+ FLASH_CURRENT_MASK, FLASH_MAX_LEVEL);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "Max current reg write failed\n");
+ goto exit_flash_led_work;
+ }
+
+ if (!led->charging_enabled) {
+ rc = qpnp_led_masked_write(led,
+ FLASH_MODULE_ENABLE_CTRL(led->base),
+ FLASH_MODULE_ENABLE, FLASH_MODULE_ENABLE);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "Module enable reg write failed\n");
+ goto exit_flash_led_work;
+ }
+
+ usleep_range(FLASH_RAMP_UP_DELAY_US_MIN,
+ FLASH_RAMP_UP_DELAY_US_MAX);
+ }
+
+ if (led->revid_data->pmic_subtype == PMI8996_SUBTYPE &&
+ !led->revid_data->rev3) {
+ rc = power_supply_set_property(led->battery_psy,
+ POWER_SUPPLY_PROP_FLASH_TRIGGER,
+ &psy_prop);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "Failed to disable charger input current limit\n");
+ goto exit_flash_led_work;
+ }
+ }
+
+ if (led->pdata->hdrm_sns_ch0_en ||
+ led->pdata->hdrm_sns_ch1_en) {
+ if (flash_node->id == FLASH_LED_SWITCH) {
+ rc = qpnp_led_masked_write(led,
+ FLASH_HDRM_SNS_ENABLE_CTRL0(led->base),
+ FLASH_LED_HDRM_SNS_ENABLE_MASK,
+ (flash_node->trigger &
+ FLASH_LED0_TRIGGER ?
+ FLASH_LED_HDRM_SNS_ENABLE :
+ FLASH_LED_HDRM_SNS_DISABLE));
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "Headroom sense enable failed\n");
+ goto exit_flash_led_work;
+ }
+
+ rc = qpnp_led_masked_write(led,
+ FLASH_HDRM_SNS_ENABLE_CTRL1(led->base),
+ FLASH_LED_HDRM_SNS_ENABLE_MASK,
+ (flash_node->trigger &
+ FLASH_LED1_TRIGGER ?
+ FLASH_LED_HDRM_SNS_ENABLE :
+ FLASH_LED_HDRM_SNS_DISABLE));
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "Headroom sense enable failed\n");
+ goto exit_flash_led_work;
+ }
+ } else if (flash_node->id == FLASH_LED_0) {
+ rc = qpnp_led_masked_write(led,
+ FLASH_HDRM_SNS_ENABLE_CTRL0(led->base),
+ FLASH_LED_HDRM_SNS_ENABLE_MASK,
+ FLASH_LED_HDRM_SNS_ENABLE);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "Headroom sense enable failed\n");
+ goto exit_flash_led_work;
+ }
+ } else if (flash_node->id == FLASH_LED_1) {
+ rc = qpnp_led_masked_write(led,
+ FLASH_HDRM_SNS_ENABLE_CTRL1(led->base),
+ FLASH_LED_HDRM_SNS_ENABLE_MASK,
+ FLASH_LED_HDRM_SNS_ENABLE);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "Headroom sense enable failed\n");
+ goto exit_flash_led_work;
+ }
+ }
+ }
+
+ rc = qpnp_led_masked_write(led,
+ FLASH_LED_STROBE_CTRL(led->base),
+ (flash_node->id == FLASH_LED_SWITCH ? FLASH_STROBE_MASK
+ | FLASH_LED_STROBE_TYPE_HW
+ : flash_node->trigger |
+ FLASH_LED_STROBE_TYPE_HW),
+ flash_node->trigger);
+ if (rc) {
+ dev_err(&led->pdev->dev, "Strobe reg write failed\n");
+ goto exit_flash_led_work;
+ }
+
+ if (led->strobe_debug && led->dbg_feature_en) {
+ udelay(2000);
+ rc = regmap_read(led->regmap,
+ FLASH_LED_FAULT_STATUS(led->base),
+ &temp);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "Unable to read from addr= %x, rc(%d)\n",
+ FLASH_LED_FAULT_STATUS(led->base), rc);
+ goto exit_flash_led_work;
+ }
+ led->fault_reg = temp;
+ }
+ } else {
+ pr_err("Torch and Flash cannot be selected at the same time\n");
+ for (i = 0; i < led->num_leds; i++)
+ led->flash_node[i].flash_on = false;
+ goto turn_off;
+ }
+
+ flash_node->flash_on = true;
+ mutex_unlock(&led->flash_led_lock);
+
+ return;
+
+turn_off:
+ if (led->flash_node[led->num_leds - 1].id == FLASH_LED_SWITCH &&
+ flash_node->id != FLASH_LED_SWITCH)
+ led->flash_node[led->num_leds - 1].trigger &=
+ ~(0x80 >> flash_node->id);
+ if (flash_node->type == TORCH) {
+ /*
+ * Check the LED fault status to detect a hardware open fault.
+ * If a fault has occurred, all subsequent LED enable requests
+ * are rejected to protect the hardware.
+ */
+ rc = regmap_read(led->regmap,
+ FLASH_LED_FAULT_STATUS(led->base), &temp);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "Failed to read out fault status register\n");
+ goto exit_flash_led_work;
+ }
+
+ /* use the fault status just read, not the stale current value */
+ led->open_fault |= ((u8)temp & FLASH_LED_OPEN_FAULT_DETECTED);
+ }
+
+ rc = qpnp_led_masked_write(led,
+ FLASH_LED_STROBE_CTRL(led->base),
+ (flash_node->id == FLASH_LED_SWITCH ? FLASH_STROBE_MASK
+ | FLASH_LED_STROBE_TYPE_HW
+ : flash_node->trigger
+ | FLASH_LED_STROBE_TYPE_HW),
+ FLASH_LED_DISABLE);
+ if (rc) {
+ dev_err(&led->pdev->dev, "Strobe disable failed\n");
+ goto exit_flash_led_work;
+ }
+
+ usleep_range(FLASH_RAMP_DN_DELAY_US_MIN, FLASH_RAMP_DN_DELAY_US_MAX);
+exit_flash_hdrm_sns:
+ if (led->pdata->hdrm_sns_ch0_en) {
+ if (flash_node->id == FLASH_LED_0 ||
+ flash_node->id == FLASH_LED_SWITCH) {
+ rc = qpnp_led_masked_write(led,
+ FLASH_HDRM_SNS_ENABLE_CTRL0(led->base),
+ FLASH_LED_HDRM_SNS_ENABLE_MASK,
+ FLASH_LED_HDRM_SNS_DISABLE);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "Headroom sense disable failed\n");
+ goto exit_flash_hdrm_sns;
+ }
+ }
+ }
+
+ if (led->pdata->hdrm_sns_ch1_en) {
+ if (flash_node->id == FLASH_LED_1 ||
+ flash_node->id == FLASH_LED_SWITCH) {
+ rc = qpnp_led_masked_write(led,
+ FLASH_HDRM_SNS_ENABLE_CTRL1(led->base),
+ FLASH_LED_HDRM_SNS_ENABLE_MASK,
+ FLASH_LED_HDRM_SNS_DISABLE);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "Headroom sense disable failed\n");
+ goto exit_flash_hdrm_sns;
+ }
+ }
+ }
+exit_flash_led_work:
+ rc = qpnp_flash_led_module_disable(led, flash_node);
+ if (rc) {
+ dev_err(&led->pdev->dev, "Module disable failed\n");
+ goto exit_flash_led_work;
+ }
+error_enable_gpio:
+ if (flash_node->flash_on && flash_node->num_regulators > 0)
+ flash_regulator_enable(led, flash_node, false);
+
+ flash_node->flash_on = false;
+ mutex_unlock(&led->flash_led_lock);
+}
+
+static void qpnp_flash_led_brightness_set(struct led_classdev *led_cdev,
+ enum led_brightness value)
+{
+ struct flash_node_data *flash_node;
+ struct qpnp_flash_led *led;
+
+ flash_node = container_of(led_cdev, struct flash_node_data, cdev);
+ led = dev_get_drvdata(&flash_node->pdev->dev);
+
+ if (value < LED_OFF) {
+ pr_err("Invalid brightness value\n");
+ return;
+ }
+
+ if (value > flash_node->cdev.max_brightness)
+ value = flash_node->cdev.max_brightness;
+
+ flash_node->cdev.brightness = value;
+ if (led->flash_node[led->num_leds - 1].id ==
+ FLASH_LED_SWITCH) {
+ if (flash_node->type == TORCH)
+ led->flash_node[led->num_leds - 1].type = TORCH;
+ else if (flash_node->type == FLASH)
+ led->flash_node[led->num_leds - 1].type = FLASH;
+
+ led->flash_node[led->num_leds - 1].max_current
+ = flash_node->max_current;
+
+ if (flash_node->id == FLASH_LED_0 ||
+ flash_node->id == FLASH_LED_1) {
+ if (value < FLASH_LED_MIN_CURRENT_MA && value != 0)
+ value = FLASH_LED_MIN_CURRENT_MA;
+
+ flash_node->prgm_current = value;
+ flash_node->flash_on = value ? true : false;
+ } else if (flash_node->id == FLASH_LED_SWITCH) {
+ if (!value) {
+ flash_node->prgm_current = 0;
+ flash_node->prgm_current2 = 0;
+ }
+ }
+ } else {
+ if (value < FLASH_LED_MIN_CURRENT_MA && value != 0)
+ value = FLASH_LED_MIN_CURRENT_MA;
+ flash_node->prgm_current = value;
+ }
+
+ queue_work(led->ordered_workq, &flash_node->work);
+}
+
+static int qpnp_flash_led_init_settings(struct qpnp_flash_led *led)
+{
+ int rc;
+ u8 val, temp_val;
+ uint val_int;
+
+ rc = qpnp_led_masked_write(led,
+ FLASH_MODULE_ENABLE_CTRL(led->base),
+ FLASH_MODULE_ENABLE_MASK,
+ FLASH_LED_MODULE_CTRL_DEFAULT);
+ if (rc) {
+ dev_err(&led->pdev->dev, "Module disable failed\n");
+ return rc;
+ }
+
+ rc = qpnp_led_masked_write(led,
+ FLASH_LED_STROBE_CTRL(led->base),
+ FLASH_STROBE_MASK, FLASH_LED_DISABLE);
+ if (rc) {
+ dev_err(&led->pdev->dev, "Strobe disable failed\n");
+ return rc;
+ }
+
+ rc = qpnp_led_masked_write(led,
+ FLASH_LED_TMR_CTRL(led->base),
+ FLASH_TMR_MASK, FLASH_TMR_SAFETY);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "LED timer ctrl reg write failed(%d)\n", rc);
+ return rc;
+ }
+
+ val = (u8)(led->pdata->headroom / FLASH_LED_HEADROOM_DIVIDER -
+ FLASH_LED_HEADROOM_OFFSET);
+ rc = qpnp_led_masked_write(led,
+ FLASH_HEADROOM(led->base),
+ FLASH_HEADROOM_MASK, val);
+ if (rc) {
+ dev_err(&led->pdev->dev, "Headroom reg write failed\n");
+ return rc;
+ }
+
+ val = qpnp_flash_led_get_startup_dly(led->pdata->startup_dly);
+
+ rc = qpnp_led_masked_write(led,
+ FLASH_STARTUP_DELAY(led->base),
+ FLASH_STARTUP_DLY_MASK, val);
+ if (rc) {
+ dev_err(&led->pdev->dev, "Startup delay reg write failed\n");
+ return rc;
+ }
+
+ val = (u8)(led->pdata->clamp_current * FLASH_MAX_LEVEL /
+ FLASH_LED_MAX_CURRENT_MA);
+ rc = qpnp_led_masked_write(led,
+ FLASH_CLAMP_CURRENT(led->base),
+ FLASH_CURRENT_MASK, val);
+ if (rc) {
+ dev_err(&led->pdev->dev, "Clamp current reg write failed\n");
+ return rc;
+ }
+
+ if (led->pdata->pmic_charger_support)
+ val = FLASH_LED_FLASH_HW_VREG_OK;
+ else
+ val = FLASH_LED_FLASH_SW_VREG_OK;
+ rc = qpnp_led_masked_write(led,
+ FLASH_VREG_OK_FORCE(led->base),
+ FLASH_VREG_OK_FORCE_MASK, val);
+ if (rc) {
+ dev_err(&led->pdev->dev, "VREG OK force reg write failed\n");
+ return rc;
+ }
+
+ if (led->pdata->self_check_en)
+ val = FLASH_MODULE_ENABLE;
+ else
+ val = FLASH_LED_DISABLE;
+ rc = qpnp_led_masked_write(led,
+ FLASH_FAULT_DETECT(led->base),
+ FLASH_FAULT_DETECT_MASK, val);
+ if (rc) {
+ dev_err(&led->pdev->dev, "Fault detect reg write failed\n");
+ return rc;
+ }
+
+ val = 0x0;
+ val |= led->pdata->mask3_en << FLASH_LED_MASK3_ENABLE_SHIFT;
+ val |= FLASH_LED_MASK_MODULE_MASK2_ENABLE;
+ rc = qpnp_led_masked_write(led, FLASH_MASK_ENABLE(led->base),
+ FLASH_MASK_MODULE_CONTRL_MASK, val);
+ if (rc) {
+ dev_err(&led->pdev->dev, "Mask module enable failed\n");
+ return rc;
+ }
+
+ rc = regmap_read(led->regmap, FLASH_PERPH_RESET_CTRL(led->base),
+ &val_int);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "Unable to read from address %x, rc(%d)\n",
+ FLASH_PERPH_RESET_CTRL(led->base), rc);
+ return -EINVAL;
+ }
+ val = (u8)val_int;
+
+ if (led->pdata->follow_rb_disable) {
+ rc = qpnp_led_masked_write(led,
+ FLASH_LED_UNLOCK_SECURE(led->base),
+ FLASH_SECURE_MASK, FLASH_UNLOCK_SECURE);
+ if (rc) {
+ dev_err(&led->pdev->dev, "Secure reg write failed\n");
+ return -EINVAL;
+ }
+
+ val |= FLASH_FOLLOW_OTST2_RB_MASK;
+ rc = qpnp_led_masked_write(led,
+ FLASH_PERPH_RESET_CTRL(led->base),
+ FLASH_FOLLOW_OTST2_RB_MASK, val);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "failed to reset OTST2_RB bit\n");
+ return rc;
+ }
+ } else {
+ rc = qpnp_led_masked_write(led,
+ FLASH_LED_UNLOCK_SECURE(led->base),
+ FLASH_SECURE_MASK, FLASH_UNLOCK_SECURE);
+ if (rc) {
+ dev_err(&led->pdev->dev, "Secure reg write failed\n");
+ return -EINVAL;
+ }
+
+ val &= ~FLASH_FOLLOW_OTST2_RB_MASK;
+ rc = qpnp_led_masked_write(led,
+ FLASH_PERPH_RESET_CTRL(led->base),
+ FLASH_FOLLOW_OTST2_RB_MASK, val);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "failed to reset OTST2_RB bit\n");
+ return rc;
+ }
+ }
+
+ if (!led->pdata->thermal_derate_en)
+ val = 0x0;
+ else {
+ val = led->pdata->thermal_derate_en << 7;
+ val |= led->pdata->thermal_derate_rate << 3;
+ val |= (led->pdata->thermal_derate_threshold -
+ FLASH_LED_THERMAL_THRESHOLD_MIN) /
+ FLASH_LED_THERMAL_DEVIDER;
+ }
+ rc = qpnp_led_masked_write(led,
+ FLASH_THERMAL_DRATE(led->base),
+ FLASH_THERMAL_DERATE_MASK, val);
+ if (rc) {
+ dev_err(&led->pdev->dev, "Thermal derate reg write failed\n");
+ return rc;
+ }
+
+ if (!led->pdata->current_ramp_en)
+ val = 0x0;
+ else {
+ val = led->pdata->current_ramp_en << 7;
+ val |= led->pdata->ramp_up_step << 3;
+ val |= led->pdata->ramp_dn_step;
+ }
+ rc = qpnp_led_masked_write(led,
+ FLASH_CURRENT_RAMP(led->base),
+ FLASH_CURRENT_RAMP_MASK, val);
+ if (rc) {
+ dev_err(&led->pdev->dev, "Current ramp reg write failed\n");
+ return rc;
+ }
+
+ if (!led->pdata->vph_pwr_droop_en)
+ val = 0x0;
+ else {
+ val = led->pdata->vph_pwr_droop_en << 7;
+ val |= ((led->pdata->vph_pwr_droop_threshold -
+ FLASH_LED_VPH_DROOP_THRESHOLD_MIN_MV) /
+ FLASH_LED_VPH_DROOP_THRESHOLD_DIVIDER) << 4;
+ temp_val =
+ qpnp_flash_led_get_droop_debounce_time(
+ led->pdata->vph_pwr_droop_debounce_time);
+ if (temp_val == 0xFF) {
+ dev_err(&led->pdev->dev, "Invalid debounce time\n");
+ return -EINVAL;
+ }
+
+ val |= temp_val;
+ }
+ rc = qpnp_led_masked_write(led,
+ FLASH_VPH_PWR_DROOP(led->base),
+ FLASH_VPH_PWR_DROOP_MASK, val);
+ if (rc) {
+ dev_err(&led->pdev->dev, "VPH PWR droop reg write failed\n");
+ return rc;
+ }
+
+ led->battery_psy = power_supply_get_by_name("battery");
+ if (!led->battery_psy) {
+ dev_err(&led->pdev->dev,
+ "Failed to get battery power supply\n");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int qpnp_flash_led_parse_each_led_dt(struct qpnp_flash_led *led,
+ struct flash_node_data *flash_node)
+{
+ const char *temp_string;
+ struct device_node *node = flash_node->cdev.dev->of_node;
+ struct device_node *temp = NULL;
+ int rc = 0, num_regs = 0;
+ u32 val;
+
+ rc = of_property_read_string(node, "label", &temp_string);
+ if (!rc) {
+ if (strcmp(temp_string, "flash") == 0)
+ flash_node->type = FLASH;
+ else if (strcmp(temp_string, "torch") == 0)
+ flash_node->type = TORCH;
+ else if (strcmp(temp_string, "switch") == 0)
+ flash_node->type = SWITCH;
+ else {
+ dev_err(&led->pdev->dev, "Wrong flash LED type\n");
+ return -EINVAL;
+ }
+ } else if (rc < 0) {
+ dev_err(&led->pdev->dev, "Unable to read flash type\n");
+ return rc;
+ }
+
+ rc = of_property_read_u32(node, "qcom,current", &val);
+ if (!rc) {
+ if (val < FLASH_LED_MIN_CURRENT_MA)
+ val = FLASH_LED_MIN_CURRENT_MA;
+ flash_node->prgm_current = val;
+ } else if (rc != -EINVAL) {
+ dev_err(&led->pdev->dev, "Unable to read current\n");
+ return rc;
+ }
+
+ rc = of_property_read_u32(node, "qcom,id", &val);
+ if (!rc)
+ flash_node->id = (u8)val;
+ else if (rc != -EINVAL) {
+ dev_err(&led->pdev->dev, "Unable to read led ID\n");
+ return rc;
+ }
+
+ if (flash_node->type == SWITCH || flash_node->type == FLASH) {
+ rc = of_property_read_u32(node, "qcom,duration", &val);
+ if (!rc)
+ flash_node->duration = (u16)val;
+ else if (rc != -EINVAL) {
+ dev_err(&led->pdev->dev, "Unable to read duration\n");
+ return rc;
+ }
+ }
+
+ switch (led->peripheral_type) {
+ case FLASH_SUBTYPE_SINGLE:
+ flash_node->trigger = FLASH_LED0_TRIGGER;
+ break;
+ case FLASH_SUBTYPE_DUAL:
+ if (flash_node->id == FLASH_LED_0)
+ flash_node->trigger = FLASH_LED0_TRIGGER;
+ else if (flash_node->id == FLASH_LED_1)
+ flash_node->trigger = FLASH_LED1_TRIGGER;
+ break;
+ default:
+ dev_err(&led->pdev->dev, "Invalid peripheral type\n");
+ }
+
+ while ((temp = of_get_next_child(node, temp))) {
+ if (of_find_property(temp, "regulator-name", NULL))
+ num_regs++;
+ }
+
+ if (num_regs)
+ flash_node->num_regulators = num_regs;
+
+ return rc;
+}
+
+static int qpnp_flash_led_parse_common_dt(
+ struct qpnp_flash_led *led,
+ struct device_node *node)
+{
+ int rc;
+ u32 val;
+ int temp_val;
+ const char *temp;
+
+ led->pdata->headroom = FLASH_LED_HEADROOM_DEFAULT_MV;
+ rc = of_property_read_u32(node, "qcom,headroom", &val);
+ if (!rc)
+ led->pdata->headroom = (u16)val;
+ else if (rc != -EINVAL) {
+ dev_err(&led->pdev->dev, "Unable to read headroom\n");
+ return rc;
+ }
+
+ led->pdata->startup_dly = FLASH_LED_STARTUP_DELAY_DEFAULT_US;
+ rc = of_property_read_u32(node, "qcom,startup-dly", &val);
+ if (!rc)
+ led->pdata->startup_dly = (u8)val;
+ else if (rc != -EINVAL) {
+ dev_err(&led->pdev->dev, "Unable to read startup delay\n");
+ return rc;
+ }
+
+ led->pdata->clamp_current = FLASH_LED_CLAMP_CURRENT_DEFAULT_MA;
+ rc = of_property_read_u32(node, "qcom,clamp-current", &val);
+ if (!rc) {
+ if (val < FLASH_LED_MIN_CURRENT_MA)
+ val = FLASH_LED_MIN_CURRENT_MA;
+ led->pdata->clamp_current = (u16)val;
+ } else if (rc != -EINVAL) {
+ dev_err(&led->pdev->dev, "Unable to read clamp current\n");
+ return rc;
+ }
+
+ led->pdata->pmic_charger_support =
+ of_property_read_bool(node,
+ "qcom,pmic-charger-support");
+
+ led->pdata->self_check_en =
+ of_property_read_bool(node, "qcom,self-check-enabled");
+
+ led->pdata->thermal_derate_en =
+ of_property_read_bool(node,
+ "qcom,thermal-derate-enabled");
+
+ if (led->pdata->thermal_derate_en) {
+ led->pdata->thermal_derate_rate =
+ FLASH_LED_THERMAL_DERATE_RATE_DEFAULT_PERCENT;
+ rc = of_property_read_string(node, "qcom,thermal-derate-rate",
+ &temp);
+ if (!rc) {
+ temp_val =
+ qpnp_flash_led_get_thermal_derate_rate(temp);
+ if (temp_val < 0) {
+ dev_err(&led->pdev->dev,
+ "Invalid thermal derate rate\n");
+ return -EINVAL;
+ }
+
+ led->pdata->thermal_derate_rate = (u8)temp_val;
+ } else {
+ dev_err(&led->pdev->dev,
+ "Unable to read thermal derate rate\n");
+ return -EINVAL;
+ }
+
+ led->pdata->thermal_derate_threshold =
+ FLASH_LED_THERMAL_DERATE_THRESHOLD_DEFAULT_C;
+ rc = of_property_read_u32(node, "qcom,thermal-derate-threshold",
+ &val);
+ if (!rc)
+ led->pdata->thermal_derate_threshold = (u8)val;
+ else if (rc != -EINVAL) {
+ dev_err(&led->pdev->dev,
+ "Unable to read thermal derate threshold\n");
+ return rc;
+ }
+ }
+
+ led->pdata->current_ramp_en =
+ of_property_read_bool(node,
+ "qcom,current-ramp-enabled");
+ if (led->pdata->current_ramp_en) {
+ led->pdata->ramp_up_step = FLASH_LED_RAMP_UP_STEP_DEFAULT_US;
+ rc = of_property_read_string(node, "qcom,ramp_up_step", &temp);
+ if (!rc) {
+ temp_val = qpnp_flash_led_get_ramp_step(temp);
+ if (temp_val < 0) {
+ dev_err(&led->pdev->dev,
+ "Invalid ramp up step values\n");
+ return -EINVAL;
+ }
+ led->pdata->ramp_up_step = (u8)temp_val;
+ } else if (rc != -EINVAL) {
+ dev_err(&led->pdev->dev,
+ "Unable to read ramp up steps\n");
+ return rc;
+ }
+
+ led->pdata->ramp_dn_step = FLASH_LED_RAMP_DN_STEP_DEFAULT_US;
+ rc = of_property_read_string(node, "qcom,ramp_dn_step", &temp);
+ if (!rc) {
+ temp_val = qpnp_flash_led_get_ramp_step(temp);
+ if (temp_val < 0) {
+ dev_err(&led->pdev->dev,
+ "Invalid ramp down step values\n");
+ return -EINVAL;
+ }
+ led->pdata->ramp_dn_step = (u8)temp_val;
+ } else if (rc != -EINVAL) {
+ dev_err(&led->pdev->dev,
+ "Unable to read ramp down steps\n");
+ return rc;
+ }
+ }
+
+ led->pdata->vph_pwr_droop_en = of_property_read_bool(node,
+ "qcom,vph-pwr-droop-enabled");
+ if (led->pdata->vph_pwr_droop_en) {
+ led->pdata->vph_pwr_droop_threshold =
+ FLASH_LED_VPH_PWR_DROOP_THRESHOLD_DEFAULT_MV;
+ rc = of_property_read_u32(node,
+ "qcom,vph-pwr-droop-threshold", &val);
+ if (!rc) {
+ led->pdata->vph_pwr_droop_threshold = (u16)val;
+ } else if (rc != -EINVAL) {
+ dev_err(&led->pdev->dev,
+ "Unable to read VPH PWR droop threshold\n");
+ return rc;
+ }
+
+ led->pdata->vph_pwr_droop_debounce_time =
+ FLASH_LED_VPH_PWR_DROOP_DEBOUNCE_TIME_DEFAULT_US;
+ rc = of_property_read_u32(node,
+ "qcom,vph-pwr-droop-debounce-time", &val);
+ if (!rc)
+ led->pdata->vph_pwr_droop_debounce_time = (u8)val;
+ else if (rc != -EINVAL) {
+ dev_err(&led->pdev->dev,
+ "Unable to read VPH PWR droop debounce time\n");
+ return rc;
+ }
+ }
+
+ led->pdata->hdrm_sns_ch0_en = of_property_read_bool(node,
+ "qcom,headroom-sense-ch0-enabled");
+
+ led->pdata->hdrm_sns_ch1_en = of_property_read_bool(node,
+ "qcom,headroom-sense-ch1-enabled");
+
+ led->pdata->power_detect_en = of_property_read_bool(node,
+ "qcom,power-detect-enabled");
+
+ led->pdata->mask3_en = of_property_read_bool(node,
+ "qcom,otst2-module-enabled");
+
+ led->pdata->follow_rb_disable = of_property_read_bool(node,
+ "qcom,follow-otst2-rb-disabled");
+
+ led->pdata->die_current_derate_en = of_property_read_bool(node,
+ "qcom,die-current-derate-enabled");
+
+ if (led->pdata->die_current_derate_en) {
+ led->vadc_dev = qpnp_get_vadc(&led->pdev->dev, "die-temp");
+ if (IS_ERR(led->vadc_dev)) {
+ pr_err("VADC channel property missing\n");
+ return -EINVAL;
+ }
+
+ if (of_find_property(node, "qcom,die-temp-threshold",
+ &led->pdata->temp_threshold_num)) {
+ if (led->pdata->temp_threshold_num > 0) {
+ led->pdata->die_temp_threshold_degc =
+ devm_kzalloc(&led->pdev->dev,
+ led->pdata->temp_threshold_num,
+ GFP_KERNEL);
+
+ if (led->pdata->die_temp_threshold_degc
+ == NULL) {
+ dev_err(&led->pdev->dev,
+ "failed to allocate die temp array\n");
+ return -ENOMEM;
+ }
+ led->pdata->temp_threshold_num /=
+ sizeof(unsigned int);
+
+ rc = of_property_read_u32_array(node,
+ "qcom,die-temp-threshold",
+ led->pdata->die_temp_threshold_degc,
+ led->pdata->temp_threshold_num);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "couldn't read temp threshold rc=%d\n",
+ rc);
+ return rc;
+ }
+ }
+ }
+
+ if (of_find_property(node, "qcom,die-temp-derate-current",
+ &led->pdata->temp_derate_curr_num)) {
+ if (led->pdata->temp_derate_curr_num > 0) {
+ led->pdata->die_temp_derate_curr_ma =
+ devm_kzalloc(&led->pdev->dev,
+ led->pdata->temp_derate_curr_num,
+ GFP_KERNEL);
+ if (led->pdata->die_temp_derate_curr_ma
+ == NULL) {
+ dev_err(&led->pdev->dev,
+ "failed to allocate die derate current array\n");
+ return -ENOMEM;
+ }
+ led->pdata->temp_derate_curr_num /=
+ sizeof(unsigned int);
+
+ rc = of_property_read_u32_array(node,
+ "qcom,die-temp-derate-current",
+ led->pdata->die_temp_derate_curr_ma,
+ led->pdata->temp_derate_curr_num);
+ if (rc) {
+ dev_err(&led->pdev->dev,
+ "couldn't read temp limits rc =%d\n",
+ rc);
+ return rc;
+ }
+ }
+ }
+ if (led->pdata->temp_threshold_num !=
+ led->pdata->temp_derate_curr_num) {
+ pr_err("Temp threshold and derate current array sizes differ\n");
+ return -EINVAL;
+ }
+ }
+
+ led->pinctrl = devm_pinctrl_get(&led->pdev->dev);
+ if (IS_ERR_OR_NULL(led->pinctrl)) {
+ dev_err(&led->pdev->dev, "Unable to acquire pinctrl\n");
+ led->pinctrl = NULL;
+ return 0;
+ }
+
+ led->gpio_state_active = pinctrl_lookup_state(led->pinctrl,
+ "flash_led_enable");
+ if (IS_ERR_OR_NULL(led->gpio_state_active)) {
+ dev_err(&led->pdev->dev, "Cannot lookup LED active state\n");
+ devm_pinctrl_put(led->pinctrl);
+ led->pinctrl = NULL;
+ return PTR_ERR(led->gpio_state_active);
+ }
+
+ led->gpio_state_suspend = pinctrl_lookup_state(led->pinctrl,
+ "flash_led_disable");
+ if (IS_ERR_OR_NULL(led->gpio_state_suspend)) {
+ dev_err(&led->pdev->dev, "Cannot lookup LED disable state\n");
+ devm_pinctrl_put(led->pinctrl);
+ led->pinctrl = NULL;
+ return PTR_ERR(led->gpio_state_suspend);
+ }
+
+ return 0;
+}
+
+static int qpnp_flash_led_probe(struct platform_device *pdev)
+{
+ struct qpnp_flash_led *led;
+ unsigned int base;
+ struct device_node *node, *temp;
+ struct dentry *root, *file;
+ int rc, i = 0, j, num_leds = 0;
+ u32 val;
+
+ root = NULL;
+ node = pdev->dev.of_node;
+ if (node == NULL) {
+ dev_info(&pdev->dev, "No flash device defined\n");
+ return -ENODEV;
+ }
+
+ rc = of_property_read_u32(pdev->dev.of_node, "reg", &base);
+ if (rc < 0) {
+ dev_err(&pdev->dev,
+ "Couldn't find reg in node = %s rc = %d\n",
+ pdev->dev.of_node->full_name, rc);
+ return rc;
+ }
+
+ led = devm_kzalloc(&pdev->dev, sizeof(*led), GFP_KERNEL);
+ if (!led)
+ return -ENOMEM;
+
+ led->regmap = dev_get_regmap(pdev->dev.parent, NULL);
+ if (!led->regmap) {
+ dev_err(&pdev->dev, "Couldn't get parent's regmap\n");
+ return -EINVAL;
+ }
+
+ led->base = base;
+ led->pdev = pdev;
+ led->current_addr = FLASH_LED0_CURRENT(led->base);
+ led->current2_addr = FLASH_LED1_CURRENT(led->base);
+
+ led->pdata = devm_kzalloc(&pdev->dev, sizeof(*led->pdata), GFP_KERNEL);
+ if (!led->pdata)
+ return -ENOMEM;
+
+ rc = qpnp_flash_led_get_peripheral_type(led);
+ if (rc < 0) {
+ dev_err(&pdev->dev, "Failed to get peripheral type\n");
+ return rc;
+ }
+ led->peripheral_type = (u8)rc;
+
+ rc = qpnp_flash_led_parse_common_dt(led, node);
+ if (rc) {
+ dev_err(&pdev->dev,
+ "Failed to get common config for flash LEDs\n");
+ return rc;
+ }
+
+ rc = qpnp_flash_led_init_settings(led);
+ if (rc) {
+ dev_err(&pdev->dev, "Failed to initialize flash LED\n");
+ return rc;
+ }
+
+ rc = qpnp_get_pmic_revid(led);
+ if (rc)
+ return rc;
+
+ temp = NULL;
+ while ((temp = of_get_next_child(node, temp)))
+ num_leds++;
+
+ if (!num_leds)
+ return -ECHILD;
+
+ led->flash_node = devm_kzalloc(&pdev->dev,
+ (sizeof(struct flash_node_data) * num_leds),
+ GFP_KERNEL);
+ if (!led->flash_node)
+ return -ENOMEM;
+
+ mutex_init(&led->flash_led_lock);
+
+ led->ordered_workq = alloc_ordered_workqueue("flash_led_workqueue", 0);
+ if (!led->ordered_workq) {
+ dev_err(&pdev->dev, "Failed to allocate ordered workqueue\n");
+ return -ENOMEM;
+ }
+
+ for_each_child_of_node(node, temp) {
+ led->flash_node[i].cdev.brightness_set =
+ qpnp_flash_led_brightness_set;
+ led->flash_node[i].cdev.brightness_get =
+ qpnp_flash_led_brightness_get;
+ led->flash_node[i].pdev = pdev;
+
+ INIT_WORK(&led->flash_node[i].work, qpnp_flash_led_work);
+ rc = of_property_read_string(temp, "qcom,led-name",
+ &led->flash_node[i].cdev.name);
+ if (rc < 0) {
+ dev_err(&led->pdev->dev,
+ "Unable to read flash name\n");
+ return rc;
+ }
+
+ rc = of_property_read_string(temp, "qcom,default-led-trigger",
+ &led->flash_node[i].cdev.default_trigger);
+ if (rc < 0) {
+ dev_err(&led->pdev->dev,
+ "Unable to read trigger name\n");
+ return rc;
+ }
+
+ rc = of_property_read_u32(temp, "qcom,max-current", &val);
+ if (!rc) {
+ if (val < FLASH_LED_MIN_CURRENT_MA)
+ val = FLASH_LED_MIN_CURRENT_MA;
+ led->flash_node[i].max_current = (u16)val;
+ led->flash_node[i].cdev.max_brightness = val;
+ } else {
+ dev_err(&led->pdev->dev,
+ "Unable to read max current\n");
+ return rc;
+ }
+ rc = led_classdev_register(&pdev->dev,
+ &led->flash_node[i].cdev);
+ if (rc) {
+ dev_err(&pdev->dev, "Unable to register led\n");
+ goto error_led_register;
+ }
+
+ led->flash_node[i].cdev.dev->of_node = temp;
+
+ rc = qpnp_flash_led_parse_each_led_dt(led, &led->flash_node[i]);
+ if (rc) {
+ dev_err(&pdev->dev,
+ "Failed to parse config for each LED\n");
+ goto error_led_register;
+ }
+
+ if (led->flash_node[i].num_regulators) {
+ rc = flash_regulator_parse_dt(led, &led->flash_node[i]);
+ if (rc) {
+ dev_err(&pdev->dev,
+ "Unable to parse regulator data\n");
+ goto error_led_register;
+ }
+
+ rc = flash_regulator_setup(led, &led->flash_node[i],
+ true);
+ if (rc) {
+ dev_err(&pdev->dev,
+ "Unable to set up regulator\n");
+ goto error_led_register;
+ }
+ }
+
+ for (j = 0; j < ARRAY_SIZE(qpnp_flash_led_attrs); j++) {
+ rc = sysfs_create_file(
+ &led->flash_node[i].cdev.dev->kobj,
+ &qpnp_flash_led_attrs[j].attr);
+ if (rc)
+ goto error_led_register;
+ }
+
+ i++;
+ }
+
+ led->num_leds = i;
+
+ root = debugfs_create_dir("flashLED", NULL);
+ if (IS_ERR_OR_NULL(root)) {
+ pr_err("Error creating top level directory err=%ld\n",
+ (long)root);
+ if (PTR_ERR(root) == -ENODEV)
+ pr_err("debugfs is not enabled in kernel\n");
+ goto error_led_debugfs;
+ }
+
+ led->dbgfs_root = root;
+ file = debugfs_create_file("enable_debug", 0600, root, led,
+ &flash_led_dfs_dbg_feature_fops);
+ if (!file) {
+ pr_err("error creating 'enable_debug' entry\n");
+ goto error_led_debugfs;
+ }
+
+ file = debugfs_create_file("latched", 0600, root, led,
+ &flash_led_dfs_latched_reg_fops);
+ if (!file) {
+ pr_err("error creating 'latched' entry\n");
+ goto error_led_debugfs;
+ }
+
+ file = debugfs_create_file("strobe", 0600, root, led,
+ &flash_led_dfs_strobe_reg_fops);
+ if (!file) {
+ pr_err("error creating 'strobe' entry\n");
+ goto error_led_debugfs;
+ }
+
+ dev_set_drvdata(&pdev->dev, led);
+
+ return 0;
+
+error_led_debugfs:
+ i = led->num_leds - 1;
+ j = ARRAY_SIZE(qpnp_flash_led_attrs) - 1;
+error_led_register:
+ for (; i >= 0; i--) {
+ for (; j >= 0; j--)
+ sysfs_remove_file(&led->flash_node[i].cdev.dev->kobj,
+ &qpnp_flash_led_attrs[j].attr);
+ j = ARRAY_SIZE(qpnp_flash_led_attrs) - 1;
+ led_classdev_unregister(&led->flash_node[i].cdev);
+ }
+ debugfs_remove_recursive(root);
+ mutex_destroy(&led->flash_led_lock);
+ destroy_workqueue(led->ordered_workq);
+
+ return rc;
+}
+
+static int qpnp_flash_led_remove(struct platform_device *pdev)
+{
+ struct qpnp_flash_led *led = dev_get_drvdata(&pdev->dev);
+ int i, j;
+
+ for (i = led->num_leds - 1; i >= 0; i--) {
+ if (led->flash_node[i].reg_data) {
+ if (led->flash_node[i].flash_on)
+ flash_regulator_enable(led,
+ &led->flash_node[i], false);
+ flash_regulator_setup(led, &led->flash_node[i],
+ false);
+ }
+ for (j = 0; j < ARRAY_SIZE(qpnp_flash_led_attrs); j++)
+ sysfs_remove_file(&led->flash_node[i].cdev.dev->kobj,
+ &qpnp_flash_led_attrs[j].attr);
+ led_classdev_unregister(&led->flash_node[i].cdev);
+ }
+ debugfs_remove_recursive(led->dbgfs_root);
+ mutex_destroy(&led->flash_led_lock);
+ destroy_workqueue(led->ordered_workq);
+
+ return 0;
+}
+
+static const struct of_device_id spmi_match_table[] = {
+ { .compatible = "qcom,qpnp-flash-led",},
+ { },
+};
+
+static struct platform_driver qpnp_flash_led_driver = {
+ .driver = {
+ .name = "qcom,qpnp-flash-led",
+ .of_match_table = spmi_match_table,
+ },
+ .probe = qpnp_flash_led_probe,
+ .remove = qpnp_flash_led_remove,
+};
+
+static int __init qpnp_flash_led_init(void)
+{
+ return platform_driver_register(&qpnp_flash_led_driver);
+}
+late_initcall(qpnp_flash_led_init);
+
+static void __exit qpnp_flash_led_exit(void)
+{
+ platform_driver_unregister(&qpnp_flash_led_driver);
+}
+module_exit(qpnp_flash_led_exit);
+
+MODULE_DESCRIPTION("QPNP Flash LED driver");
+MODULE_LICENSE("GPL v2");
+MODULE_ALIAS("leds:leds-qpnp-flash");
diff --git a/drivers/leds/leds-qpnp-wled.c b/drivers/leds/leds-qpnp-wled.c
index 98ca29e..29e09c9 100644
--- a/drivers/leds/leds-qpnp-wled.c
+++ b/drivers/leds/leds-qpnp-wled.c
@@ -2215,7 +2215,8 @@ static int qpnp_wled_parse_dt(struct qpnp_wled *wled)
if (wled->pmic_rev_id->pmic_subtype == PMI8998_SUBTYPE ||
wled->pmic_rev_id->pmic_subtype == PM660L_SUBTYPE) {
- if (wled->pmic_rev_id->rev4 == PMI8998_V2P0_REV4)
+ if (wled->pmic_rev_id->pmic_subtype == PMI8998_SUBTYPE &&
+ wled->pmic_rev_id->rev4 == PMI8998_V2P0_REV4)
wled->lcd_auto_pfm_en = false;
else
wled->lcd_auto_pfm_en = true;
diff --git a/drivers/media/i2c/adv7604.c b/drivers/media/i2c/adv7604.c
index 4003831..7b1935a 100644
--- a/drivers/media/i2c/adv7604.c
+++ b/drivers/media/i2c/adv7604.c
@@ -3118,6 +3118,9 @@ static int adv76xx_parse_dt(struct adv76xx_state *state)
state->pdata.blank_data = 1;
state->pdata.op_format_mode_sel = ADV7604_OP_FORMAT_MODE0;
state->pdata.bus_order = ADV7604_BUS_ORDER_RGB;
+ state->pdata.dr_str_data = ADV76XX_DR_STR_MEDIUM_HIGH;
+ state->pdata.dr_str_clk = ADV76XX_DR_STR_MEDIUM_HIGH;
+ state->pdata.dr_str_sync = ADV76XX_DR_STR_MEDIUM_HIGH;
return 0;
}
diff --git a/drivers/media/pci/bt8xx/dvb-bt8xx.c b/drivers/media/pci/bt8xx/dvb-bt8xx.c
index e69d338..ae550a1 100644
--- a/drivers/media/pci/bt8xx/dvb-bt8xx.c
+++ b/drivers/media/pci/bt8xx/dvb-bt8xx.c
@@ -680,6 +680,7 @@ static void frontend_init(struct dvb_bt8xx_card *card, u32 type)
/* DST is not a frontend, attaching the ASIC */
if (dvb_attach(dst_attach, state, &card->dvb_adapter) == NULL) {
pr_err("%s: Could not find a Twinhan DST\n", __func__);
+ kfree(state);
break;
}
/* Attach other DST peripherals if any */
diff --git a/drivers/media/platform/exynos4-is/fimc-is.c b/drivers/media/platform/exynos4-is/fimc-is.c
index 518ad34..7f92144 100644
--- a/drivers/media/platform/exynos4-is/fimc-is.c
+++ b/drivers/media/platform/exynos4-is/fimc-is.c
@@ -825,12 +825,13 @@ static int fimc_is_probe(struct platform_device *pdev)
is->irq = irq_of_parse_and_map(dev->of_node, 0);
if (!is->irq) {
dev_err(dev, "no irq found\n");
- return -EINVAL;
+ ret = -EINVAL;
+ goto err_iounmap;
}
ret = fimc_is_get_clocks(is);
if (ret < 0)
- return ret;
+ goto err_iounmap;
platform_set_drvdata(pdev, is);
@@ -891,6 +892,8 @@ static int fimc_is_probe(struct platform_device *pdev)
free_irq(is->irq, is);
err_clk:
fimc_is_put_clocks(is);
+err_iounmap:
+ iounmap(is->pmu_regs);
return ret;
}
@@ -947,6 +950,7 @@ static int fimc_is_remove(struct platform_device *pdev)
fimc_is_unregister_subdevs(is);
vb2_dma_contig_clear_max_seg_size(dev);
fimc_is_put_clocks(is);
+ iounmap(is->pmu_regs);
fimc_is_debugfs_remove(is);
release_firmware(is->fw.f_w);
fimc_is_free_cpu_memory(is);
diff --git a/drivers/media/platform/msm/camera/Makefile b/drivers/media/platform/msm/camera/Makefile
index 48fa1c0..9e0aee9 100644
--- a/drivers/media/platform/msm/camera/Makefile
+++ b/drivers/media/platform/msm/camera/Makefile
@@ -10,3 +10,4 @@
obj-$(CONFIG_SPECTRA_CAMERA) += cam_icp/
obj-$(CONFIG_SPECTRA_CAMERA) += cam_jpeg/
obj-$(CONFIG_SPECTRA_CAMERA) += cam_fd/
+obj-$(CONFIG_SPECTRA_CAMERA) += cam_lrme/
diff --git a/drivers/media/platform/msm/camera/cam_cdm/cam_cdm_core_common.c b/drivers/media/platform/msm/camera/cam_cdm/cam_cdm_core_common.c
index 3fbb3f0..6d699cf 100644
--- a/drivers/media/platform/msm/camera/cam_cdm/cam_cdm_core_common.c
+++ b/drivers/media/platform/msm/camera/cam_cdm/cam_cdm_core_common.c
@@ -67,11 +67,15 @@ bool cam_cdm_set_cam_hw_version(
return false;
}
-void cam_cdm_cpas_cb(uint32_t client_handle, void *userdata,
- enum cam_camnoc_irq_type evt_type, uint32_t evt_data)
+bool cam_cdm_cpas_cb(uint32_t client_handle, void *userdata,
+ struct cam_cpas_irq_data *irq_data)
{
- CAM_ERR(CAM_CDM, "CPAS error callback type=%d with data=%x", evt_type,
- evt_data);
+ if (!irq_data)
+ return false;
+
+ CAM_DBG(CAM_CDM, "CPAS error callback type=%d", irq_data->irq_type);
+
+ return false;
}
struct cam_cdm_utils_ops *cam_cdm_get_ops(
diff --git a/drivers/media/platform/msm/camera/cam_cdm/cam_cdm_core_common.h b/drivers/media/platform/msm/camera/cam_cdm/cam_cdm_core_common.h
index fa3ae04..497832b 100644
--- a/drivers/media/platform/msm/camera/cam_cdm/cam_cdm_core_common.h
+++ b/drivers/media/platform/msm/camera/cam_cdm/cam_cdm_core_common.h
@@ -32,8 +32,8 @@ int cam_cdm_process_cmd(void *hw_priv, uint32_t cmd, void *cmd_args,
uint32_t arg_size);
bool cam_cdm_set_cam_hw_version(
uint32_t ver, struct cam_hw_version *cam_version);
-void cam_cdm_cpas_cb(uint32_t client_handle, void *userdata,
- enum cam_camnoc_irq_type evt_type, uint32_t evt_data);
+bool cam_cdm_cpas_cb(uint32_t client_handle, void *userdata,
+ struct cam_cpas_irq_data *irq_data);
struct cam_cdm_utils_ops *cam_cdm_get_ops(
uint32_t ver, struct cam_hw_version *cam_version, bool by_cam_version);
int cam_virtual_cdm_submit_bl(struct cam_hw_info *cdm_hw,
diff --git a/drivers/media/platform/msm/camera/cam_core/cam_context.c b/drivers/media/platform/msm/camera/cam_core/cam_context.c
index d039d75..84402e4 100644
--- a/drivers/media/platform/msm/camera/cam_core/cam_context.c
+++ b/drivers/media/platform/msm/camera/cam_core/cam_context.c
@@ -134,8 +134,8 @@ int cam_context_handle_crm_unlink(struct cam_context *ctx,
rc = ctx->state_machine[ctx->state].crm_ops.unlink(
ctx, unlink);
} else {
- CAM_ERR(CAM_CORE, "No crm unlink in dev %d, state %d",
- ctx->dev_hdl, ctx->state);
+ CAM_ERR(CAM_CORE, "No crm unlink in dev %d, name %s, state %d",
+ ctx->dev_hdl, ctx->dev_name, ctx->state);
rc = -EPROTO;
}
mutex_unlock(&ctx->ctx_mutex);
diff --git a/drivers/media/platform/msm/camera/cam_core/cam_context_utils.c b/drivers/media/platform/msm/camera/cam_core/cam_context_utils.c
index f8c0692..6b872b9 100644
--- a/drivers/media/platform/msm/camera/cam_core/cam_context_utils.c
+++ b/drivers/media/platform/msm/camera/cam_core/cam_context_utils.c
@@ -178,6 +178,7 @@ static void cam_context_sync_callback(int32_t sync_obj, int status, void *data)
req->ctx = NULL;
req->flushed = 0;
spin_lock(&ctx->lock);
+ list_del_init(&req->list);
list_add_tail(&req->list, &ctx->free_req_list);
spin_unlock(&ctx->lock);
}
@@ -200,7 +201,6 @@ int32_t cam_context_release_dev_to_hw(struct cam_context *ctx,
return -EINVAL;
}
- cam_context_stop_dev_to_hw(ctx);
arg.ctxt_to_hw_map = ctx->ctxt_to_hw_map;
arg.active_req = false;
@@ -501,11 +501,10 @@ int32_t cam_context_stop_dev_to_hw(struct cam_context *ctx)
mutex_unlock(&ctx->sync_mutex);
/* stop hw first */
- if (ctx->ctxt_to_hw_map) {
+ if (ctx->hw_mgr_intf->hw_stop) {
stop.ctxt_to_hw_map = ctx->ctxt_to_hw_map;
- if (ctx->hw_mgr_intf->hw_stop)
- ctx->hw_mgr_intf->hw_stop(ctx->hw_mgr_intf->hw_mgr_priv,
- &stop);
+ ctx->hw_mgr_intf->hw_stop(ctx->hw_mgr_intf->hw_mgr_priv,
+ &stop);
}
/*
diff --git a/drivers/media/platform/msm/camera/cam_cpas/cam_cpas_hw.c b/drivers/media/platform/msm/camera/cam_cpas/cam_cpas_hw.c
index fc84d9d..f1dfc7c 100644
--- a/drivers/media/platform/msm/camera/cam_cpas/cam_cpas_hw.c
+++ b/drivers/media/platform/msm/camera/cam_cpas/cam_cpas_hw.c
@@ -596,19 +596,29 @@ static int cam_cpas_util_apply_client_axi_vote(
}
static int cam_cpas_hw_update_axi_vote(struct cam_hw_info *cpas_hw,
- uint32_t client_handle, struct cam_axi_vote *axi_vote)
+ uint32_t client_handle, struct cam_axi_vote *client_axi_vote)
{
+ struct cam_axi_vote axi_vote;
struct cam_cpas *cpas_core = (struct cam_cpas *) cpas_hw->core_info;
uint32_t client_indx = CAM_CPAS_GET_CLIENT_IDX(client_handle);
int rc = 0;
- if (!axi_vote || ((axi_vote->compressed_bw == 0) &&
- (axi_vote->uncompressed_bw == 0))) {
- CAM_ERR(CAM_CPAS, "Invalid vote, client_handle=%d",
+ if (!client_axi_vote) {
+ CAM_ERR(CAM_CPAS, "Invalid arg client_handle=%d",
client_handle);
return -EINVAL;
}
+ axi_vote = *client_axi_vote;
+
+ if ((axi_vote.compressed_bw == 0) &&
+ (axi_vote.uncompressed_bw == 0)) {
+ CAM_DBG(CAM_CPAS, "0 vote from client_handle=%d",
+ client_handle);
+ axi_vote.compressed_bw = CAM_CPAS_DEFAULT_AXI_BW;
+ axi_vote.uncompressed_bw = CAM_CPAS_DEFAULT_AXI_BW;
+ }
+
if (!CAM_CPAS_CLIENT_VALID(client_indx))
return -EINVAL;
@@ -622,12 +632,12 @@ static int cam_cpas_hw_update_axi_vote(struct cam_hw_info *cpas_hw,
CAM_DBG(CAM_CPAS,
"Client[%d] Requested compressed[%llu], uncompressed[%llu]",
- client_indx, axi_vote->compressed_bw,
- axi_vote->uncompressed_bw);
+ client_indx, axi_vote.compressed_bw,
+ axi_vote.uncompressed_bw);
rc = cam_cpas_util_apply_client_axi_vote(cpas_core,
cpas_hw->soc_info.soc_private,
- cpas_core->cpas_client[client_indx], axi_vote);
+ cpas_core->cpas_client[client_indx], &axi_vote);
unlock_client:
mutex_unlock(&cpas_core->client_mutex[client_indx]);
@@ -742,17 +752,27 @@ static int cam_cpas_util_apply_client_ahb_vote(struct cam_hw_info *cpas_hw,
}
static int cam_cpas_hw_update_ahb_vote(struct cam_hw_info *cpas_hw,
- uint32_t client_handle, struct cam_ahb_vote *ahb_vote)
+ uint32_t client_handle, struct cam_ahb_vote *client_ahb_vote)
{
+ struct cam_ahb_vote ahb_vote;
struct cam_cpas *cpas_core = (struct cam_cpas *) cpas_hw->core_info;
uint32_t client_indx = CAM_CPAS_GET_CLIENT_IDX(client_handle);
int rc = 0;
- if (!ahb_vote || (ahb_vote->vote.level == 0)) {
- CAM_ERR(CAM_CPAS, "Invalid AHB vote, %pK", ahb_vote);
+ if (!client_ahb_vote) {
+ CAM_ERR(CAM_CPAS, "Invalid input arg");
return -EINVAL;
}
+ ahb_vote = *client_ahb_vote;
+
+ if (ahb_vote.vote.level == 0) {
+ CAM_DBG(CAM_CPAS, "0 ahb vote from client %d",
+ client_handle);
+ ahb_vote.type = CAM_VOTE_ABSOLUTE;
+ ahb_vote.vote.level = CAM_SVS_VOTE;
+ }
+
if (!CAM_CPAS_CLIENT_VALID(client_indx))
return -EINVAL;
@@ -766,12 +786,12 @@ static int cam_cpas_hw_update_ahb_vote(struct cam_hw_info *cpas_hw,
CAM_DBG(CAM_CPAS,
"client[%d] : type[%d], level[%d], freq[%ld], applied[%d]",
- client_indx, ahb_vote->type, ahb_vote->vote.level,
- ahb_vote->vote.freq,
+ client_indx, ahb_vote.type, ahb_vote.vote.level,
+ ahb_vote.vote.freq,
cpas_core->cpas_client[client_indx]->ahb_level);
rc = cam_cpas_util_apply_client_ahb_vote(cpas_hw,
- cpas_core->cpas_client[client_indx], ahb_vote, NULL);
+ cpas_core->cpas_client[client_indx], &ahb_vote, NULL);
unlock_client:
mutex_unlock(&cpas_core->client_mutex[client_indx]);
diff --git a/drivers/media/platform/msm/camera/cam_cpas/cpas_top/cam_cpastop_hw.c b/drivers/media/platform/msm/camera/cam_cpas/cpas_top/cam_cpastop_hw.c
index 4b0cc74..0e5ce85 100644
--- a/drivers/media/platform/msm/camera/cam_cpas/cpas_top/cam_cpastop_hw.c
+++ b/drivers/media/platform/msm/camera/cam_cpas/cpas_top/cam_cpastop_hw.c
@@ -24,6 +24,18 @@
struct cam_camnoc_info *camnoc_info;
+#define CAMNOC_SLAVE_MAX_ERR_CODE 7
+static const char * const camnoc_salve_err_code[] = {
+ "Target Error", /* err code 0 */
+ "Address decode error", /* err code 1 */
+ "Unsupported request", /* err code 2 */
+ "Disconnected target", /* err code 3 */
+ "Security violation", /* err code 4 */
+ "Hidden security violation", /* err code 5 */
+ "Timeout Error", /* err code 6 */
+ "Unknown Error", /* unknown err code */
+};
+
static int cam_cpastop_get_hw_info(struct cam_hw_info *cpas_hw,
struct cam_cpas_hw_caps *hw_caps)
{
@@ -106,91 +118,155 @@ static int cam_cpastop_setup_regbase_indices(struct cam_hw_soc_info *soc_info,
}
static int cam_cpastop_handle_errlogger(struct cam_cpas *cpas_core,
- struct cam_hw_soc_info *soc_info)
+ struct cam_hw_soc_info *soc_info,
+ struct cam_camnoc_irq_slave_err_data *slave_err)
{
- uint32_t reg_value[4];
- int i;
- int size = camnoc_info->error_logger_size;
int camnoc_index = cpas_core->regbase_index[CAM_CPAS_REG_CAMNOC];
+ int err_code_index = 0;
- for (i = 0; (i + 3) < size; i = i + 4) {
- reg_value[0] = cam_io_r_mb(
- soc_info->reg_map[camnoc_index].mem_base +
- camnoc_info->error_logger[i]);
- reg_value[1] = cam_io_r_mb(
- soc_info->reg_map[camnoc_index].mem_base +
- camnoc_info->error_logger[i + 1]);
- reg_value[2] = cam_io_r_mb(
- soc_info->reg_map[camnoc_index].mem_base +
- camnoc_info->error_logger[i + 2]);
- reg_value[3] = cam_io_r_mb(
- soc_info->reg_map[camnoc_index].mem_base +
- camnoc_info->error_logger[i + 3]);
- CAM_ERR(CAM_CPAS,
- "offset[0x%x] values [0x%x] [0x%x] [0x%x] [0x%x]",
- camnoc_info->error_logger[i], reg_value[0],
- reg_value[1], reg_value[2], reg_value[3]);
+ if (!camnoc_info->err_logger) {
+ CAM_ERR_RATE_LIMIT(CAM_CPAS, "Invalid err logger info");
+ return -EINVAL;
}
- if ((i + 2) < size) {
- reg_value[0] = cam_io_r_mb(
- soc_info->reg_map[camnoc_index].mem_base +
- camnoc_info->error_logger[i]);
- reg_value[1] = cam_io_r_mb(
- soc_info->reg_map[camnoc_index].mem_base +
- camnoc_info->error_logger[i + 1]);
- reg_value[2] = cam_io_r_mb(
- soc_info->reg_map[camnoc_index].mem_base +
- camnoc_info->error_logger[i + 2]);
- CAM_ERR(CAM_CPAS, "offset[0x%x] values [0x%x] [0x%x] [0x%x]",
- camnoc_info->error_logger[i], reg_value[0],
- reg_value[1], reg_value[2]);
- i = i + 3;
- }
+ slave_err->mainctrl.value = cam_io_r_mb(
+ soc_info->reg_map[camnoc_index].mem_base +
+ camnoc_info->err_logger->mainctrl);
- if ((i + 1) < size) {
- reg_value[0] = cam_io_r_mb(
- soc_info->reg_map[camnoc_index].mem_base +
- camnoc_info->error_logger[i]);
- reg_value[1] = cam_io_r_mb(
- soc_info->reg_map[camnoc_index].mem_base +
- camnoc_info->error_logger[i + 1]);
- CAM_ERR(CAM_CPAS, "offset[0x%x] values [0x%x] [0x%x]",
- camnoc_info->error_logger[i], reg_value[0],
- reg_value[1]);
- i = i + 2;
- }
+ slave_err->errvld.value = cam_io_r_mb(
+ soc_info->reg_map[camnoc_index].mem_base +
+ camnoc_info->err_logger->errvld);
- if (i < size) {
- reg_value[0] = cam_io_r_mb(
- soc_info->reg_map[camnoc_index].mem_base +
- camnoc_info->error_logger[i]);
- CAM_ERR(CAM_CPAS, "offset[0x%x] values [0x%x]",
- camnoc_info->error_logger[i], reg_value[0]);
- }
+ slave_err->errlog0_low.value = cam_io_r_mb(
+ soc_info->reg_map[camnoc_index].mem_base +
+ camnoc_info->err_logger->errlog0_low);
+
+ slave_err->errlog0_high.value = cam_io_r_mb(
+ soc_info->reg_map[camnoc_index].mem_base +
+ camnoc_info->err_logger->errlog0_high);
+
+ slave_err->errlog1_low.value = cam_io_r_mb(
+ soc_info->reg_map[camnoc_index].mem_base +
+ camnoc_info->err_logger->errlog1_low);
+
+ slave_err->errlog1_high.value = cam_io_r_mb(
+ soc_info->reg_map[camnoc_index].mem_base +
+ camnoc_info->err_logger->errlog1_high);
+
+ slave_err->errlog2_low.value = cam_io_r_mb(
+ soc_info->reg_map[camnoc_index].mem_base +
+ camnoc_info->err_logger->errlog2_low);
+
+ slave_err->errlog2_high.value = cam_io_r_mb(
+ soc_info->reg_map[camnoc_index].mem_base +
+ camnoc_info->err_logger->errlog2_high);
+
+ slave_err->errlog3_low.value = cam_io_r_mb(
+ soc_info->reg_map[camnoc_index].mem_base +
+ camnoc_info->err_logger->errlog3_low);
+
+ slave_err->errlog3_high.value = cam_io_r_mb(
+ soc_info->reg_map[camnoc_index].mem_base +
+ camnoc_info->err_logger->errlog3_high);
+
+ CAM_ERR_RATE_LIMIT(CAM_CPAS,
+ "Possible memory configuration issue, fault at SMMU raised as CAMNOC SLAVE_IRQ");
+
+ CAM_ERR_RATE_LIMIT(CAM_CPAS,
+ "mainctrl[0x%x 0x%x] errvld[0x%x 0x%x] stall_en=%d, fault_en=%d, err_vld=%d",
+ camnoc_info->err_logger->mainctrl,
+ slave_err->mainctrl.value,
+ camnoc_info->err_logger->errvld,
+ slave_err->errvld.value,
+ slave_err->mainctrl.stall_en,
+ slave_err->mainctrl.fault_en,
+ slave_err->errvld.err_vld);
+
+ err_code_index = slave_err->errlog0_low.err_code;
+ if (err_code_index > CAMNOC_SLAVE_MAX_ERR_CODE)
+ err_code_index = CAMNOC_SLAVE_MAX_ERR_CODE;
+
+ CAM_ERR_RATE_LIMIT(CAM_CPAS,
+ "errlog0 low[0x%x 0x%x] high[0x%x 0x%x] loginfo_vld=%d, word_error=%d, non_secure=%d, device=%d, opc=%d, err_code=%d(%s) sizef=%d, addr_space=%d, len1=%d",
+ camnoc_info->err_logger->errlog0_low,
+ slave_err->errlog0_low.value,
+ camnoc_info->err_logger->errlog0_high,
+ slave_err->errlog0_high.value,
+ slave_err->errlog0_low.loginfo_vld,
+ slave_err->errlog0_low.word_error,
+ slave_err->errlog0_low.non_secure,
+ slave_err->errlog0_low.device,
+ slave_err->errlog0_low.opc,
+ slave_err->errlog0_low.err_code,
+ camnoc_salve_err_code[err_code_index],
+ slave_err->errlog0_low.sizef,
+ slave_err->errlog0_low.addr_space,
+ slave_err->errlog0_high.len1);
+
+ CAM_ERR_RATE_LIMIT(CAM_CPAS,
+ "errlog1_low[0x%x 0x%x] errlog1_high[0x%x 0x%x] errlog2_low[0x%x 0x%x] errlog2_high[0x%x 0x%x] errlog3_low[0x%x 0x%x] errlog3_high[0x%x 0x%x]",
+ camnoc_info->err_logger->errlog1_low,
+ slave_err->errlog1_low.value,
+ camnoc_info->err_logger->errlog1_high,
+ slave_err->errlog1_high.value,
+ camnoc_info->err_logger->errlog2_low,
+ slave_err->errlog2_low.value,
+ camnoc_info->err_logger->errlog2_high,
+ slave_err->errlog2_high.value,
+ camnoc_info->err_logger->errlog3_low,
+ slave_err->errlog3_low.value,
+ camnoc_info->err_logger->errlog3_high,
+ slave_err->errlog3_high.value);
return 0;
}
-static int cam_cpastop_handle_ubwc_err(struct cam_cpas *cpas_core,
- struct cam_hw_soc_info *soc_info, int i)
+static int cam_cpastop_handle_ubwc_enc_err(struct cam_cpas *cpas_core,
+ struct cam_hw_soc_info *soc_info, int i,
+ struct cam_camnoc_irq_ubwc_enc_data *enc_err)
{
- uint32_t reg_value;
int camnoc_index = cpas_core->regbase_index[CAM_CPAS_REG_CAMNOC];
- reg_value = cam_io_r_mb(soc_info->reg_map[camnoc_index].mem_base +
+ enc_err->encerr_status.value =
+ cam_io_r_mb(soc_info->reg_map[camnoc_index].mem_base +
camnoc_info->irq_err[i].err_status.offset);
- CAM_ERR(CAM_CPAS,
- "Dumping ubwc error status [%d]: offset[0x%x] value[0x%x]",
- i, camnoc_info->irq_err[i].err_status.offset, reg_value);
+ /* Let clients handle the UBWC errors */
+ CAM_DBG(CAM_CPAS,
+ "ubwc enc err [%d]: offset[0x%x] value[0x%x]",
+ i, camnoc_info->irq_err[i].err_status.offset,
+ enc_err->encerr_status.value);
- return reg_value;
+ return 0;
}
-static int cam_cpastop_handle_ahb_timeout_err(struct cam_hw_info *cpas_hw)
+static int cam_cpastop_handle_ubwc_dec_err(struct cam_cpas *cpas_core,
+ struct cam_hw_soc_info *soc_info, int i,
+ struct cam_camnoc_irq_ubwc_dec_data *dec_err)
{
- CAM_ERR(CAM_CPAS, "ahb timout error");
+ int camnoc_index = cpas_core->regbase_index[CAM_CPAS_REG_CAMNOC];
+
+ dec_err->decerr_status.value =
+ cam_io_r_mb(soc_info->reg_map[camnoc_index].mem_base +
+ camnoc_info->irq_err[i].err_status.offset);
+
+ /* Let clients handle the UBWC errors */
+ CAM_DBG(CAM_CPAS,
+ "ubwc dec err status [%d]: offset[0x%x] value[0x%x] thr_err=%d, fcl_err=%d, len_md_err=%d, format_err=%d",
+ i, camnoc_info->irq_err[i].err_status.offset,
+ dec_err->decerr_status.value,
+ dec_err->decerr_status.thr_err,
+ dec_err->decerr_status.fcl_err,
+ dec_err->decerr_status.len_md_err,
+ dec_err->decerr_status.format_err);
+
+ return 0;
+}
+
+static int cam_cpastop_handle_ahb_timeout_err(struct cam_hw_info *cpas_hw,
+ struct cam_camnoc_irq_ahb_timeout_data *ahb_err)
+{
+	CAM_ERR_RATE_LIMIT(CAM_CPAS, "ahb timeout error");
return 0;
}
@@ -228,10 +304,11 @@ static int cam_cpastop_reset_irq(struct cam_hw_info *cpas_hw)
}
static void cam_cpastop_notify_clients(struct cam_cpas *cpas_core,
- enum cam_camnoc_hw_irq_type irq_type, uint32_t irq_data)
+ struct cam_cpas_irq_data *irq_data)
{
int i;
struct cam_cpas_client *cpas_client;
+ bool error_handled = false;
CAM_DBG(CAM_CPAS,
"Notify CB : num_clients=%d, registered=%d, started=%d",
@@ -243,13 +320,15 @@ static void cam_cpastop_notify_clients(struct cam_cpas *cpas_core,
cpas_client = cpas_core->cpas_client[i];
if (cpas_client->data.cam_cpas_client_cb) {
CAM_DBG(CAM_CPAS,
- "Calling client CB %d : %d 0x%x",
- i, irq_type, irq_data);
- cpas_client->data.cam_cpas_client_cb(
+ "Calling client CB %d : %d",
+ i, irq_data->irq_type);
+ error_handled =
+ cpas_client->data.cam_cpas_client_cb(
cpas_client->data.client_handle,
cpas_client->data.userdata,
- (enum cam_camnoc_irq_type)irq_type,
irq_data);
+ if (error_handled)
+ break;
}
}
}
@@ -263,7 +342,7 @@ static void cam_cpastop_work(struct work_struct *work)
struct cam_hw_soc_info *soc_info;
int i;
enum cam_camnoc_hw_irq_type irq_type;
- uint32_t irq_data;
+ struct cam_cpas_irq_data irq_data;
payload = container_of(work, struct cam_cpas_work_payload, work);
if (!payload) {
@@ -280,23 +359,30 @@ static void cam_cpastop_work(struct work_struct *work)
(camnoc_info->irq_err[i].enable)) {
irq_type = camnoc_info->irq_err[i].irq_type;
CAM_ERR(CAM_CPAS, "Error occurred, type=%d", irq_type);
- irq_data = 0;
+ memset(&irq_data, 0x0, sizeof(irq_data));
+ irq_data.irq_type = (enum cam_camnoc_irq_type)irq_type;
switch (irq_type) {
case CAM_CAMNOC_HW_IRQ_SLAVE_ERROR:
- irq_data = cam_cpastop_handle_errlogger(
- cpas_core, soc_info);
+ cam_cpastop_handle_errlogger(
+ cpas_core, soc_info,
+ &irq_data.u.slave_err);
break;
case CAM_CAMNOC_HW_IRQ_IFE02_UBWC_ENCODE_ERROR:
case CAM_CAMNOC_HW_IRQ_IFE13_UBWC_ENCODE_ERROR:
- case CAM_CAMNOC_HW_IRQ_IPE_BPS_UBWC_DECODE_ERROR:
case CAM_CAMNOC_HW_IRQ_IPE_BPS_UBWC_ENCODE_ERROR:
- irq_data = cam_cpastop_handle_ubwc_err(
- cpas_core, soc_info, i);
+ cam_cpastop_handle_ubwc_enc_err(
+ cpas_core, soc_info, i,
+ &irq_data.u.enc_err);
+ break;
+ case CAM_CAMNOC_HW_IRQ_IPE_BPS_UBWC_DECODE_ERROR:
+ cam_cpastop_handle_ubwc_dec_err(
+ cpas_core, soc_info, i,
+ &irq_data.u.dec_err);
break;
case CAM_CAMNOC_HW_IRQ_AHB_TIMEOUT:
- irq_data = cam_cpastop_handle_ahb_timeout_err(
- cpas_hw);
+ cam_cpastop_handle_ahb_timeout_err(
+ cpas_hw, &irq_data.u.ahb_err);
break;
case CAM_CAMNOC_HW_IRQ_CAMNOC_TEST:
CAM_DBG(CAM_CPAS, "TEST IRQ");
@@ -306,8 +392,7 @@ static void cam_cpastop_work(struct work_struct *work)
break;
}
- cam_cpastop_notify_clients(cpas_core, irq_type,
- irq_data);
+ cam_cpastop_notify_clients(cpas_core, &irq_data);
payload->irq_status &=
~camnoc_info->irq_err[i].sbm_port;
diff --git a/drivers/media/platform/msm/camera/cam_cpas/cpas_top/cam_cpastop_hw.h b/drivers/media/platform/msm/camera/cam_cpas/cpas_top/cam_cpastop_hw.h
index e3639a6..73f7e9b 100644
--- a/drivers/media/platform/msm/camera/cam_cpas/cpas_top/cam_cpastop_hw.h
+++ b/drivers/media/platform/msm/camera/cam_cpas/cpas_top/cam_cpastop_hw.h
@@ -173,6 +173,34 @@ struct cam_cpas_hw_errata_wa_list {
};
/**
+ * struct cam_camnoc_err_logger_info : CAMNOC error logger register offsets
+ *
+ * @mainctrl: Register offset for mainctrl
+ * @errvld: Register offset for errvld
+ * @errlog0_low: Register offset for errlog0_low
+ * @errlog0_high: Register offset for errlog0_high
+ * @errlog1_low: Register offset for errlog1_low
+ * @errlog1_high: Register offset for errlog1_high
+ * @errlog2_low: Register offset for errlog2_low
+ * @errlog2_high: Register offset for errlog2_high
+ * @errlog3_low: Register offset for errlog3_low
+ * @errlog3_high: Register offset for errlog3_high
+ *
+ */
+struct cam_camnoc_err_logger_info {
+ uint32_t mainctrl;
+ uint32_t errvld;
+ uint32_t errlog0_low;
+ uint32_t errlog0_high;
+ uint32_t errlog1_low;
+ uint32_t errlog1_high;
+ uint32_t errlog2_low;
+ uint32_t errlog2_high;
+ uint32_t errlog3_low;
+ uint32_t errlog3_high;
+};
+
+/**
* struct cam_camnoc_info : Overall CAMNOC settings info
*
 * @specific: Pointer to CAMNOC specific settings
@@ -180,8 +208,7 @@ struct cam_cpas_hw_errata_wa_list {
* @irq_sbm: Pointer to CAMNOC IRQ SBM settings
* @irq_err: Pointer to CAMNOC IRQ Error settings
* @irq_err_size: Array size of IRQ Error settings
- * @error_logger: Pointer to CAMNOC IRQ Error logger read registers
- * @error_logger_size: Array size of IRQ Error logger
+ * @err_logger: Pointer to CAMNOC IRQ Error logger read registers
* @errata_wa_list: HW Errata workaround info
*
*/
@@ -191,8 +218,7 @@ struct cam_camnoc_info {
struct cam_camnoc_irq_sbm *irq_sbm;
struct cam_camnoc_irq_err *irq_err;
int irq_err_size;
- uint32_t *error_logger;
- int error_logger_size;
+ struct cam_camnoc_err_logger_info *err_logger;
struct cam_cpas_hw_errata_wa_list *errata_wa_list;
};
diff --git a/drivers/media/platform/msm/camera/cam_cpas/cpas_top/cpastop100.h b/drivers/media/platform/msm/camera/cam_cpas/cpas_top/cpastop100.h
index b30cd05..2654b47 100644
--- a/drivers/media/platform/msm/camera/cam_cpas/cpas_top/cpastop100.h
+++ b/drivers/media/platform/msm/camera/cam_cpas/cpas_top/cpastop100.h
@@ -498,19 +498,17 @@ static struct cam_camnoc_specific
}
};
-uint32_t slave_error_logger[] = {
- 0x2700, /* ERRLOGGER_SWID_LOW */
- 0x2704, /* ERRLOGGER_SWID_HIGH */
- 0x2708, /* ERRLOGGER_MAINCTL_LOW */
- 0x2710, /* ERRLOGGER_ERRVLD_LOW */
- 0x2720, /* ERRLOGGER_ERRLOG0_LOW */
- 0x2724, /* ERRLOGGER_ERRLOG0_HIGH */
- 0x2728, /* ERRLOGGER_ERRLOG1_LOW */
- 0x272c, /* ERRLOGGER_ERRLOG1_HIGH */
- 0x2730, /* ERRLOGGER_ERRLOG2_LOW */
- 0x2734, /* ERRLOGGER_ERRLOG2_HIGH */
- 0x2738, /* ERRLOGGER_ERRLOG3_LOW */
- 0x273c, /* ERRLOGGER_ERRLOG3_HIGH */
+static struct cam_camnoc_err_logger_info cam170_cpas100_err_logger_offsets = {
+ .mainctrl = 0x2708, /* ERRLOGGER_MAINCTL_LOW */
+ .errvld = 0x2710, /* ERRLOGGER_ERRVLD_LOW */
+ .errlog0_low = 0x2720, /* ERRLOGGER_ERRLOG0_LOW */
+ .errlog0_high = 0x2724, /* ERRLOGGER_ERRLOG0_HIGH */
+ .errlog1_low = 0x2728, /* ERRLOGGER_ERRLOG1_LOW */
+ .errlog1_high = 0x272c, /* ERRLOGGER_ERRLOG1_HIGH */
+ .errlog2_low = 0x2730, /* ERRLOGGER_ERRLOG2_LOW */
+ .errlog2_high = 0x2734, /* ERRLOGGER_ERRLOG2_HIGH */
+ .errlog3_low = 0x2738, /* ERRLOGGER_ERRLOG3_LOW */
+ .errlog3_high = 0x273c, /* ERRLOGGER_ERRLOG3_HIGH */
};
static struct cam_cpas_hw_errata_wa_list cam170_cpas100_errata_wa_list = {
@@ -533,9 +531,7 @@ struct cam_camnoc_info cam170_cpas100_camnoc_info = {
.irq_err = &cam_cpas100_irq_err[0],
.irq_err_size = sizeof(cam_cpas100_irq_err) /
sizeof(cam_cpas100_irq_err[0]),
- .error_logger = &slave_error_logger[0],
- .error_logger_size = sizeof(slave_error_logger) /
- sizeof(slave_error_logger[0]),
+ .err_logger = &cam170_cpas100_err_logger_offsets,
.errata_wa_list = &cam170_cpas100_errata_wa_list,
};
diff --git a/drivers/media/platform/msm/camera/cam_cpas/cpas_top/cpastop_v170_110.h b/drivers/media/platform/msm/camera/cam_cpas/cpas_top/cpastop_v170_110.h
index b1aef1f..4418fb1 100644
--- a/drivers/media/platform/msm/camera/cam_cpas/cpas_top/cpastop_v170_110.h
+++ b/drivers/media/platform/msm/camera/cam_cpas/cpas_top/cpastop_v170_110.h
@@ -505,19 +505,17 @@ static struct cam_camnoc_specific
},
};
-static uint32_t cam_cpas110_slave_error_logger[] = {
- 0x2700, /* ERRLOGGER_SWID_LOW */
- 0x2704, /* ERRLOGGER_SWID_HIGH */
- 0x2708, /* ERRLOGGER_MAINCTL_LOW */
- 0x2710, /* ERRLOGGER_ERRVLD_LOW */
- 0x2720, /* ERRLOGGER_ERRLOG0_LOW */
- 0x2724, /* ERRLOGGER_ERRLOG0_HIGH */
- 0x2728, /* ERRLOGGER_ERRLOG1_LOW */
- 0x272c, /* ERRLOGGER_ERRLOG1_HIGH */
- 0x2730, /* ERRLOGGER_ERRLOG2_LOW */
- 0x2734, /* ERRLOGGER_ERRLOG2_HIGH */
- 0x2738, /* ERRLOGGER_ERRLOG3_LOW */
- 0x273c, /* ERRLOGGER_ERRLOG3_HIGH */
+static struct cam_camnoc_err_logger_info cam170_cpas110_err_logger_offsets = {
+ .mainctrl = 0x2708, /* ERRLOGGER_MAINCTL_LOW */
+ .errvld = 0x2710, /* ERRLOGGER_ERRVLD_LOW */
+ .errlog0_low = 0x2720, /* ERRLOGGER_ERRLOG0_LOW */
+ .errlog0_high = 0x2724, /* ERRLOGGER_ERRLOG0_HIGH */
+ .errlog1_low = 0x2728, /* ERRLOGGER_ERRLOG1_LOW */
+ .errlog1_high = 0x272c, /* ERRLOGGER_ERRLOG1_HIGH */
+ .errlog2_low = 0x2730, /* ERRLOGGER_ERRLOG2_LOW */
+ .errlog2_high = 0x2734, /* ERRLOGGER_ERRLOG2_HIGH */
+ .errlog3_low = 0x2738, /* ERRLOGGER_ERRLOG3_LOW */
+ .errlog3_high = 0x273c, /* ERRLOGGER_ERRLOG3_HIGH */
};
static struct cam_cpas_hw_errata_wa_list cam170_cpas110_errata_wa_list = {
@@ -540,9 +538,7 @@ static struct cam_camnoc_info cam170_cpas110_camnoc_info = {
.irq_err = &cam_cpas110_irq_err[0],
.irq_err_size = sizeof(cam_cpas110_irq_err) /
sizeof(cam_cpas110_irq_err[0]),
- .error_logger = &cam_cpas110_slave_error_logger[0],
- .error_logger_size = sizeof(cam_cpas110_slave_error_logger) /
- sizeof(cam_cpas110_slave_error_logger[0]),
+ .err_logger = &cam170_cpas110_err_logger_offsets,
.errata_wa_list = &cam170_cpas110_errata_wa_list,
};
diff --git a/drivers/media/platform/msm/camera/cam_cpas/include/cam_cpas_api.h b/drivers/media/platform/msm/camera/cam_cpas/include/cam_cpas_api.h
index e0da384..c844ef7 100644
--- a/drivers/media/platform/msm/camera/cam_cpas/include/cam_cpas_api.h
+++ b/drivers/media/platform/msm/camera/cam_cpas/include/cam_cpas_api.h
@@ -82,6 +82,183 @@ enum cam_camnoc_irq_type {
};
/**
+ * struct cam_camnoc_irq_slave_err_data : Data for Slave error.
+ *
+ * @mainctrl : Err logger mainctrl info
+ * @errvld : Err logger errvld info
+ * @errlog0_low : Err logger errlog0_low info
+ * @errlog0_high : Err logger errlog0_high info
+ * @errlog1_low : Err logger errlog1_low info
+ * @errlog1_high : Err logger errlog1_high info
+ * @errlog2_low : Err logger errlog2_low info
+ * @errlog2_high : Err logger errlog2_high info
+ * @errlog3_low : Err logger errlog3_low info
+ * @errlog3_high : Err logger errlog3_high info
+ *
+ */
+struct cam_camnoc_irq_slave_err_data {
+ union {
+ struct {
+ uint32_t stall_en : 1; /* bit 0 */
+ uint32_t fault_en : 1; /* bit 1 */
+ uint32_t rsv : 30; /* bits 2-31 */
+ };
+ uint32_t value;
+ } mainctrl;
+ union {
+ struct {
+ uint32_t err_vld : 1; /* bit 0 */
+ uint32_t rsv : 31; /* bits 1-31 */
+ };
+ uint32_t value;
+ } errvld;
+ union {
+ struct {
+ uint32_t loginfo_vld : 1; /* bit 0 */
+ uint32_t word_error : 1; /* bit 1 */
+ uint32_t non_secure : 1; /* bit 2 */
+ uint32_t device : 1; /* bit 3 */
+ uint32_t opc : 3; /* bits 4 - 6 */
+ uint32_t rsv0 : 1; /* bit 7 */
+ uint32_t err_code : 3; /* bits 8 - 10 */
+ uint32_t sizef : 3; /* bits 11 - 13 */
+ uint32_t rsv1 : 2; /* bits 14 - 15 */
+ uint32_t addr_space : 6; /* bits 16 - 21 */
+ uint32_t rsv2 : 10; /* bits 22 - 31 */
+ };
+ uint32_t value;
+ } errlog0_low;
+ union {
+ struct {
+ uint32_t len1 : 10; /* bits 0 - 9 */
+ uint32_t rsv : 22; /* bits 10 - 31 */
+ };
+ uint32_t value;
+ } errlog0_high;
+ union {
+ struct {
+ uint32_t path : 16; /* bits 0 - 15 */
+ uint32_t rsv : 16; /* bits 16 - 31 */
+ };
+ uint32_t value;
+ } errlog1_low;
+ union {
+ struct {
+ uint32_t extid : 18; /* bits 0 - 17 */
+ uint32_t rsv : 14; /* bits 18 - 31 */
+ };
+ uint32_t value;
+ } errlog1_high;
+ union {
+ struct {
+ uint32_t errlog2_lsb : 32; /* bits 0 - 31 */
+ };
+ uint32_t value;
+ } errlog2_low;
+ union {
+ struct {
+			uint32_t errlog2_msb : 16; /* bits 0 - 15 */
+ uint32_t rsv : 16; /* bits 16 - 31 */
+ };
+ uint32_t value;
+ } errlog2_high;
+ union {
+ struct {
+ uint32_t errlog3_lsb : 32; /* bits 0 - 31 */
+ };
+ uint32_t value;
+ } errlog3_low;
+ union {
+ struct {
+ uint32_t errlog3_msb : 32; /* bits 0 - 31 */
+ };
+ uint32_t value;
+ } errlog3_high;
+};
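
The slave error structure above relies on anonymous-struct/`value` unions so the driver can log both the raw register word and its decoded fields. A minimal standalone sketch of that decoding pattern (field names copied from `errlog0_low`; note that bit-field ordering is implementation-defined in C, so this matches the register layout only on the LSB-first layout used by GCC/Clang on little-endian targets, as in this driver):

```c
#include <stdint.h>

/* Standalone copy of the errlog0_low layout from
 * cam_camnoc_irq_slave_err_data, for illustration only. */
union errlog0_low {
	struct {
		uint32_t loginfo_vld : 1;  /* bit 0 */
		uint32_t word_error  : 1;  /* bit 1 */
		uint32_t non_secure  : 1;  /* bit 2 */
		uint32_t device      : 1;  /* bit 3 */
		uint32_t opc         : 3;  /* bits 4 - 6 */
		uint32_t rsv0        : 1;  /* bit 7 */
		uint32_t err_code    : 3;  /* bits 8 - 10 */
		uint32_t sizef       : 3;  /* bits 11 - 13 */
		uint32_t rsv1        : 2;  /* bits 14 - 15 */
		uint32_t addr_space  : 6;  /* bits 16 - 21 */
		uint32_t rsv2        : 10; /* bits 22 - 31 */
	};
	uint32_t value;
};

/* Assign the raw register word once; every field read is then just a
 * bit-field access, which is what the CAM_ERR_RATE_LIMIT dump exploits. */
static uint32_t errlog0_err_code(uint32_t raw)
{
	union errlog0_low r;

	r.value = raw;
	return r.err_code;
}
```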
+
+/**
+ * struct cam_camnoc_irq_ubwc_enc_data : Data for UBWC Encode error.
+ *
+ * @encerr_status : Encode error status
+ *
+ */
+struct cam_camnoc_irq_ubwc_enc_data {
+ union {
+ struct {
+ uint32_t encerrstatus : 3; /* bits 0 - 2 */
+ uint32_t rsv : 29; /* bits 3 - 31 */
+ };
+ uint32_t value;
+ } encerr_status;
+};
+
+/**
+ * struct cam_camnoc_irq_ubwc_dec_data : Data for UBWC Decode error.
+ *
+ * @decerr_status : Decoder error status
+ * @thr_err : Set to 1 if
+ *                  At least one of the bflc_len fields in the bit stream exceeds
+ * its threshold value. This error is possible only for
+ * RGBA1010102, TP10, and RGB565 formats
+ * @fcl_err : Set to 1 if
+ * Fast clear with a legal non-RGB format
+ * @len_md_err : Set to 1 if
+ * The calculated burst length does not match burst length
+ * specified by the metadata value
+ * @format_err : Set to 1 if
+ * Illegal format
+ *                  1. bad format: 2, 3, 6
+ * 2. For 32B MAL, metadata=6
+ * 3. For 32B MAL RGB565, Metadata != 0,1,7
+ * 4. For 64B MAL RGB565, metadata[3:1] == 1,2
+ *
+ */
+struct cam_camnoc_irq_ubwc_dec_data {
+ union {
+ struct {
+ uint32_t thr_err : 1; /* bit 0 */
+ uint32_t fcl_err : 1; /* bit 1 */
+ uint32_t len_md_err : 1; /* bit 2 */
+ uint32_t format_err : 1; /* bit 3 */
+ uint32_t rsv : 28; /* bits 4 - 31 */
+ };
+ uint32_t value;
+ } decerr_status;
+};
+
+struct cam_camnoc_irq_ahb_timeout_data {
+ uint32_t data;
+};
+
+/**
+ * struct cam_cpas_irq_data : CAMNOC IRQ data
+ *
+ * @irq_type : To identify the type of IRQ
+ * @u : Union of irq err data information
+ * @slave_err : Data for Slave error.
+ * Valid if type is CAM_CAMNOC_IRQ_SLAVE_ERROR
+ * @enc_err : Data for UBWC Encode error.
+ * Valid if type is one of below:
+ * CAM_CAMNOC_IRQ_IFE02_UBWC_ENCODE_ERROR
+ * CAM_CAMNOC_IRQ_IFE13_UBWC_ENCODE_ERROR
+ * CAM_CAMNOC_IRQ_IPE_BPS_UBWC_ENCODE_ERROR
+ * @dec_err : Data for UBWC Decode error.
+ * Valid if type is CAM_CAMNOC_IRQ_IPE_BPS_UBWC_DECODE_ERROR
+ * @ahb_err     : Data for AHB timeout error.
+ * Valid if type is CAM_CAMNOC_IRQ_AHB_TIMEOUT
+ *
+ */
+struct cam_cpas_irq_data {
+ enum cam_camnoc_irq_type irq_type;
+ union {
+ struct cam_camnoc_irq_slave_err_data slave_err;
+ struct cam_camnoc_irq_ubwc_enc_data enc_err;
+ struct cam_camnoc_irq_ubwc_dec_data dec_err;
+ struct cam_camnoc_irq_ahb_timeout_data ahb_err;
+ } u;
+};
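
The `bool` return of the new client callback is what drives the early-`break` in `cam_cpastop_notify_clients()`: the first client that fully handles an error stops the notification loop. A hedged sketch of a callback in the new style (the enum and struct names here are simplified stand-ins, not the real `cam_cpas_irq_data` definitions):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-ins for the CPAS IRQ types. */
enum irq_type { IRQ_SLAVE_ERROR, IRQ_UBWC_DEC_ERROR, IRQ_AHB_TIMEOUT };

struct irq_data {
	enum irq_type irq_type;
	uint32_t status;
};

/* Client callback in the new style: returning true tells the notifier
 * the error was consumed, so remaining clients are not called. */
static bool client_cb(uint32_t handle, void *userdata, struct irq_data *d)
{
	(void)handle;
	(void)userdata;

	if (!d)
		return false;

	if (d->irq_type == IRQ_UBWC_DEC_ERROR) {
		/* decode and report d->status here */
		return true;   /* consumed: notifier breaks out of its loop */
	}

	return false;          /* not ours: let other clients see it */
}
```

This mirrors how `cam_a5_cpas_cb()` in this patch returns `true` only for the UBWC encode/decode errors it logs.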
+
+/**
* struct cam_cpas_register_params : Register params for cpas client
*
* @identifier : Input identifier string which is the device label
@@ -107,11 +284,10 @@ struct cam_cpas_register_params {
uint32_t cell_index;
struct device *dev;
void *userdata;
- void (*cam_cpas_client_cb)(
+ bool (*cam_cpas_client_cb)(
uint32_t client_handle,
void *userdata,
- enum cam_camnoc_irq_type event_type,
- uint32_t event_data);
+ struct cam_cpas_irq_data *irq_data);
uint32_t client_handle;
};
diff --git a/drivers/media/platform/msm/camera/cam_fd/fd_hw_mgr/fd_hw/cam_fd_hw_soc.c b/drivers/media/platform/msm/camera/cam_fd/fd_hw_mgr/fd_hw/cam_fd_hw_soc.c
index 9045dc1..f27d016 100644
--- a/drivers/media/platform/msm/camera/cam_fd/fd_hw_mgr/fd_hw/cam_fd_hw_soc.c
+++ b/drivers/media/platform/msm/camera/cam_fd/fd_hw_mgr/fd_hw/cam_fd_hw_soc.c
@@ -20,11 +20,16 @@
#include "cam_fd_hw_core.h"
#include "cam_fd_hw_soc.h"
-static void cam_fd_hw_util_cpas_callback(uint32_t handle, void *userdata,
- enum cam_camnoc_irq_type event_type, uint32_t event_data)
+static bool cam_fd_hw_util_cpas_callback(uint32_t handle, void *userdata,
+ struct cam_cpas_irq_data *irq_data)
{
- CAM_DBG(CAM_FD, "CPAS hdl=%d, udata=%pK, event=%d, event_data=%d",
- handle, userdata, event_type, event_data);
+ if (!irq_data)
+ return false;
+
+ CAM_DBG(CAM_FD, "CPAS hdl=%d, udata=%pK, irq_type=%d",
+ handle, userdata, irq_data->irq_type);
+
+ return false;
}
static int cam_fd_hw_soc_util_setup_regbase_indices(
diff --git a/drivers/media/platform/msm/camera/cam_icp/fw_inc/hfi_intf.h b/drivers/media/platform/msm/camera/cam_icp/fw_inc/hfi_intf.h
index e892772..ce7a8b3 100644
--- a/drivers/media/platform/msm/camera/cam_icp/fw_inc/hfi_intf.h
+++ b/drivers/media/platform/msm/camera/cam_icp/fw_inc/hfi_intf.h
@@ -60,10 +60,12 @@ int hfi_write_cmd(void *cmd_ptr);
* hfi_read_message() - function for hfi read
* @pmsg: buffer to place read message for hfi queue
* @q_id: queue id
+ * @words_read: total number of words read from the queue
+ * returned as output to the caller
*
- * Returns size read in words/failure(negative value)
+ * Returns success(zero)/failure(non zero)
*/
-int64_t hfi_read_message(uint32_t *pmsg, uint8_t q_id);
+int hfi_read_message(uint32_t *pmsg, uint8_t q_id, uint32_t *words_read);
/**
* hfi_init() - function initialize hfi after firmware download
diff --git a/drivers/media/platform/msm/camera/cam_icp/hfi.c b/drivers/media/platform/msm/camera/cam_icp/hfi.c
index 16fa33a..a8855ae 100644
--- a/drivers/media/platform/msm/camera/cam_icp/hfi.c
+++ b/drivers/media/platform/msm/camera/cam_icp/hfi.c
@@ -109,7 +109,19 @@ int hfi_write_cmd(void *cmd_ptr)
new_write_idx << BYTE_WORD_SHIFT);
}
+	/*
+	 * Make sure command data is written to the command queue before
+	 * updating the write index
+	 */
+ wmb();
+
q->qhdr_write_idx = new_write_idx;
+
+ /*
+ * Before raising interrupt make sure command data is ready for
+ * firmware to process
+ */
+ wmb();
cam_io_w((uint32_t)INTR_ENABLE,
g_hfi->csr_base + HFI_REG_A5_CSR_HOST2ICPINT);
err:
@@ -117,13 +129,14 @@ int hfi_write_cmd(void *cmd_ptr)
return rc;
}
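
The two `wmb()` calls added to `hfi_write_cmd()` implement the classic publish pattern: payload first, index second, so the consumer never observes the new write index before the command data. A userspace analogy of the same ordering using C11 release semantics (the queue layout here is a hypothetical simplification, not the HFI queue format):

```c
#include <stdatomic.h>
#include <stdint.h>

#define QSIZE 8

static uint32_t queue[QSIZE];
static _Atomic uint32_t write_idx;

/* Producer side: the release store on write_idx orders the payload
 * write before the index publication, playing the role wmb() plays
 * before the qhdr_write_idx update in hfi_write_cmd(). */
static void publish(uint32_t word)
{
	uint32_t idx = atomic_load_explicit(&write_idx,
					    memory_order_relaxed);

	queue[idx % QSIZE] = word;              /* 1. command data */
	atomic_store_explicit(&write_idx, idx + 1,
			      memory_order_release); /* 2. then the index */
}
```

A consumer would pair this with an acquire load of `write_idx` before reading `queue[]`, analogous to the firmware reading the queue after seeing the interrupt.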
-int64_t hfi_read_message(uint32_t *pmsg, uint8_t q_id)
+int hfi_read_message(uint32_t *pmsg, uint8_t q_id,
+ uint32_t *words_read)
{
struct hfi_qtbl *q_tbl_ptr;
struct hfi_q_hdr *q;
uint32_t new_read_idx, size_in_words, word_diff, temp;
uint32_t *read_q, *read_ptr, *write_ptr;
- int64_t rc = 0;
+ int rc = 0;
if (!pmsg) {
CAM_ERR(CAM_HFI, "Invalid msg");
@@ -202,7 +215,7 @@ int64_t hfi_read_message(uint32_t *pmsg, uint8_t q_id)
}
q->qhdr_read_idx = new_read_idx;
- rc = size_in_words;
+ *words_read = size_in_words;
err:
mutex_unlock(&hfi_msg_q_mutex);
return rc;
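
The `hfi_read_message()` signature change separates the two things the old `int64_t` return conflated: status and payload size. A minimal sketch of the new contract (hypothetical helper name and buffer handling, illustrating only the status-in-return, size-out-parameter shape):

```c
#include <stdint.h>

/* New-style API: zero/non-zero status as the return value, the number
 * of words read reported through *words_read. */
static int read_message(const uint32_t *src, uint32_t n,
			uint32_t *dst, uint32_t *words_read)
{
	uint32_t i;

	if (!src || !dst || !words_read)
		return -1;        /* failure is any non-zero value */

	for (i = 0; i < n; i++)
		dst[i] = src[i];

	*words_read = n;          /* size reported out-of-band */
	return 0;                 /* success is zero */
}
```

Callers such as `cam_icp_mgr_process_msg()` then check `rc` alone and use `read_len` only on success, instead of testing the sign of a returned length.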
diff --git a/drivers/media/platform/msm/camera/cam_icp/icp_hw/a5_hw/a5_dev.c b/drivers/media/platform/msm/camera/cam_icp/icp_hw/a5_hw/a5_dev.c
index 99e2e79..14c3c9c 100644
--- a/drivers/media/platform/msm/camera/cam_icp/icp_hw/a5_hw/a5_dev.c
+++ b/drivers/media/platform/msm/camera/cam_icp/icp_hw/a5_hw/a5_dev.c
@@ -50,6 +50,40 @@ struct cam_a5_device_hw_info cam_a5_hw_info = {
};
EXPORT_SYMBOL(cam_a5_hw_info);
+static bool cam_a5_cpas_cb(uint32_t client_handle, void *userdata,
+ struct cam_cpas_irq_data *irq_data)
+{
+ bool error_handled = false;
+
+ if (!irq_data)
+ return error_handled;
+
+ switch (irq_data->irq_type) {
+ case CAM_CAMNOC_IRQ_IPE_BPS_UBWC_DECODE_ERROR:
+ CAM_ERR_RATE_LIMIT(CAM_ICP,
+ "IPE/BPS UBWC Decode error type=%d status=%x thr_err=%d, fcl_err=%d, len_md_err=%d, format_err=%d",
+ irq_data->irq_type,
+ irq_data->u.dec_err.decerr_status.value,
+ irq_data->u.dec_err.decerr_status.thr_err,
+ irq_data->u.dec_err.decerr_status.fcl_err,
+ irq_data->u.dec_err.decerr_status.len_md_err,
+ irq_data->u.dec_err.decerr_status.format_err);
+ error_handled = true;
+ break;
+ case CAM_CAMNOC_IRQ_IPE_BPS_UBWC_ENCODE_ERROR:
+ CAM_ERR_RATE_LIMIT(CAM_ICP,
+ "IPE/BPS UBWC Encode error type=%d status=%x",
+ irq_data->irq_type,
+ irq_data->u.enc_err.encerr_status.value);
+ error_handled = true;
+ break;
+ default:
+ break;
+ }
+
+ return error_handled;
+}
+
int cam_a5_register_cpas(struct cam_hw_soc_info *soc_info,
struct cam_a5_device_core_info *core_info,
uint32_t hw_idx)
@@ -59,7 +93,7 @@ int cam_a5_register_cpas(struct cam_hw_soc_info *soc_info,
cpas_register_params.dev = &soc_info->pdev->dev;
memcpy(cpas_register_params.identifier, "icp", sizeof("icp"));
- cpas_register_params.cam_cpas_client_cb = NULL;
+ cpas_register_params.cam_cpas_client_cb = cam_a5_cpas_cb;
cpas_register_params.cell_index = hw_idx;
cpas_register_params.userdata = NULL;
diff --git a/drivers/media/platform/msm/camera/cam_icp/icp_hw/icp_hw_mgr/cam_icp_hw_mgr.c b/drivers/media/platform/msm/camera/cam_icp/icp_hw/icp_hw_mgr/cam_icp_hw_mgr.c
index 72f2803..6f997a2 100644
--- a/drivers/media/platform/msm/camera/cam_icp/icp_hw/icp_hw_mgr/cam_icp_hw_mgr.c
+++ b/drivers/media/platform/msm/camera/cam_icp/icp_hw/icp_hw_mgr/cam_icp_hw_mgr.c
@@ -300,7 +300,7 @@ static int cam_icp_calc_total_clk(struct cam_icp_hw_mgr *hw_mgr,
hw_mgr_clk_info->base_clk = 0;
for (i = 0; i < CAM_ICP_CTX_MAX; i++) {
ctx_data = &hw_mgr->ctx_data[i];
- if (ctx_data->in_use &&
+ if (ctx_data->state == CAM_ICP_CTX_STATE_ACQUIRED &&
ctx_data->icp_dev_acquire_info->dev_type == dev_type)
hw_mgr_clk_info->base_clk +=
ctx_data->clk_info.base_clk;
@@ -527,7 +527,7 @@ static bool cam_icp_update_bw(struct cam_icp_hw_mgr *hw_mgr,
hw_mgr_clk_info->compressed_bw = 0;
for (i = 0; i < CAM_ICP_CTX_MAX; i++) {
ctx = &hw_mgr->ctx_data[i];
- if (ctx->in_use &&
+ if (ctx->state == CAM_ICP_CTX_STATE_ACQUIRED &&
ctx->icp_dev_acquire_info->dev_type ==
ctx_data->icp_dev_acquire_info->dev_type) {
mutex_lock(&hw_mgr->hw_mgr_mutex);
@@ -905,6 +905,34 @@ static int cam_icp_mgr_process_cmd(void *priv, void *data)
return rc;
}
+static int cam_icp_mgr_cleanup_ctx(struct cam_icp_hw_ctx_data *ctx_data)
+{
+ int i;
+ struct hfi_frame_process_info *hfi_frame_process;
+ struct cam_hw_done_event_data buf_data;
+
+ hfi_frame_process = &ctx_data->hfi_frame_process;
+ for (i = 0; i < CAM_FRAME_CMD_MAX; i++) {
+ if (!hfi_frame_process->request_id[i])
+ continue;
+ buf_data.request_id = hfi_frame_process->request_id[i];
+ ctx_data->ctxt_event_cb(ctx_data->context_priv,
+ false, &buf_data);
+ hfi_frame_process->request_id[i] = 0;
+ if (ctx_data->hfi_frame_process.in_resource[i] > 0) {
+ CAM_DBG(CAM_ICP, "Delete merged sync in object: %d",
+ ctx_data->hfi_frame_process.in_resource[i]);
+ cam_sync_destroy(
+ ctx_data->hfi_frame_process.in_resource[i]);
+ ctx_data->hfi_frame_process.in_resource[i] = 0;
+ }
+ hfi_frame_process->fw_process_flag[i] = false;
+ clear_bit(i, ctx_data->hfi_frame_process.bitmap);
+ }
+
+ return 0;
+}
+
static int cam_icp_mgr_handle_frame_process(uint32_t *msg_ptr, int flag)
{
int i;
@@ -926,6 +954,15 @@ static int cam_icp_mgr_handle_frame_process(uint32_t *msg_ptr, int flag)
(void *)ctx_data->context_priv, request_id);
mutex_lock(&ctx_data->ctx_mutex);
+ if (ctx_data->state != CAM_ICP_CTX_STATE_ACQUIRED) {
+ mutex_unlock(&ctx_data->ctx_mutex);
+ CAM_WARN(CAM_ICP,
+ "ctx with id: %u not in the right state : %x",
+ ctx_data->ctx_id,
+ ctx_data->state);
+ return 0;
+ }
+
hfi_frame_process = &ctx_data->hfi_frame_process;
for (i = 0; i < CAM_FRAME_CMD_MAX; i++)
if (hfi_frame_process->request_id[i] == request_id)
@@ -1056,9 +1093,12 @@ static int cam_icp_mgr_process_msg_create_handle(uint32_t *msg_ptr)
return -EINVAL;
}
- ctx_data->fw_handle = create_handle_ack->fw_handle;
- CAM_DBG(CAM_ICP, "fw_handle = %x", ctx_data->fw_handle);
- complete(&ctx_data->wait_complete);
+ if (ctx_data->state == CAM_ICP_CTX_STATE_IN_USE) {
+ ctx_data->fw_handle = create_handle_ack->fw_handle;
+ CAM_DBG(CAM_ICP, "fw_handle = %x", ctx_data->fw_handle);
+ complete(&ctx_data->wait_complete);
+ } else
+		CAM_WARN(CAM_ICP, "Timed out, failed to create fw handle");
return 0;
}
@@ -1080,7 +1120,8 @@ static int cam_icp_mgr_process_msg_ping_ack(uint32_t *msg_ptr)
return -EINVAL;
}
- complete(&ctx_data->wait_complete);
+ if (ctx_data->state == CAM_ICP_CTX_STATE_IN_USE)
+ complete(&ctx_data->wait_complete);
return 0;
}
@@ -1134,7 +1175,10 @@ static int cam_icp_mgr_process_direct_ack_msg(uint32_t *msg_ptr)
ioconfig_ack = (struct hfi_msg_ipebps_async_ack *)msg_ptr;
ctx_data =
(struct cam_icp_hw_ctx_data *)ioconfig_ack->user_data1;
- complete(&ctx_data->wait_complete);
+ if ((ctx_data->state == CAM_ICP_CTX_STATE_RELEASE) ||
+ (ctx_data->state == CAM_ICP_CTX_STATE_IN_USE))
+ complete(&ctx_data->wait_complete);
+
break;
default:
CAM_ERR(CAM_ICP, "Invalid opcode : %u",
@@ -1150,11 +1194,12 @@ static void cam_icp_mgr_process_dbg_buf(void)
{
uint32_t *msg_ptr = NULL, *pkt_ptr = NULL;
struct hfi_msg_debug *dbg_msg;
- int64_t read_len, size_processed = 0;
+ uint32_t read_len, size_processed = 0;
char *dbg_buf;
+ int rc = 0;
- read_len = hfi_read_message(icp_hw_mgr.dbg_buf, Q_DBG);
- if (read_len < 0)
+ rc = hfi_read_message(icp_hw_mgr.dbg_buf, Q_DBG, &read_len);
+ if (rc)
return;
msg_ptr = (uint32_t *)icp_hw_mgr.dbg_buf;
@@ -1179,7 +1224,8 @@ static void cam_icp_mgr_process_dbg_buf(void)
static int cam_icp_process_msg_pkt_type(
struct cam_icp_hw_mgr *hw_mgr,
- uint32_t *msg_ptr)
+ uint32_t *msg_ptr,
+ uint32_t *msg_processed_len)
{
int rc = 0;
int size_processed = 0;
@@ -1230,19 +1276,17 @@ static int cam_icp_process_msg_pkt_type(
break;
}
- if (rc)
- return rc;
-
- return size_processed;
+ *msg_processed_len = size_processed;
+ return rc;
}
static int32_t cam_icp_mgr_process_msg(void *priv, void *data)
{
- int64_t read_len, msg_processed_len;
- int rc = 0;
+ uint32_t read_len, msg_processed_len;
uint32_t *msg_ptr = NULL;
struct hfi_msg_work_data *task_data;
struct cam_icp_hw_mgr *hw_mgr;
+ int rc = 0;
if (!data || !priv) {
CAM_ERR(CAM_ICP, "Invalid data");
@@ -1252,25 +1296,24 @@ static int32_t cam_icp_mgr_process_msg(void *priv, void *data)
task_data = data;
hw_mgr = priv;
- read_len = hfi_read_message(icp_hw_mgr.msg_buf, Q_MSG);
- if (read_len < 0) {
- rc = read_len;
+ rc = hfi_read_message(icp_hw_mgr.msg_buf, Q_MSG, &read_len);
+ if (rc) {
CAM_DBG(CAM_ICP, "Unable to read msg q");
} else {
read_len = read_len << BYTE_WORD_SHIFT;
msg_ptr = (uint32_t *)icp_hw_mgr.msg_buf;
while (true) {
- msg_processed_len = cam_icp_process_msg_pkt_type(
- hw_mgr, msg_ptr);
- if (msg_processed_len < 0) {
- rc = msg_processed_len;
+ rc = cam_icp_process_msg_pkt_type(hw_mgr, msg_ptr,
+ &msg_processed_len);
+ if (rc)
return rc;
- }
read_len -= msg_processed_len;
- if (read_len > 0)
+ if (read_len > 0) {
msg_ptr += (msg_processed_len >>
BYTE_WORD_SHIFT);
+ msg_processed_len = 0;
+ }
else
break;
}
@@ -1475,8 +1518,8 @@ static int cam_icp_mgr_get_free_ctx(struct cam_icp_hw_mgr *hw_mgr)
for (i = 0; i < CAM_ICP_CTX_MAX; i++) {
mutex_lock(&hw_mgr->ctx_data[i].ctx_mutex);
- if (hw_mgr->ctx_data[i].in_use == false) {
- hw_mgr->ctx_data[i].in_use = true;
+ if (hw_mgr->ctx_data[i].state == CAM_ICP_CTX_STATE_FREE) {
+ hw_mgr->ctx_data[i].state = CAM_ICP_CTX_STATE_IN_USE;
mutex_unlock(&hw_mgr->ctx_data[i].ctx_mutex);
break;
}
@@ -1488,7 +1531,7 @@ static int cam_icp_mgr_get_free_ctx(struct cam_icp_hw_mgr *hw_mgr)
static void cam_icp_mgr_put_ctx(struct cam_icp_hw_ctx_data *ctx_data)
{
- ctx_data->in_use = false;
+ ctx_data->state = CAM_ICP_CTX_STATE_FREE;
}
static int cam_icp_mgr_abort_handle(
@@ -1632,25 +1675,34 @@ static int cam_icp_mgr_release_ctx(struct cam_icp_hw_mgr *hw_mgr, int ctx_id)
mutex_lock(&hw_mgr->hw_mgr_mutex);
mutex_lock(&hw_mgr->ctx_data[ctx_id].ctx_mutex);
- if (!hw_mgr->ctx_data[ctx_id].in_use) {
+ if (hw_mgr->ctx_data[ctx_id].state !=
+ CAM_ICP_CTX_STATE_ACQUIRED) {
mutex_unlock(&hw_mgr->ctx_data[ctx_id].ctx_mutex);
mutex_unlock(&hw_mgr->hw_mgr_mutex);
+ CAM_WARN(CAM_ICP,
+ "ctx with id: %d not in right state to release: %d",
+ ctx_id, hw_mgr->ctx_data[ctx_id].state);
return 0;
}
+ cam_icp_mgr_ipe_bps_power_collapse(hw_mgr,
+ &hw_mgr->ctx_data[ctx_id], 0);
+ hw_mgr->ctx_data[ctx_id].state = CAM_ICP_CTX_STATE_RELEASE;
cam_icp_mgr_destroy_handle(&hw_mgr->ctx_data[ctx_id]);
+ cam_icp_mgr_cleanup_ctx(&hw_mgr->ctx_data[ctx_id]);
- hw_mgr->ctx_data[ctx_id].in_use = false;
hw_mgr->ctx_data[ctx_id].fw_handle = 0;
hw_mgr->ctx_data[ctx_id].scratch_mem_size = 0;
for (i = 0; i < CAM_FRAME_CMD_MAX; i++)
clear_bit(i, hw_mgr->ctx_data[ctx_id].hfi_frame_process.bitmap);
kfree(hw_mgr->ctx_data[ctx_id].hfi_frame_process.bitmap);
+ hw_mgr->ctx_data[ctx_id].hfi_frame_process.bitmap = NULL;
cam_icp_hw_mgr_clk_info_update(hw_mgr, &hw_mgr->ctx_data[ctx_id]);
hw_mgr->ctx_data[ctx_id].clk_info.curr_fc = 0;
hw_mgr->ctx_data[ctx_id].clk_info.base_clk = 0;
hw_mgr->ctxt_cnt--;
kfree(hw_mgr->ctx_data[ctx_id].icp_dev_acquire_info);
hw_mgr->ctx_data[ctx_id].icp_dev_acquire_info = NULL;
+ hw_mgr->ctx_data[ctx_id].state = CAM_ICP_CTX_STATE_FREE;
mutex_unlock(&hw_mgr->ctx_data[ctx_id].ctx_mutex);
mutex_unlock(&hw_mgr->hw_mgr_mutex);
@@ -2005,7 +2057,7 @@ static int cam_icp_mgr_handle_config_err(
struct cam_hw_done_event_data buf_data;
buf_data.request_id = *(uint64_t *)config_args->priv;
- ctx_data->ctxt_event_cb(ctx_data->context_priv, true, &buf_data);
+ ctx_data->ctxt_event_cb(ctx_data->context_priv, false, &buf_data);
ctx_data->hfi_frame_process.request_id[idx] = 0;
ctx_data->hfi_frame_process.fw_process_flag[idx] = false;
@@ -2068,8 +2120,10 @@ static int cam_icp_mgr_config_hw(void *hw_mgr_priv, void *config_hw_args)
ctx_data = config_args->ctxt_to_hw_map;
mutex_lock(&ctx_data->ctx_mutex);
- if (!ctx_data->in_use) {
- CAM_ERR(CAM_ICP, "ctx is not in use");
+ if (ctx_data->state != CAM_ICP_CTX_STATE_ACQUIRED) {
+ mutex_unlock(&ctx_data->ctx_mutex);
+ CAM_ERR(CAM_ICP, "ctx id :%u is not in use",
+ ctx_data->ctx_id);
return -EINVAL;
}
@@ -2342,9 +2396,10 @@ static int cam_icp_mgr_prepare_hw_update(void *hw_mgr_priv,
ctx_data = prepare_args->ctxt_to_hw_map;
mutex_lock(&ctx_data->ctx_mutex);
- if (!ctx_data->in_use) {
+ if (ctx_data->state != CAM_ICP_CTX_STATE_ACQUIRED) {
mutex_unlock(&ctx_data->ctx_mutex);
- CAM_ERR(CAM_ICP, "ctx is not in use");
+ CAM_ERR(CAM_ICP, "ctx id: %u is not in use",
+ ctx_data->ctx_id);
return -EINVAL;
}
@@ -2460,7 +2515,7 @@ static int cam_icp_mgr_release_hw(void *hw_mgr_priv, void *release_hw_args)
}
mutex_lock(&hw_mgr->ctx_data[ctx_id].ctx_mutex);
- if (!hw_mgr->ctx_data[ctx_id].in_use) {
+ if (hw_mgr->ctx_data[ctx_id].state != CAM_ICP_CTX_STATE_ACQUIRED) {
CAM_DBG(CAM_ICP, "ctx is not in use: %d", ctx_id);
mutex_unlock(&hw_mgr->ctx_data[ctx_id].ctx_mutex);
return -EINVAL;
@@ -2803,6 +2858,7 @@ static int cam_icp_mgr_acquire_hw(void *hw_mgr_priv, void *acquire_hw_args)
goto copy_to_user_failed;
cam_icp_ctx_clk_info_init(ctx_data);
+ ctx_data->state = CAM_ICP_CTX_STATE_ACQUIRED;
mutex_unlock(&ctx_data->ctx_mutex);
CAM_DBG(CAM_ICP, "scratch size = %x fw_handle = %x",
(unsigned int)icp_dev_acquire_info->scratch_mem_size,
diff --git a/drivers/media/platform/msm/camera/cam_icp/icp_hw/icp_hw_mgr/cam_icp_hw_mgr.h b/drivers/media/platform/msm/camera/cam_icp/icp_hw/icp_hw_mgr/cam_icp_hw_mgr.h
index 321f10c..ab19f45 100644
--- a/drivers/media/platform/msm/camera/cam_icp/icp_hw/icp_hw_mgr/cam_icp_hw_mgr.h
+++ b/drivers/media/platform/msm/camera/cam_icp/icp_hw/icp_hw_mgr/cam_icp_hw_mgr.h
@@ -58,6 +58,15 @@
#define CPAS_IPE1_BIT 0x2000
#define CPAS_BPS_BIT 0x400
+#define ICP_PWR_CLP_BPS 0x00000001
+#define ICP_PWR_CLP_IPE0 0x00010000
+#define ICP_PWR_CLP_IPE1 0x00020000
+
+#define CAM_ICP_CTX_STATE_FREE 0x0
+#define CAM_ICP_CTX_STATE_IN_USE 0x1
+#define CAM_ICP_CTX_STATE_ACQUIRED 0x2
+#define CAM_ICP_CTX_STATE_RELEASE 0x3
+
/**
* struct icp_hfi_mem_info
* @qtbl: Memory info of queue table
@@ -154,7 +163,7 @@ struct cam_ctx_clk_info {
* @acquire_dev_cmd: Acquire command
* @icp_dev_acquire_info: Acquire device info
* @ctxt_event_cb: Context callback function
- * @in_use: Flag for context usage
+ * @state: context state
* @role: Role of a context in case of chaining
* @chain_ctx: Peer context
* @hfi_frame_process: Frame process command
@@ -171,7 +180,7 @@ struct cam_icp_hw_ctx_data {
struct cam_acquire_dev_cmd acquire_dev_cmd;
struct cam_icp_acquire_dev_info *icp_dev_acquire_info;
cam_hw_event_cb_func ctxt_event_cb;
- bool in_use;
+ uint32_t state;
uint32_t role;
struct cam_icp_hw_ctx_data *chain_ctx;
struct hfi_frame_process_info hfi_frame_process;
diff --git a/drivers/media/platform/msm/camera/cam_isp/cam_isp_context.c b/drivers/media/platform/msm/camera/cam_isp/cam_isp_context.c
index 19dd794..01c0a02 100644
--- a/drivers/media/platform/msm/camera/cam_isp/cam_isp_context.c
+++ b/drivers/media/platform/msm/camera/cam_isp/cam_isp_context.c
@@ -66,6 +66,73 @@ static int __cam_isp_ctx_enqueue_request_in_order(
return 0;
}
+static int __cam_isp_ctx_enqueue_init_request(
+ struct cam_context *ctx, struct cam_ctx_request *req)
+{
+ int rc = 0;
+ struct cam_ctx_request *req_old;
+ struct cam_isp_ctx_req *req_isp_old;
+ struct cam_isp_ctx_req *req_isp_new;
+
+ spin_lock_bh(&ctx->lock);
+ if (list_empty(&ctx->pending_req_list)) {
+ list_add_tail(&req->list, &ctx->pending_req_list);
+ CAM_DBG(CAM_ISP, "INIT packet added req id= %lld",
+ req->request_id);
+ goto end;
+ }
+
+ req_old = list_first_entry(&ctx->pending_req_list,
+ struct cam_ctx_request, list);
+ req_isp_old = (struct cam_isp_ctx_req *) req_old->req_priv;
+ req_isp_new = (struct cam_isp_ctx_req *) req->req_priv;
+ if (req_isp_old->packet_opcode_type == CAM_ISP_PACKET_INIT_DEV) {
+ if ((req_isp_old->num_cfg + req_isp_new->num_cfg) >=
+ CAM_ISP_CTX_CFG_MAX) {
+ CAM_WARN(CAM_ISP, "Can not merge INIT pkt");
+ rc = -ENOMEM;
+ }
+
+ if (req_isp_old->num_fence_map_out != 0 ||
+ req_isp_old->num_fence_map_in != 0) {
+ CAM_WARN(CAM_ISP, "Invalid INIT pkt sequence");
+ rc = -EINVAL;
+ }
+
+ if (!rc) {
+ memcpy(req_isp_old->fence_map_out,
+ req_isp_new->fence_map_out,
+ sizeof(req_isp_new->fence_map_out[0])*
+ req_isp_new->num_fence_map_out);
+ req_isp_old->num_fence_map_out =
+ req_isp_new->num_fence_map_out;
+
+ memcpy(req_isp_old->fence_map_in,
+ req_isp_new->fence_map_in,
+ sizeof(req_isp_new->fence_map_in[0])*
+ req_isp_new->num_fence_map_in);
+ req_isp_old->num_fence_map_in =
+ req_isp_new->num_fence_map_in;
+
+ memcpy(&req_isp_old->cfg[req_isp_old->num_cfg],
+ req_isp_new->cfg,
+ sizeof(req_isp_new->cfg[0])*
+ req_isp_new->num_cfg);
+ req_isp_old->num_cfg += req_isp_new->num_cfg;
+
+ list_add_tail(&req->list, &ctx->free_req_list);
+ }
+ } else {
+ CAM_WARN(CAM_ISP,
+ "Received Update pkt before INIT pkt. req_id= %lld",
+ req->request_id);
+ rc = -EINVAL;
+ }
+end:
+ spin_unlock_bh(&ctx->lock);
+ return rc;
+}
+
static const char *__cam_isp_resource_handle_id_to_type
(uint32_t resource_handle)
{
@@ -304,7 +371,7 @@ static void __cam_isp_ctx_send_sof_timestamp(
ctx_isp->sof_timestamp_val);
CAM_DBG(CAM_ISP, " sof status:%d", sof_event_status);
- if (cam_req_mgr_notify_frame_message(&req_msg,
+ if (cam_req_mgr_notify_message(&req_msg,
V4L_EVENT_CAM_REQ_MGR_SOF, V4L_EVENT_CAM_REQ_MGR_EVENT))
CAM_ERR(CAM_ISP,
"Error in notifying the sof time for req id:%lld",
@@ -427,6 +494,13 @@ static int __cam_isp_ctx_notify_eof_in_actived_state(
return rc;
}
+static int __cam_isp_ctx_reg_upd_in_hw_error(
+ struct cam_isp_context *ctx_isp, void *evt_data)
+{
+ ctx_isp->substate_activated = CAM_ISP_CTX_ACTIVATED_SOF;
+ return 0;
+}
+
static int __cam_isp_ctx_sof_in_activated_state(
struct cam_isp_context *ctx_isp, void *evt_data)
{
@@ -689,8 +763,13 @@ static int __cam_isp_ctx_handle_error(struct cam_isp_context *ctx_isp,
void *evt_data)
{
int rc = 0;
- struct cam_ctx_request *req;
+ uint32_t i = 0;
+ bool found = 0;
+ struct cam_ctx_request *req = NULL;
+ struct cam_ctx_request *req_temp;
+ struct cam_isp_ctx_req *req_isp = NULL;
struct cam_req_mgr_error_notify notify;
+ uint64_t error_request_id;
struct cam_context *ctx = ctx_isp->base;
struct cam_isp_hw_error_event_data *error_event_data =
@@ -701,7 +780,7 @@ static int __cam_isp_ctx_handle_error(struct cam_isp_context *ctx_isp,
CAM_DBG(CAM_ISP, "Enter error_type = %d", error_type);
if ((error_type == CAM_ISP_HW_ERROR_OVERFLOW) ||
(error_type == CAM_ISP_HW_ERROR_BUSIF_OVERFLOW))
- notify.error = CRM_KMD_ERR_FATAL;
+ notify.error = CRM_KMD_ERR_OVERFLOW;
/*
* Need to check the active req
@@ -712,31 +791,92 @@ static int __cam_isp_ctx_handle_error(struct cam_isp_context *ctx_isp,
if (list_empty(&ctx->active_req_list)) {
CAM_ERR_RATE_LIMIT(CAM_ISP,
"handling error with no active request");
- rc = -EINVAL;
- goto end;
+ } else {
+ list_for_each_entry_safe(req, req_temp,
+ &ctx->active_req_list, list) {
+ req_isp = (struct cam_isp_ctx_req *) req->req_priv;
+ if (!req_isp->bubble_report) {
+ for (i = 0; i < req_isp->num_fence_map_out;
+ i++) {
+ CAM_ERR(CAM_ISP, "req %llu, Sync fd %x",
+ req->request_id,
+ req_isp->fence_map_out[i].
+ sync_id);
+ if (req_isp->fence_map_out[i].sync_id
+ != -1) {
+ rc = cam_sync_signal(
+ req_isp->fence_map_out[i].
+ sync_id,
+ CAM_SYNC_STATE_SIGNALED_ERROR);
+ req_isp->fence_map_out[i].
+ sync_id = -1;
+ }
+ }
+ list_del_init(&req->list);
+ list_add_tail(&req->list, &ctx->free_req_list);
+ ctx_isp->active_req_cnt--;
+ } else {
+ found = 1;
+ break;
+ }
+ }
}
- req = list_first_entry(&ctx->active_req_list,
- struct cam_ctx_request, list);
+ if (found) {
+ list_for_each_entry_safe_reverse(req, req_temp,
+ &ctx->active_req_list, list) {
+ req_isp = (struct cam_isp_ctx_req *) req->req_priv;
+ list_del_init(&req->list);
+ list_add(&req->list, &ctx->pending_req_list);
+ ctx_isp->active_req_cnt--;
+ }
+ }
+
+ do {
+ if (list_empty(&ctx->pending_req_list)) {
+ error_request_id = ctx_isp->last_applied_req_id + 1;
+ req_isp = NULL;
+ break;
+ }
+ req = list_first_entry(&ctx->pending_req_list,
+ struct cam_ctx_request, list);
+ req_isp = (struct cam_isp_ctx_req *) req->req_priv;
+ error_request_id = ctx_isp->last_applied_req_id;
+
+ if (req_isp->bubble_report)
+ break;
+
+ for (i = 0; i < req_isp->num_fence_map_out; i++) {
+ if (req_isp->fence_map_out[i].sync_id != -1)
+ rc = cam_sync_signal(
+ req_isp->fence_map_out[i].sync_id,
+ CAM_SYNC_STATE_SIGNALED_ERROR);
+ req_isp->fence_map_out[i].sync_id = -1;
+ }
+ list_del_init(&req->list);
+ list_add_tail(&req->list, &ctx->free_req_list);
+
+ } while (req->request_id < ctx_isp->last_applied_req_id);
+
if (ctx->ctx_crm_intf && ctx->ctx_crm_intf->notify_err) {
notify.link_hdl = ctx->link_hdl;
notify.dev_hdl = ctx->dev_hdl;
- notify.req_id = req->request_id;
+ notify.req_id = error_request_id;
+
+ if (req_isp && req_isp->bubble_report)
+ notify.error = CRM_KMD_ERR_BUBBLE;
+
+ CAM_WARN(CAM_ISP, "Notify CRM: req %lld, frame %lld\n",
+ error_request_id, ctx_isp->frame_id);
ctx->ctx_crm_intf->notify_err(¬ify);
- CAM_ERR_RATE_LIMIT(CAM_ISP, "Notify CRM about ERROR frame %lld",
- ctx_isp->frame_id);
+ ctx_isp->substate_activated = CAM_ISP_CTX_ACTIVATED_HW_ERROR;
} else {
CAM_ERR_RATE_LIMIT(CAM_ISP, "Can not notify ERRROR to CRM");
rc = -EFAULT;
}
- list_del_init(&req->list);
- list_add(&req->list, &ctx->pending_req_list);
- /* might need to check if active list is empty */
-
-end:
CAM_DBG(CAM_ISP, "Exit");
return rc;
}
@@ -746,7 +886,7 @@ static struct cam_isp_ctx_irq_ops
/* SOF */
{
.irq_ops = {
- NULL,
+ __cam_isp_ctx_handle_error,
__cam_isp_ctx_sof_in_activated_state,
__cam_isp_ctx_reg_upd_in_sof,
__cam_isp_ctx_notify_sof_in_actived_state,
@@ -779,7 +919,7 @@ static struct cam_isp_ctx_irq_ops
/* BUBBLE */
{
.irq_ops = {
- NULL,
+ __cam_isp_ctx_handle_error,
__cam_isp_ctx_sof_in_activated_state,
NULL,
__cam_isp_ctx_notify_sof_in_actived_state,
@@ -790,7 +930,7 @@ static struct cam_isp_ctx_irq_ops
/* Bubble Applied */
{
.irq_ops = {
- NULL,
+ __cam_isp_ctx_handle_error,
__cam_isp_ctx_sof_in_activated_state,
__cam_isp_ctx_reg_upd_in_activated_state,
__cam_isp_ctx_epoch_in_bubble_applied,
@@ -798,6 +938,17 @@ static struct cam_isp_ctx_irq_ops
__cam_isp_ctx_buf_done_in_bubble_applied,
},
},
+ /* HW ERROR */
+ {
+ .irq_ops = {
+ NULL,
+ __cam_isp_ctx_sof_in_activated_state,
+ __cam_isp_ctx_reg_upd_in_hw_error,
+ NULL,
+ NULL,
+ NULL,
+ },
+ },
/* HALT */
{
},
@@ -878,7 +1029,9 @@ static int __cam_isp_ctx_apply_req_in_activated_state(
} else {
spin_lock_bh(&ctx->lock);
ctx_isp->substate_activated = next_state;
- CAM_DBG(CAM_ISP, "new state %d", next_state);
+ ctx_isp->last_applied_req_id = apply->request_id;
+ CAM_DBG(CAM_ISP, "new substate state %d, applied req %lld",
+ next_state, ctx_isp->last_applied_req_id);
spin_unlock_bh(&ctx->lock);
}
end:
@@ -942,9 +1095,9 @@ static int __cam_isp_ctx_flush_req(struct cam_context *ctx,
struct cam_ctx_request *req_temp;
struct cam_isp_ctx_req *req_isp;
- spin_lock(&ctx->lock);
+ spin_lock_bh(&ctx->lock);
if (list_empty(req_list)) {
- spin_unlock(&ctx->lock);
+ spin_unlock_bh(&ctx->lock);
CAM_DBG(CAM_ISP, "request list is empty");
return 0;
}
@@ -978,7 +1131,7 @@ static int __cam_isp_ctx_flush_req(struct cam_context *ctx,
break;
}
}
- spin_unlock(&ctx->lock);
+ spin_unlock_bh(&ctx->lock);
if (flush_req->type == CAM_REQ_MGR_FLUSH_TYPE_CANCEL_REQ &&
!cancel_req_id_found)
@@ -1012,10 +1165,10 @@ static int __cam_isp_ctx_flush_req_in_ready(
rc = __cam_isp_ctx_flush_req(ctx, &ctx->pending_req_list, flush_req);
/* if nothing is in pending req list, change state to acquire*/
- spin_lock(&ctx->lock);
+ spin_lock_bh(&ctx->lock);
if (list_empty(&ctx->pending_req_list))
ctx->state = CAM_CTX_ACQUIRED;
- spin_unlock(&ctx->lock);
+ spin_unlock_bh(&ctx->lock);
trace_cam_context_state("ISP", ctx);
@@ -1541,6 +1694,7 @@ static int __cam_isp_ctx_config_dev_in_top_state(
struct cam_req_mgr_add_request add_req;
struct cam_isp_context *ctx_isp =
(struct cam_isp_context *) ctx->ctx_priv;
+ struct cam_isp_prepare_hw_update_data hw_update_data;
CAM_DBG(CAM_ISP, "get free request object......");
@@ -1591,6 +1745,7 @@ static int __cam_isp_ctx_config_dev_in_top_state(
cfg.max_in_map_entries = CAM_ISP_CTX_RES_MAX;
cfg.out_map_entries = req_isp->fence_map_out;
cfg.in_map_entries = req_isp->fence_map_in;
+ cfg.priv = &hw_update_data;
CAM_DBG(CAM_ISP, "try to prepare config packet......");
@@ -1605,31 +1760,48 @@ static int __cam_isp_ctx_config_dev_in_top_state(
req_isp->num_fence_map_out = cfg.num_out_map_entries;
req_isp->num_fence_map_in = cfg.num_in_map_entries;
req_isp->num_acked = 0;
+ req_isp->packet_opcode_type = hw_update_data.packet_opcode_type;
CAM_DBG(CAM_ISP, "num_entry: %d, num fence out: %d, num fence in: %d",
- req_isp->num_cfg, req_isp->num_fence_map_out,
+ req_isp->num_cfg, req_isp->num_fence_map_out,
req_isp->num_fence_map_in);
req->request_id = packet->header.request_id;
req->status = 1;
- if (ctx->state == CAM_CTX_ACTIVATED && ctx->ctx_crm_intf->add_req) {
- add_req.link_hdl = ctx->link_hdl;
- add_req.dev_hdl = ctx->dev_hdl;
- add_req.req_id = req->request_id;
- add_req.skip_before_applying = 0;
- rc = ctx->ctx_crm_intf->add_req(&add_req);
- if (rc) {
- CAM_ERR(CAM_ISP, "Error: Adding request id=%llu",
- req->request_id);
- goto free_req;
+ CAM_DBG(CAM_ISP, "Packet request id 0x%llx packet opcode:%d",
+ packet->header.request_id, req_isp->packet_opcode_type);
+
+ if (req_isp->packet_opcode_type == CAM_ISP_PACKET_INIT_DEV) {
+ if (ctx->state < CAM_CTX_ACTIVATED) {
+ rc = __cam_isp_ctx_enqueue_init_request(ctx, req);
+ if (rc)
+ CAM_ERR(CAM_ISP, "Enqueue INIT pkt failed");
+ } else {
+ rc = -EINVAL;
+ CAM_ERR(CAM_ISP, "Received INIT pkt in wrong state");
+ }
+ } else {
+ if (ctx->state >= CAM_CTX_READY && ctx->ctx_crm_intf->add_req) {
+ add_req.link_hdl = ctx->link_hdl;
+ add_req.dev_hdl = ctx->dev_hdl;
+ add_req.req_id = req->request_id;
+ add_req.skip_before_applying = 0;
+ rc = ctx->ctx_crm_intf->add_req(&add_req);
+ if (rc) {
+ CAM_ERR(CAM_ISP, "Add req failed: req id=%llu",
+ req->request_id);
+ } else {
+ __cam_isp_ctx_enqueue_request_in_order(
+ ctx, req);
+ }
+ } else {
+ rc = -EINVAL;
+ CAM_ERR(CAM_ISP, "Received Update in wrong state");
}
}
-
- CAM_DBG(CAM_ISP, "Packet request id 0x%llx",
- packet->header.request_id);
-
- __cam_isp_ctx_enqueue_request_in_order(ctx, req);
+ if (rc)
+ goto free_req;
CAM_DBG(CAM_ISP, "Preprocessing Config %lld successful",
req->request_id);
@@ -2005,6 +2177,24 @@ static int __cam_isp_ctx_release_dev_in_activated(struct cam_context *ctx,
return rc;
}
+static int __cam_isp_ctx_unlink_in_activated(struct cam_context *ctx,
+ struct cam_req_mgr_core_dev_link_setup *unlink)
+{
+ int rc = 0;
+
+ CAM_WARN(CAM_ISP,
+ "Received unlink in activated state. It's unexpected");
+ rc = __cam_isp_ctx_stop_dev_in_activated_unlock(ctx);
+ if (rc)
+ CAM_WARN(CAM_ISP, "Stop device failed rc=%d", rc);
+
+ rc = __cam_isp_ctx_unlink_in_ready(ctx, unlink);
+ if (rc)
+ CAM_ERR(CAM_ISP, "Unlink failed rc=%d", rc);
+
+ return rc;
+}
+
static int __cam_isp_ctx_apply_req(struct cam_context *ctx,
struct cam_req_mgr_apply_request *apply)
{
@@ -2116,6 +2306,7 @@ static struct cam_ctx_ops
.config_dev = __cam_isp_ctx_config_dev_in_top_state,
},
.crm_ops = {
+ .unlink = __cam_isp_ctx_unlink_in_activated,
.apply_req = __cam_isp_ctx_apply_req,
.flush_req = __cam_isp_ctx_flush_req_in_top_state,
},
@@ -2184,4 +2375,3 @@ int cam_isp_context_deinit(struct cam_isp_context *ctx)
memset(ctx, 0, sizeof(*ctx));
return rc;
}
-
diff --git a/drivers/media/platform/msm/camera/cam_isp/cam_isp_context.h b/drivers/media/platform/msm/camera/cam_isp/cam_isp_context.h
index cec0e80..88ebc03 100644
--- a/drivers/media/platform/msm/camera/cam_isp/cam_isp_context.h
+++ b/drivers/media/platform/msm/camera/cam_isp/cam_isp_context.h
@@ -50,6 +50,7 @@ enum cam_isp_ctx_activated_substate {
CAM_ISP_CTX_ACTIVATED_EPOCH,
CAM_ISP_CTX_ACTIVATED_BUBBLE,
CAM_ISP_CTX_ACTIVATED_BUBBLE_APPLIED,
+ CAM_ISP_CTX_ACTIVATED_HW_ERROR,
CAM_ISP_CTX_ACTIVATED_HALT,
CAM_ISP_CTX_ACTIVATED_MAX,
};
@@ -80,6 +81,8 @@ struct cam_isp_ctx_irq_ops {
* the request has been completed.
* @bubble_report: Flag to track if bubble report is active on
* current request
+ * @packet_opcode_type: Request packet opcode type,
+ * i.e. INIT packet or update packet
*
*/
struct cam_isp_ctx_req {
@@ -93,6 +96,7 @@ struct cam_isp_ctx_req {
uint32_t num_fence_map_in;
uint32_t num_acked;
int32_t bubble_report;
+ uint32_t packet_opcode_type;
};
/**
@@ -111,6 +115,7 @@ struct cam_isp_ctx_req {
* @reported_req_id: Last reported request id
* @subscribe_event: The irq event mask that CRM subscribes to, IFE will
* invoke CRM cb at those event.
+ * @last_applied_req_id: Last applied request id
*
*/
struct cam_isp_context {
@@ -129,6 +134,7 @@ struct cam_isp_context {
int32_t active_req_cnt;
int64_t reported_req_id;
uint32_t subscribe_event;
+ int64_t last_applied_req_id;
};
/**
diff --git a/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/cam_ife_hw_mgr.c b/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/cam_ife_hw_mgr.c
index ee01c5e..d407771 100644
--- a/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/cam_ife_hw_mgr.c
+++ b/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/cam_ife_hw_mgr.c
@@ -41,10 +41,12 @@
(CAM_ISP_PACKET_META_GENERIC_BLOB_COMMON + 1)
#define CAM_ISP_GENERIC_BLOB_TYPE_MAX \
- (CAM_ISP_GENERIC_BLOB_TYPE_HFR_CONFIG + 1)
+ (CAM_ISP_GENERIC_BLOB_TYPE_BW_CONFIG + 1)
static uint32_t blob_type_hw_cmd_map[CAM_ISP_GENERIC_BLOB_TYPE_MAX] = {
CAM_ISP_HW_CMD_GET_HFR_UPDATE,
+ CAM_ISP_HW_CMD_CLOCK_UPDATE,
+ CAM_ISP_HW_CMD_BW_UPDATE,
};
static struct cam_ife_hw_mgr g_ife_hw_mgr;
@@ -138,6 +140,39 @@ static int cam_ife_hw_mgr_is_rdi_res(uint32_t res_id)
return rc;
}
+static int cam_ife_hw_mgr_reset_csid_res(
+ struct cam_ife_hw_mgr_res *isp_hw_res)
+{
+ int i;
+ int rc = 0;
+ struct cam_hw_intf *hw_intf;
+ struct cam_csid_reset_cfg_args csid_reset_args;
+
+ csid_reset_args.reset_type = CAM_IFE_CSID_RESET_PATH;
+
+ for (i = 0; i < CAM_ISP_HW_SPLIT_MAX; i++) {
+ if (!isp_hw_res->hw_res[i])
+ continue;
+ csid_reset_args.node_res = isp_hw_res->hw_res[i];
+ hw_intf = isp_hw_res->hw_res[i]->hw_intf;
+ CAM_DBG(CAM_ISP, "Resetting csid hardware %d",
+ hw_intf->hw_idx);
+ if (hw_intf->hw_ops.reset) {
+ rc = hw_intf->hw_ops.reset(hw_intf->hw_priv,
+ &csid_reset_args,
+ sizeof(struct cam_csid_reset_cfg_args));
+ if (rc <= 0)
+ goto err;
+ }
+ }
+
+ return 0;
+err:
+ CAM_ERR(CAM_ISP, "RESET HW res failed: (type:%d, id:%d)",
+ isp_hw_res->res_type, isp_hw_res->res_id);
+ return rc;
+}
+
static int cam_ife_hw_mgr_init_hw_res(
struct cam_ife_hw_mgr_res *isp_hw_res)
{
@@ -168,7 +203,8 @@ static int cam_ife_hw_mgr_init_hw_res(
}
static int cam_ife_hw_mgr_start_hw_res(
- struct cam_ife_hw_mgr_res *isp_hw_res)
+ struct cam_ife_hw_mgr_res *isp_hw_res,
+ struct cam_ife_hw_mgr_ctx *ctx)
{
int i;
int rc = -1;
@@ -179,6 +215,8 @@ static int cam_ife_hw_mgr_start_hw_res(
continue;
hw_intf = isp_hw_res->hw_res[i]->hw_intf;
if (hw_intf->hw_ops.start) {
+ isp_hw_res->hw_res[i]->rdi_only_ctx =
+ ctx->is_rdi_only_context;
rc = hw_intf->hw_ops.start(hw_intf->hw_priv,
isp_hw_res->hw_res[i],
sizeof(struct cam_isp_resource_node));
@@ -833,7 +871,7 @@ static int cam_ife_hw_mgr_acquire_res_ife_csid_ipp(
struct cam_ife_hw_mgr *ife_hw_mgr;
struct cam_ife_hw_mgr_res *csid_res;
struct cam_ife_hw_mgr_res *cid_res;
- struct cam_hw_intf *hw_intf;
+ struct cam_hw_intf *hw_intf;
struct cam_csid_hw_reserve_resource_args csid_acquire;
ife_hw_mgr = ife_ctx->hw_mgr;
@@ -1101,7 +1139,8 @@ static int cam_ife_hw_mgr_preprocess_out_port(
static int cam_ife_mgr_acquire_cid_res(
struct cam_ife_hw_mgr_ctx *ife_ctx,
struct cam_isp_in_port_info *in_port,
- uint32_t *cid_res_id)
+ uint32_t *cid_res_id,
+ int pixel_count)
{
int rc = -1;
int i, j;
@@ -1121,6 +1160,7 @@ static int cam_ife_mgr_acquire_cid_res(
csid_acquire.res_type = CAM_ISP_RESOURCE_CID;
csid_acquire.in_port = in_port;
+ csid_acquire.pixel_count = pixel_count;
for (i = 0; i < CAM_IFE_CSID_HW_NUM_MAX; i++) {
if (!ife_hw_mgr->csid_devices[i])
@@ -1201,13 +1241,6 @@ static int cam_ife_mgr_acquire_hw_for_ctx(
goto err;
}
- /* get cid resource */
- rc = cam_ife_mgr_acquire_cid_res(ife_ctx, in_port, &cid_res_id);
- if (rc) {
- CAM_ERR(CAM_ISP, "Acquire IFE CID resource Failed");
- goto err;
- }
-
cam_ife_hw_mgr_preprocess_out_port(ife_ctx, in_port,
&pixel_count, &rdi_count);
@@ -1216,6 +1249,14 @@ static int cam_ife_mgr_acquire_hw_for_ctx(
return -EINVAL;
}
+ /* get cid resource */
+ rc = cam_ife_mgr_acquire_cid_res(ife_ctx, in_port, &cid_res_id,
+ pixel_count);
+ if (rc) {
+ CAM_ERR(CAM_ISP, "Acquire IFE CID resource Failed");
+ goto err;
+ }
+
if (pixel_count) {
/* get ife csid IPP resrouce */
rc = cam_ife_hw_mgr_acquire_res_ife_csid_ipp(ife_ctx, in_port,
@@ -1424,6 +1465,8 @@ static int cam_ife_mgr_config_hw(void *hw_mgr_priv,
CAM_ERR(CAM_ISP, "Invalid context parameters");
return -EPERM;
}
+ if (atomic_read(&ctx->overflow_pending))
+ return -EINVAL;
CAM_DBG(CAM_ISP, "Enter ctx id:%d num_hw_upd_entries %d",
ctx->ctx_index, cfg->num_hw_update_entries);
@@ -1455,8 +1498,7 @@ static int cam_ife_mgr_config_hw(void *hw_mgr_priv,
return rc;
}
-static int cam_ife_mgr_stop_hw_in_overflow(void *hw_mgr_priv,
- void *stop_hw_args)
+static int cam_ife_mgr_stop_hw_in_overflow(void *stop_hw_args)
{
int rc = 0;
struct cam_hw_stop_args *stop_args = stop_hw_args;
@@ -1464,7 +1506,7 @@ static int cam_ife_mgr_stop_hw_in_overflow(void *hw_mgr_priv,
struct cam_ife_hw_mgr_ctx *ctx;
uint32_t i, master_base_idx = 0;
- if (!hw_mgr_priv || !stop_hw_args) {
+ if (!stop_hw_args) {
CAM_ERR(CAM_ISP, "Invalid arguments");
return -EINVAL;
}
@@ -1477,7 +1519,6 @@ static int cam_ife_mgr_stop_hw_in_overflow(void *hw_mgr_priv,
CAM_DBG(CAM_ISP, "Enter...ctx id:%d",
ctx->ctx_index);
- /* stop resource will remove the irq mask from the hardware */
if (!ctx->num_base) {
CAM_ERR(CAM_ISP, "Number of bases are zero");
return -EINVAL;
@@ -1491,17 +1532,13 @@ static int cam_ife_mgr_stop_hw_in_overflow(void *hw_mgr_priv,
}
}
- /*
- * if Context does not have PIX resources and has only RDI resource
- * then take the first base index.
- */
-
if (i == ctx->num_base)
master_base_idx = ctx->base[0].idx;
+
/* stop the master CIDs first */
cam_ife_mgr_csid_stop_hw(ctx, &ctx->res_list_ife_cid,
- master_base_idx, CAM_CSID_HALT_IMMEDIATELY);
+ master_base_idx, CAM_CSID_HALT_IMMEDIATELY);
/* stop rest of the CIDs */
for (i = 0; i < ctx->num_base; i++) {
@@ -1513,7 +1550,7 @@ static int cam_ife_mgr_stop_hw_in_overflow(void *hw_mgr_priv,
/* stop the master CSID path first */
cam_ife_mgr_csid_stop_hw(ctx, &ctx->res_list_ife_csid,
- master_base_idx, CAM_CSID_HALT_IMMEDIATELY);
+ master_base_idx, CAM_CSID_HALT_IMMEDIATELY);
/* Stop rest of the CSID paths */
for (i = 0; i < ctx->num_base; i++) {
@@ -1533,8 +1570,9 @@ static int cam_ife_mgr_stop_hw_in_overflow(void *hw_mgr_priv,
for (i = 0; i < CAM_IFE_HW_OUT_RES_MAX; i++)
cam_ife_hw_mgr_stop_hw_res(&ctx->res_list_ife_out[i]);
- /* update vote bandwidth should be done at the HW layer */
+ /* Stop tasklet for context */
+ cam_tasklet_stop(ctx->common.tasklet_info);
CAM_DBG(CAM_ISP, "Exit...ctx id:%d rc :%d",
ctx->ctx_index, rc);
@@ -1664,40 +1702,27 @@ static int cam_ife_mgr_stop_hw(void *hw_mgr_priv, void *stop_hw_args)
return rc;
}
-static int cam_ife_mgr_reset_hw(struct cam_ife_hw_mgr *hw_mgr,
+static int cam_ife_mgr_reset_vfe_hw(struct cam_ife_hw_mgr *hw_mgr,
uint32_t hw_idx)
{
uint32_t i = 0;
- struct cam_hw_intf *csid_hw_intf;
struct cam_hw_intf *vfe_hw_intf;
- struct cam_csid_reset_cfg_args csid_reset_args;
+ uint32_t vfe_reset_type;
if (!hw_mgr) {
CAM_DBG(CAM_ISP, "Invalid arguments");
return -EINVAL;
}
-
- /* Reset IFE CSID HW */
- csid_reset_args.reset_type = CAM_IFE_CSID_RESET_GLOBAL;
-
- for (i = 0; i < CAM_IFE_CSID_HW_NUM_MAX; i++) {
- if (hw_idx != hw_mgr->csid_devices[i]->hw_idx)
- continue;
-
- csid_hw_intf = hw_mgr->csid_devices[i];
- csid_hw_intf->hw_ops.reset(csid_hw_intf->hw_priv,
- &csid_reset_args,
- sizeof(struct cam_csid_reset_cfg_args));
- break;
- }
-
/* Reset VFE HW*/
+ vfe_reset_type = CAM_VFE_HW_RESET_HW;
+
for (i = 0; i < CAM_VFE_HW_NUM_MAX; i++) {
if (hw_idx != hw_mgr->ife_devices[i]->hw_idx)
continue;
CAM_DBG(CAM_ISP, "VFE (id = %d) reset", hw_idx);
vfe_hw_intf = hw_mgr->ife_devices[i];
- vfe_hw_intf->hw_ops.reset(vfe_hw_intf->hw_priv, NULL, 0);
+ vfe_hw_intf->hw_ops.reset(vfe_hw_intf->hw_priv,
+ &vfe_reset_type, sizeof(vfe_reset_type));
break;
}
@@ -1705,8 +1730,7 @@ static int cam_ife_mgr_reset_hw(struct cam_ife_hw_mgr *hw_mgr,
return 0;
}
-static int cam_ife_mgr_restart_hw(void *hw_mgr_priv,
- void *start_hw_args)
+static int cam_ife_mgr_restart_hw(void *start_hw_args)
{
int rc = -1;
struct cam_hw_start_args *start_args = start_hw_args;
@@ -1714,7 +1738,7 @@ static int cam_ife_mgr_restart_hw(void *hw_mgr_priv,
struct cam_ife_hw_mgr_res *hw_mgr_res;
uint32_t i;
- if (!hw_mgr_priv || !start_hw_args) {
+ if (!start_hw_args) {
CAM_ERR(CAM_ISP, "Invalid arguments");
return -EINVAL;
}
@@ -1725,12 +1749,14 @@ static int cam_ife_mgr_restart_hw(void *hw_mgr_priv,
return -EPERM;
}
- CAM_DBG(CAM_ISP, "Enter... ctx id:%d", ctx->ctx_index);
-
CAM_DBG(CAM_ISP, "START IFE OUT ... in ctx id:%d", ctx->ctx_index);
+
+ cam_tasklet_start(ctx->common.tasklet_info);
+
/* start the IFE out devices */
for (i = 0; i < CAM_IFE_HW_OUT_RES_MAX; i++) {
- rc = cam_ife_hw_mgr_start_hw_res(&ctx->res_list_ife_out[i]);
+ rc = cam_ife_hw_mgr_start_hw_res(
+ &ctx->res_list_ife_out[i], ctx);
if (rc) {
CAM_ERR(CAM_ISP, "Can not start IFE OUT (%d)", i);
goto err;
@@ -1740,7 +1766,7 @@ static int cam_ife_mgr_restart_hw(void *hw_mgr_priv,
CAM_DBG(CAM_ISP, "START IFE SRC ... in ctx id:%d", ctx->ctx_index);
/* Start the IFE mux in devices */
list_for_each_entry(hw_mgr_res, &ctx->res_list_ife_src, list) {
- rc = cam_ife_hw_mgr_start_hw_res(hw_mgr_res);
+ rc = cam_ife_hw_mgr_start_hw_res(hw_mgr_res, ctx);
if (rc) {
CAM_ERR(CAM_ISP, "Can not start IFE MUX (%d)",
hw_mgr_res->res_id);
@@ -1751,7 +1777,7 @@ static int cam_ife_mgr_restart_hw(void *hw_mgr_priv,
CAM_DBG(CAM_ISP, "START CSID HW ... in ctx id:%d", ctx->ctx_index);
/* Start the IFE CSID HW devices */
list_for_each_entry(hw_mgr_res, &ctx->res_list_ife_csid, list) {
- rc = cam_ife_hw_mgr_start_hw_res(hw_mgr_res);
+ rc = cam_ife_hw_mgr_start_hw_res(hw_mgr_res, ctx);
if (rc) {
CAM_ERR(CAM_ISP, "Can not start IFE CSID (%d)",
hw_mgr_res->res_id);
@@ -1760,22 +1786,12 @@ static int cam_ife_mgr_restart_hw(void *hw_mgr_priv,
}
CAM_DBG(CAM_ISP, "START CID SRC ... in ctx id:%d", ctx->ctx_index);
- /* Start the IFE CID HW devices */
- list_for_each_entry(hw_mgr_res, &ctx->res_list_ife_cid, list) {
- rc = cam_ife_hw_mgr_start_hw_res(hw_mgr_res);
- if (rc) {
- CAM_ERR(CAM_ISP, "Can not start IFE CSID (%d)",
- hw_mgr_res->res_id);
- goto err;
- }
- }
-
/* Start IFE root node: do nothing */
CAM_DBG(CAM_ISP, "Exit...(success)");
return 0;
err:
- cam_ife_mgr_stop_hw(hw_mgr_priv, start_hw_args);
+ cam_ife_mgr_stop_hw_in_overflow(start_hw_args);
CAM_DBG(CAM_ISP, "Exit...(rc=%d)", rc);
return rc;
}
@@ -1900,7 +1916,8 @@ static int cam_ife_mgr_start_hw(void *hw_mgr_priv, void *start_hw_args)
ctx->ctx_index);
/* start the IFE out devices */
for (i = 0; i < CAM_IFE_HW_OUT_RES_MAX; i++) {
- rc = cam_ife_hw_mgr_start_hw_res(&ctx->res_list_ife_out[i]);
+ rc = cam_ife_hw_mgr_start_hw_res(
+ &ctx->res_list_ife_out[i], ctx);
if (rc) {
CAM_ERR(CAM_ISP, "Can not start IFE OUT (%d)",
i);
@@ -1912,7 +1929,7 @@ static int cam_ife_mgr_start_hw(void *hw_mgr_priv, void *start_hw_args)
ctx->ctx_index);
/* Start the IFE mux in devices */
list_for_each_entry(hw_mgr_res, &ctx->res_list_ife_src, list) {
- rc = cam_ife_hw_mgr_start_hw_res(hw_mgr_res);
+ rc = cam_ife_hw_mgr_start_hw_res(hw_mgr_res, ctx);
if (rc) {
CAM_ERR(CAM_ISP, "Can not start IFE MUX (%d)",
hw_mgr_res->res_id);
@@ -1924,7 +1941,7 @@ static int cam_ife_mgr_start_hw(void *hw_mgr_priv, void *start_hw_args)
ctx->ctx_index);
/* Start the IFE CSID HW devices */
list_for_each_entry(hw_mgr_res, &ctx->res_list_ife_csid, list) {
- rc = cam_ife_hw_mgr_start_hw_res(hw_mgr_res);
+ rc = cam_ife_hw_mgr_start_hw_res(hw_mgr_res, ctx);
if (rc) {
CAM_ERR(CAM_ISP, "Can not start IFE CSID (%d)",
hw_mgr_res->res_id);
@@ -1936,10 +1953,10 @@ static int cam_ife_mgr_start_hw(void *hw_mgr_priv, void *start_hw_args)
ctx->ctx_index);
/* Start the IFE CID HW devices */
list_for_each_entry(hw_mgr_res, &ctx->res_list_ife_cid, list) {
- rc = cam_ife_hw_mgr_start_hw_res(hw_mgr_res);
+ rc = cam_ife_hw_mgr_start_hw_res(hw_mgr_res, ctx);
if (rc) {
CAM_ERR(CAM_ISP, "Can not start IFE CSID (%d)",
- hw_mgr_res->res_id);
+ hw_mgr_res->res_id);
goto err;
}
}
@@ -2102,6 +2119,168 @@ static int cam_isp_blob_hfr_update(
return rc;
}
+static int cam_isp_blob_clock_update(
+ uint32_t blob_type,
+ struct cam_isp_generic_blob_info *blob_info,
+ struct cam_isp_clock_config *clock_config,
+ struct cam_hw_prepare_update_args *prepare)
+{
+ struct cam_ife_hw_mgr_ctx *ctx = NULL;
+ struct cam_ife_hw_mgr_res *hw_mgr_res;
+ struct cam_hw_intf *hw_intf;
+ struct cam_vfe_clock_update_args clock_upd_args;
+ uint64_t clk_rate = 0;
+ int rc = -EINVAL;
+ uint32_t i;
+ uint32_t j;
+
+ ctx = prepare->ctxt_to_hw_map;
+
+ CAM_DBG(CAM_ISP,
+ "usage=%u left_clk=%llu right_clk=%llu",
+ clock_config->usage_type,
+ clock_config->left_pix_hz,
+ clock_config->right_pix_hz);
+
+ list_for_each_entry(hw_mgr_res, &ctx->res_list_ife_src, list) {
+ for (i = 0; i < CAM_ISP_HW_SPLIT_MAX; i++) {
+ clk_rate = 0;
+ if (!hw_mgr_res->hw_res[i])
+ continue;
+
+ if (hw_mgr_res->res_id == CAM_ISP_HW_VFE_IN_CAMIF)
+ if (i == CAM_ISP_HW_SPLIT_LEFT)
+ clk_rate =
+ clock_config->left_pix_hz;
+ else
+ clk_rate =
+ clock_config->right_pix_hz;
+ else if ((hw_mgr_res->res_id >= CAM_ISP_HW_VFE_IN_RDI0)
+ && (hw_mgr_res->res_id <=
+ CAM_ISP_HW_VFE_IN_RDI3))
+ for (j = 0; j < clock_config->num_rdi; j++)
+ clk_rate = max(clock_config->rdi_hz[j],
+ clk_rate);
+ else
+ if (hw_mgr_res->hw_res[i]) {
+ CAM_ERR(CAM_ISP, "Invalid res_id %u",
+ hw_mgr_res->res_id);
+ rc = -EINVAL;
+ return rc;
+ }
+
+ hw_intf = hw_mgr_res->hw_res[i]->hw_intf;
+ if (hw_intf && hw_intf->hw_ops.process_cmd) {
+ clock_upd_args.node_res =
+ hw_mgr_res->hw_res[i];
+ CAM_DBG(CAM_ISP,
+ "res_id=%u i= %d clk=%llu\n",
+ hw_mgr_res->res_id, i, clk_rate);
+
+ clock_upd_args.clk_rate = clk_rate;
+
+ rc = hw_intf->hw_ops.process_cmd(
+ hw_intf->hw_priv,
+ CAM_ISP_HW_CMD_CLOCK_UPDATE,
+ &clock_upd_args,
+ sizeof(
+ struct cam_vfe_clock_update_args));
+ if (rc)
+ CAM_ERR(CAM_ISP, "Clock Update failed");
+ } else
+ CAM_WARN(CAM_ISP, "NULL hw_intf!");
+ }
+ }
+
+ return rc;
+}
+
+static int cam_isp_blob_bw_update(
+ uint32_t blob_type,
+ struct cam_isp_generic_blob_info *blob_info,
+ struct cam_isp_bw_config *bw_config,
+ struct cam_hw_prepare_update_args *prepare)
+{
+ struct cam_ife_hw_mgr_ctx *ctx = NULL;
+ struct cam_ife_hw_mgr_res *hw_mgr_res;
+ struct cam_hw_intf *hw_intf;
+ struct cam_vfe_bw_update_args bw_upd_args;
+ uint64_t cam_bw_bps = 0;
+ uint64_t ext_bw_bps = 0;
+ int rc = -EINVAL;
+ uint32_t i;
+
+ ctx = prepare->ctxt_to_hw_map;
+
+ CAM_DBG(CAM_ISP,
+ "usage=%u left cam_bw_bps=%llu ext_bw_bps=%llu\n"
+ "right cam_bw_bps=%llu ext_bw_bps=%llu",
+ bw_config->usage_type,
+ bw_config->left_pix_vote.cam_bw_bps,
+ bw_config->left_pix_vote.ext_bw_bps,
+ bw_config->right_pix_vote.cam_bw_bps,
+ bw_config->right_pix_vote.ext_bw_bps);
+
+ list_for_each_entry(hw_mgr_res, &ctx->res_list_ife_src, list) {
+ for (i = 0; i < CAM_ISP_HW_SPLIT_MAX; i++) {
+ if (!hw_mgr_res->hw_res[i])
+ continue;
+
+ if (hw_mgr_res->res_id == CAM_ISP_HW_VFE_IN_CAMIF)
+ if (i == CAM_ISP_HW_SPLIT_LEFT) {
+ cam_bw_bps =
+ bw_config->left_pix_vote.cam_bw_bps;
+ ext_bw_bps =
+ bw_config->left_pix_vote.ext_bw_bps;
+ } else {
+ cam_bw_bps =
+ bw_config->right_pix_vote.cam_bw_bps;
+ ext_bw_bps =
+ bw_config->right_pix_vote.ext_bw_bps;
+ }
+ else if ((hw_mgr_res->res_id >= CAM_ISP_HW_VFE_IN_RDI0)
+ && (hw_mgr_res->res_id <=
+ CAM_ISP_HW_VFE_IN_RDI3)) {
+ uint32_t idx = hw_mgr_res->res_id -
+ CAM_ISP_HW_VFE_IN_RDI0;
+ if (idx >= bw_config->num_rdi)
+ continue;
+
+ cam_bw_bps =
+ bw_config->rdi_vote[idx].cam_bw_bps;
+ ext_bw_bps =
+ bw_config->rdi_vote[idx].ext_bw_bps;
+ } else
+ if (hw_mgr_res->hw_res[i]) {
+ CAM_ERR(CAM_ISP, "Invalid res_id %u",
+ hw_mgr_res->res_id);
+ rc = -EINVAL;
+ return rc;
+ }
+
+ hw_intf = hw_mgr_res->hw_res[i]->hw_intf;
+ if (hw_intf && hw_intf->hw_ops.process_cmd) {
+ bw_upd_args.node_res =
+ hw_mgr_res->hw_res[i];
+
+ bw_upd_args.camnoc_bw_bytes = cam_bw_bps;
+ bw_upd_args.external_bw_bytes = ext_bw_bps;
+
+ rc = hw_intf->hw_ops.process_cmd(
+ hw_intf->hw_priv,
+ CAM_ISP_HW_CMD_BW_UPDATE,
+ &bw_upd_args,
+ sizeof(struct cam_vfe_bw_update_args));
+ if (rc)
+ CAM_ERR(CAM_ISP, "BW Update failed");
+ } else
+ CAM_WARN(CAM_ISP, "NULL hw_intf!");
+ }
+ }
+
+ return rc;
+}
+
static int cam_isp_packet_generic_blob_handler(void *user_data,
uint32_t blob_type, uint32_t blob_size, uint8_t *blob_data)
{
@@ -2139,6 +2318,26 @@ static int cam_isp_packet_generic_blob_handler(void *user_data,
CAM_ERR(CAM_ISP, "HFR Update Failed");
}
break;
+ case CAM_ISP_GENERIC_BLOB_TYPE_CLOCK_CONFIG: {
+ struct cam_isp_clock_config *clock_config =
+ (struct cam_isp_clock_config *)blob_data;
+
+ rc = cam_isp_blob_clock_update(blob_type, blob_info,
+ clock_config, prepare);
+ if (rc)
+ CAM_ERR(CAM_ISP, "Clock Update Failed");
+ }
+ break;
+ case CAM_ISP_GENERIC_BLOB_TYPE_BW_CONFIG: {
+ struct cam_isp_bw_config *bw_config =
+ (struct cam_isp_bw_config *)blob_data;
+
+ rc = cam_isp_blob_bw_update(blob_type, blob_info,
+ bw_config, prepare);
+ if (rc)
+ CAM_ERR(CAM_ISP, "Bandwidth Update Failed");
+ }
+ break;
default:
CAM_WARN(CAM_ISP, "Invalid blob type %d", blob_type);
break;
@@ -2153,11 +2352,12 @@ static int cam_ife_mgr_prepare_hw_update(void *hw_mgr_priv,
int rc = 0;
struct cam_hw_prepare_update_args *prepare =
(struct cam_hw_prepare_update_args *) prepare_hw_update_args;
- struct cam_ife_hw_mgr_ctx *ctx;
- struct cam_ife_hw_mgr *hw_mgr;
- struct cam_kmd_buf_info kmd_buf;
- uint32_t i;
- bool fill_fence = true;
+ struct cam_ife_hw_mgr_ctx *ctx;
+ struct cam_ife_hw_mgr *hw_mgr;
+ struct cam_kmd_buf_info kmd_buf;
+ uint32_t i;
+ bool fill_fence = true;
+ struct cam_isp_prepare_hw_update_data *prepare_hw_data;
if (!hw_mgr_priv || !prepare_hw_update_args) {
CAM_ERR(CAM_ISP, "Invalid args");
@@ -2243,9 +2443,14 @@ static int cam_ife_mgr_prepare_hw_update(void *hw_mgr_priv,
* bits to get the type of operation since UMD definition
* of op_code has some difference from KMD.
*/
+ prepare_hw_data = (struct cam_isp_prepare_hw_update_data *)
+ prepare->priv;
if (((prepare->packet->header.op_code + 1) & 0xF) ==
- CAM_ISP_PACKET_INIT_DEV)
+ CAM_ISP_PACKET_INIT_DEV) {
+ prepare_hw_data->packet_opcode_type = CAM_ISP_PACKET_INIT_DEV;
goto end;
+ } else
+ prepare_hw_data->packet_opcode_type = CAM_ISP_PACKET_UPDATE_DEV;
/* add reg update commands */
for (i = 0; i < ctx->num_base; i++) {
@@ -2363,11 +2568,12 @@ static int cam_ife_mgr_cmd_get_sof_timestamp(
static int cam_ife_mgr_process_recovery_cb(void *priv, void *data)
{
int32_t rc = 0;
- struct cam_hw_event_recovery_data *recovery_data = priv;
- struct cam_hw_start_args start_args;
- struct cam_ife_hw_mgr *ife_hw_mgr = NULL;
- uint32_t hw_mgr_priv;
- uint32_t i = 0;
+ struct cam_hw_event_recovery_data *recovery_data = data;
+ struct cam_hw_start_args start_args;
+ struct cam_hw_stop_args stop_args;
+ struct cam_ife_hw_mgr *ife_hw_mgr = priv;
+ struct cam_ife_hw_mgr_res *hw_mgr_res;
+ uint32_t i = 0;
uint32_t error_type = recovery_data->error_type;
struct cam_ife_hw_mgr_ctx *ctx = NULL;
@@ -2384,20 +2590,57 @@ static int cam_ife_mgr_process_recovery_cb(void *priv, void *data)
kfree(recovery_data);
return 0;
}
+ /* stop resources here */
+ CAM_DBG(CAM_ISP, "STOP: Number of affected contexts: %d",
+ recovery_data->no_of_context);
+ for (i = 0; i < recovery_data->no_of_context; i++) {
+ stop_args.ctxt_to_hw_map =
+ recovery_data->affected_ctx[i];
+ rc = cam_ife_mgr_stop_hw_in_overflow(&stop_args);
+ if (rc) {
+ CAM_ERR(CAM_ISP, "CTX stop failed(%d)", rc);
+ return rc;
+ }
+ }
- ctx = recovery_data->affected_ctx[0];
- ife_hw_mgr = ctx->hw_mgr;
+ CAM_DBG(CAM_ISP, "RESET: CSID PATH");
+ for (i = 0; i < recovery_data->no_of_context; i++) {
+ ctx = recovery_data->affected_ctx[i];
+ list_for_each_entry(hw_mgr_res, &ctx->res_list_ife_csid,
+ list) {
+ rc = cam_ife_hw_mgr_reset_csid_res(hw_mgr_res);
+ if (rc) {
+ CAM_ERR(CAM_ISP, "Failed RESET (%d)",
+ hw_mgr_res->res_id);
+ return rc;
+ }
+ }
+ }
+
+ CAM_DBG(CAM_ISP, "RESET: Calling VFE reset");
for (i = 0; i < CAM_VFE_HW_NUM_MAX; i++) {
if (recovery_data->affected_core[i])
- rc = cam_ife_mgr_reset_hw(ife_hw_mgr, i);
+ cam_ife_mgr_reset_vfe_hw(ife_hw_mgr, i);
}
+ CAM_DBG(CAM_ISP, "START: Number of affected contexts: %d",
+ recovery_data->no_of_context);
+
for (i = 0; i < recovery_data->no_of_context; i++) {
- start_args.ctxt_to_hw_map =
- recovery_data->affected_ctx[i];
- rc = cam_ife_mgr_restart_hw(&hw_mgr_priv, &start_args);
+ ctx = recovery_data->affected_ctx[i];
+ start_args.ctxt_to_hw_map = ctx;
+
+ atomic_set(&ctx->overflow_pending, 0);
+
+ rc = cam_ife_mgr_restart_hw(&start_args);
+ if (rc) {
+ CAM_ERR(CAM_ISP, "CTX start failed(%d)", rc);
+ return rc;
+ }
+ CAM_DBG(CAM_ISP, "Started resources rc (%d)", rc);
}
+ CAM_DBG(CAM_ISP, "Recovery Done rc (%d)", rc);
break;
@@ -2423,8 +2666,6 @@ static int cam_ife_hw_mgr_do_error_recovery(
struct crm_workq_task *task = NULL;
struct cam_hw_event_recovery_data *recovery_data = NULL;
- return 0;
-
recovery_data = kzalloc(sizeof(struct cam_hw_event_recovery_data),
GFP_ATOMIC);
if (!recovery_data)
@@ -2443,7 +2684,9 @@ static int cam_ife_hw_mgr_do_error_recovery(
}
task->process_cb = &cam_ife_mgr_process_recovery_cb;
- rc = cam_req_mgr_workq_enqueue_task(task, recovery_data,
+ task->payload = recovery_data;
+ rc = cam_req_mgr_workq_enqueue_task(task,
+ recovery_data->affected_ctx[0]->hw_mgr,
CRM_TASK_PRIORITY_0);
return rc;
@@ -2456,9 +2699,9 @@ static int cam_ife_hw_mgr_do_error_recovery(
* affected_core[]
* b. Return 0 i.e.SUCCESS
*/
-static int cam_ife_hw_mgr_match_hw_idx(
+static int cam_ife_hw_mgr_is_ctx_affected(
struct cam_ife_hw_mgr_ctx *ife_hwr_mgr_ctx,
- uint32_t *affected_core)
+ uint32_t *affected_core, uint32_t size)
{
int32_t rc = -EPERM;
@@ -2468,22 +2711,25 @@ static int cam_ife_hw_mgr_match_hw_idx(
CAM_DBG(CAM_ISP, "Enter:max_idx = %d", max_idx);
- while (i < max_idx) {
+ if ((max_idx >= CAM_IFE_HW_NUM_MAX) ||
+ (size > CAM_IFE_HW_NUM_MAX)) {
+ CAM_ERR(CAM_ISP, "invalid parameter = %d", max_idx);
+ return rc;
+ }
+
+ for (i = 0; i < max_idx; i++) {
if (affected_core[ife_hwr_mgr_ctx->base[i].idx])
rc = 0;
else {
ctx_affected_core_idx[j] = ife_hwr_mgr_ctx->base[i].idx;
j = j + 1;
}
-
- i = i + 1;
}
if (rc == 0) {
while (j) {
if (affected_core[ctx_affected_core_idx[j-1]] != 1)
affected_core[ctx_affected_core_idx[j-1]] = 1;
-
j = j - 1;
}
}
@@ -2499,7 +2745,7 @@ static int cam_ife_hw_mgr_match_hw_idx(
 * d. For any dual VFE context, if companion VFE is also serving
* other context it should also notify the CRM with fatal error
*/
-static int cam_ife_hw_mgr_handle_overflow(
+static int cam_ife_hw_mgr_process_overflow(
struct cam_ife_hw_mgr_ctx *curr_ife_hwr_mgr_ctx,
struct cam_isp_hw_error_event_data *error_event_data,
uint32_t curr_core_idx,
@@ -2509,12 +2755,10 @@ static int cam_ife_hw_mgr_handle_overflow(
struct cam_ife_hw_mgr_ctx *ife_hwr_mgr_ctx = NULL;
cam_hw_event_cb_func ife_hwr_irq_err_cb;
struct cam_ife_hw_mgr *ife_hwr_mgr = NULL;
- uint32_t hw_mgr_priv = 1;
struct cam_hw_stop_args stop_args;
uint32_t i = 0;
CAM_DBG(CAM_ISP, "Enter");
- return 0;
if (!recovery_data) {
CAM_ERR(CAM_ISP, "recovery_data parameter is NULL",
@@ -2535,9 +2779,12 @@ static int cam_ife_hw_mgr_handle_overflow(
* with this context
*/
CAM_DBG(CAM_ISP, "Calling match Hw idx");
- if (cam_ife_hw_mgr_match_hw_idx(ife_hwr_mgr_ctx, affected_core))
+ if (cam_ife_hw_mgr_is_ctx_affected(ife_hwr_mgr_ctx,
+ affected_core, CAM_IFE_HW_NUM_MAX))
continue;
+ atomic_set(&ife_hwr_mgr_ctx->overflow_pending, 1);
+
ife_hwr_irq_err_cb =
ife_hwr_mgr_ctx->common.event_cb[CAM_ISP_HW_EVENT_ERROR];
@@ -2551,16 +2798,13 @@ static int cam_ife_hw_mgr_handle_overflow(
ife_hwr_mgr_ctx;
/*
- * Stop the hw resources associated with this context
- * and call the error callback. In the call back function
- * corresponding ISP context will update CRM about fatal Error
+ * In the call back function corresponding ISP context
+ * will update CRM about fatal Error
*/
- if (!cam_ife_mgr_stop_hw_in_overflow(&hw_mgr_priv,
- &stop_args)) {
- CAM_DBG(CAM_ISP, "Calling Error handler CB");
- ife_hwr_irq_err_cb(ife_hwr_mgr_ctx->common.cb_priv,
- CAM_ISP_HW_EVENT_ERROR, error_event_data);
- }
+
+ ife_hwr_irq_err_cb(ife_hwr_mgr_ctx->common.cb_priv,
+ CAM_ISP_HW_EVENT_ERROR, error_event_data);
+
}
/* fill the affected_core in recovery data */
for (i = 0; i < CAM_IFE_HW_NUM_MAX; i++) {
@@ -2572,11 +2816,85 @@ static int cam_ife_hw_mgr_handle_overflow(
return 0;
}
+static int cam_ife_hw_mgr_get_err_type(
+ void *handler_priv,
+ void *payload)
+{
+ struct cam_isp_resource_node *hw_res_l = NULL;
+ struct cam_isp_resource_node *hw_res_r = NULL;
+ struct cam_ife_hw_mgr_ctx *ife_hwr_mgr_ctx;
+ struct cam_vfe_top_irq_evt_payload *evt_payload;
+ struct cam_ife_hw_mgr_res *isp_ife_camif_res = NULL;
+ uint32_t status = 0;
+ uint32_t core_idx;
+
+ ife_hwr_mgr_ctx = handler_priv;
+ evt_payload = payload;
+
+ if (!evt_payload) {
+ CAM_ERR(CAM_ISP, "No payload");
+ return IRQ_HANDLED;
+ }
+
+ core_idx = evt_payload->core_index;
+ evt_payload->evt_id = CAM_ISP_HW_EVENT_ERROR;
+
+ list_for_each_entry(isp_ife_camif_res,
+ &ife_hwr_mgr_ctx->res_list_ife_src, list) {
+
+ if ((isp_ife_camif_res->res_type ==
+ CAM_IFE_HW_MGR_RES_UNINIT) ||
+ (isp_ife_camif_res->res_id != CAM_ISP_HW_VFE_IN_CAMIF))
+ continue;
+
+ hw_res_l = isp_ife_camif_res->hw_res[CAM_ISP_HW_SPLIT_LEFT];
+ hw_res_r = isp_ife_camif_res->hw_res[CAM_ISP_HW_SPLIT_RIGHT];
+
+ CAM_DBG(CAM_ISP, "is_dual_vfe ? = %d\n",
+ isp_ife_camif_res->is_dual_vfe);
+
+ /* ERROR check for Left VFE */
+ if (!hw_res_l) {
+ CAM_DBG(CAM_ISP, "VFE(L) Device is NULL");
+ break;
+ }
+
+ CAM_DBG(CAM_ISP, "core id= %d, HW id %d", core_idx,
+ hw_res_l->hw_intf->hw_idx);
+
+ if (core_idx == hw_res_l->hw_intf->hw_idx) {
+ status = hw_res_l->bottom_half_handler(
+ hw_res_l, evt_payload);
+ }
+
+ if (status)
+ break;
+
+ /* ERROR check for Right VFE */
+ if (!hw_res_r) {
+ CAM_DBG(CAM_ISP, "VFE(R) Device is NULL");
+ continue;
+ }
+ CAM_DBG(CAM_ISP, "core id= %d, HW id %d", core_idx,
+ hw_res_r->hw_intf->hw_idx);
+
+ if (core_idx == hw_res_r->hw_intf->hw_idx) {
+ status = hw_res_r->bottom_half_handler(
+ hw_res_r, evt_payload);
+ }
+
+ if (status)
+ break;
+ }
+ CAM_DBG(CAM_ISP, "Exit (status = %d)!", status);
+ return status;
+}
+
static int cam_ife_hw_mgr_handle_camif_error(
void *handler_priv,
void *payload)
{
- int32_t rc = 0;
+ int32_t error_status = CAM_ISP_HW_ERROR_NONE;
uint32_t core_idx;
struct cam_ife_hw_mgr_ctx *ife_hwr_mgr_ctx;
struct cam_vfe_top_irq_evt_payload *evt_payload;
@@ -2587,17 +2905,22 @@ static int cam_ife_hw_mgr_handle_camif_error(
evt_payload = payload;
core_idx = evt_payload->core_index;
- rc = evt_payload->error_type;
- CAM_DBG(CAM_ISP, "Enter: error_type (%d)", evt_payload->error_type);
- switch (evt_payload->error_type) {
+ error_status = cam_ife_hw_mgr_get_err_type(ife_hwr_mgr_ctx,
+ evt_payload);
+
+ if (atomic_read(&ife_hwr_mgr_ctx->overflow_pending))
+ return error_status;
+
+ switch (error_status) {
case CAM_ISP_HW_ERROR_OVERFLOW:
case CAM_ISP_HW_ERROR_P2I_ERROR:
case CAM_ISP_HW_ERROR_VIOLATION:
+ CAM_DBG(CAM_ISP, "Enter: error_type (%d)", error_status);
error_event_data.error_type =
CAM_ISP_HW_ERROR_OVERFLOW;
- cam_ife_hw_mgr_handle_overflow(ife_hwr_mgr_ctx,
+ cam_ife_hw_mgr_process_overflow(ife_hwr_mgr_ctx,
&error_event_data,
core_idx,
&recovery_data);
@@ -2607,12 +2930,10 @@ static int cam_ife_hw_mgr_handle_camif_error(
cam_ife_hw_mgr_do_error_recovery(&recovery_data);
break;
default:
- CAM_DBG(CAM_ISP, "None error. Error type (%d)",
- evt_payload->error_type);
+ CAM_DBG(CAM_ISP, "None error (%d)", error_status);
}
- CAM_DBG(CAM_ISP, "Exit (%d)", rc);
- return rc;
+ return error_status;
}
/*
@@ -2677,6 +2998,8 @@ static int cam_ife_hw_mgr_handle_reg_update(
rup_status = hw_res->bottom_half_handler(
hw_res, evt_payload);
}
+ if (atomic_read(&ife_hwr_mgr_ctx->overflow_pending))
+ break;
if (!rup_status) {
ife_hwr_irq_rup_cb(
@@ -2708,6 +3031,8 @@ static int cam_ife_hw_mgr_handle_reg_update(
rup_status = hw_res->bottom_half_handler(
hw_res, evt_payload);
+ if (atomic_read(&ife_hwr_mgr_ctx->overflow_pending))
+ break;
if (!rup_status) {
/* Send the Reg update hw event */
ife_hwr_irq_rup_cb(
@@ -2829,6 +3154,9 @@ static int cam_ife_hw_mgr_handle_epoch_for_camif_hw_res(
if (core_idx == hw_res_l->hw_intf->hw_idx) {
epoch_status = hw_res_l->bottom_half_handler(
hw_res_l, evt_payload);
+ if (atomic_read(
+ &ife_hwr_mgr_ctx->overflow_pending))
+ break;
if (!epoch_status)
ife_hwr_irq_epoch_cb(
ife_hwr_mgr_ctx->common.cb_priv,
@@ -2876,6 +3204,8 @@ static int cam_ife_hw_mgr_handle_epoch_for_camif_hw_res(
core_index1,
evt_payload->evt_id);
+ if (atomic_read(&ife_hwr_mgr_ctx->overflow_pending))
+ break;
if (!rc)
ife_hwr_irq_epoch_cb(
ife_hwr_mgr_ctx->common.cb_priv,
@@ -2936,6 +3266,8 @@ static int cam_ife_hw_mgr_process_camif_sof(
if (core_idx == hw_res_l->hw_intf->hw_idx) {
sof_status = hw_res_l->bottom_half_handler(hw_res_l,
evt_payload);
+ if (atomic_read(&ife_hwr_mgr_ctx->overflow_pending))
+ break;
if (!sof_status) {
cam_ife_mgr_cmd_get_sof_timestamp(
ife_hwr_mgr_ctx,
@@ -2991,6 +3323,9 @@ static int cam_ife_hw_mgr_process_camif_sof(
core_index0 = hw_res_l->hw_intf->hw_idx;
core_index1 = hw_res_r->hw_intf->hw_idx;
+ if (atomic_read(&ife_hwr_mgr_ctx->overflow_pending))
+ break;
+
rc = cam_ife_hw_mgr_check_irq_for_dual_vfe(ife_hwr_mgr_ctx,
core_index0, core_index1, evt_payload->evt_id);
@@ -3149,6 +3484,9 @@ static int cam_ife_hw_mgr_handle_eof_for_camif_hw_res(
if (core_idx == hw_res_l->hw_intf->hw_idx) {
eof_status = hw_res_l->bottom_half_handler(
hw_res_l, evt_payload);
+ if (atomic_read(
+ &ife_hwr_mgr_ctx->overflow_pending))
+ break;
if (!eof_status)
ife_hwr_irq_eof_cb(
ife_hwr_mgr_ctx->common.cb_priv,
@@ -3193,6 +3531,9 @@ static int cam_ife_hw_mgr_handle_eof_for_camif_hw_res(
core_index1,
evt_payload->evt_id);
+ if (atomic_read(&ife_hwr_mgr_ctx->overflow_pending))
+ break;
+
if (!rc)
ife_hwr_irq_eof_cb(
ife_hwr_mgr_ctx->common.cb_priv,
@@ -3237,6 +3578,8 @@ static int cam_ife_hw_mgr_handle_buf_done_for_hw_res(
ife_hwr_irq_wm_done_cb =
ife_hwr_mgr_ctx->common.event_cb[CAM_ISP_HW_EVENT_DONE];
+ evt_payload->evt_id = CAM_ISP_HW_EVENT_DONE;
+
for (i = 0; i < CAM_IFE_HW_OUT_RES_MAX; i++) {
isp_ife_out_res = &ife_hwr_mgr_ctx->res_list_ife_out[i];
@@ -3293,6 +3636,8 @@ static int cam_ife_hw_mgr_handle_buf_done_for_hw_res(
buf_done_event_data.resource_handle[0] =
isp_ife_out_res->res_id;
+ if (atomic_read(&ife_hwr_mgr_ctx->overflow_pending))
+ break;
/* Report for Successful buf_done event if any */
if (buf_done_event_data.num_handles > 0 &&
ife_hwr_irq_wm_done_cb) {
@@ -3330,7 +3675,7 @@ static int cam_ife_hw_mgr_handle_buf_done_for_hw_res(
* the affected context and any successful buf_done event is not
* reported.
*/
- rc = cam_ife_hw_mgr_handle_overflow(ife_hwr_mgr_ctx,
+ rc = cam_ife_hw_mgr_process_overflow(ife_hwr_mgr_ctx,
&error_event_data, evt_payload->core_index,
&recovery_data);
@@ -3369,8 +3714,6 @@ int cam_ife_mgr_do_tasklet_buf_done(void *handler_priv,
evt_payload->irq_reg_val[5]);
CAM_DBG(CAM_ISP, "bus_irq_dual_comp_owrt: = %x",
evt_payload->irq_reg_val[6]);
-
- CAM_DBG(CAM_ISP, "Calling Buf_done");
/* WM Done */
return cam_ife_hw_mgr_handle_buf_done_for_hw_res(ife_hwr_mgr_ctx,
evt_payload_priv);
@@ -3401,8 +3744,15 @@ int cam_ife_mgr_do_tasklet(void *handler_priv, void *evt_payload_priv)
* for this context it needs to be handled remaining
* interrupts are ignored.
*/
- rc = cam_ife_hw_mgr_handle_camif_error(ife_hwr_mgr_ctx,
- evt_payload_priv);
+ if (g_ife_hw_mgr.debug_cfg.enable_recovery) {
+ CAM_DBG(CAM_ISP, "IFE Mgr recovery is enabled");
+ rc = cam_ife_hw_mgr_handle_camif_error(ife_hwr_mgr_ctx,
+ evt_payload_priv);
+ } else {
+ CAM_DBG(CAM_ISP, "recovery is not enabled");
+ rc = 0;
+ }
+
if (rc) {
CAM_ERR(CAM_ISP, "Encountered Error (%d), ignoring other irqs",
rc);
@@ -3501,6 +3851,15 @@ static int cam_ife_hw_mgr_debug_register(void)
goto err;
}
+ if (!debugfs_create_u32("enable_recovery",
+ 0644,
+ g_ife_hw_mgr.debug_cfg.dentry,
+ &g_ife_hw_mgr.debug_cfg.enable_recovery)) {
+ CAM_ERR(CAM_ISP, "failed to create enable_recovery");
+ goto err;
+ }
+ g_ife_hw_mgr.debug_cfg.enable_recovery = 0;
+
return 0;
err:
@@ -3700,4 +4059,3 @@ int cam_ife_hw_mgr_init(struct cam_hw_mgr_intf *hw_mgr_intf)
g_ife_hw_mgr.mgr_common.img_iommu_hdl = -1;
return rc;
}
-
diff --git a/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/cam_ife_hw_mgr.h b/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/cam_ife_hw_mgr.h
index 1c35e5d..4d26138 100644
--- a/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/cam_ife_hw_mgr.h
+++ b/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/cam_ife_hw_mgr.h
@@ -85,11 +85,13 @@ struct ctx_base_info {
*
* @dentry: Debugfs entry
* @csid_debug: csid debug information
+ * @enable_recovery: Enable error recovery
*
*/
struct cam_ife_hw_mgr_debug {
- struct dentry *dentry;
- uint64_t csid_debug;
+ struct dentry *dentry;
+ uint64_t csid_debug;
+ uint32_t enable_recovery;
};
/**
@@ -171,6 +173,7 @@ struct cam_ife_hw_mgr_ctx {
* @ife_csid_dev_caps csid device capability stored per core
* @ife_dev_caps ife device capability per core
* @work q work queue for IFE hw manager
+ * @debug_cfg: Debug configuration
*/
struct cam_ife_hw_mgr {
struct cam_isp_hw_mgr mgr_common;
diff --git a/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/hw_utils/cam_isp_packet_parser.c b/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/hw_utils/cam_isp_packet_parser.c
index 876a540..3606af9 100644
--- a/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/hw_utils/cam_isp_packet_parser.c
+++ b/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/hw_utils/cam_isp_packet_parser.c
@@ -97,6 +97,8 @@ static int cam_isp_update_dual_config(
struct cam_ife_hw_mgr_res *hw_mgr_res;
struct cam_isp_resource_node *res;
struct cam_isp_hw_dual_isp_update_args dual_isp_update_args;
+ uint32_t outport_id;
+ uint32_t ports_plane_idx;
size_t len = 0;
uint32_t *cpu_addr;
uint32_t i, j;
@@ -113,6 +115,14 @@ static int cam_isp_update_dual_config(
dual_config = (struct cam_isp_dual_config *)cpu_addr;
for (i = 0; i < dual_config->num_ports; i++) {
+
+ if (i >= CAM_ISP_IFE_OUT_RES_MAX) {
+ CAM_ERR(CAM_UTIL,
+ "failed update for i:%d > size_isp_out:%d",
+ i, size_isp_out);
+ return -EINVAL;
+ }
+
hw_mgr_res = &res_list_isp_out[i];
for (j = 0; j < CAM_ISP_HW_SPLIT_MAX; j++) {
if (!hw_mgr_res->hw_res[j])
@@ -122,6 +132,20 @@ static int cam_isp_update_dual_config(
continue;
res = hw_mgr_res->hw_res[j];
+
+ if (res->res_id < CAM_ISP_IFE_OUT_RES_BASE ||
+ res->res_id >= CAM_ISP_IFE_OUT_RES_MAX)
+ continue;
+
+ outport_id = res->res_id & 0xFF;
+
+ ports_plane_idx = (j * (dual_config->num_ports *
+ CAM_PACKET_MAX_PLANES)) +
+ (outport_id * CAM_PACKET_MAX_PLANES);
+
+ if (dual_config->stripes[ports_plane_idx].port_id == 0)
+ continue;
+
dual_isp_update_args.split_id = j;
dual_isp_update_args.res = res;
dual_isp_update_args.dual_cfg = dual_config;
diff --git a/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/hw_utils/cam_tasklet_util.c b/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/hw_utils/cam_tasklet_util.c
index 4a7eff8..8863275 100644
--- a/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/hw_utils/cam_tasklet_util.c
+++ b/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/hw_utils/cam_tasklet_util.c
@@ -261,6 +261,15 @@ void cam_tasklet_deinit(void **tasklet_info)
*tasklet_info = NULL;
}
+static void cam_tasklet_flush(void *tasklet_info)
+{
+ unsigned long data;
+ struct cam_tasklet_info *tasklet = tasklet_info;
+
+ data = (unsigned long)tasklet;
+ cam_tasklet_action(data);
+}
+
int cam_tasklet_start(void *tasklet_info)
{
struct cam_tasklet_info *tasklet = tasklet_info;
@@ -290,6 +299,7 @@ void cam_tasklet_stop(void *tasklet_info)
{
struct cam_tasklet_info *tasklet = tasklet_info;
+ cam_tasklet_flush(tasklet);
atomic_set(&tasklet->tasklet_active, 0);
tasklet_disable(&tasklet->tasklet);
}
diff --git a/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/include/cam_isp_hw_mgr_intf.h b/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/include/cam_isp_hw_mgr_intf.h
index 0480cd3..cf044eb 100644
--- a/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/include/cam_isp_hw_mgr_intf.h
+++ b/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/include/cam_isp_hw_mgr_intf.h
@@ -46,6 +46,18 @@ enum cam_isp_hw_err_type {
CAM_ISP_HW_ERROR_MAX,
};
+/**
+ * struct cam_isp_prepare_hw_update_data - hw prepare data
+ *
+ * @packet_opcode_type: Packet header opcode in the packet header
+ * this opcode defines, packet is init packet or
+ * update packet
+ *
+ */
+struct cam_isp_prepare_hw_update_data {
+ uint32_t packet_opcode_type;
+};
+
/**
* struct cam_isp_hw_sof_event_data - Event payload for CAM_HW_EVENT_SOF
diff --git a/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/ife_csid_hw/cam_ife_csid_core.c b/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/ife_csid_hw/cam_ife_csid_core.c
index 9a368cf..daf515a 100644
--- a/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/ife_csid_hw/cam_ife_csid_core.c
+++ b/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/ife_csid_hw/cam_ife_csid_core.c
@@ -287,7 +287,7 @@ static int cam_ife_csid_get_format_ipp(
static int cam_ife_csid_cid_get(struct cam_ife_csid_hw *csid_hw,
struct cam_isp_resource_node **res, int32_t vc, uint32_t dt,
- uint32_t res_type)
+ uint32_t res_type, int pixel_count)
{
int rc = 0;
struct cam_ife_csid_cid_data *cid_data;
@@ -305,7 +305,8 @@ static int cam_ife_csid_cid_get(struct cam_ife_csid_hw *csid_hw,
break;
}
} else {
- if (cid_data->vc == vc && cid_data->dt == dt) {
+ if (cid_data->vc == vc && cid_data->dt == dt &&
+ cid_data->pixel_count == pixel_count) {
cid_data->cnt++;
*res = &csid_hw->cid_res[i];
break;
@@ -329,6 +330,7 @@ static int cam_ife_csid_cid_get(struct cam_ife_csid_hw *csid_hw,
cid_data->vc = vc;
cid_data->dt = dt;
cid_data->cnt = 1;
+ cid_data->pixel_count = pixel_count;
csid_hw->cid_res[j].res_state =
CAM_ISP_RESOURCE_STATE_RESERVED;
*res = &csid_hw->cid_res[j];
@@ -568,6 +570,7 @@ static int cam_ife_csid_cid_reserve(struct cam_ife_csid_hw *csid_hw,
struct cam_csid_hw_reserve_resource_args *cid_reserv)
{
int rc = 0;
+ uint32_t i;
struct cam_ife_csid_cid_data *cid_data;
CAM_DBG(CAM_ISP,
@@ -725,6 +728,7 @@ static int cam_ife_csid_cid_reserve(struct cam_ife_csid_hw *csid_hw,
cid_data->vc = cid_reserv->in_port->vc;
cid_data->dt = cid_reserv->in_port->dt;
cid_data->cnt = 1;
+ cid_data->pixel_count = cid_reserv->pixel_count;
cid_reserv->node_res = &csid_hw->cid_res[0];
csid_hw->csi2_reserve_cnt++;
@@ -733,9 +737,27 @@ static int cam_ife_csid_cid_reserve(struct cam_ife_csid_hw *csid_hw,
csid_hw->hw_intf->hw_idx,
cid_reserv->node_res->res_id);
} else {
- rc = cam_ife_csid_cid_get(csid_hw, &cid_reserv->node_res,
- cid_reserv->in_port->vc, cid_reserv->in_port->dt,
- cid_reserv->in_port->res_type);
+ if (cid_reserv->pixel_count > 0) {
+ for (i = 0; i < CAM_IFE_CSID_CID_RES_MAX; i++) {
+ cid_data = (struct cam_ife_csid_cid_data *)
+ csid_hw->cid_res[i].res_priv;
+ if ((csid_hw->cid_res[i].res_state >=
+ CAM_ISP_RESOURCE_STATE_RESERVED) &&
+ cid_data->pixel_count > 0) {
+ CAM_DBG(CAM_ISP,
+ "CSID:%d IPP resource is full",
+ csid_hw->hw_intf->hw_idx);
+ rc = -EINVAL;
+ goto end;
+ }
+ }
+ }
+
+ rc = cam_ife_csid_cid_get(csid_hw,
+ &cid_reserv->node_res,
+ cid_reserv->in_port->vc,
+ cid_reserv->in_port->dt,
+ cid_reserv->in_port->res_type,
+ cid_reserv->pixel_count);
/* if success then increment the reserve count */
if (!rc) {
if (csid_hw->csi2_reserve_cnt == UINT_MAX) {
diff --git a/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/ife_csid_hw/cam_ife_csid_core.h b/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/ife_csid_hw/cam_ife_csid_core.h
index deef41f..b400d14 100644
--- a/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/ife_csid_hw/cam_ife_csid_core.h
+++ b/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/ife_csid_hw/cam_ife_csid_core.h
@@ -356,10 +356,11 @@ struct cam_ife_csid_tpg_cfg {
/**
* struct cam_ife_csid_cid_data- cid configuration private data
*
- * @vc: virtual channel
- * @dt: Data type
- * @cnt: cid resource reference count.
- * @tpg_set: tpg used for this cid resource
+ * @vc: Virtual channel
+ * @dt: Data type
+ * @cnt: Cid resource reference count.
+ * @tpg_set: Tpg used for this cid resource
+ * @pixel_count: Pixel resource connected
*
*/
struct cam_ife_csid_cid_data {
@@ -367,6 +368,7 @@ struct cam_ife_csid_cid_data {
uint32_t dt;
uint32_t cnt;
uint32_t tpg_set;
+ int pixel_count;
};
@@ -392,6 +394,7 @@ struct cam_ife_csid_cid_data {
* for RDI, set mode to none
* @master_idx: For Slave reservation, Give master IFE instance Index.
* Slave will synchronize with master Start and stop operations
+ * @clk_rate: Clock rate
*
*/
struct cam_ife_csid_path_cfg {
@@ -409,6 +412,7 @@ struct cam_ife_csid_path_cfg {
uint32_t height;
enum cam_isp_hw_sync_mode sync_mode;
uint32_t master_idx;
+ uint64_t clk_rate;
};
/**
@@ -432,6 +436,7 @@ struct cam_ife_csid_path_cfg {
* @csid_rdin_reset_complete: rdi n completion
* @csid_debug: csid debug information to enable the SOT, EOT,
* SOF, EOF, measure etc in the csid hw
+ * @clk_rate: Clock rate
*
*/
struct cam_ife_csid_hw {
@@ -452,6 +457,7 @@ struct cam_ife_csid_hw {
struct completion csid_ipp_complete;
struct completion csid_rdin_complete[CAM_IFE_CSID_RDI_MAX];
uint64_t csid_debug;
+ uint64_t clk_rate;
};
int cam_ife_csid_hw_probe_init(struct cam_hw_intf *csid_hw_intf,
diff --git a/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/include/cam_ife_csid_hw_intf.h b/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/include/cam_ife_csid_hw_intf.h
index 37e0ce3..df97bd6 100644
--- a/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/include/cam_ife_csid_hw_intf.h
+++ b/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/include/cam_ife_csid_hw_intf.h
@@ -61,20 +61,21 @@ struct cam_ife_csid_hw_caps {
/**
* struct cam_csid_hw_reserve_resource- hw reserve
- * @res_type : reource type CID or PATH
- * if type is CID, then res_id is not required,
- * if type is path then res id need to be filled
- * @res_id : res id to be reserved
- * @in_port : input port resource info
- * @out_port: output port resource info, used for RDI path only
- * @sync_mode : Sync mode
- * Sync mode could be master, slave or none
- * @master_idx: master device index to be configured in the slave path
- * for master path, this value is not required.
- * only slave need to configure the master index value
- * @cid: cid (DT_ID) value for path, this is applicable for CSID path
- * reserve
- * @node_res : reserved resource structure pointer
+ * @res_type : Resource type CID or PATH
+ * if type is CID, then res_id is not required,
+ * if type is path then res id need to be filled
+ * @res_id : Resource id to be reserved
+ * @in_port : Input port resource info
+ * @out_port: Output port resource info, used for RDI path only
+ * @sync_mode: Sync mode
+ * Sync mode could be master, slave or none
+ * @master_idx: Master device index to be configured in the slave path
+ * for master path, this value is not required.
+ * only slave need to configure the master index value
+ * @cid: cid (DT_ID) value for path, this is applicable for CSID path
+ * reserve
+ * @node_res : Reserved resource structure pointer
+ * @pixel_count: Number of pixel resources
*
*/
struct cam_csid_hw_reserve_resource_args {
@@ -86,6 +87,7 @@ struct cam_csid_hw_reserve_resource_args {
uint32_t master_idx;
uint32_t cid;
struct cam_isp_resource_node *node_res;
+ int pixel_count;
};
/**
diff --git a/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/include/cam_isp_hw.h b/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/include/cam_isp_hw.h
index c81e6db..257a5ac 100644
--- a/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/include/cam_isp_hw.h
+++ b/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/include/cam_isp_hw.h
@@ -90,6 +90,8 @@ enum cam_isp_hw_cmd_type {
CAM_ISP_HW_CMD_GET_HFR_UPDATE,
CAM_ISP_HW_CMD_GET_SECURE_MODE,
CAM_ISP_HW_CMD_STRIPE_UPDATE,
+ CAM_ISP_HW_CMD_CLOCK_UPDATE,
+ CAM_ISP_HW_CMD_BW_UPDATE,
CAM_ISP_HW_CMD_MAX,
};
@@ -110,6 +112,7 @@ enum cam_isp_hw_cmd_type {
* @tasklet_info: Tasklet structure that will be used to
* schedule IRQ events related to this resource
* @irq_handle: handle returned on subscribing for IRQ event
+ * @rdi_only_ctx: resource belongs to an rdi only context or not
* @init: function pointer to init the HW resource
* @deinit: function pointer to deinit the HW resource
* @start: function pointer to start the HW resource
@@ -129,6 +132,7 @@ struct cam_isp_resource_node {
void *cdm_ops;
void *tasklet_info;
int irq_handle;
+ int rdi_only_ctx;
int (*init)(struct cam_isp_resource_node *rsrc_node,
void *init_args, uint32_t arg_size);
@@ -192,6 +196,8 @@ struct cam_isp_hw_get_cmd_update {
void *data;
struct cam_isp_hw_get_wm_update *wm_update;
struct cam_isp_port_hfr_config *hfr_update;
+ struct cam_isp_clock_config *clock_update;
+ struct cam_isp_bw_config *bw_update;
};
};
diff --git a/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/include/cam_vfe_hw_intf.h b/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/include/cam_vfe_hw_intf.h
index e8a5de5..b771ec6 100644
--- a/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/include/cam_vfe_hw_intf.h
+++ b/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/include/cam_vfe_hw_intf.h
@@ -70,6 +70,12 @@ enum cam_vfe_bus_irq_regs {
CAM_IFE_BUS_IRQ_REGISTERS_MAX,
};
+enum cam_vfe_reset_type {
+ CAM_VFE_HW_RESET_HW_AND_REG,
+ CAM_VFE_HW_RESET_HW,
+ CAM_VFE_HW_RESET_MAX,
+};
+
/*
* struct cam_vfe_hw_get_hw_cap:
*
@@ -155,6 +161,31 @@ struct cam_vfe_acquire_args {
};
/*
+ * struct cam_vfe_clock_update_args:
+ *
+ * @node_res: Resource whose clock rate is being updated
+ * @clk_rate: Clock rate requested
+ */
+struct cam_vfe_clock_update_args {
+ struct cam_isp_resource_node *node_res;
+ uint64_t clk_rate;
+};
+
+/*
+ * struct cam_vfe_bw_update_args:
+ *
+ * @node_res: Resource whose bandwidth vote is being updated
+ * @camnoc_bw_bytes: Bandwidth vote request for CAMNOC
+ * @external_bw_bytes: Bandwidth vote request from CAMNOC
+ * out to the rest of the path-to-DDR
+ */
+struct cam_vfe_bw_update_args {
+ struct cam_isp_resource_node *node_res;
+ uint64_t camnoc_bw_bytes;
+ uint64_t external_bw_bytes;
+};
+
+/*
* struct cam_vfe_top_irq_evt_payload:
*
* @Brief: This structure is used to save payload for IRQ
diff --git a/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/vfe_hw/cam_vfe_core.c b/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/vfe_hw/cam_vfe_core.c
index 7a26370..187aeaf 100644
--- a/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/vfe_hw/cam_vfe_core.c
+++ b/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/vfe_hw/cam_vfe_core.c
@@ -25,7 +25,6 @@
#include "cam_debug_util.h"
static const char drv_name[] = "vfe";
-
static uint32_t irq_reg_offset[CAM_IFE_IRQ_REGISTERS_MAX] = {
0x0000006C,
0x00000070,
@@ -34,6 +33,11 @@ static uint32_t irq_reg_offset[CAM_IFE_IRQ_REGISTERS_MAX] = {
static uint32_t camif_irq_reg_mask[CAM_IFE_IRQ_REGISTERS_MAX] = {
0x0003FD1F,
+ 0x00000000,
+};
+
+static uint32_t camif_irq_err_reg_mask[CAM_IFE_IRQ_REGISTERS_MAX] = {
+ 0x00000000,
0x0FFF7EBC,
};
@@ -83,6 +87,7 @@ int cam_vfe_put_evt_payload(void *core_info,
}
spin_lock_irqsave(&vfe_core_info->spin_lock, flags);
+ (*evt_payload)->error_type = 0;
list_add_tail(&(*evt_payload)->list, &vfe_core_info->free_payload_list);
spin_unlock_irqrestore(&vfe_core_info->spin_lock, flags);
@@ -143,6 +148,60 @@ int cam_vfe_reset_irq_top_half(uint32_t evt_id,
return rc;
}
+static int cam_vfe_irq_err_top_half(uint32_t evt_id,
+ struct cam_irq_th_payload *th_payload)
+{
+ int32_t rc;
+ int i;
+ struct cam_vfe_irq_handler_priv *handler_priv;
+ struct cam_vfe_top_irq_evt_payload *evt_payload;
+ struct cam_vfe_hw_core_info *core_info;
+
+ CAM_DBG(CAM_ISP, "IRQ status_0 = %x, IRQ status_1 = %x",
+ th_payload->evt_status_arr[0], th_payload->evt_status_arr[1]);
+
+ handler_priv = th_payload->handler_priv;
+ core_info = handler_priv->core_info;
+ /*
+ * need to handle overflow condition here, otherwise irq storm
+ * will block everything
+ */
+
+ if (th_payload->evt_status_arr[1]) {
+ CAM_ERR(CAM_ISP, "IRQ status_1: %x, Masking all interrupts",
+ th_payload->evt_status_arr[1]);
+ cam_irq_controller_disable_irq(core_info->vfe_irq_controller,
+ core_info->irq_err_handle);
+ }
+
+ rc = cam_vfe_get_evt_payload(handler_priv->core_info, &evt_payload);
+ if (rc) {
+ CAM_ERR_RATE_LIMIT(CAM_ISP,
+ "No tasklet_cmd is free in queue\n");
+ return rc;
+ }
+
+ cam_isp_hw_get_timestamp(&evt_payload->ts);
+
+ evt_payload->core_index = handler_priv->core_index;
+ evt_payload->core_info = handler_priv->core_info;
+ evt_payload->evt_id = evt_id;
+
+ for (i = 0; i < th_payload->num_registers; i++)
+ evt_payload->irq_reg_val[i] = th_payload->evt_status_arr[i];
+
+ for (; i < CAM_IFE_IRQ_REGISTERS_MAX; i++) {
+ evt_payload->irq_reg_val[i] = cam_io_r(handler_priv->mem_base +
+ irq_reg_offset[i]);
+ }
+
+ CAM_DBG(CAM_ISP, "Violation status = %x", evt_payload->irq_reg_val[2]);
+
+ th_payload->evt_payload_priv = evt_payload;
+
+ return rc;
+}
+
int cam_vfe_init_hw(void *hw_priv, void *init_hw_args, uint32_t arg_size)
{
struct cam_hw_info *vfe_hw = hw_priv;
@@ -150,6 +209,8 @@ int cam_vfe_init_hw(void *hw_priv, void *init_hw_args, uint32_t arg_size)
struct cam_vfe_hw_core_info *core_info = NULL;
struct cam_isp_resource_node *isp_res = NULL;
int rc = 0;
+ uint32_t reset_core_args =
+ CAM_VFE_HW_RESET_HW_AND_REG;
CAM_DBG(CAM_ISP, "Enter");
if (!hw_priv) {
@@ -190,7 +251,7 @@ int cam_vfe_init_hw(void *hw_priv, void *init_hw_args, uint32_t arg_size)
CAM_DBG(CAM_ISP, "Enable soc done");
/* Do HW Reset */
- rc = cam_vfe_reset(hw_priv, NULL, 0);
+ rc = cam_vfe_reset(hw_priv, &reset_core_args, sizeof(uint32_t));
if (rc) {
CAM_ERR(CAM_ISP, "Reset Failed rc=%d", rc);
goto deinint_vfe_res;
@@ -203,7 +264,9 @@ int cam_vfe_init_hw(void *hw_priv, void *init_hw_args, uint32_t arg_size)
goto deinint_vfe_res;
}
- return 0;
+ vfe_hw->hw_state = CAM_HW_STATE_POWER_UP;
+ return rc;
+
deinint_vfe_res:
if (isp_res && isp_res->deinit)
isp_res->deinit(isp_res, NULL, 0);
@@ -306,7 +369,8 @@ int cam_vfe_reset(void *hw_priv, void *reset_core_args, uint32_t arg_size)
reinit_completion(&vfe_hw->hw_complete);
CAM_DBG(CAM_ISP, "calling RESET");
- core_info->vfe_top->hw_ops.reset(core_info->vfe_top->top_priv, NULL, 0);
+ core_info->vfe_top->hw_ops.reset(core_info->vfe_top->top_priv,
+ reset_core_args, arg_size);
CAM_DBG(CAM_ISP, "waiting for vfe reset complete");
/* Wait for Completion or Timeout of 500ms */
rc = wait_for_completion_timeout(&vfe_hw->hw_complete, 500);
@@ -333,20 +397,37 @@ void cam_isp_hw_get_timestamp(struct cam_isp_timestamp *time_stamp)
time_stamp->mono_time.tv_usec = ts.tv_nsec/1000;
}
-
-int cam_vfe_irq_top_half(uint32_t evt_id,
+static int cam_vfe_irq_top_half(uint32_t evt_id,
struct cam_irq_th_payload *th_payload)
{
int32_t rc;
int i;
struct cam_vfe_irq_handler_priv *handler_priv;
struct cam_vfe_top_irq_evt_payload *evt_payload;
+ struct cam_vfe_hw_core_info *core_info;
handler_priv = th_payload->handler_priv;
CAM_DBG(CAM_ISP, "IRQ status_0 = %x", th_payload->evt_status_arr[0]);
CAM_DBG(CAM_ISP, "IRQ status_1 = %x", th_payload->evt_status_arr[1]);
+ /*
+ * Handle the non-recoverable condition here; otherwise an IRQ storm
+ * would block everything.
+ */
+ if (th_payload->evt_status_arr[0] & 0x3FC00) {
+ CAM_ERR(CAM_ISP,
+ "Encountered Error Irq_status0=0x%x Status1=0x%x",
+ th_payload->evt_status_arr[0],
+ th_payload->evt_status_arr[1]);
+ CAM_ERR(CAM_ISP,
+ "Stopping further IRQ processing from this HW index=%d",
+ handler_priv->core_index);
+ cam_io_w(0, handler_priv->mem_base + 0x60);
+ cam_io_w(0, handler_priv->mem_base + 0x5C);
+ return 0;
+ }
+
rc = cam_vfe_get_evt_payload(handler_priv->core_info, &evt_payload);
if (rc) {
CAM_ERR_RATE_LIMIT(CAM_ISP,
@@ -354,6 +435,7 @@ int cam_vfe_irq_top_half(uint32_t evt_id,
return rc;
}
+ core_info = handler_priv->core_info;
cam_isp_hw_get_timestamp(&evt_payload->ts);
evt_payload->core_index = handler_priv->core_index;
@@ -369,22 +451,6 @@ int cam_vfe_irq_top_half(uint32_t evt_id,
}
CAM_DBG(CAM_ISP, "Violation status = %x", evt_payload->irq_reg_val[2]);
- /*
- * need to handle overflow condition here, otherwise irq storm
- * will block everything.
- */
- if (evt_payload->irq_reg_val[1]) {
- CAM_ERR(CAM_ISP,
- "Encountered Error Irq_status1=0x%x. Stopping further IRQ processing from this HW",
- evt_payload->irq_reg_val[1]);
- CAM_ERR(CAM_ISP, "Violation status = %x",
- evt_payload->irq_reg_val[2]);
- cam_io_w(0, handler_priv->mem_base + 0x60);
- cam_io_w(0, handler_priv->mem_base + 0x5C);
-
- evt_payload->error_type = CAM_ISP_HW_ERROR_OVERFLOW;
- }
-
th_payload->evt_payload_priv = evt_payload;
CAM_DBG(CAM_ISP, "Exit");
@@ -465,7 +531,7 @@ int cam_vfe_start(void *hw_priv, void *start_args, uint32_t arg_size)
struct cam_vfe_hw_core_info *core_info = NULL;
struct cam_hw_info *vfe_hw = hw_priv;
struct cam_isp_resource_node *isp_res;
- int rc = -ENODEV;
+ int rc = 0;
if (!hw_priv || !start_args ||
(arg_size != sizeof(struct cam_isp_resource_node))) {
@@ -475,35 +541,72 @@ int cam_vfe_start(void *hw_priv, void *start_args, uint32_t arg_size)
core_info = (struct cam_vfe_hw_core_info *)vfe_hw->core_info;
isp_res = (struct cam_isp_resource_node *)start_args;
+ core_info->tasklet_info = isp_res->tasklet_info;
mutex_lock(&vfe_hw->hw_mutex);
if (isp_res->res_type == CAM_ISP_RESOURCE_VFE_IN) {
- if (isp_res->res_id == CAM_ISP_HW_VFE_IN_CAMIF)
- isp_res->irq_handle = cam_irq_controller_subscribe_irq(
- core_info->vfe_irq_controller,
- CAM_IRQ_PRIORITY_1,
- camif_irq_reg_mask, &core_info->irq_payload,
- cam_vfe_irq_top_half, cam_ife_mgr_do_tasklet,
- isp_res->tasklet_info, cam_tasklet_enqueue_cmd);
- else
- isp_res->irq_handle = cam_irq_controller_subscribe_irq(
- core_info->vfe_irq_controller,
- CAM_IRQ_PRIORITY_1,
- rdi_irq_reg_mask, &core_info->irq_payload,
- cam_vfe_irq_top_half, cam_ife_mgr_do_tasklet,
- isp_res->tasklet_info, cam_tasklet_enqueue_cmd);
+ if (isp_res->res_id == CAM_ISP_HW_VFE_IN_CAMIF) {
+ isp_res->irq_handle =
+ cam_irq_controller_subscribe_irq(
+ core_info->vfe_irq_controller,
+ CAM_IRQ_PRIORITY_1,
+ camif_irq_reg_mask,
+ &core_info->irq_payload,
+ cam_vfe_irq_top_half,
+ cam_ife_mgr_do_tasklet,
+ isp_res->tasklet_info,
+ cam_tasklet_enqueue_cmd);
+ if (isp_res->irq_handle < 1)
+ rc = -ENOMEM;
+ } else if (isp_res->rdi_only_ctx) {
+ isp_res->irq_handle =
+ cam_irq_controller_subscribe_irq(
+ core_info->vfe_irq_controller,
+ CAM_IRQ_PRIORITY_1,
+ rdi_irq_reg_mask,
+ &core_info->irq_payload,
+ cam_vfe_irq_top_half,
+ cam_ife_mgr_do_tasklet,
+ isp_res->tasklet_info,
+ cam_tasklet_enqueue_cmd);
+ if (isp_res->irq_handle < 1)
+ rc = -ENOMEM;
+ }
- if (isp_res->irq_handle > 0)
+ if (rc == 0) {
rc = core_info->vfe_top->hw_ops.start(
core_info->vfe_top->top_priv, isp_res,
sizeof(struct cam_isp_resource_node));
- else
+ if (rc)
+ CAM_ERR(CAM_ISP, "Start failed. type:%d",
+ isp_res->res_type);
+ } else {
CAM_ERR(CAM_ISP,
"Error! subscribe irq controller failed");
+ }
} else if (isp_res->res_type == CAM_ISP_RESOURCE_VFE_OUT) {
rc = core_info->vfe_bus->hw_ops.start(isp_res, NULL, 0);
} else {
CAM_ERR(CAM_ISP, "Invalid res type:%d", isp_res->res_type);
+ rc = -EFAULT;
+ }
+
+ if (!core_info->irq_err_handle) {
+ core_info->irq_err_handle =
+ cam_irq_controller_subscribe_irq(
+ core_info->vfe_irq_controller,
+ CAM_IRQ_PRIORITY_0,
+ camif_irq_err_reg_mask,
+ &core_info->irq_payload,
+ cam_vfe_irq_err_top_half,
+ cam_ife_mgr_do_tasklet,
+ core_info->tasklet_info,
+ cam_tasklet_enqueue_cmd);
+ if (core_info->irq_err_handle < 1) {
+ CAM_ERR(CAM_ISP, "Failed to subscribe error IRQ handler");
+ rc = -ENOMEM;
+ core_info->irq_err_handle = 0;
+ }
}
mutex_unlock(&vfe_hw->hw_mutex);
@@ -534,12 +637,20 @@ int cam_vfe_stop(void *hw_priv, void *stop_args, uint32_t arg_size)
rc = core_info->vfe_top->hw_ops.stop(
core_info->vfe_top->top_priv, isp_res,
sizeof(struct cam_isp_resource_node));
+
} else if (isp_res->res_type == CAM_ISP_RESOURCE_VFE_OUT) {
rc = core_info->vfe_bus->hw_ops.stop(isp_res, NULL, 0);
} else {
CAM_ERR(CAM_ISP, "Invalid res type:%d", isp_res->res_type);
}
+ if (core_info->irq_err_handle) {
+ cam_irq_controller_unsubscribe_irq(
+ core_info->vfe_irq_controller,
+ core_info->irq_err_handle);
+ core_info->irq_err_handle = 0;
+ }
+
mutex_unlock(&vfe_hw->hw_mutex);
return rc;
@@ -576,10 +687,11 @@ int cam_vfe_process_cmd(void *hw_priv, uint32_t cmd_type,
switch (cmd_type) {
case CAM_ISP_HW_CMD_GET_CHANGE_BASE:
case CAM_ISP_HW_CMD_GET_REG_UPDATE:
+ case CAM_ISP_HW_CMD_CLOCK_UPDATE:
+ case CAM_ISP_HW_CMD_BW_UPDATE:
rc = core_info->vfe_top->hw_ops.process_cmd(
core_info->vfe_top->top_priv, cmd_type, cmd_args,
arg_size);
-
break;
case CAM_ISP_HW_CMD_GET_BUF_UPDATE:
case CAM_ISP_HW_CMD_GET_HFR_UPDATE:
@@ -699,4 +811,3 @@ int cam_vfe_core_deinit(struct cam_vfe_hw_core_info *core_info,
return rc;
}
-
diff --git a/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/vfe_hw/cam_vfe_core.h b/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/vfe_hw/cam_vfe_core.h
index ee29e1cf..0674a6ad 100644
--- a/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/vfe_hw/cam_vfe_core.h
+++ b/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/vfe_hw/cam_vfe_core.h
@@ -50,12 +50,13 @@ struct cam_vfe_hw_core_info {
void *vfe_irq_controller;
struct cam_vfe_top *vfe_top;
struct cam_vfe_bus *vfe_bus;
-
+ void *tasklet_info;
struct cam_vfe_top_irq_evt_payload evt_payload[CAM_VFE_EVT_MAX];
struct list_head free_payload_list;
struct cam_vfe_irq_handler_priv irq_payload;
uint32_t cpas_handle;
int irq_handle;
+ int irq_err_handle;
spinlock_t spin_lock;
};
diff --git a/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/vfe_hw/cam_vfe_soc.c b/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/vfe_hw/cam_vfe_soc.c
index ed5e120..0f93664 100644
--- a/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/vfe_hw/cam_vfe_soc.c
+++ b/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/vfe_hw/cam_vfe_soc.c
@@ -15,6 +15,30 @@
#include "cam_vfe_soc.h"
#include "cam_debug_util.h"
+static bool cam_vfe_cpas_cb(uint32_t client_handle, void *userdata,
+ struct cam_cpas_irq_data *irq_data)
+{
+ bool error_handled = false;
+
+ if (!irq_data)
+ return error_handled;
+
+ switch (irq_data->irq_type) {
+ case CAM_CAMNOC_IRQ_IFE02_UBWC_ENCODE_ERROR:
+ case CAM_CAMNOC_IRQ_IFE13_UBWC_ENCODE_ERROR:
+ CAM_ERR_RATE_LIMIT(CAM_ISP,
+ "IFE UBWC Encode error type=%d status=%x",
+ irq_data->irq_type,
+ irq_data->u.enc_err.encerr_status.value);
+ error_handled = true;
+ break;
+ default:
+ break;
+ }
+
+ return error_handled;
+}
+
static int cam_vfe_get_dt_properties(struct cam_hw_soc_info *soc_info)
{
int rc = 0;
@@ -95,6 +119,8 @@ int cam_vfe_init_soc_resources(struct cam_hw_soc_info *soc_info,
CAM_HW_IDENTIFIER_LENGTH);
cpas_register_param.cell_index = soc_info->index;
cpas_register_param.dev = soc_info->dev;
+ cpas_register_param.cam_cpas_client_cb = cam_vfe_cpas_cb;
+ cpas_register_param.userdata = soc_info;
rc = cam_cpas_register_client(&cpas_register_param);
if (rc) {
CAM_ERR(CAM_ISP, "CPAS registration failed rc=%d", rc);
diff --git a/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/vfe_hw/vfe170/cam_vfe170.h b/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/vfe_hw/vfe170/cam_vfe170.h
index fb6ea6c..a4ba2e1 100644
--- a/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/vfe_hw/vfe170/cam_vfe170.h
+++ b/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/vfe_hw/vfe170/cam_vfe170.h
@@ -77,6 +77,8 @@ static struct cam_vfe_camif_reg_data vfe_170_camif_reg_data = {
.epoch0_irq_mask = 0x00000004,
.reg_update_irq_mask = 0x00000010,
.eof_irq_mask = 0x00000002,
+ .error_irq_mask0 = 0x0003FC00,
+ .error_irq_mask1 = 0x0FFF7E80,
};
struct cam_vfe_top_ver2_reg_offset_module_ctrl lens_170_reg = {
diff --git a/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/vfe_hw/vfe_bus/cam_vfe_bus_ver2.c b/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/vfe_hw/vfe_bus/cam_vfe_bus_ver2.c
index 13c477d..a2fbbd7 100644
--- a/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/vfe_hw/vfe_bus/cam_vfe_bus_ver2.c
+++ b/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/vfe_hw/vfe_bus/cam_vfe_bus_ver2.c
@@ -263,7 +263,7 @@ static int cam_vfe_bus_put_evt_payload(void *core_info,
CAM_ERR(CAM_ISP, "No payload to put");
return -EINVAL;
}
-
+ (*evt_payload)->error_type = 0;
ife_irq_regs = (*evt_payload)->irq_reg_val;
status_reg0 = ife_irq_regs[CAM_IFE_IRQ_BUS_REG_STATUS0];
status_reg1 = ife_irq_regs[CAM_IFE_IRQ_BUS_REG_STATUS1];
@@ -992,6 +992,9 @@ static int cam_vfe_bus_acquire_wm(
rsrc_data->width = rsrc_data->width * 2;
rsrc_data->stride = rsrc_data->width;
rsrc_data->en_cfg = 0x1;
+
+ /* LSB aligned */
+ rsrc_data->pack_fmt |= 0x10;
} else {
/* Write master 5-6 DS ports, 10 PDAF */
uint32_t align_width;
@@ -1143,6 +1146,8 @@ static int cam_vfe_bus_stop_wm(struct cam_isp_resource_node *wm_res)
common_data->mem_base + common_data->common_reg->sw_reset);
wm_res->res_state = CAM_ISP_RESOURCE_STATE_RESERVED;
+ rsrc_data->init_cfg_done = false;
+ rsrc_data->hfr_cfg_done = false;
return rc;
}
@@ -2150,6 +2155,7 @@ static int cam_vfe_bus_stop_vfe_out(
if (vfe_out->res_state == CAM_ISP_RESOURCE_STATE_AVAILABLE ||
vfe_out->res_state == CAM_ISP_RESOURCE_STATE_RESERVED) {
+ CAM_DBG(CAM_ISP, "vfe_out res_state is %d", vfe_out->res_state);
return rc;
}
@@ -2297,12 +2303,15 @@ static int cam_vfe_bus_error_irq_top_half(uint32_t evt_id,
struct cam_irq_th_payload *th_payload)
{
int i = 0;
+ struct cam_vfe_bus_ver2_priv *bus_priv = th_payload->handler_priv;
CAM_ERR_RATE_LIMIT(CAM_ISP, "Bus Err IRQ");
for (i = 0; i < th_payload->num_registers; i++) {
CAM_ERR_RATE_LIMIT(CAM_ISP, "IRQ_Status%d: 0x%x", i,
th_payload->evt_status_arr[i]);
}
+ cam_irq_controller_disable_irq(bus_priv->common_data.bus_irq_controller,
+ bus_priv->error_irq_handle);
/* Returning error stops from enqueuing bottom half */
return -EFAULT;
@@ -2380,49 +2389,6 @@ static int cam_vfe_bus_update_wm(void *priv, void *cmd_args,
wm_data->index, reg_val_pair[j-1]);
}
- if (wm_data->framedrop_pattern != io_cfg->framedrop_pattern ||
- !wm_data->hfr_cfg_done) {
- CAM_VFE_ADD_REG_VAL_PAIR(reg_val_pair, j,
- wm_data->hw_regs->framedrop_pattern,
- io_cfg->framedrop_pattern);
- wm_data->framedrop_pattern = io_cfg->framedrop_pattern;
- CAM_DBG(CAM_ISP, "WM %d framedrop pattern 0x%x",
- wm_data->index, reg_val_pair[j-1]);
- }
-
-
- if (wm_data->framedrop_period != io_cfg->framedrop_period ||
- !wm_data->hfr_cfg_done) {
- CAM_VFE_ADD_REG_VAL_PAIR(reg_val_pair, j,
- wm_data->hw_regs->framedrop_period,
- io_cfg->framedrop_period);
- wm_data->framedrop_period = io_cfg->framedrop_period;
- CAM_DBG(CAM_ISP, "WM %d framedrop period 0x%x",
- wm_data->index, reg_val_pair[j-1]);
- }
-
- if (wm_data->irq_subsample_period != io_cfg->subsample_period
- || !wm_data->hfr_cfg_done) {
- CAM_VFE_ADD_REG_VAL_PAIR(reg_val_pair, j,
- wm_data->hw_regs->irq_subsample_period,
- io_cfg->subsample_period);
- wm_data->irq_subsample_period =
- io_cfg->subsample_period;
- CAM_DBG(CAM_ISP, "WM %d irq subsample period 0x%x",
- wm_data->index, reg_val_pair[j-1]);
- }
-
- if (wm_data->irq_subsample_pattern != io_cfg->subsample_pattern
- || !wm_data->hfr_cfg_done) {
- CAM_VFE_ADD_REG_VAL_PAIR(reg_val_pair, j,
- wm_data->hw_regs->irq_subsample_pattern,
- io_cfg->subsample_pattern);
- wm_data->irq_subsample_pattern =
- io_cfg->subsample_pattern;
- CAM_DBG(CAM_ISP, "WM %d irq subsample pattern 0x%x",
- wm_data->index, reg_val_pair[j-1]);
- }
-
if (wm_data->en_ubwc) {
if (!wm_data->hw_regs->ubwc_regs) {
CAM_ERR(CAM_ISP,
@@ -3112,4 +3078,3 @@ int cam_vfe_bus_ver2_deinit(
return rc;
}
-
diff --git a/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/vfe_hw/vfe_top/Makefile b/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/vfe_hw/vfe_top/Makefile
index ac8b497..9a2c12c 100644
--- a/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/vfe_hw/vfe_top/Makefile
+++ b/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/vfe_hw/vfe_top/Makefile
@@ -1,11 +1,13 @@
ccflags-y += -Idrivers/media/platform/msm/camera/cam_utils/
ccflags-y += -Idrivers/media/platform/msm/camera/cam_cdm/
ccflags-y += -Idrivers/media/platform/msm/camera/cam_core/
+ccflags-y += -Idrivers/media/platform/msm/camera/cam_cpas/include
ccflags-y += -Idrivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/include
ccflags-y += -Idrivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/hw_utils/irq_controller
ccflags-y += -Idrivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/include
ccflags-y += -Idrivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/vfe_hw
ccflags-y += -Idrivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/vfe_hw/include
ccflags-y += -Idrivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/vfe_hw/vfe_top/include
+ccflags-y += -Idrivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/vfe_hw
obj-$(CONFIG_SPECTRA_CAMERA) += cam_vfe_top.o cam_vfe_top_ver2.o cam_vfe_camif_ver2.o cam_vfe_rdi.o
diff --git a/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/vfe_hw/vfe_top/cam_vfe_camif_ver2.c b/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/vfe_hw/vfe_top/cam_vfe_camif_ver2.c
index cd90b57..9848454 100644
--- a/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/vfe_hw/vfe_top/cam_vfe_camif_ver2.c
+++ b/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/vfe_hw/vfe_top/cam_vfe_camif_ver2.c
@@ -329,6 +329,7 @@ static int cam_vfe_camif_handle_irq_bottom_half(void *handler_priv,
struct cam_vfe_mux_camif_data *camif_priv;
struct cam_vfe_top_irq_evt_payload *payload;
uint32_t irq_status0;
+ uint32_t irq_status1;
if (!handler_priv || !evt_payload_priv)
return ret;
@@ -337,6 +338,7 @@ static int cam_vfe_camif_handle_irq_bottom_half(void *handler_priv,
camif_priv = camif_node->res_priv;
payload = evt_payload_priv;
irq_status0 = payload->irq_reg_val[CAM_IFE_IRQ_CAMIF_REG_STATUS0];
+ irq_status1 = payload->irq_reg_val[CAM_IFE_IRQ_CAMIF_REG_STATUS1];
CAM_DBG(CAM_ISP, "event ID:%d", payload->evt_id);
CAM_DBG(CAM_ISP, "irq_status_0 = %x", irq_status0);
@@ -367,6 +369,15 @@ static int cam_vfe_camif_handle_irq_bottom_half(void *handler_priv,
ret = CAM_VFE_IRQ_STATUS_SUCCESS;
}
break;
+ case CAM_ISP_HW_EVENT_ERROR:
+ if (irq_status1 & camif_priv->reg_data->error_irq_mask1) {
+ CAM_DBG(CAM_ISP, "Received ERROR");
+ ret = CAM_ISP_HW_ERROR_OVERFLOW;
+ cam_vfe_put_evt_payload(payload->core_info, &payload);
+ } else {
+ ret = CAM_ISP_HW_ERROR_NONE;
+ }
+ break;
default:
break;
}
diff --git a/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/vfe_hw/vfe_top/cam_vfe_camif_ver2.h b/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/vfe_hw/vfe_top/cam_vfe_camif_ver2.h
index 21058ac..4a73bd7 100644
--- a/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/vfe_hw/vfe_top/cam_vfe_camif_ver2.h
+++ b/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/vfe_hw/vfe_top/cam_vfe_camif_ver2.h
@@ -61,6 +61,8 @@ struct cam_vfe_camif_reg_data {
uint32_t epoch0_irq_mask;
uint32_t reg_update_irq_mask;
uint32_t eof_irq_mask;
+ uint32_t error_irq_mask0;
+ uint32_t error_irq_mask1;
};
struct cam_vfe_camif_ver2_hw_info {
diff --git a/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/vfe_hw/vfe_top/cam_vfe_top_ver2.c b/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/vfe_hw/vfe_top/cam_vfe_top_ver2.c
index 2c35046..f166025 100644
--- a/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/vfe_hw/vfe_top/cam_vfe_top_ver2.c
+++ b/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/vfe_hw/vfe_top/cam_vfe_top_ver2.c
@@ -17,6 +17,11 @@
#include "cam_vfe_top.h"
#include "cam_vfe_top_ver2.h"
#include "cam_debug_util.h"
+#include "cam_cpas_api.h"
+#include "cam_vfe_soc.h"
+
+#define CAM_VFE_HW_RESET_HW_AND_REG_VAL 0x00003F9F
+#define CAM_VFE_HW_RESET_HW_VAL 0x00003F87
struct cam_vfe_top_ver2_common_data {
struct cam_hw_soc_info *soc_info;
@@ -26,8 +31,11 @@ struct cam_vfe_top_ver2_common_data {
struct cam_vfe_top_ver2_priv {
struct cam_vfe_top_ver2_common_data common_data;
- struct cam_vfe_camif *camif;
struct cam_isp_resource_node mux_rsrc[CAM_VFE_TOP_VER2_MUX_MAX];
+ unsigned long hw_clk_rate;
+ struct cam_axi_vote hw_axi_vote;
+ struct cam_axi_vote req_axi_vote[CAM_VFE_TOP_VER2_MUX_MAX];
+ unsigned long req_clk_rate[CAM_VFE_TOP_VER2_MUX_MAX];
};
static int cam_vfe_top_mux_get_base(struct cam_vfe_top_ver2_priv *top_priv,
@@ -77,6 +85,174 @@ static int cam_vfe_top_mux_get_base(struct cam_vfe_top_ver2_priv *top_priv,
return 0;
}
+static int cam_vfe_top_set_hw_clk_rate(
+ struct cam_vfe_top_ver2_priv *top_priv)
+{
+ struct cam_hw_soc_info *soc_info = NULL;
+ int i, rc = 0;
+ unsigned long max_clk_rate = 0;
+
+ soc_info = top_priv->common_data.soc_info;
+
+ for (i = 0; i < CAM_VFE_TOP_VER2_MUX_MAX; i++) {
+ if (top_priv->req_clk_rate[i] > max_clk_rate)
+ max_clk_rate = top_priv->req_clk_rate[i];
+ }
+ if (max_clk_rate == top_priv->hw_clk_rate)
+ return 0;
+
+ CAM_DBG(CAM_ISP, "VFE: Clock name=%s idx=%d clk=%lu",
+ soc_info->clk_name[soc_info->src_clk_idx],
+ soc_info->src_clk_idx, max_clk_rate);
+
+ rc = cam_soc_util_set_clk_rate(
+ soc_info->clk[soc_info->src_clk_idx],
+ soc_info->clk_name[soc_info->src_clk_idx],
+ max_clk_rate);
+
+ if (!rc)
+ top_priv->hw_clk_rate = max_clk_rate;
+ else
+ CAM_ERR(CAM_ISP, "Set Clock rate failed, rc=%d", rc);
+
+ return rc;
+}
+
+static int cam_vfe_top_set_axi_bw_vote(
+ struct cam_vfe_top_ver2_priv *top_priv)
+{
+ struct cam_axi_vote sum = {0, 0};
+ int i, rc = 0;
+ struct cam_hw_soc_info *soc_info =
+ top_priv->common_data.soc_info;
+ struct cam_vfe_soc_private *soc_private =
+ soc_info->soc_private;
+
+ if (!soc_private) {
+ CAM_ERR(CAM_ISP, "Error soc_private NULL");
+ return -EINVAL;
+ }
+
+ for (i = 0; i < CAM_VFE_TOP_VER2_MUX_MAX; i++) {
+ sum.uncompressed_bw +=
+ top_priv->req_axi_vote[i].uncompressed_bw;
+ sum.compressed_bw +=
+ top_priv->req_axi_vote[i].compressed_bw;
+ }
+
+ CAM_DBG(CAM_ISP, "BW Vote: u=%llu c=%llu",
+ sum.uncompressed_bw,
+ sum.compressed_bw);
+
+ if ((top_priv->hw_axi_vote.uncompressed_bw ==
+ sum.uncompressed_bw) &&
+ (top_priv->hw_axi_vote.compressed_bw ==
+ sum.compressed_bw))
+ return 0;
+
+ rc = cam_cpas_update_axi_vote(
+ soc_private->cpas_handle,
+ &sum);
+ if (!rc) {
+ top_priv->hw_axi_vote.uncompressed_bw = sum.uncompressed_bw;
+ top_priv->hw_axi_vote.compressed_bw = sum.compressed_bw;
+ } else
+ CAM_ERR(CAM_ISP, "BW request failed, rc=%d", rc);
+
+ return rc;
+}
+
+static int cam_vfe_top_clock_update(
+ struct cam_vfe_top_ver2_priv *top_priv,
+ void *cmd_args, uint32_t arg_size)
+{
+ struct cam_vfe_clock_update_args *clk_update = NULL;
+ struct cam_isp_resource_node *res = NULL;
+ struct cam_hw_info *hw_info = NULL;
+ int i, rc = 0;
+
+ clk_update =
+ (struct cam_vfe_clock_update_args *)cmd_args;
+ res = clk_update->node_res;
+
+ if (!res || !res->hw_intf->hw_priv) {
+ CAM_ERR(CAM_ISP, "Invalid input res %pK", res);
+ return -EINVAL;
+ }
+
+ hw_info = res->hw_intf->hw_priv;
+
+ if (res->res_type != CAM_ISP_RESOURCE_VFE_IN ||
+ res->res_id >= CAM_ISP_HW_VFE_IN_MAX) {
+ CAM_ERR(CAM_ISP, "VFE:%d Invalid res_type:%d res_id:%d",
+ res->hw_intf->hw_idx, res->res_type,
+ res->res_id);
+ return -EINVAL;
+ }
+
+ for (i = 0; i < CAM_VFE_TOP_VER2_MUX_MAX; i++) {
+ if (top_priv->mux_rsrc[i].res_id == res->res_id) {
+ top_priv->req_clk_rate[i] = clk_update->clk_rate;
+ break;
+ }
+ }
+
+ if (hw_info->hw_state != CAM_HW_STATE_POWER_UP) {
+ CAM_DBG(CAM_ISP, "VFE:%d Not ready to set clocks yet, hw_state:%d",
+ res->hw_intf->hw_idx,
+ hw_info->hw_state);
+ } else
+ rc = cam_vfe_top_set_hw_clk_rate(top_priv);
+
+ return rc;
+}
+
+static int cam_vfe_top_bw_update(
+ struct cam_vfe_top_ver2_priv *top_priv,
+ void *cmd_args, uint32_t arg_size)
+{
+ struct cam_vfe_bw_update_args *bw_update = NULL;
+ struct cam_isp_resource_node *res = NULL;
+ struct cam_hw_info *hw_info = NULL;
+ int rc = 0;
+ int i;
+
+ bw_update = (struct cam_vfe_bw_update_args *)cmd_args;
+ res = bw_update->node_res;
+
+ if (!res || !res->hw_intf->hw_priv)
+ return -EINVAL;
+
+ hw_info = res->hw_intf->hw_priv;
+
+ if (res->res_type != CAM_ISP_RESOURCE_VFE_IN ||
+ res->res_id >= CAM_ISP_HW_VFE_IN_MAX) {
+ CAM_ERR(CAM_ISP, "VFE:%d Invalid res_type:%d res_id:%d",
+ res->hw_intf->hw_idx, res->res_type,
+ res->res_id);
+ return -EINVAL;
+ }
+
+ for (i = 0; i < CAM_VFE_TOP_VER2_MUX_MAX; i++) {
+ if (top_priv->mux_rsrc[i].res_id == res->res_id) {
+ top_priv->req_axi_vote[i].uncompressed_bw =
+ bw_update->camnoc_bw_bytes;
+ top_priv->req_axi_vote[i].compressed_bw =
+ bw_update->external_bw_bytes;
+ break;
+ }
+ }
+
+ if (hw_info->hw_state != CAM_HW_STATE_POWER_UP) {
+ CAM_DBG(CAM_ISP, "VFE:%d Not ready to set BW yet, hw_state:%d",
+ res->hw_intf->hw_idx,
+ hw_info->hw_state);
+ } else
+ rc = cam_vfe_top_set_axi_bw_vote(top_priv);
+
+ return rc;
+}
+
static int cam_vfe_top_mux_get_reg_update(
struct cam_vfe_top_ver2_priv *top_priv,
void *cmd_args, uint32_t arg_size)
@@ -108,12 +284,24 @@ int cam_vfe_top_reset(void *device_priv,
struct cam_vfe_top_ver2_priv *top_priv = device_priv;
struct cam_hw_soc_info *soc_info = NULL;
struct cam_vfe_top_ver2_reg_offset_common *reg_common = NULL;
+ uint32_t *reset_reg_args = reset_core_args;
+ uint32_t reset_reg_val;
- if (!top_priv) {
+ if (!top_priv || !reset_reg_args) {
CAM_ERR(CAM_ISP, "Invalid arguments");
return -EINVAL;
}
+ switch (*reset_reg_args) {
+ case CAM_VFE_HW_RESET_HW_AND_REG:
+ reset_reg_val = CAM_VFE_HW_RESET_HW_AND_REG_VAL;
+ break;
+ default:
+ reset_reg_val = CAM_VFE_HW_RESET_HW_VAL;
+ break;
+ }
+
+ CAM_DBG(CAM_ISP, "reset reg value: %x", reset_reg_val);
soc_info = top_priv->common_data.soc_info;
reg_common = top_priv->common_data.common_reg;
@@ -122,7 +310,7 @@ int cam_vfe_top_reset(void *device_priv,
CAM_SOC_GET_REG_MAP_START(soc_info, VFE_CORE_BASE_IDX) + 0x5C);
/* Reset HW */
- cam_io_w_mb(0x00003F9F,
+ cam_io_w_mb(reset_reg_val,
CAM_SOC_GET_REG_MAP_START(soc_info, VFE_CORE_BASE_IDX) +
reg_common->global_reset_cmd);
@@ -215,9 +403,21 @@ int cam_vfe_top_start(void *device_priv,
return -EINVAL;
}
- top_priv = (struct cam_vfe_top_ver2_priv *)device_priv;
+ top_priv = (struct cam_vfe_top_ver2_priv *)device_priv;
mux_res = (struct cam_isp_resource_node *)start_args;
+ rc = cam_vfe_top_set_hw_clk_rate(top_priv);
+ if (rc) {
+ CAM_ERR(CAM_ISP, "set_hw_clk_rate failed, rc=%d", rc);
+ return rc;
+ }
+
+ rc = cam_vfe_top_set_axi_bw_vote(top_priv);
+ if (rc) {
+ CAM_ERR(CAM_ISP, "set_axi_bw_vote failed, rc=%d", rc);
+ return rc;
+ }
+
if (mux_res->start) {
rc = mux_res->start(mux_res);
} else {
@@ -233,7 +433,7 @@ int cam_vfe_top_stop(void *device_priv,
{
struct cam_vfe_top_ver2_priv *top_priv;
struct cam_isp_resource_node *mux_res;
- int rc = 0;
+ int i, rc = 0;
if (!device_priv || !stop_args) {
CAM_ERR(CAM_ISP, "Error! Invalid input arguments");
@@ -249,11 +449,33 @@ int cam_vfe_top_stop(void *device_priv,
rc = mux_res->stop(mux_res);
} else {
CAM_ERR(CAM_ISP, "Invalid res id:%d", mux_res->res_id);
- rc = -EINVAL;
+ return -EINVAL;
+ }
+
+ if (!rc) {
+ for (i = 0; i < CAM_VFE_TOP_VER2_MUX_MAX; i++) {
+ if (top_priv->mux_rsrc[i].res_id == mux_res->res_id) {
+ top_priv->req_clk_rate[i] = 0;
+ top_priv->req_axi_vote[i].compressed_bw = 0;
+ top_priv->req_axi_vote[i].uncompressed_bw = 0;
+ break;
+ }
+ }
+
+ rc = cam_vfe_top_set_hw_clk_rate(top_priv);
+ if (rc) {
+ CAM_ERR(CAM_ISP, "set_hw_clk_rate failed, rc=%d", rc);
+ return rc;
+ }
+
+ rc = cam_vfe_top_set_axi_bw_vote(top_priv);
+ if (rc) {
+ CAM_ERR(CAM_ISP, "set_axi_bw_vote failed, rc=%d", rc);
+ return rc;
+ }
}
return rc;
-
}
int cam_vfe_top_read(void *device_priv,
@@ -288,6 +510,14 @@ int cam_vfe_top_process_cmd(void *device_priv, uint32_t cmd_type,
rc = cam_vfe_top_mux_get_reg_update(top_priv, cmd_args,
arg_size);
break;
+ case CAM_ISP_HW_CMD_CLOCK_UPDATE:
+ rc = cam_vfe_top_clock_update(top_priv, cmd_args,
+ arg_size);
+ break;
+ case CAM_ISP_HW_CMD_BW_UPDATE:
+ rc = cam_vfe_top_bw_update(top_priv, cmd_args,
+ arg_size);
+ break;
default:
rc = -EINVAL;
CAM_ERR(CAM_ISP, "Error! Invalid cmd:%d", cmd_type);
@@ -323,12 +553,19 @@ int cam_vfe_top_ver2_init(
goto free_vfe_top;
}
vfe_top->top_priv = top_priv;
+ top_priv->hw_clk_rate = 0;
+ top_priv->hw_axi_vote.compressed_bw = 0;
+ top_priv->hw_axi_vote.uncompressed_bw = 0;
for (i = 0, j = 0; i < CAM_VFE_TOP_VER2_MUX_MAX; i++) {
top_priv->mux_rsrc[i].res_type = CAM_ISP_RESOURCE_VFE_IN;
top_priv->mux_rsrc[i].hw_intf = hw_intf;
top_priv->mux_rsrc[i].res_state =
CAM_ISP_RESOURCE_STATE_AVAILABLE;
+ top_priv->req_clk_rate[i] = 0;
+ top_priv->req_axi_vote[i].compressed_bw = 0;
+ top_priv->req_axi_vote[i].uncompressed_bw = 0;
+
if (ver2_hw_info->mux_type[i] == CAM_VFE_CAMIF_VER_2_0) {
top_priv->mux_rsrc[i].res_id =
CAM_ISP_HW_VFE_IN_CAMIF;
diff --git a/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/vfe_hw/vfe_top/include/cam_vfe_top.h b/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/vfe_hw/vfe_top/include/cam_vfe_top.h
index dbb211f..81e3b48 100644
--- a/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/vfe_hw/vfe_top/include/cam_vfe_top.h
+++ b/drivers/media/platform/msm/camera/cam_isp/isp_hw_mgr/isp_hw/vfe_hw/vfe_top/include/cam_vfe_top.h
@@ -29,21 +29,6 @@ struct cam_vfe_top {
struct cam_hw_ops hw_ops;
};
-struct cam_vfe_camif {
- void *camif_priv;
- int (*start_resource)(void *priv,
- struct cam_isp_resource_node *camif_res);
- int (*stop_resource)(void *priv,
- struct cam_isp_resource_node *camif_res);
- int (*acquire_resource)(void *priv,
- struct cam_isp_resource_node *camif_res,
- void *acquire_param);
- int (*release_resource)(void *priv,
- struct cam_isp_resource_node *camif_res);
- int (*process_cmd)(void *priv, uint32_t cmd_type, void *cmd_args,
- uint32_t arg_size);
-};
-
int cam_vfe_top_init(uint32_t top_version,
struct cam_hw_soc_info *soc_info,
struct cam_hw_intf *hw_intf,
diff --git a/drivers/media/platform/msm/camera/cam_jpeg/cam_jpeg_context.c b/drivers/media/platform/msm/camera/cam_jpeg/cam_jpeg_context.c
index 6fcd7f6..4589a22 100644
--- a/drivers/media/platform/msm/camera/cam_jpeg/cam_jpeg_context.c
+++ b/drivers/media/platform/msm/camera/cam_jpeg/cam_jpeg_context.c
@@ -63,6 +63,20 @@ static int __cam_jpeg_ctx_handle_buf_done_in_acquired(void *ctx,
return cam_context_buf_done_from_hw(ctx, done, evt_id);
}
+static int __cam_jpeg_ctx_stop_dev_in_acquired(struct cam_context *ctx,
+ struct cam_start_stop_dev_cmd *cmd)
+{
+ int rc;
+
+ rc = cam_context_stop_dev_to_hw(ctx);
+ if (rc) {
+ CAM_ERR(CAM_JPEG, "Failed in Stop dev, rc=%d", rc);
+ return rc;
+ }
+
+ return rc;
+}
+
/* top state machine */
static struct cam_ctx_ops
cam_jpeg_ctx_state_machine[CAM_CTX_STATE_MAX] = {
@@ -85,6 +99,7 @@ static struct cam_ctx_ops
.ioctl_ops = {
.release_dev = __cam_jpeg_ctx_release_dev_in_acquired,
.config_dev = __cam_jpeg_ctx_config_dev_in_acquired,
+ .stop_dev = __cam_jpeg_ctx_stop_dev_in_acquired,
},
.crm_ops = { },
.irq_ops = __cam_jpeg_ctx_handle_buf_done_in_acquired,
diff --git a/drivers/media/platform/msm/camera/cam_jpeg/jpeg_hw/cam_jpeg_hw_mgr.c b/drivers/media/platform/msm/camera/cam_jpeg/jpeg_hw/cam_jpeg_hw_mgr.c
index df95100..e401549 100644
--- a/drivers/media/platform/msm/camera/cam_jpeg/jpeg_hw/cam_jpeg_hw_mgr.c
+++ b/drivers/media/platform/msm/camera/cam_jpeg/jpeg_hw/cam_jpeg_hw_mgr.c
@@ -80,11 +80,23 @@ static int cam_jpeg_mgr_process_irq(void *priv, void *data)
dev_type = ctx_data->jpeg_dev_acquire_info.dev_type;
+ mutex_lock(&g_jpeg_hw_mgr.hw_mgr_mutex);
+
+ p_cfg_req = hw_mgr->dev_hw_cfg_args[dev_type][0];
+
+ if (hw_mgr->device_in_use[dev_type][0] == false ||
+ p_cfg_req == NULL) {
+ CAM_ERR(CAM_JPEG, "irq for old request %d", rc);
+ mutex_unlock(&g_jpeg_hw_mgr.hw_mgr_mutex);
+ return -EINVAL;
+ }
+
irq_cb.jpeg_hw_mgr_cb = cam_jpeg_hw_mgr_cb;
irq_cb.data = NULL;
irq_cb.b_set_cb = false;
if (!hw_mgr->devices[dev_type][0]->hw_ops.process_cmd) {
CAM_ERR(CAM_JPEG, "process_cmd null ");
+ mutex_unlock(&g_jpeg_hw_mgr.hw_mgr_mutex);
return -EINVAL;
}
rc = hw_mgr->devices[dev_type][0]->hw_ops.process_cmd(
@@ -93,6 +105,7 @@ static int cam_jpeg_mgr_process_irq(void *priv, void *data)
&irq_cb, sizeof(irq_cb));
if (rc) {
CAM_ERR(CAM_JPEG, "CMD_SET_IRQ_CB failed %d", rc);
+ mutex_unlock(&g_jpeg_hw_mgr.hw_mgr_mutex);
return rc;
}
@@ -103,9 +116,7 @@ static int cam_jpeg_mgr_process_irq(void *priv, void *data)
CAM_ERR(CAM_JPEG, "Failed to Deinit %d HW", dev_type);
}
- mutex_lock(&g_jpeg_hw_mgr.hw_mgr_mutex);
hw_mgr->device_in_use[dev_type][0] = false;
- p_cfg_req = hw_mgr->dev_hw_cfg_args[dev_type][0];
hw_mgr->dev_hw_cfg_args[dev_type][0] = NULL;
mutex_unlock(&g_jpeg_hw_mgr.hw_mgr_mutex);
@@ -167,7 +178,6 @@ static int cam_jpeg_mgr_process_irq(void *priv, void *data)
list_add_tail(&p_cfg_req->list, &hw_mgr->free_req_list);
-
return rc;
}
@@ -319,8 +329,11 @@ static int cam_jpeg_mgr_process_cmd(void *priv, void *data)
return -EINVAL;
}
+ mutex_lock(&hw_mgr->hw_mgr_mutex);
+
if (list_empty(&hw_mgr->hw_config_req_list)) {
CAM_DBG(CAM_JPEG, "no available request");
+ mutex_unlock(&hw_mgr->hw_mgr_mutex);
rc = -EFAULT;
goto end;
}
@@ -329,11 +342,11 @@ static int cam_jpeg_mgr_process_cmd(void *priv, void *data)
struct cam_jpeg_hw_cfg_req, list);
if (!p_cfg_req) {
CAM_ERR(CAM_JPEG, "no request");
+ mutex_unlock(&hw_mgr->hw_mgr_mutex);
rc = -EFAULT;
goto end;
}
- mutex_lock(&hw_mgr->hw_mgr_mutex);
if (false == hw_mgr->device_in_use[p_cfg_req->dev_type][0]) {
hw_mgr->device_in_use[p_cfg_req->dev_type][0] = true;
hw_mgr->dev_hw_cfg_args[p_cfg_req->dev_type][0] = p_cfg_req;
@@ -344,6 +357,7 @@ static int cam_jpeg_mgr_process_cmd(void *priv, void *data)
rc = -EFAULT;
goto end;
}
+
mutex_unlock(&hw_mgr->hw_mgr_mutex);
config_args = (struct cam_hw_config_args *)&p_cfg_req->hw_cfg_args;
@@ -464,7 +478,7 @@ static int cam_jpeg_mgr_process_cmd(void *priv, void *data)
hw_mgr->devices[dev_type][0]->hw_priv,
NULL, 0);
if (rc) {
- CAM_ERR(CAM_JPEG, "Failed to apply the configs %d",
+ CAM_ERR(CAM_JPEG, "Failed to start hw %d",
rc);
goto end_callcb;
}
@@ -553,12 +567,12 @@ static int cam_jpeg_mgr_config_hw(void *hw_mgr_priv, void *config_hw_args)
goto err_after_dq_free_list;
}
- mutex_unlock(&hw_mgr->hw_mgr_mutex);
task_data = (struct cam_jpeg_process_frame_work_data_t *)
task->payload;
if (!task_data) {
CAM_ERR(CAM_JPEG, "task_data is NULL");
+ mutex_unlock(&hw_mgr->hw_mgr_mutex);
rc = -EINVAL;
goto err_after_dq_free_list;
}
@@ -567,6 +581,7 @@ static int cam_jpeg_mgr_config_hw(void *hw_mgr_priv, void *config_hw_args)
p_cfg_req->hw_cfg_args.num_hw_update_entries);
list_add_tail(&p_cfg_req->list, &hw_mgr->hw_config_req_list);
+ mutex_unlock(&hw_mgr->hw_mgr_mutex);
task_data->data = (void *)(int64_t)p_cfg_req->dev_type;
task_data->request_id = request_id;
@@ -719,6 +734,88 @@ static int cam_jpeg_mgr_prepare_hw_update(void *hw_mgr_priv,
return rc;
}
+static int cam_jpeg_mgr_flush(void *hw_mgr_priv,
+ struct cam_jpeg_hw_ctx_data *ctx_data)
+{
+ int rc = 0;
+ struct cam_jpeg_hw_mgr *hw_mgr = hw_mgr_priv;
+ uint32_t dev_type;
+ struct cam_jpeg_hw_cfg_req *p_cfg_req = NULL;
+ struct cam_jpeg_hw_cfg_req *cfg_req, *req_temp;
+
+ if (!hw_mgr || !ctx_data) {
+ CAM_ERR(CAM_JPEG, "Invalid args");
+ return -EINVAL;
+ }
+
+ dev_type = ctx_data->jpeg_dev_acquire_info.dev_type;
+
+ p_cfg_req = hw_mgr->dev_hw_cfg_args[dev_type][0];
+ if (hw_mgr->device_in_use[dev_type][0] == true &&
+ p_cfg_req != NULL) {
+ if ((struct cam_jpeg_hw_ctx_data *)p_cfg_req->
+ hw_cfg_args.ctxt_to_hw_map == ctx_data) {
+ /* Stop, reset: unregister the CB and deinit */
+ if (hw_mgr->devices[dev_type][0]->hw_ops.stop) {
+ rc = hw_mgr->devices[dev_type][0]->hw_ops.stop(
+ hw_mgr->devices[dev_type][0]->hw_priv,
+ NULL, 0);
+ if (rc)
+ CAM_ERR(CAM_JPEG, "stop fail %d", rc);
+ } else {
+ CAM_ERR(CAM_JPEG, "op stop null");
+ rc = -EINVAL;
+ }
+ }
+
+ hw_mgr->device_in_use[dev_type][0] = false;
+ p_cfg_req = hw_mgr->dev_hw_cfg_args[dev_type][0];
+ hw_mgr->dev_hw_cfg_args[dev_type][0] = NULL;
+ }
+
+ list_for_each_entry_safe(cfg_req, req_temp,
+ &hw_mgr->hw_config_req_list, list) {
+ if ((struct cam_jpeg_hw_ctx_data *)cfg_req->
+ hw_cfg_args.ctxt_to_hw_map != ctx_data)
+ continue;
+
+ CAM_INFO(CAM_JPEG, "deleting req %pK", cfg_req);
+ list_del_init(&cfg_req->list);
+ }
+
+ return rc;
+}
+
+static int cam_jpeg_mgr_hw_stop(void *hw_mgr_priv, void *stop_hw_args)
+{
+ int rc;
+ struct cam_hw_stop_args *stop_args =
+ (struct cam_hw_stop_args *)stop_hw_args;
+ struct cam_jpeg_hw_mgr *hw_mgr = hw_mgr_priv;
+ struct cam_jpeg_hw_ctx_data *ctx_data = NULL;
+
+ if (!hw_mgr || !stop_args || !stop_args->ctxt_to_hw_map) {
+ CAM_ERR(CAM_JPEG, "Invalid args");
+ return -EINVAL;
+ }
+ mutex_lock(&hw_mgr->hw_mgr_mutex);
+
+ ctx_data = (struct cam_jpeg_hw_ctx_data *)stop_args->ctxt_to_hw_map;
+ if (!ctx_data->in_use) {
+ CAM_ERR(CAM_JPEG, "ctx is not in use");
+ mutex_unlock(&hw_mgr->hw_mgr_mutex);
+ return -EINVAL;
+ }
+
+ rc = cam_jpeg_mgr_flush(hw_mgr_priv, ctx_data);
+ if (rc)
+ CAM_ERR(CAM_JPEG, "flush failed %d", rc);
+
+ mutex_unlock(&hw_mgr->hw_mgr_mutex);
+
+ return rc;
+}
+
static int cam_jpeg_mgr_release_hw(void *hw_mgr_priv, void *release_hw_args)
{
int rc;
@@ -1184,6 +1281,7 @@ int cam_jpeg_hw_mgr_init(struct device_node *of_node, uint64_t *hw_mgr_hdl)
hw_mgr_intf->hw_release = cam_jpeg_mgr_release_hw;
hw_mgr_intf->hw_prepare_update = cam_jpeg_mgr_prepare_hw_update;
hw_mgr_intf->hw_config = cam_jpeg_mgr_config_hw;
+ hw_mgr_intf->hw_stop = cam_jpeg_mgr_hw_stop;
mutex_init(&g_jpeg_hw_mgr.hw_mgr_mutex);
spin_lock_init(&g_jpeg_hw_mgr.hw_mgr_lock);
diff --git a/drivers/media/platform/msm/camera/cam_jpeg/jpeg_hw/jpeg_enc_hw/cam_jpeg_enc_hw_info_ver_4_2_0.h b/drivers/media/platform/msm/camera/cam_jpeg/jpeg_hw/jpeg_enc_hw/cam_jpeg_enc_hw_info_ver_4_2_0.h
index 725af47..2ac4db6 100644
--- a/drivers/media/platform/msm/camera/cam_jpeg/jpeg_hw/jpeg_enc_hw/cam_jpeg_enc_hw_info_ver_4_2_0.h
+++ b/drivers/media/platform/msm/camera/cam_jpeg/jpeg_hw/jpeg_enc_hw/cam_jpeg_enc_hw_info_ver_4_2_0.h
@@ -19,6 +19,9 @@
#define CAM_JPEG_HW_IRQ_STATUS_RESET_ACK_MASK 0x10000000
#define CAM_JPEG_HW_IRQ_STATUS_RESET_ACK_SHIFT 0x0000000a
+#define CAM_JPEG_HW_IRQ_STATUS_STOP_DONE_MASK 0x8000000
+#define CAM_JPEG_HW_IRQ_STATUS_STOP_DONE_SHIFT 0x0000001b
+
#define CAM_JPEG_HW_IRQ_STATUS_BUS_ERROR_MASK 0x00000800
#define CAM_JPEG_HW_IRQ_STATUS_BUS_ERROR_SHIFT 0x0000000b
@@ -63,11 +66,13 @@ static struct cam_jpeg_enc_device_hw_info cam_jpeg_enc_hw_info = {
.int_mask_enable_all = 0xFFFFFFFF,
.hw_cmd_start = 0x00000001,
.reset_cmd = 0x00032093,
+ .hw_cmd_stop = 0x00000002,
},
.int_status = {
.framedone = CAM_JPEG_HW_MASK_COMP_FRAMEDONE,
.resetdone = CAM_JPEG_HW_MASK_COMP_RESET_ACK,
.iserror = CAM_JPEG_HW_MASK_COMP_ERR,
+ .stopdone = CAM_JPEG_HW_IRQ_STATUS_STOP_DONE_MASK,
}
};
diff --git a/drivers/media/platform/msm/camera/cam_jpeg/jpeg_hw/jpeg_enc_hw/jpeg_enc_core.c b/drivers/media/platform/msm/camera/cam_jpeg/jpeg_hw/jpeg_enc_hw/jpeg_enc_core.c
index a7c4e06..934b911 100644
--- a/drivers/media/platform/msm/camera/cam_jpeg/jpeg_hw/jpeg_enc_hw/jpeg_enc_core.c
+++ b/drivers/media/platform/msm/camera/cam_jpeg/jpeg_hw/jpeg_enc_hw/jpeg_enc_core.c
@@ -37,6 +37,8 @@
((jpeg_irq_status) & (hi)->int_status.resetdone)
#define CAM_JPEG_HW_IRQ_IS_ERR(jpeg_irq_status, hi) \
((jpeg_irq_status) & (hi)->int_status.iserror)
+#define CAM_JPEG_HW_IRQ_IS_STOP_DONE(jpeg_irq_status, hi) \
+ ((jpeg_irq_status) & (hi)->int_status.stopdone)
#define CAM_JPEG_ENC_RESET_TIMEOUT msecs_to_jiffies(500)
@@ -181,7 +183,9 @@ irqreturn_t cam_jpeg_enc_irq(int irq_num, void *data)
CAM_DBG(CAM_JPEG, "irq_num %d irq_status = %x , core_state %d",
irq_num, irq_status, core_info->core_state);
+
if (CAM_JPEG_HW_IRQ_IS_FRAME_DONE(irq_status, hw_info)) {
+ spin_lock(&jpeg_enc_dev->hw_lock);
if (core_info->core_state == CAM_JPEG_ENC_CORE_READY) {
encoded_size = cam_io_r_mb(mem_base +
core_info->jpeg_enc_hw_info->reg_offset.
@@ -191,22 +195,42 @@ irqreturn_t cam_jpeg_enc_irq(int irq_num, void *data)
encoded_size,
core_info->irq_cb.data);
} else {
- CAM_ERR(CAM_JPEG, "unexpected done");
+ CAM_ERR(CAM_JPEG, "unexpected done, no cb");
}
+ } else {
+ CAM_ERR(CAM_JPEG, "unexpected done irq");
}
-
core_info->core_state = CAM_JPEG_ENC_CORE_NOT_READY;
+ spin_unlock(&jpeg_enc_dev->hw_lock);
}
if (CAM_JPEG_HW_IRQ_IS_RESET_ACK(irq_status, hw_info)) {
+ spin_lock(&jpeg_enc_dev->hw_lock);
if (core_info->core_state == CAM_JPEG_ENC_CORE_RESETTING) {
core_info->core_state = CAM_JPEG_ENC_CORE_READY;
complete(&jpeg_enc_dev->hw_complete);
} else {
CAM_ERR(CAM_JPEG, "unexpected reset irq");
}
+ spin_unlock(&jpeg_enc_dev->hw_lock);
+ }
+ if (CAM_JPEG_HW_IRQ_IS_STOP_DONE(irq_status, hw_info)) {
+ spin_lock(&jpeg_enc_dev->hw_lock);
+ if (core_info->core_state == CAM_JPEG_ENC_CORE_ABORTING) {
+ core_info->core_state = CAM_JPEG_ENC_CORE_NOT_READY;
+ complete(&jpeg_enc_dev->hw_complete);
+ if (core_info->irq_cb.jpeg_hw_mgr_cb) {
+ core_info->irq_cb.jpeg_hw_mgr_cb(irq_status,
+ -1,
+ core_info->irq_cb.data);
+ }
+ } else {
+ CAM_ERR(CAM_JPEG, "unexpected abort irq");
+ }
+ spin_unlock(&jpeg_enc_dev->hw_lock);
}
/* Unexpected/unintended HW interrupt */
if (CAM_JPEG_HW_IRQ_IS_ERR(irq_status, hw_info)) {
+ spin_lock(&jpeg_enc_dev->hw_lock);
core_info->core_state = CAM_JPEG_ENC_CORE_NOT_READY;
CAM_ERR_RATE_LIMIT(CAM_JPEG,
"error irq_num %d irq_status = %x , core_state %d",
@@ -217,6 +241,7 @@ irqreturn_t cam_jpeg_enc_irq(int irq_num, void *data)
-1,
core_info->irq_cb.data);
}
+ spin_unlock(&jpeg_enc_dev->hw_lock);
}
return IRQ_HANDLED;
@@ -244,14 +269,18 @@ int cam_jpeg_enc_reset_hw(void *data,
hw_info = core_info->jpeg_enc_hw_info;
mem_base = soc_info->reg_map[0].mem_base;
+ mutex_lock(&core_info->core_mutex);
+ spin_lock(&jpeg_enc_dev->hw_lock);
if (core_info->core_state == CAM_JPEG_ENC_CORE_RESETTING) {
CAM_ERR(CAM_JPEG, "alrady resetting");
+ spin_unlock(&jpeg_enc_dev->hw_lock);
+ mutex_unlock(&core_info->core_mutex);
return 0;
}
reinit_completion(&jpeg_enc_dev->hw_complete);
-
core_info->core_state = CAM_JPEG_ENC_CORE_RESETTING;
+ spin_unlock(&jpeg_enc_dev->hw_lock);
cam_io_w_mb(hw_info->reg_val.int_mask_disable_all,
mem_base + hw_info->reg_offset.int_mask);
@@ -269,6 +298,7 @@ int cam_jpeg_enc_reset_hw(void *data,
core_info->core_state = CAM_JPEG_ENC_CORE_NOT_READY;
}
+ mutex_unlock(&core_info->core_mutex);
return 0;
}
@@ -303,6 +333,54 @@ int cam_jpeg_enc_start_hw(void *data,
return 0;
}
+int cam_jpeg_enc_stop_hw(void *data,
+ void *stop_args, uint32_t arg_size)
+{
+ struct cam_hw_info *jpeg_enc_dev = data;
+ struct cam_jpeg_enc_device_core_info *core_info = NULL;
+ struct cam_hw_soc_info *soc_info = NULL;
+ struct cam_jpeg_enc_device_hw_info *hw_info = NULL;
+ void __iomem *mem_base;
+ unsigned long rem_jiffies;
+
+ if (!jpeg_enc_dev) {
+ CAM_ERR(CAM_JPEG, "Invalid args");
+ return -EINVAL;
+ }
+ soc_info = &jpeg_enc_dev->soc_info;
+ core_info =
+ (struct cam_jpeg_enc_device_core_info *)jpeg_enc_dev->
+ core_info;
+ hw_info = core_info->jpeg_enc_hw_info;
+ mem_base = soc_info->reg_map[0].mem_base;
+
+ mutex_lock(&core_info->core_mutex);
+ spin_lock(&jpeg_enc_dev->hw_lock);
+ if (core_info->core_state == CAM_JPEG_ENC_CORE_ABORTING) {
+ CAM_ERR(CAM_JPEG, "already stopping");
+ spin_unlock(&jpeg_enc_dev->hw_lock);
+ mutex_unlock(&core_info->core_mutex);
+ return 0;
+ }
+
+ reinit_completion(&jpeg_enc_dev->hw_complete);
+ core_info->core_state = CAM_JPEG_ENC_CORE_ABORTING;
+ spin_unlock(&jpeg_enc_dev->hw_lock);
+
+ cam_io_w_mb(hw_info->reg_val.hw_cmd_stop,
+ mem_base + hw_info->reg_offset.hw_cmd);
+
+ rem_jiffies = wait_for_completion_timeout(&jpeg_enc_dev->hw_complete,
+ CAM_JPEG_ENC_RESET_TIMEOUT);
+ if (!rem_jiffies) {
+ CAM_ERR(CAM_JPEG, "error stop timeout");
+ core_info->core_state = CAM_JPEG_ENC_CORE_NOT_READY;
+ }
+
+ mutex_unlock(&core_info->core_mutex);
+ return 0;
+}
+
int cam_jpeg_enc_process_cmd(void *device_priv, uint32_t cmd_type,
void *cmd_args, uint32_t arg_size)
{
diff --git a/drivers/media/platform/msm/camera/cam_jpeg/jpeg_hw/jpeg_enc_hw/jpeg_enc_core.h b/drivers/media/platform/msm/camera/cam_jpeg/jpeg_hw/jpeg_enc_hw/jpeg_enc_core.h
index 4f5d625..5fa8f21 100644
--- a/drivers/media/platform/msm/camera/cam_jpeg/jpeg_hw/jpeg_enc_hw/jpeg_enc_core.h
+++ b/drivers/media/platform/msm/camera/cam_jpeg/jpeg_hw/jpeg_enc_hw/jpeg_enc_core.h
@@ -37,12 +37,14 @@ struct cam_jpeg_enc_regval {
uint32_t int_mask_enable_all;
uint32_t hw_cmd_start;
uint32_t reset_cmd;
+ uint32_t hw_cmd_stop;
};
struct cam_jpeg_enc_int_status {
uint32_t framedone;
uint32_t resetdone;
uint32_t iserror;
+ uint32_t stopdone;
};
struct cam_jpeg_enc_device_hw_info {
@@ -55,6 +57,7 @@ enum cam_jpeg_enc_core_state {
CAM_JPEG_ENC_CORE_NOT_READY,
CAM_JPEG_ENC_CORE_READY,
CAM_JPEG_ENC_CORE_RESETTING,
+ CAM_JPEG_ENC_CORE_ABORTING,
CAM_JPEG_ENC_CORE_STATE_MAX,
};
@@ -73,6 +76,8 @@ int cam_jpeg_enc_deinit_hw(void *device_priv,
void *init_hw_args, uint32_t arg_size);
int cam_jpeg_enc_start_hw(void *device_priv,
void *start_hw_args, uint32_t arg_size);
+int cam_jpeg_enc_stop_hw(void *device_priv,
+ void *stop_hw_args, uint32_t arg_size);
int cam_jpeg_enc_reset_hw(void *device_priv,
void *reset_hw_args, uint32_t arg_size);
int cam_jpeg_enc_process_cmd(void *device_priv, uint32_t cmd_type,
diff --git a/drivers/media/platform/msm/camera/cam_jpeg/jpeg_hw/jpeg_enc_hw/jpeg_enc_dev.c b/drivers/media/platform/msm/camera/cam_jpeg/jpeg_hw/jpeg_enc_hw/jpeg_enc_dev.c
index 735bd21..2180448 100644
--- a/drivers/media/platform/msm/camera/cam_jpeg/jpeg_hw/jpeg_enc_hw/jpeg_enc_dev.c
+++ b/drivers/media/platform/msm/camera/cam_jpeg/jpeg_hw/jpeg_enc_hw/jpeg_enc_dev.c
@@ -139,6 +139,7 @@ static int cam_jpeg_enc_probe(struct platform_device *pdev)
jpeg_enc_dev_intf->hw_ops.init = cam_jpeg_enc_init_hw;
jpeg_enc_dev_intf->hw_ops.deinit = cam_jpeg_enc_deinit_hw;
jpeg_enc_dev_intf->hw_ops.start = cam_jpeg_enc_start_hw;
+ jpeg_enc_dev_intf->hw_ops.stop = cam_jpeg_enc_stop_hw;
jpeg_enc_dev_intf->hw_ops.reset = cam_jpeg_enc_reset_hw;
jpeg_enc_dev_intf->hw_ops.process_cmd = cam_jpeg_enc_process_cmd;
jpeg_enc_dev_intf->hw_type = CAM_JPEG_DEV_ENC;
diff --git a/drivers/media/platform/msm/camera/cam_lrme/Makefile b/drivers/media/platform/msm/camera/cam_lrme/Makefile
new file mode 100644
index 0000000..fba4529
--- /dev/null
+++ b/drivers/media/platform/msm/camera/cam_lrme/Makefile
@@ -0,0 +1,14 @@
+ccflags-y += -Idrivers/media/platform/msm/camera/cam_req_mgr
+ccflags-y += -Idrivers/media/platform/msm/camera/cam_utils
+ccflags-y += -Idrivers/media/platform/msm/camera/cam_sync
+ccflags-y += -Idrivers/media/platform/msm/camera/cam_core
+ccflags-y += -Idrivers/media/platform/msm/camera/cam_smmu
+ccflags-y += -Idrivers/media/platform/msm/camera/cam_cdm
+ccflags-y += -Idrivers/media/platform/msm/camera/cam_lrme
+ccflags-y += -Idrivers/media/platform/msm/camera/cam_lrme/lrme_hw_mgr/
+ccflags-y += -Idrivers/media/platform/msm/camera/cam_lrme/lrme_hw_mgr/lrme_hw
+ccflags-y += -Idrivers/media/platform/msm/camera
+ccflags-y += -Idrivers/media/platform/msm/camera/cam_cpas/include/
+
+obj-$(CONFIG_SPECTRA_CAMERA) += lrme_hw_mgr/
+obj-$(CONFIG_SPECTRA_CAMERA) += cam_lrme_dev.o cam_lrme_context.o
diff --git a/drivers/media/platform/msm/camera/cam_lrme/cam_lrme_context.c b/drivers/media/platform/msm/camera/cam_lrme/cam_lrme_context.c
new file mode 100644
index 0000000..0aa5ade
--- /dev/null
+++ b/drivers/media/platform/msm/camera/cam_lrme/cam_lrme_context.c
@@ -0,0 +1,241 @@
+/* Copyright (c) 2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+
+#include "cam_debug_util.h"
+#include "cam_lrme_context.h"
+
+static int __cam_lrme_ctx_acquire_dev_in_available(struct cam_context *ctx,
+ struct cam_acquire_dev_cmd *cmd)
+{
+ int rc = 0;
+ uint64_t ctxt_to_hw_map = (uint64_t)ctx->ctxt_to_hw_map;
+ struct cam_lrme_context *lrme_ctx = ctx->ctx_priv;
+
+ CAM_DBG(CAM_LRME, "Enter");
+
+ rc = cam_context_acquire_dev_to_hw(ctx, cmd);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "Failed to acquire");
+ return rc;
+ }
+
+ ctxt_to_hw_map |= (lrme_ctx->index << CAM_LRME_CTX_INDEX_SHIFT);
+ ctx->ctxt_to_hw_map = (void *)ctxt_to_hw_map;
+
+ ctx->state = CAM_CTX_ACQUIRED;
+
+ return rc;
+}
+
+static int __cam_lrme_ctx_release_dev_in_acquired(struct cam_context *ctx,
+ struct cam_release_dev_cmd *cmd)
+{
+ int rc = 0;
+
+ CAM_DBG(CAM_LRME, "Enter");
+
+ rc = cam_context_release_dev_to_hw(ctx, cmd);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "Failed to release");
+ return rc;
+ }
+
+ ctx->state = CAM_CTX_AVAILABLE;
+
+ return rc;
+}
+
+static int __cam_lrme_ctx_start_dev_in_acquired(struct cam_context *ctx,
+ struct cam_start_stop_dev_cmd *cmd)
+{
+ int rc = 0;
+
+ CAM_DBG(CAM_LRME, "Enter");
+
+ rc = cam_context_start_dev_to_hw(ctx, cmd);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "Failed to start");
+ return rc;
+ }
+
+ ctx->state = CAM_CTX_ACTIVATED;
+
+ return rc;
+}
+
+static int __cam_lrme_ctx_config_dev_in_activated(struct cam_context *ctx,
+ struct cam_config_dev_cmd *cmd)
+{
+ int rc;
+
+ CAM_DBG(CAM_LRME, "Enter");
+
+ rc = cam_context_prepare_dev_to_hw(ctx, cmd);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "Failed to config");
+ return rc;
+ }
+
+ return rc;
+}
+
+static int __cam_lrme_ctx_stop_dev_in_activated(struct cam_context *ctx,
+ struct cam_start_stop_dev_cmd *cmd)
+{
+ int rc = 0;
+
+ CAM_DBG(CAM_LRME, "Enter");
+
+ rc = cam_context_stop_dev_to_hw(ctx);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "Failed to stop dev");
+ return rc;
+ }
+
+ ctx->state = CAM_CTX_ACQUIRED;
+
+ return rc;
+}
+
+static int __cam_lrme_ctx_release_dev_in_activated(struct cam_context *ctx,
+ struct cam_release_dev_cmd *cmd)
+{
+ int rc = 0;
+
+ CAM_DBG(CAM_LRME, "Enter");
+
+ rc = __cam_lrme_ctx_stop_dev_in_activated(ctx, NULL);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "Failed to stop");
+ return rc;
+ }
+
+ rc = cam_context_release_dev_to_hw(ctx, cmd);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "Failed to release");
+ return rc;
+ }
+
+ ctx->state = CAM_CTX_AVAILABLE;
+
+ return rc;
+}
+
+static int __cam_lrme_ctx_handle_irq_in_activated(void *context,
+ uint32_t evt_id, void *evt_data)
+{
+ int rc;
+
+ CAM_DBG(CAM_LRME, "Enter");
+
+ rc = cam_context_buf_done_from_hw(context, evt_data, evt_id);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "Failed in buf done, rc=%d", rc);
+ return rc;
+ }
+
+ return rc;
+}
+
+/* top state machine */
+static struct cam_ctx_ops
+ cam_lrme_ctx_state_machine[CAM_CTX_STATE_MAX] = {
+ /* Uninit */
+ {
+ .ioctl_ops = {},
+ .crm_ops = {},
+ .irq_ops = NULL,
+ },
+ /* Available */
+ {
+ .ioctl_ops = {
+ .acquire_dev = __cam_lrme_ctx_acquire_dev_in_available,
+ },
+ .crm_ops = {},
+ .irq_ops = NULL,
+ },
+ /* Acquired */
+ {
+ .ioctl_ops = {
+ .release_dev = __cam_lrme_ctx_release_dev_in_acquired,
+ .start_dev = __cam_lrme_ctx_start_dev_in_acquired,
+ },
+ .crm_ops = {},
+ .irq_ops = NULL,
+ },
+ /* Ready */
+ {
+ .ioctl_ops = {},
+ .crm_ops = {},
+ .irq_ops = NULL,
+ },
+ /* Activate */
+ {
+ .ioctl_ops = {
+ .config_dev = __cam_lrme_ctx_config_dev_in_activated,
+ .release_dev = __cam_lrme_ctx_release_dev_in_activated,
+ .stop_dev = __cam_lrme_ctx_stop_dev_in_activated,
+ },
+ .crm_ops = {},
+ .irq_ops = __cam_lrme_ctx_handle_irq_in_activated,
+ },
+};
+
+int cam_lrme_context_init(struct cam_lrme_context *lrme_ctx,
+ struct cam_context *base_ctx,
+ struct cam_hw_mgr_intf *hw_intf,
+ uint64_t index)
+{
+ int rc = 0;
+
+ CAM_DBG(CAM_LRME, "Enter");
+
+ if (!base_ctx || !lrme_ctx) {
+ CAM_ERR(CAM_LRME, "Invalid input");
+ return -EINVAL;
+ }
+
+ memset(lrme_ctx, 0, sizeof(*lrme_ctx));
+
+ rc = cam_context_init(base_ctx, "lrme", NULL, hw_intf,
+ lrme_ctx->req_base, CAM_CTX_REQ_MAX);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "Failed to init context");
+ return rc;
+ }
+ lrme_ctx->base = base_ctx;
+ lrme_ctx->index = index;
+ base_ctx->ctx_priv = lrme_ctx;
+ base_ctx->state_machine = cam_lrme_ctx_state_machine;
+
+ return rc;
+}
+
+int cam_lrme_context_deinit(struct cam_lrme_context *lrme_ctx)
+{
+ int rc = 0;
+
+ CAM_DBG(CAM_LRME, "Enter");
+
+ if (!lrme_ctx) {
+ CAM_ERR(CAM_LRME, "No ctx to deinit");
+ return -EINVAL;
+ }
+
+ rc = cam_context_deinit(lrme_ctx->base);
+
+ memset(lrme_ctx, 0, sizeof(*lrme_ctx));
+ return rc;
+}
diff --git a/drivers/media/platform/msm/camera/cam_lrme/cam_lrme_context.h b/drivers/media/platform/msm/camera/cam_lrme/cam_lrme_context.h
new file mode 100644
index 0000000..882f7ac
--- /dev/null
+++ b/drivers/media/platform/msm/camera/cam_lrme/cam_lrme_context.h
@@ -0,0 +1,41 @@
+/* Copyright (c) 2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _CAM_LRME_CONTEXT_H_
+#define _CAM_LRME_CONTEXT_H_
+
+#include "cam_context.h"
+#include "cam_context_utils.h"
+#include "cam_hw_mgr_intf.h"
+#include "cam_req_mgr_interface.h"
+#include "cam_sync_api.h"
+
+#define CAM_LRME_CTX_INDEX_SHIFT 32
+
+/**
+ * struct cam_lrme_context
+ *
+ * @base : Base context pointer for this LRME context
+ * @req_base : List of base requests for this LRME context
+ * @index : Index of this LRME context
+ */
+struct cam_lrme_context {
+ struct cam_context *base;
+ struct cam_ctx_request req_base[CAM_CTX_REQ_MAX];
+ uint64_t index;
+};
+
+int cam_lrme_context_init(struct cam_lrme_context *lrme_ctx,
+ struct cam_context *base_ctx, struct cam_hw_mgr_intf *hw_intf,
+ uint64_t index);
+int cam_lrme_context_deinit(struct cam_lrme_context *lrme_ctx);
+
+#endif /* _CAM_LRME_CONTEXT_H_ */
diff --git a/drivers/media/platform/msm/camera/cam_lrme/cam_lrme_dev.c b/drivers/media/platform/msm/camera/cam_lrme/cam_lrme_dev.c
new file mode 100644
index 0000000..5be16ef
--- /dev/null
+++ b/drivers/media/platform/msm/camera/cam_lrme/cam_lrme_dev.c
@@ -0,0 +1,233 @@
+/* Copyright (c) 2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/device.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+
+#include "cam_subdev.h"
+#include "cam_node.h"
+#include "cam_lrme_context.h"
+#include "cam_lrme_hw_mgr.h"
+#include "cam_lrme_hw_mgr_intf.h"
+
+#define CAM_LRME_DEV_NAME "cam-lrme"
+
+/**
+ * struct cam_lrme_dev
+ *
+ * @sd : Subdev information
+ * @ctx : List of base contexts
+ * @lrme_ctx : List of LRME contexts
+ * @lock : Mutex for LRME subdev
+ * @open_cnt : Open count of LRME subdev
+ */
+struct cam_lrme_dev {
+ struct cam_subdev sd;
+ struct cam_context ctx[CAM_CTX_MAX];
+ struct cam_lrme_context lrme_ctx[CAM_CTX_MAX];
+ struct mutex lock;
+ uint32_t open_cnt;
+};
+
+static struct cam_lrme_dev *g_lrme_dev;
+
+static int cam_lrme_dev_buf_done_cb(void *ctxt_to_hw_map, uint32_t evt_id,
+ void *evt_data)
+{
+ uint64_t index;
+ struct cam_context *ctx;
+ int rc;
+
+ index = CAM_LRME_DECODE_CTX_INDEX(ctxt_to_hw_map);
+ CAM_DBG(CAM_LRME, "ctx index %llu, evt_id %u", index, evt_id);
+ ctx = &g_lrme_dev->ctx[index];
+ rc = ctx->irq_cb_intf(ctx, evt_id, evt_data);
+ if (rc)
+ CAM_ERR(CAM_LRME, "irq callback failed");
+
+ return rc;
+}
+
+static int cam_lrme_dev_open(struct v4l2_subdev *sd,
+ struct v4l2_subdev_fh *fh)
+{
+ struct cam_lrme_dev *lrme_dev = g_lrme_dev;
+
+ if (!lrme_dev) {
+ CAM_ERR(CAM_LRME,
+ "LRME Dev not initialized, dev=%pK", lrme_dev);
+ return -ENODEV;
+ }
+
+ mutex_lock(&lrme_dev->lock);
+ lrme_dev->open_cnt++;
+ mutex_unlock(&lrme_dev->lock);
+
+ return 0;
+}
+
+static int cam_lrme_dev_close(struct v4l2_subdev *sd,
+ struct v4l2_subdev_fh *fh)
+{
+ struct cam_lrme_dev *lrme_dev = g_lrme_dev;
+ struct cam_node *node = v4l2_get_subdevdata(sd);
+
+ if (!lrme_dev) {
+ CAM_ERR(CAM_LRME, "Invalid args");
+ return -ENODEV;
+ }
+
+ mutex_lock(&lrme_dev->lock);
+ lrme_dev->open_cnt--;
+ mutex_unlock(&lrme_dev->lock);
+
+ if (!node) {
+ CAM_ERR(CAM_LRME, "Node is NULL");
+ return -EINVAL;
+ }
+
+ if (lrme_dev->open_cnt == 0)
+ cam_node_shutdown(node);
+
+ return 0;
+}
+
+static const struct v4l2_subdev_internal_ops cam_lrme_subdev_internal_ops = {
+ .open = cam_lrme_dev_open,
+ .close = cam_lrme_dev_close,
+};
+
+static int cam_lrme_dev_probe(struct platform_device *pdev)
+{
+ int rc;
+ int i;
+ struct cam_hw_mgr_intf hw_mgr_intf;
+ struct cam_node *node;
+
+ g_lrme_dev = kzalloc(sizeof(struct cam_lrme_dev), GFP_KERNEL);
+ if (!g_lrme_dev) {
+ CAM_ERR(CAM_LRME, "No memory");
+ return -ENOMEM;
+ }
+ g_lrme_dev->sd.internal_ops = &cam_lrme_subdev_internal_ops;
+
+ mutex_init(&g_lrme_dev->lock);
+
+ rc = cam_subdev_probe(&g_lrme_dev->sd, pdev, CAM_LRME_DEV_NAME,
+ CAM_LRME_DEVICE_TYPE);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "LRME cam_subdev_probe failed");
+ goto free_mem;
+ }
+ node = (struct cam_node *)g_lrme_dev->sd.token;
+
+ rc = cam_lrme_hw_mgr_init(&hw_mgr_intf, cam_lrme_dev_buf_done_cb);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "Cannot initialize LRME HW manager");
+ goto unregister;
+ }
+
+ for (i = 0; i < CAM_CTX_MAX; i++) {
+ rc = cam_lrme_context_init(&g_lrme_dev->lrme_ctx[i],
+ &g_lrme_dev->ctx[i],
+ &node->hw_mgr_intf, i);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "LRME context init failed");
+ goto deinit_ctx;
+ }
+ }
+
+ rc = cam_node_init(node, &hw_mgr_intf, g_lrme_dev->ctx, CAM_CTX_MAX,
+ CAM_LRME_DEV_NAME);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "LRME node init failed");
+ goto deinit_ctx;
+ }
+
+ CAM_DBG(CAM_LRME, "%s probe complete", g_lrme_dev->sd.name);
+
+ return 0;
+
+deinit_ctx:
+ for (--i; i >= 0; i--) {
+ if (cam_lrme_context_deinit(&g_lrme_dev->lrme_ctx[i]))
+ CAM_ERR(CAM_LRME, "LRME context %d deinit failed", i);
+ }
+unregister:
+ if (cam_subdev_remove(&g_lrme_dev->sd))
+ CAM_ERR(CAM_LRME, "Failed in subdev remove");
+free_mem:
+ kfree(g_lrme_dev);
+
+ return rc;
+}
+
+static int cam_lrme_dev_remove(struct platform_device *pdev)
+{
+ int i;
+ int rc = 0;
+
+ for (i = 0; i < CAM_CTX_MAX; i++) {
+ rc = cam_lrme_context_deinit(&g_lrme_dev->lrme_ctx[i]);
+ if (rc)
+ CAM_ERR(CAM_LRME, "LRME context %d deinit failed", i);
+ }
+
+ rc = cam_lrme_hw_mgr_deinit();
+ if (rc)
+ CAM_ERR(CAM_LRME, "Failed in hw mgr deinit, rc=%d", rc);
+
+ rc = cam_subdev_remove(&g_lrme_dev->sd);
+ if (rc)
+ CAM_ERR(CAM_LRME, "Unregister failed");
+
+ mutex_destroy(&g_lrme_dev->lock);
+ kfree(g_lrme_dev);
+ g_lrme_dev = NULL;
+
+ return rc;
+}
+
+static const struct of_device_id cam_lrme_dt_match[] = {
+ {
+ .compatible = "qcom,cam-lrme"
+ },
+ {}
+};
+
+static struct platform_driver cam_lrme_driver = {
+ .probe = cam_lrme_dev_probe,
+ .remove = cam_lrme_dev_remove,
+ .driver = {
+ .name = "cam_lrme",
+ .owner = THIS_MODULE,
+ .of_match_table = cam_lrme_dt_match,
+ },
+};
+
+static int __init cam_lrme_dev_init_module(void)
+{
+ return platform_driver_register(&cam_lrme_driver);
+}
+
+static void __exit cam_lrme_dev_exit_module(void)
+{
+ platform_driver_unregister(&cam_lrme_driver);
+}
+
+module_init(cam_lrme_dev_init_module);
+module_exit(cam_lrme_dev_exit_module);
+MODULE_DESCRIPTION("MSM LRME driver");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/media/platform/msm/camera/cam_lrme/lrme_hw_mgr/Makefile b/drivers/media/platform/msm/camera/cam_lrme/lrme_hw_mgr/Makefile
new file mode 100644
index 0000000..e4c8e0d
--- /dev/null
+++ b/drivers/media/platform/msm/camera/cam_lrme/lrme_hw_mgr/Makefile
@@ -0,0 +1,14 @@
+ccflags-y += -Idrivers/media/platform/msm/camera/cam_utils
+ccflags-y += -Idrivers/media/platform/msm/camera/cam_req_mgr
+ccflags-y += -Idrivers/media/platform/msm/camera/cam_core
+ccflags-y += -Idrivers/media/platform/msm/camera/cam_sync
+ccflags-y += -Idrivers/media/platform/msm/camera/cam_smmu
+ccflags-y += -Idrivers/media/platform/msm/camera/cam_cdm
+ccflags-y += -Idrivers/media/platform/msm/camera/cam_lrme
+ccflags-y += -Idrivers/media/platform/msm/camera/cam_lrme/lrme_hw_mgr
+ccflags-y += -Idrivers/media/platform/msm/camera/cam_lrme/lrme_hw_mgr/lrme_hw
+ccflags-y += -Idrivers/media/platform/msm/camera
+ccflags-y += -Idrivers/media/platform/msm/camera/cam_cpas/include
+
+obj-$(CONFIG_SPECTRA_CAMERA) += lrme_hw/
+obj-$(CONFIG_SPECTRA_CAMERA) += cam_lrme_hw_mgr.o
diff --git a/drivers/media/platform/msm/camera/cam_lrme/lrme_hw_mgr/cam_lrme_hw_mgr.c b/drivers/media/platform/msm/camera/cam_lrme/lrme_hw_mgr/cam_lrme_hw_mgr.c
new file mode 100644
index 0000000..448086d
--- /dev/null
+++ b/drivers/media/platform/msm/camera/cam_lrme/lrme_hw_mgr/cam_lrme_hw_mgr.c
@@ -0,0 +1,1034 @@
+/* Copyright (c) 2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <media/cam_cpas.h>
+#include <media/cam_req_mgr.h>
+
+#include "cam_io_util.h"
+#include "cam_soc_util.h"
+#include "cam_mem_mgr_api.h"
+#include "cam_smmu_api.h"
+#include "cam_packet_util.h"
+#include "cam_lrme_context.h"
+#include "cam_lrme_hw_intf.h"
+#include "cam_lrme_hw_core.h"
+#include "cam_lrme_hw_soc.h"
+#include "cam_lrme_hw_mgr_intf.h"
+#include "cam_lrme_hw_mgr.h"
+
+static struct cam_lrme_hw_mgr g_lrme_hw_mgr;
+
+static int cam_lrme_mgr_util_reserve_device(struct cam_lrme_hw_mgr *hw_mgr,
+ struct cam_lrme_acquire_args *lrme_acquire_args)
+{
+ int i, index = 0;
+ uint32_t min_ctx = UINT_MAX;
+ struct cam_lrme_device *hw_device = NULL;
+
+ mutex_lock(&hw_mgr->hw_mgr_mutex);
+ if (!hw_mgr->device_count) {
+ mutex_unlock(&hw_mgr->hw_mgr_mutex);
+ CAM_ERR(CAM_LRME, "No device is registered");
+ return -EINVAL;
+ }
+
+ for (i = 0; i < hw_mgr->device_count && i < CAM_LRME_HW_MAX; i++) {
+ hw_device = &hw_mgr->hw_device[i];
+ if (!hw_device->num_context) {
+ index = i;
+ break;
+ }
+ if (hw_device->num_context < min_ctx) {
+ min_ctx = hw_device->num_context;
+ index = i;
+ }
+ }
+
+ hw_device = &hw_mgr->hw_device[index];
+ hw_device->num_context++;
+
+ mutex_unlock(&hw_mgr->hw_mgr_mutex);
+
+ CAM_DBG(CAM_LRME, "reserve device index %d", index);
+
+ return index;
+}
+
+static int cam_lrme_mgr_util_get_device(struct cam_lrme_hw_mgr *hw_mgr,
+ uint32_t device_index, struct cam_lrme_device **hw_device)
+{
+ if (!hw_mgr) {
+ CAM_ERR(CAM_LRME, "invalid params hw_mgr %pK", hw_mgr);
+ return -EINVAL;
+ }
+
+ if (device_index >= CAM_LRME_HW_MAX) {
+ CAM_ERR(CAM_LRME, "Wrong device index %d", device_index);
+ return -EINVAL;
+ }
+
+ *hw_device = &hw_mgr->hw_device[device_index];
+
+ return 0;
+}
+
+static int cam_lrme_mgr_util_packet_validate(struct cam_packet *packet)
+{
+ struct cam_cmd_buf_desc *cmd_desc = NULL;
+ int i, rc;
+
+ if (!packet) {
+ CAM_ERR(CAM_LRME, "Invalid args");
+ return -EINVAL;
+ }
+
+ CAM_DBG(CAM_LRME, "Packet request=%d, op_code=0x%x, size=%d, flags=%d",
+ packet->header.request_id, packet->header.op_code,
+ packet->header.size, packet->header.flags);
+ CAM_DBG(CAM_LRME,
+ "Packet cmdbuf(offset=%d, num=%d) io(offset=%d, num=%d)",
+ packet->cmd_buf_offset, packet->num_cmd_buf,
+ packet->io_configs_offset, packet->num_io_configs);
+ CAM_DBG(CAM_LRME,
+ "Packet Patch(offset=%d, num=%d) kmd(offset=%d, num=%d)",
+ packet->patch_offset, packet->num_patches,
+ packet->kmd_cmd_buf_offset, packet->kmd_cmd_buf_index);
+
+ if (cam_packet_util_validate_packet(packet)) {
+ CAM_ERR(CAM_LRME, "invalid packet:%d %d %d %d %d",
+ packet->kmd_cmd_buf_index,
+ packet->num_cmd_buf, packet->cmd_buf_offset,
+ packet->io_configs_offset, packet->header.size);
+ return -EINVAL;
+ }
+
+ if (!packet->num_io_configs) {
+ CAM_ERR(CAM_LRME, "no io configs");
+ return -EINVAL;
+ }
+
+ cmd_desc = (struct cam_cmd_buf_desc *)((uint8_t *)&packet->payload +
+ packet->cmd_buf_offset);
+
+ for (i = 0; i < packet->num_cmd_buf; i++) {
+ if (!cmd_desc[i].length)
+ continue;
+
+ CAM_DBG(CAM_LRME,
+ "CmdBuf[%d] hdl=%d, offset=%d, size=%d, len=%d, type=%d, meta_data=%d",
+ i,
+ cmd_desc[i].mem_handle, cmd_desc[i].offset,
+ cmd_desc[i].size, cmd_desc[i].length, cmd_desc[i].type,
+ cmd_desc[i].meta_data);
+
+ rc = cam_packet_util_validate_cmd_desc(&cmd_desc[i]);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "Invalid cmd buffer %d", i);
+ return rc;
+ }
+ }
+
+ return 0;
+}
+
+static int cam_lrme_mgr_util_prepare_io_buffer(int32_t iommu_hdl,
+ struct cam_hw_prepare_update_args *prepare,
+ struct cam_lrme_hw_io_buffer *input_buf,
+ struct cam_lrme_hw_io_buffer *output_buf, uint32_t io_buf_size)
+{
+ int rc = -EINVAL;
+ uint32_t num_in_buf, num_out_buf, i, j, plane;
+ struct cam_buf_io_cfg *io_cfg;
+ uint64_t io_addr[CAM_PACKET_MAX_PLANES];
+ size_t size;
+
+ num_in_buf = 0;
+ num_out_buf = 0;
+ io_cfg = (struct cam_buf_io_cfg *)((uint8_t *)
+ &prepare->packet->payload +
+ prepare->packet->io_configs_offset);
+
+ for (i = 0; i < prepare->packet->num_io_configs; i++) {
+ CAM_DBG(CAM_LRME,
+ "IOConfig[%d] : handle[%d] Dir[%d] Res[%d] Fence[%d], Format[%d]",
+ i, io_cfg[i].mem_handle[0], io_cfg[i].direction,
+ io_cfg[i].resource_type,
+ io_cfg[i].fence, io_cfg[i].format);
+
+ if ((num_in_buf >= io_buf_size) ||
+ (num_out_buf >= io_buf_size)) {
+ CAM_ERR(CAM_LRME, "Invalid number of buffers %d %d %d",
+ num_in_buf, num_out_buf, io_buf_size);
+ return -EINVAL;
+ }
+
+ memset(io_addr, 0, sizeof(io_addr));
+ for (plane = 0; plane < CAM_PACKET_MAX_PLANES; plane++) {
+ if (!io_cfg[i].mem_handle[plane])
+ break;
+
+ rc = cam_mem_get_io_buf(io_cfg[i].mem_handle[plane],
+ iommu_hdl, &io_addr[plane], &size);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "Cannot get io buf for %d %d",
+ plane, rc);
+ return -ENOMEM;
+ }
+
+ io_addr[plane] += io_cfg[i].offsets[plane];
+
+ if (io_addr[plane] >> 32) {
+ CAM_ERR(CAM_LRME,
+ "Invalid io addr for plane %d: 0x%llx",
+ plane, io_addr[plane]);
+ return -ENOMEM;
+ }
+
+ CAM_DBG(CAM_LRME, "IO Address[%d][%d] : %llu",
+ io_cfg[i].direction, plane, io_addr[plane]);
+ }
+
+ switch (io_cfg[i].direction) {
+ case CAM_BUF_INPUT: {
+ prepare->in_map_entries[num_in_buf].resource_handle =
+ io_cfg[i].resource_type;
+ prepare->in_map_entries[num_in_buf].sync_id =
+ io_cfg[i].fence;
+
+ input_buf[num_in_buf].valid = true;
+ for (j = 0; j < plane; j++)
+ input_buf[num_in_buf].io_addr[j] = io_addr[j];
+ input_buf[num_in_buf].num_plane = plane;
+ input_buf[num_in_buf].io_cfg = &io_cfg[i];
+
+ num_in_buf++;
+ break;
+ }
+ case CAM_BUF_OUTPUT: {
+ prepare->out_map_entries[num_out_buf].resource_handle =
+ io_cfg[i].resource_type;
+ prepare->out_map_entries[num_out_buf].sync_id =
+ io_cfg[i].fence;
+
+ output_buf[num_out_buf].valid = true;
+ for (j = 0; j < plane; j++)
+ output_buf[num_out_buf].io_addr[j] = io_addr[j];
+ output_buf[num_out_buf].num_plane = plane;
+ output_buf[num_out_buf].io_cfg = &io_cfg[i];
+
+ num_out_buf++;
+ break;
+ }
+ default:
+ CAM_ERR(CAM_LRME, "Unsupported io direction %d",
+ io_cfg[i].direction);
+ return -EINVAL;
+ }
+ }
+ prepare->num_in_map_entries = num_in_buf;
+ prepare->num_out_map_entries = num_out_buf;
+
+ return 0;
+}
+
+static int cam_lrme_mgr_util_prepare_hw_update_entries(
+ struct cam_lrme_hw_mgr *hw_mgr,
+ struct cam_hw_prepare_update_args *prepare,
+ struct cam_lrme_hw_cmd_config_args *config_args,
+ struct cam_kmd_buf_info *kmd_buf_info)
+{
+ int i, rc = 0;
+ struct cam_lrme_device *hw_device = NULL;
+ uint32_t *kmd_buf_addr;
+ uint32_t num_entry;
+ uint32_t kmd_buf_max_size;
+ uint32_t kmd_buf_used_bytes = 0;
+ struct cam_hw_update_entry *hw_entry;
+ struct cam_cmd_buf_desc *cmd_desc = NULL;
+
+ hw_device = config_args->hw_device;
+ if (!hw_device) {
+ CAM_ERR(CAM_LRME, "Invalid hw_device");
+ return -EINVAL;
+ }
+
+ kmd_buf_addr = (uint32_t *)((uint8_t *)kmd_buf_info->cpu_addr +
+ kmd_buf_info->used_bytes);
+ kmd_buf_max_size = kmd_buf_info->size - kmd_buf_info->used_bytes;
+
+ config_args->cmd_buf_addr = kmd_buf_addr;
+ config_args->size = kmd_buf_max_size;
+ config_args->config_buf_size = 0;
+
+ if (hw_device->hw_intf.hw_ops.process_cmd) {
+ rc = hw_device->hw_intf.hw_ops.process_cmd(
+ hw_device->hw_intf.hw_priv,
+ CAM_LRME_HW_CMD_PREPARE_HW_UPDATE,
+ config_args,
+ sizeof(struct cam_lrme_hw_cmd_config_args));
+ if (rc) {
+ CAM_ERR(CAM_LRME,
+ "Failed in CMD_PREPARE_HW_UPDATE %d", rc);
+ return rc;
+ }
+ } else {
+ CAM_ERR(CAM_LRME, "Can't find handle function");
+ return -EINVAL;
+ }
+
+ kmd_buf_used_bytes += config_args->config_buf_size;
+
+ if (!kmd_buf_used_bytes || (kmd_buf_used_bytes > kmd_buf_max_size)) {
+ CAM_ERR(CAM_LRME, "Invalid kmd used bytes %d (%d)",
+ kmd_buf_used_bytes, kmd_buf_max_size);
+ return -ENOMEM;
+ }
+
+ hw_entry = prepare->hw_update_entries;
+ num_entry = 0;
+
+ if (config_args->config_buf_size) {
+ if ((num_entry + 1) >= prepare->max_hw_update_entries) {
+ CAM_ERR(CAM_LRME, "Insufficient HW entries :%d %d",
+ num_entry, prepare->max_hw_update_entries);
+ return -EINVAL;
+ }
+
+ hw_entry[num_entry].handle = kmd_buf_info->handle;
+ hw_entry[num_entry].len = config_args->config_buf_size;
+ hw_entry[num_entry].offset = kmd_buf_info->offset;
+
+ kmd_buf_info->used_bytes += config_args->config_buf_size;
+ kmd_buf_info->offset += config_args->config_buf_size;
+ num_entry++;
+ }
+
+ cmd_desc = (struct cam_cmd_buf_desc *)((uint8_t *)
+ &prepare->packet->payload + prepare->packet->cmd_buf_offset);
+
+ for (i = 0; i < prepare->packet->num_cmd_buf; i++) {
+ if (!cmd_desc[i].length)
+ continue;
+
+ if ((num_entry + 1) >= prepare->max_hw_update_entries) {
+ CAM_ERR(CAM_LRME, "Exceed max num of entry");
+ return -EINVAL;
+ }
+
+ hw_entry[num_entry].handle = cmd_desc[i].mem_handle;
+ hw_entry[num_entry].len = cmd_desc[i].length;
+ hw_entry[num_entry].offset = cmd_desc[i].offset;
+ num_entry++;
+ }
+ prepare->num_hw_update_entries = num_entry;
+
+ CAM_DBG(CAM_LRME, "FinalConfig : hw_entries=%d, Sync(in=%d, out=%d)",
+ prepare->num_hw_update_entries, prepare->num_in_map_entries,
+ prepare->num_out_map_entries);
+
+ return rc;
+}
+
+static void cam_lrme_mgr_util_put_frame_req(
+ struct list_head *dst_list,
+ struct list_head *entry,
+ spinlock_t *lock)
+{
+ spin_lock(lock);
+ list_add_tail(entry, dst_list);
+ spin_unlock(lock);
+}
+
+static int cam_lrme_mgr_util_get_frame_req(
+ struct list_head *src_list,
+ struct cam_lrme_frame_request **frame_req,
+ spinlock_t *lock)
+{
+ int rc = 0;
+ struct cam_lrme_frame_request *req_ptr = NULL;
+
+ spin_lock(lock);
+ if (!list_empty(src_list)) {
+ req_ptr = list_first_entry(src_list,
+ struct cam_lrme_frame_request, frame_list);
+ list_del_init(&req_ptr->frame_list);
+ } else {
+ rc = -ENOENT;
+ }
+ *frame_req = req_ptr;
+ spin_unlock(lock);
+
+ return rc;
+}
+
+static int cam_lrme_mgr_util_submit_req(void *priv, void *data)
+{
+ struct cam_lrme_device *hw_device;
+ struct cam_lrme_hw_mgr *hw_mgr;
+ struct cam_lrme_frame_request *frame_req = NULL;
+ struct cam_lrme_hw_submit_args submit_args;
+ struct cam_lrme_mgr_work_data *work_data;
+ int rc;
+ int req_prio = 0;
+
+ if (!priv) {
+ CAM_ERR(CAM_LRME, "worker doesn't have private data");
+ return -EINVAL;
+ }
+
+ hw_mgr = (struct cam_lrme_hw_mgr *)priv;
+ work_data = (struct cam_lrme_mgr_work_data *)data;
+ hw_device = work_data->hw_device;
+
+ rc = cam_lrme_mgr_util_get_frame_req(
+ &hw_device->frame_pending_list_high, &frame_req,
+ &hw_device->high_req_lock);
+
+ if (!frame_req) {
+ rc = cam_lrme_mgr_util_get_frame_req(
+ &hw_device->frame_pending_list_normal, &frame_req,
+ &hw_device->normal_req_lock);
+ if (frame_req)
+ req_prio = 1;
+ }
+
+ if (!frame_req) {
+ CAM_DBG(CAM_LRME, "No pending request");
+ return 0;
+ }
+
+ if (hw_device->hw_intf.hw_ops.process_cmd) {
+ submit_args.hw_update_entries = frame_req->hw_update_entries;
+ submit_args.num_hw_update_entries =
+ frame_req->num_hw_update_entries;
+ submit_args.frame_req = frame_req;
+
+ rc = hw_device->hw_intf.hw_ops.process_cmd(
+ hw_device->hw_intf.hw_priv,
+ CAM_LRME_HW_CMD_SUBMIT,
+ &submit_args, sizeof(struct cam_lrme_hw_submit_args));
+
+ if (rc == -EBUSY)
+ CAM_DBG(CAM_LRME, "device busy");
+ else if (rc)
+ CAM_ERR(CAM_LRME, "submit request failed rc %d", rc);
+ if (rc) {
+ spinlock_t *lock = (req_prio == 0) ?
+ &hw_device->high_req_lock :
+ &hw_device->normal_req_lock;
+ struct list_head *pending = (req_prio == 0) ?
+ &hw_device->frame_pending_list_high :
+ &hw_device->frame_pending_list_normal;
+
+ spin_lock(lock);
+ list_add(&frame_req->frame_list, pending);
+ spin_unlock(lock);
+ }
+ if (rc == -EBUSY)
+ rc = 0;
+ } else {
+ spinlock_t *lock = (req_prio == 0) ?
+ &hw_device->high_req_lock :
+ &hw_device->normal_req_lock;
+ struct list_head *pending = (req_prio == 0) ?
+ &hw_device->frame_pending_list_high :
+ &hw_device->frame_pending_list_normal;
+
+ spin_lock(lock);
+ list_add(&frame_req->frame_list, pending);
+ spin_unlock(lock);
+ rc = -EINVAL;
+ }
+
+ CAM_DBG(CAM_LRME, "End of submit, rc %d", rc);
+
+ return rc;
+}
+
+static int cam_lrme_mgr_util_schedule_frame_req(
+ struct cam_lrme_hw_mgr *hw_mgr, struct cam_lrme_device *hw_device)
+{
+ int rc = 0;
+ struct crm_workq_task *task;
+ struct cam_lrme_mgr_work_data *work_data;
+
+ task = cam_req_mgr_workq_get_task(hw_device->work);
+ if (!task) {
+ CAM_ERR(CAM_LRME, "Cannot get task for worker");
+ return -ENOMEM;
+ }
+
+ work_data = (struct cam_lrme_mgr_work_data *)task->payload;
+ work_data->hw_device = hw_device;
+
+ task->process_cb = cam_lrme_mgr_util_submit_req;
+ CAM_DBG(CAM_LRME, "enqueue submit task");
+ rc = cam_req_mgr_workq_enqueue_task(task, hw_mgr, CRM_TASK_PRIORITY_0);
+
+ return rc;
+}
+
+static int cam_lrme_mgr_util_release(struct cam_lrme_hw_mgr *hw_mgr,
+ uint32_t device_index)
+{
+ int rc = 0;
+ struct cam_lrme_device *hw_device;
+
+ rc = cam_lrme_mgr_util_get_device(hw_mgr, device_index, &hw_device);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "Error in getting device %d", rc);
+ return rc;
+ }
+
+ mutex_lock(&hw_mgr->hw_mgr_mutex);
+ hw_device->num_context--;
+ mutex_unlock(&hw_mgr->hw_mgr_mutex);
+
+ return rc;
+}
+
+static int cam_lrme_mgr_cb(void *data,
+ struct cam_lrme_hw_cb_args *cb_args)
+{
+ struct cam_lrme_hw_mgr *hw_mgr = &g_lrme_hw_mgr;
+ int rc = 0;
+ bool frame_abort = true;
+ struct cam_lrme_frame_request *frame_req;
+ struct cam_lrme_device *hw_device;
+
+ if (!data || !cb_args) {
+ CAM_ERR(CAM_LRME, "Invalid input args");
+ return -EINVAL;
+ }
+
+ hw_device = (struct cam_lrme_device *)data;
+ frame_req = cb_args->frame_req;
+
+ if (cb_args->cb_type & CAM_LRME_CB_PUT_FRAME) {
+ memset(frame_req, 0x0, sizeof(*frame_req));
+ INIT_LIST_HEAD(&frame_req->frame_list);
+ cam_lrme_mgr_util_put_frame_req(&hw_mgr->frame_free_list,
+ &frame_req->frame_list,
+ &hw_mgr->free_req_lock);
+ cb_args->cb_type &= ~CAM_LRME_CB_PUT_FRAME;
+ frame_req = NULL;
+ }
+
+ if (cb_args->cb_type & CAM_LRME_CB_COMP_REG_UPDATE) {
+ cb_args->cb_type &= ~CAM_LRME_CB_COMP_REG_UPDATE;
+ CAM_DBG(CAM_LRME, "Reg update");
+ }
+
+ if (!frame_req)
+ return rc;
+
+ if (cb_args->cb_type & CAM_LRME_CB_BUF_DONE) {
+ cb_args->cb_type &= ~CAM_LRME_CB_BUF_DONE;
+ frame_abort = false;
+ } else if (cb_args->cb_type & CAM_LRME_CB_ERROR) {
+ cb_args->cb_type &= ~CAM_LRME_CB_ERROR;
+ frame_abort = true;
+ } else {
+ CAM_ERR(CAM_LRME, "Wrong cb type %d, req %lld",
+ cb_args->cb_type, frame_req->req_id);
+ return -EINVAL;
+ }
+
+ if (hw_mgr->event_cb) {
+ struct cam_hw_done_event_data buf_data;
+
+ buf_data.request_id = frame_req->req_id;
+ CAM_DBG(CAM_LRME, "frame req %llu, frame_abort %d",
+ frame_req->req_id, frame_abort);
+ rc = hw_mgr->event_cb(frame_req->ctxt_to_hw_map,
+ frame_abort, &buf_data);
+ } else {
+ CAM_ERR(CAM_LRME, "No cb function");
+ }
+ memset(frame_req, 0x0, sizeof(*frame_req));
+ INIT_LIST_HEAD(&frame_req->frame_list);
+ cam_lrme_mgr_util_put_frame_req(&hw_mgr->frame_free_list,
+ &frame_req->frame_list,
+ &hw_mgr->free_req_lock);
+
+ rc = cam_lrme_mgr_util_schedule_frame_req(hw_mgr, hw_device);
+
+ return rc;
+}
+
+static int cam_lrme_mgr_get_caps(void *hw_mgr_priv, void *hw_get_caps_args)
+{
+ int rc = 0;
+ struct cam_lrme_hw_mgr *hw_mgr = hw_mgr_priv;
+ struct cam_query_cap_cmd *args = hw_get_caps_args;
+
+ if (sizeof(struct cam_lrme_query_cap_cmd) != args->size) {
+ CAM_ERR(CAM_LRME,
+ "sizeof(struct cam_lrme_query_cap_cmd) = %lu, args->size = %d",
+ sizeof(struct cam_lrme_query_cap_cmd), args->size);
+ return -EFAULT;
+ }
+
+ if (copy_to_user((void __user *)args->caps_handle, &(hw_mgr->lrme_caps),
+ sizeof(struct cam_lrme_query_cap_cmd))) {
+ CAM_ERR(CAM_LRME, "copy to user failed");
+ return -EFAULT;
+ }
+
+ return rc;
+}
+
+static int cam_lrme_mgr_hw_acquire(void *hw_mgr_priv, void *hw_acquire_args)
+{
+ struct cam_lrme_hw_mgr *hw_mgr = hw_mgr_priv;
+ struct cam_hw_acquire_args *args =
+ (struct cam_hw_acquire_args *)hw_acquire_args;
+ struct cam_lrme_acquire_args lrme_acquire_args;
+ uint64_t device_index;
+
+ if (!hw_mgr_priv || !args) {
+ CAM_ERR(CAM_LRME,
+ "Invalid input params hw_mgr_priv %pK, acquire_args %pK",
+ hw_mgr_priv, args);
+ return -EINVAL;
+ }
+
+ if (copy_from_user(&lrme_acquire_args,
+ (void __user *)args->acquire_info,
+ sizeof(struct cam_lrme_acquire_args))) {
+ CAM_ERR(CAM_LRME, "Failed to copy acquire args from user");
+ return -EFAULT;
+ }
+
+ device_index = cam_lrme_mgr_util_reserve_device(hw_mgr,
+ &lrme_acquire_args);
+ CAM_DBG(CAM_LRME, "Get device id %llu", device_index);
+
+ if (device_index >= hw_mgr->device_count) {
+ CAM_ERR(CAM_LRME, "Get wrong device id %llu", device_index);
+ return -EINVAL;
+ }
+
+ /* device_index occupies the lowest 4 bits of ctxt_to_hw_map */
+ args->ctxt_to_hw_map = (void *)device_index;
+
+ return 0;
+}
+
+static int cam_lrme_mgr_hw_release(void *hw_mgr_priv, void *hw_release_args)
+{
+ int rc = 0;
+ struct cam_lrme_hw_mgr *hw_mgr = hw_mgr_priv;
+ struct cam_hw_release_args *args =
+ (struct cam_hw_release_args *)hw_release_args;
+ uint64_t device_index;
+
+ if (!hw_mgr_priv || !hw_release_args) {
+ CAM_ERR(CAM_LRME, "Invalid arguments %pK, %pK",
+ hw_mgr_priv, hw_release_args);
+ return -EINVAL;
+ }
+
+ device_index = CAM_LRME_DECODE_DEVICE_INDEX(args->ctxt_to_hw_map);
+ if (device_index >= hw_mgr->device_count) {
+ CAM_ERR(CAM_LRME, "Invalid device index %llu", device_index);
+ return -EPERM;
+ }
+
+ rc = cam_lrme_mgr_util_release(hw_mgr, device_index);
+ if (rc)
+ CAM_ERR(CAM_LRME, "Failed in release device, rc=%d", rc);
+
+ return rc;
+}
+
+static int cam_lrme_mgr_hw_start(void *hw_mgr_priv, void *hw_start_args)
+{
+ int rc = 0;
+ struct cam_lrme_hw_mgr *hw_mgr = hw_mgr_priv;
+ struct cam_hw_start_args *args =
+ (struct cam_hw_start_args *)hw_start_args;
+ struct cam_lrme_device *hw_device;
+ uint32_t device_index;
+
+ if (!hw_mgr || !args) {
+ CAM_ERR(CAM_LRME, "Invalid input params");
+ return -EINVAL;
+ }
+
+ device_index = CAM_LRME_DECODE_DEVICE_INDEX(args->ctxt_to_hw_map);
+ if (device_index >= hw_mgr->device_count) {
+ CAM_ERR(CAM_LRME, "Invalid device index %d", device_index);
+ return -EPERM;
+ }
+
+ CAM_DBG(CAM_LRME, "Start device index %d", device_index);
+
+ rc = cam_lrme_mgr_util_get_device(hw_mgr, device_index, &hw_device);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "Failed to get hw device");
+ return rc;
+ }
+
+ if (hw_device->hw_intf.hw_ops.start) {
+ rc = hw_device->hw_intf.hw_ops.start(
+ hw_device->hw_intf.hw_priv, NULL, 0);
+ } else {
+ CAM_ERR(CAM_LRME, "Invalid start function");
+ return -EINVAL;
+ }
+
+ return rc;
+}
+
+static int cam_lrme_mgr_hw_stop(void *hw_mgr_priv, void *stop_args)
+{
+ int rc = 0;
+ struct cam_lrme_hw_mgr *hw_mgr = hw_mgr_priv;
+ struct cam_hw_stop_args *args =
+ (struct cam_hw_stop_args *)stop_args;
+ struct cam_lrme_device *hw_device;
+ uint32_t device_index;
+
+ if (!hw_mgr_priv || !stop_args) {
+ CAM_ERR(CAM_LRME, "Invalid arguments");
+ return -EINVAL;
+ }
+
+ device_index = CAM_LRME_DECODE_DEVICE_INDEX(args->ctxt_to_hw_map);
+ if (device_index >= hw_mgr->device_count) {
+ CAM_ERR(CAM_LRME, "Invalid device index %d", device_index);
+ return -EPERM;
+ }
+
+ CAM_DBG(CAM_LRME, "Stop device index %d", device_index);
+
+ rc = cam_lrme_mgr_util_get_device(hw_mgr, device_index, &hw_device);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "Failed to get hw device");
+ return rc;
+ }
+
+ if (hw_device->hw_intf.hw_ops.stop) {
+ rc = hw_device->hw_intf.hw_ops.stop(
+ hw_device->hw_intf.hw_priv, NULL, 0);
+ if (rc)
+ CAM_ERR(CAM_LRME, "Failed in HW stop %d", rc);
+ }
+
+ return rc;
+}
+
+static int cam_lrme_mgr_hw_prepare_update(void *hw_mgr_priv,
+ void *hw_prepare_update_args)
+{
+ int rc = 0, i;
+ struct cam_lrme_hw_mgr *hw_mgr = hw_mgr_priv;
+ struct cam_hw_prepare_update_args *args =
+ (struct cam_hw_prepare_update_args *)hw_prepare_update_args;
+ struct cam_lrme_device *hw_device;
+ struct cam_kmd_buf_info kmd_buf;
+ struct cam_lrme_hw_cmd_config_args config_args;
+ struct cam_lrme_frame_request *frame_req = NULL;
+ uint32_t device_index;
+
+ if (!hw_mgr_priv || !hw_prepare_update_args) {
+ CAM_ERR(CAM_LRME, "Invalid args %pK %pK",
+ hw_mgr_priv, hw_prepare_update_args);
+ return -EINVAL;
+ }
+
+ device_index = CAM_LRME_DECODE_DEVICE_INDEX(args->ctxt_to_hw_map);
+ if (device_index >= hw_mgr->device_count) {
+ CAM_ERR(CAM_LRME, "Invalid device index %d", device_index);
+ return -EPERM;
+ }
+
+ rc = cam_lrme_mgr_util_get_device(hw_mgr, device_index, &hw_device);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "Error in getting device %d", rc);
+ goto error;
+ }
+
+ rc = cam_lrme_mgr_util_packet_validate(args->packet);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "Error in packet validation %d", rc);
+ goto error;
+ }
+
+ rc = cam_packet_util_get_kmd_buffer(args->packet, &kmd_buf);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "Error in getting kmd buffer %d", rc);
+ goto error;
+ }
+
+ CAM_DBG(CAM_LRME,
+ "KMD Buf : hdl=%d, cpu_addr=%pK, offset=%d, size=%d, used=%d",
+ kmd_buf.handle, kmd_buf.cpu_addr, kmd_buf.offset,
+ kmd_buf.size, kmd_buf.used_bytes);
+
+ rc = cam_packet_util_process_patches(args->packet,
+ hw_mgr->device_iommu.non_secure, hw_mgr->device_iommu.secure);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "Patch packet failed, rc=%d", rc);
+ return rc;
+ }
+
+ memset(&config_args, 0, sizeof(config_args));
+ config_args.hw_device = hw_device;
+
+ rc = cam_lrme_mgr_util_prepare_io_buffer(
+ hw_mgr->device_iommu.non_secure, args,
+ config_args.input_buf, config_args.output_buf,
+ CAM_LRME_MAX_IO_BUFFER);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "Error in prepare IO Buf %d", rc);
+ goto error;
+ }
+ /* Check port number */
+ if (args->num_in_map_entries == 0 || args->num_out_map_entries == 0) {
+ CAM_ERR(CAM_LRME, "Error in port number in %d, out %d",
+ args->num_in_map_entries, args->num_out_map_entries);
+ rc = -EINVAL;
+ goto error;
+ }
+
+ rc = cam_lrme_mgr_util_prepare_hw_update_entries(hw_mgr, args,
+ &config_args, &kmd_buf);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "Error in hw update entries %d", rc);
+ goto error;
+ }
+
+ rc = cam_lrme_mgr_util_get_frame_req(&hw_mgr->frame_free_list,
+ &frame_req, &hw_mgr->free_req_lock);
+ if (rc || !frame_req) {
+ CAM_ERR(CAM_LRME, "Cannot get free frame request");
+ goto error;
+ }
+
+ frame_req->ctxt_to_hw_map = args->ctxt_to_hw_map;
+ frame_req->req_id = args->packet->header.request_id;
+ frame_req->hw_device = hw_device;
+ frame_req->num_hw_update_entries = args->num_hw_update_entries;
+ for (i = 0; i < args->num_hw_update_entries; i++)
+ frame_req->hw_update_entries[i] = args->hw_update_entries[i];
+
+ args->priv = frame_req;
+
+ CAM_DBG(CAM_LRME, "FramePrepare : Frame[%lld]", frame_req->req_id);
+
+ return 0;
+
+error:
+ return rc;
+}
+
+static int cam_lrme_mgr_hw_config(void *hw_mgr_priv,
+ void *hw_config_args)
+{
+ int rc = 0;
+ struct cam_lrme_hw_mgr *hw_mgr = hw_mgr_priv;
+ struct cam_hw_config_args *args =
+ (struct cam_hw_config_args *)hw_config_args;
+ struct cam_lrme_frame_request *frame_req;
+ struct cam_lrme_device *hw_device = NULL;
+ enum cam_lrme_hw_mgr_ctx_priority priority;
+
+ if (!hw_mgr_priv || !hw_config_args) {
+ CAM_ERR(CAM_LRME, "Invalid arguments, hw_mgr %pK, config %pK",
+ hw_mgr_priv, hw_config_args);
+ return -EINVAL;
+ }
+
+ if (!args->num_hw_update_entries) {
+ CAM_ERR(CAM_LRME, "No hw update entries");
+ return -EINVAL;
+ }
+
+ frame_req = (struct cam_lrme_frame_request *)args->priv;
+ if (!frame_req) {
+ CAM_ERR(CAM_LRME, "No frame request");
+ return -EINVAL;
+ }
+
+ hw_device = frame_req->hw_device;
+ if (!hw_device)
+ return -EINVAL;
+
+ priority = CAM_LRME_DECODE_PRIORITY(args->ctxt_to_hw_map);
+ if (priority == CAM_LRME_PRIORITY_HIGH) {
+ cam_lrme_mgr_util_put_frame_req(
+ &hw_device->frame_pending_list_high,
+ &frame_req->frame_list, &hw_device->high_req_lock);
+ } else {
+ cam_lrme_mgr_util_put_frame_req(
+ &hw_device->frame_pending_list_normal,
+ &frame_req->frame_list, &hw_device->normal_req_lock);
+ }
+
+ CAM_DBG(CAM_LRME, "schedule req %llu", frame_req->req_id);
+ rc = cam_lrme_mgr_util_schedule_frame_req(hw_mgr, hw_device);
+
+ return rc;
+}
+
+int cam_lrme_mgr_register_device(
+ struct cam_hw_intf *lrme_hw_intf,
+ struct cam_iommu_handle *device_iommu,
+ struct cam_iommu_handle *cdm_iommu)
+{
+ struct cam_lrme_device *hw_device;
+ char buf[128];
+ int i, rc;
+
+ hw_device = &g_lrme_hw_mgr.hw_device[lrme_hw_intf->hw_idx];
+
+ g_lrme_hw_mgr.device_iommu = *device_iommu;
+ g_lrme_hw_mgr.cdm_iommu = *cdm_iommu;
+
+ memcpy(&hw_device->hw_intf, lrme_hw_intf, sizeof(struct cam_hw_intf));
+
+ spin_lock_init(&hw_device->high_req_lock);
+ spin_lock_init(&hw_device->normal_req_lock);
+ INIT_LIST_HEAD(&hw_device->frame_pending_list_high);
+ INIT_LIST_HEAD(&hw_device->frame_pending_list_normal);
+
+ rc = snprintf(buf, sizeof(buf), "cam_lrme_device_submit_worker%d",
+ lrme_hw_intf->hw_idx);
+ CAM_DBG(CAM_LRME, "Create submit workq for %s", buf);
+ rc = cam_req_mgr_workq_create(buf,
+ CAM_LRME_WORKQ_NUM_TASK,
+ &hw_device->work, CRM_WORKQ_USAGE_NON_IRQ);
+ if (rc) {
+ CAM_ERR(CAM_LRME,
+ "Unable to create a worker, rc=%d", rc);
+ return rc;
+ }
+
+ for (i = 0; i < CAM_LRME_WORKQ_NUM_TASK; i++)
+ hw_device->work->task.pool[i].payload =
+ &hw_device->work_data[i];
+
+ if (hw_device->hw_intf.hw_ops.process_cmd) {
+ struct cam_lrme_hw_cmd_set_cb cb_args;
+
+ cb_args.cam_lrme_hw_mgr_cb = cam_lrme_mgr_cb;
+ cb_args.data = hw_device;
+
+ rc = hw_device->hw_intf.hw_ops.process_cmd(
+ hw_device->hw_intf.hw_priv,
+ CAM_LRME_HW_CMD_REGISTER_CB,
+ &cb_args, sizeof(cb_args));
+ if (rc) {
+ CAM_ERR(CAM_LRME, "Register cb failed");
+ goto destroy_workqueue;
+ }
+ CAM_DBG(CAM_LRME, "cb registered");
+ }
+
+ if (hw_device->hw_intf.hw_ops.get_hw_caps) {
+ rc = hw_device->hw_intf.hw_ops.get_hw_caps(
+ hw_device->hw_intf.hw_priv, &hw_device->hw_caps,
+ sizeof(hw_device->hw_caps));
+ if (rc)
+ CAM_ERR(CAM_LRME, "Get caps failed");
+ } else {
+ CAM_ERR(CAM_LRME, "No get_hw_caps function");
+ rc = -EINVAL;
+ goto destroy_workqueue;
+ }
+ g_lrme_hw_mgr.lrme_caps.dev_caps[lrme_hw_intf->hw_idx] =
+ hw_device->hw_caps;
+ g_lrme_hw_mgr.device_count++;
+ g_lrme_hw_mgr.lrme_caps.device_iommu = g_lrme_hw_mgr.device_iommu;
+ g_lrme_hw_mgr.lrme_caps.cdm_iommu = g_lrme_hw_mgr.cdm_iommu;
+ g_lrme_hw_mgr.lrme_caps.num_devices = g_lrme_hw_mgr.device_count;
+
+ hw_device->valid = true;
+
+ CAM_DBG(CAM_LRME, "device registration done");
+ return 0;
+
+destroy_workqueue:
+ cam_req_mgr_workq_destroy(&hw_device->work);
+
+ return rc;
+}
+
+int cam_lrme_mgr_deregister_device(int device_index)
+{
+ struct cam_lrme_device *hw_device;
+
+ hw_device = &g_lrme_hw_mgr.hw_device[device_index];
+ cam_req_mgr_workq_destroy(&hw_device->work);
+ memset(hw_device, 0x0, sizeof(struct cam_lrme_device));
+ g_lrme_hw_mgr.device_count--;
+
+ return 0;
+}
+
+int cam_lrme_hw_mgr_deinit(void)
+{
+ mutex_destroy(&g_lrme_hw_mgr.hw_mgr_mutex);
+ memset(&g_lrme_hw_mgr, 0x0, sizeof(g_lrme_hw_mgr));
+
+ return 0;
+}
+
+int cam_lrme_hw_mgr_init(struct cam_hw_mgr_intf *hw_mgr_intf,
+ cam_hw_event_cb_func cam_lrme_dev_buf_done_cb)
+{
+ int i, rc = 0;
+ struct cam_lrme_frame_request *frame_req;
+
+ if (!hw_mgr_intf)
+ return -EINVAL;
+
+ CAM_DBG(CAM_LRME, "device count %d", g_lrme_hw_mgr.device_count);
+ if (g_lrme_hw_mgr.device_count > CAM_LRME_HW_MAX) {
+ CAM_ERR(CAM_LRME, "Invalid count of devices");
+ return -EINVAL;
+ }
+
+ memset(hw_mgr_intf, 0, sizeof(*hw_mgr_intf));
+
+ mutex_init(&g_lrme_hw_mgr.hw_mgr_mutex);
+ spin_lock_init(&g_lrme_hw_mgr.free_req_lock);
+ INIT_LIST_HEAD(&g_lrme_hw_mgr.frame_free_list);
+
+ /* Init hw mgr frame requests and add to free list */
+ for (i = 0; i < CAM_CTX_REQ_MAX * CAM_CTX_MAX; i++) {
+ frame_req = &g_lrme_hw_mgr.frame_req[i];
+
+ memset(frame_req, 0x0, sizeof(*frame_req));
+ INIT_LIST_HEAD(&frame_req->frame_list);
+
+ list_add_tail(&frame_req->frame_list,
+ &g_lrme_hw_mgr.frame_free_list);
+ }
+
+ hw_mgr_intf->hw_mgr_priv = &g_lrme_hw_mgr;
+ hw_mgr_intf->hw_get_caps = cam_lrme_mgr_get_caps;
+ hw_mgr_intf->hw_acquire = cam_lrme_mgr_hw_acquire;
+ hw_mgr_intf->hw_release = cam_lrme_mgr_hw_release;
+ hw_mgr_intf->hw_start = cam_lrme_mgr_hw_start;
+ hw_mgr_intf->hw_stop = cam_lrme_mgr_hw_stop;
+ hw_mgr_intf->hw_prepare_update = cam_lrme_mgr_hw_prepare_update;
+ hw_mgr_intf->hw_config = cam_lrme_mgr_hw_config;
+ hw_mgr_intf->hw_read = NULL;
+ hw_mgr_intf->hw_write = NULL;
+ hw_mgr_intf->hw_close = NULL;
+
+ g_lrme_hw_mgr.event_cb = cam_lrme_dev_buf_done_cb;
+
+ CAM_DBG(CAM_LRME, "Hw mgr init done");
+ return rc;
+}
diff --git a/drivers/media/platform/msm/camera/cam_lrme/lrme_hw_mgr/cam_lrme_hw_mgr.h b/drivers/media/platform/msm/camera/cam_lrme/lrme_hw_mgr/cam_lrme_hw_mgr.h
new file mode 100644
index 0000000..f7ce4d2
--- /dev/null
+++ b/drivers/media/platform/msm/camera/cam_lrme/lrme_hw_mgr/cam_lrme_hw_mgr.h
@@ -0,0 +1,120 @@
+/* Copyright (c) 2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _CAM_LRME_HW_MGR_H_
+#define _CAM_LRME_HW_MGR_H_
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+
+#include <media/cam_lrme.h>
+#include "cam_hw.h"
+#include "cam_hw_intf.h"
+#include "cam_cpas_api.h"
+#include "cam_debug_util.h"
+#include "cam_hw_mgr_intf.h"
+#include "cam_req_mgr_workq.h"
+#include "cam_lrme_hw_intf.h"
+#include "cam_context.h"
+
+#define CAM_LRME_HW_MAX 1
+#define CAM_LRME_WORKQ_NUM_TASK 10
+
+#define CAM_LRME_DECODE_DEVICE_INDEX(ctxt_to_hw_map) \
+ ((uint64_t)ctxt_to_hw_map & 0xF)
+
+#define CAM_LRME_DECODE_PRIORITY(ctxt_to_hw_map) \
+ (((uint64_t)ctxt_to_hw_map & 0xF0) >> 4)
+
+#define CAM_LRME_DECODE_CTX_INDEX(ctxt_to_hw_map) \
+ ((uint64_t)ctxt_to_hw_map >> CAM_LRME_CTX_INDEX_SHIFT)
+
+/**
+ * enum cam_lrme_hw_mgr_ctx_priority
+ *
+ * @CAM_LRME_PRIORITY_HIGH : High priority client
+ * @CAM_LRME_PRIORITY_NORMAL : Normal priority client
+ */
+enum cam_lrme_hw_mgr_ctx_priority {
+ CAM_LRME_PRIORITY_HIGH,
+ CAM_LRME_PRIORITY_NORMAL,
+};
+
+/**
+ * struct cam_lrme_mgr_work_data : HW Mgr work data
+ *
+ * @hw_device : Pointer to the hw device
+ */
+struct cam_lrme_mgr_work_data {
+ struct cam_lrme_device *hw_device;
+};
+
+/**
+ * struct cam_lrme_device : LRME HW device
+ *
+ * @hw_caps : HW device's capabilities
+ * @hw_intf : HW device's interface information
+ * @num_context : Number of contexts using this device
+ * @valid : Whether this device is valid
+ * @work : HW device's work queue
+ * @work_data : HW device's work data
+ * @frame_pending_list_high : High priority request queue
+ * @frame_pending_list_normal : Normal priority request queue
+ * @high_req_lock : Spinlock of high priority queue
+ * @normal_req_lock : Spinlock of normal priority queue
+ */
+struct cam_lrme_device {
+ struct cam_lrme_dev_cap hw_caps;
+ struct cam_hw_intf hw_intf;
+ uint32_t num_context;
+ bool valid;
+ struct cam_req_mgr_core_workq *work;
+ struct cam_lrme_mgr_work_data work_data[CAM_LRME_WORKQ_NUM_TASK];
+ struct list_head frame_pending_list_high;
+ struct list_head frame_pending_list_normal;
+ spinlock_t high_req_lock;
+ spinlock_t normal_req_lock;
+};
+
+/**
+ * struct cam_lrme_hw_mgr : LRME HW manager
+ *
+ * @device_count : Number of HW devices
+ * @frame_free_list : List of free frame requests
+ * @hw_mgr_mutex : Mutex to protect HW manager data
+ * @free_req_lock : Spinlock to protect frame_free_list
+ * @hw_device : List of HW devices
+ * @device_iommu : Device IOMMU handle
+ * @cdm_iommu : CDM IOMMU handle
+ * @frame_req : Pool of frame requests to use
+ * @lrme_caps : LRME capabilities
+ * @event_cb : IRQ callback function
+ */
+struct cam_lrme_hw_mgr {
+ uint32_t device_count;
+ struct list_head frame_free_list;
+ struct mutex hw_mgr_mutex;
+ spinlock_t free_req_lock;
+ struct cam_lrme_device hw_device[CAM_LRME_HW_MAX];
+ struct cam_iommu_handle device_iommu;
+ struct cam_iommu_handle cdm_iommu;
+ struct cam_lrme_frame_request frame_req[CAM_CTX_REQ_MAX * CAM_CTX_MAX];
+ struct cam_lrme_query_cap_cmd lrme_caps;
+ cam_hw_event_cb_func event_cb;
+};
+
+int cam_lrme_mgr_register_device(struct cam_hw_intf *lrme_hw_intf,
+ struct cam_iommu_handle *device_iommu,
+ struct cam_iommu_handle *cdm_iommu);
+int cam_lrme_mgr_deregister_device(int device_index);
+
+#endif /* _CAM_LRME_HW_MGR_H_ */
diff --git a/arch/arm64/boot/dts/qcom/sdm845-qvr.dts b/drivers/media/platform/msm/camera/cam_lrme/lrme_hw_mgr/cam_lrme_hw_mgr_intf.h
similarity index 61%
copy from arch/arm64/boot/dts/qcom/sdm845-qvr.dts
copy to drivers/media/platform/msm/camera/cam_lrme/lrme_hw_mgr/cam_lrme_hw_mgr_intf.h
index 5513c92..8bb609c 100644
--- a/arch/arm64/boot/dts/qcom/sdm845-qvr.dts
+++ b/drivers/media/platform/msm/camera/cam_lrme/lrme_hw_mgr/cam_lrme_hw_mgr_intf.h
@@ -10,15 +10,16 @@
* GNU General Public License for more details.
*/
+#ifndef _CAM_LRME_HW_MGR_INTF_H_
+#define _CAM_LRME_HW_MGR_INTF_H_
-/dts-v1/;
+#include <linux/of.h>
-#include "sdm845-v2.dtsi"
-#include "sdm845-qvr.dtsi"
-#include "sdm845-camera-sensor-qvr.dtsi"
+#include "cam_debug_util.h"
+#include "cam_hw_mgr_intf.h"
-/ {
- model = "Qualcomm Technologies, Inc. SDM845 QVR";
- compatible = "qcom,sdm845-qvr", "qcom,sdm845", "qcom,qvr";
- qcom,board-id = <0x01000B 0x20>;
-};
+int cam_lrme_hw_mgr_init(struct cam_hw_mgr_intf *hw_mgr_intf,
+ cam_hw_event_cb_func cam_lrme_dev_buf_done_cb);
+int cam_lrme_hw_mgr_deinit(void);
+
+#endif /* _CAM_LRME_HW_MGR_INTF_H_ */
diff --git a/drivers/media/platform/msm/camera/cam_lrme/lrme_hw_mgr/lrme_hw/Makefile b/drivers/media/platform/msm/camera/cam_lrme/lrme_hw_mgr/lrme_hw/Makefile
new file mode 100644
index 0000000..c65d862
--- /dev/null
+++ b/drivers/media/platform/msm/camera/cam_lrme/lrme_hw_mgr/lrme_hw/Makefile
@@ -0,0 +1,13 @@
+ccflags-y += -Idrivers/media/platform/msm/camera/cam_utils
+ccflags-y += -Idrivers/media/platform/msm/camera/cam_req_mgr
+ccflags-y += -Idrivers/media/platform/msm/camera/cam_core
+ccflags-y += -Idrivers/media/platform/msm/camera/cam_sync
+ccflags-y += -Idrivers/media/platform/msm/camera/cam_smmu
+ccflags-y += -Idrivers/media/platform/msm/camera/cam_cdm
+ccflags-y += -Idrivers/media/platform/msm/camera/cam_lrme
+ccflags-y += -Idrivers/media/platform/msm/camera/cam_lrme/lrme_hw_mgr
+ccflags-y += -Idrivers/media/platform/msm/camera/cam_lrme/lrme_hw_mgr/lrme_hw
+ccflags-y += -Idrivers/media/platform/msm/camera
+ccflags-y += -Idrivers/media/platform/msm/camera/cam_cpas/include
+
+obj-$(CONFIG_SPECTRA_CAMERA) += cam_lrme_hw_dev.o cam_lrme_hw_core.o cam_lrme_hw_soc.o
diff --git a/drivers/media/platform/msm/camera/cam_lrme/lrme_hw_mgr/lrme_hw/cam_lrme_hw_core.c b/drivers/media/platform/msm/camera/cam_lrme/lrme_hw_mgr/lrme_hw/cam_lrme_hw_core.c
new file mode 100644
index 0000000..dbd969c
--- /dev/null
+++ b/drivers/media/platform/msm/camera/cam_lrme/lrme_hw_mgr/lrme_hw/cam_lrme_hw_core.c
@@ -0,0 +1,1027 @@
+/* Copyright (c) 2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include "cam_lrme_hw_core.h"
+#include "cam_lrme_hw_soc.h"
+#include "cam_smmu_api.h"
+
+static void cam_lrme_cdm_write_reg_val_pair(uint32_t *buffer,
+ uint32_t *index, uint32_t reg_offset, uint32_t reg_value)
+{
+ buffer[(*index)++] = reg_offset;
+ buffer[(*index)++] = reg_value;
+}
+
+static void cam_lrme_hw_util_fill_fe_reg(struct cam_lrme_hw_io_buffer *io_buf,
+ uint32_t index, uint32_t *reg_val_pair, uint32_t *num_cmd,
+ struct cam_lrme_hw_info *hw_info)
+{
+ uint32_t reg_val;
+
+ /* 1. config buffer size */
+ reg_val = io_buf->io_cfg->planes[0].width;
+ reg_val |= (io_buf->io_cfg->planes[0].height << 16);
+ cam_lrme_cdm_write_reg_val_pair(reg_val_pair, num_cmd,
+ hw_info->bus_rd_reg.bus_client_reg[index].rd_buffer_size,
+ reg_val);
+
+ CAM_DBG(CAM_LRME,
+ "width %d", io_buf->io_cfg->planes[0].width);
+ CAM_DBG(CAM_LRME,
+ "height %d", io_buf->io_cfg->planes[0].height);
+
+ /* 2. config image address */
+ cam_lrme_cdm_write_reg_val_pair(reg_val_pair, num_cmd,
+ hw_info->bus_rd_reg.bus_client_reg[index].addr_image,
+ io_buf->io_addr[0]);
+
+ CAM_DBG(CAM_LRME, "io addr %llu", io_buf->io_addr[0]);
+
+ /* 3. config stride */
+ reg_val = io_buf->io_cfg->planes[0].plane_stride;
+ cam_lrme_cdm_write_reg_val_pair(reg_val_pair, num_cmd,
+ hw_info->bus_rd_reg.bus_client_reg[index].rd_stride,
+ reg_val);
+
+ CAM_DBG(CAM_LRME, "plane_stride %d",
+ io_buf->io_cfg->planes[0].plane_stride);
+
+ /* 4. enable client */
+ cam_lrme_cdm_write_reg_val_pair(reg_val_pair, num_cmd,
+ hw_info->bus_rd_reg.bus_client_reg[index].core_cfg, 0x1);
+
+ /* 5. unpack_cfg */
+ cam_lrme_cdm_write_reg_val_pair(reg_val_pair, num_cmd,
+ hw_info->bus_rd_reg.bus_client_reg[index].unpack_cfg_0, 0x0);
+}
+
+static void cam_lrme_hw_util_fill_we_reg(struct cam_lrme_hw_io_buffer *io_buf,
+ uint32_t index, uint32_t *reg_val_pair, uint32_t *num_cmd,
+ struct cam_lrme_hw_info *hw_info)
+{
+ /* config client mode */
+ cam_lrme_cdm_write_reg_val_pair(reg_val_pair, num_cmd,
+ hw_info->bus_wr_reg.bus_client_reg[index].cfg,
+ 0x1);
+
+ /* image address */
+ cam_lrme_cdm_write_reg_val_pair(reg_val_pair, num_cmd,
+ hw_info->bus_wr_reg.bus_client_reg[index].addr_image,
+ io_buf->io_addr[0]);
+ CAM_DBG(CAM_LRME, "io addr %llu", io_buf->io_addr[0]);
+
+ /* buffer width and height */
+ cam_lrme_cdm_write_reg_val_pair(reg_val_pair, num_cmd,
+ hw_info->bus_wr_reg.bus_client_reg[index].buffer_width_cfg,
+ io_buf->io_cfg->planes[0].width);
+ CAM_DBG(CAM_LRME, "width %d", io_buf->io_cfg->planes[0].width);
+
+ cam_lrme_cdm_write_reg_val_pair(reg_val_pair, num_cmd,
+ hw_info->bus_wr_reg.bus_client_reg[index].buffer_height_cfg,
+ io_buf->io_cfg->planes[0].height);
+ CAM_DBG(CAM_LRME, "height %d", io_buf->io_cfg->planes[0].height);
+
+ /* packer cfg */
+ cam_lrme_cdm_write_reg_val_pair(reg_val_pair, num_cmd,
+ hw_info->bus_wr_reg.bus_client_reg[index].packer_cfg,
+ (index == 0) ? 0x1 : 0x5);
+
+ /* client stride */
+ cam_lrme_cdm_write_reg_val_pair(reg_val_pair, num_cmd,
+ hw_info->bus_wr_reg.bus_client_reg[index].wr_stride,
+ io_buf->io_cfg->planes[0].plane_stride);
+ CAM_DBG(CAM_LRME, "plane_stride %d",
+ io_buf->io_cfg->planes[0].plane_stride);
+}
+
+
+static int cam_lrme_hw_util_process_config_hw(struct cam_hw_info *lrme_hw,
+ struct cam_lrme_hw_cmd_config_args *config_args)
+{
+ int i;
+ struct cam_hw_soc_info *soc_info = &lrme_hw->soc_info;
+ struct cam_lrme_cdm_info *hw_cdm_info;
+ uint32_t *cmd_buf_addr = config_args->cmd_buf_addr;
+ uint32_t reg_val_pair[CAM_LRME_MAX_REG_PAIR_NUM];
+ struct cam_lrme_hw_io_buffer *io_buf;
+ struct cam_lrme_hw_info *hw_info =
+ ((struct cam_lrme_core *)lrme_hw->core_info)->hw_info;
+ uint32_t num_cmd = 0;
+ uint32_t size;
+ uint32_t mem_base, available_size = config_args->size;
+ uint32_t output_res_mask = 0, input_res_mask = 0;
+
+
+ if (!cmd_buf_addr) {
+ CAM_ERR(CAM_LRME, "Invalid input args");
+ return -EINVAL;
+ }
+
+ hw_cdm_info =
+ ((struct cam_lrme_core *)lrme_hw->core_info)->hw_cdm_info;
+
+ for (i = 0; i < CAM_LRME_MAX_IO_BUFFER; i++) {
+ io_buf = &config_args->input_buf[i];
+
+ if (io_buf->valid == false)
+ break;
+
+ if (io_buf->io_cfg->direction != CAM_BUF_INPUT) {
+ CAM_ERR(CAM_LRME, "Incorrect direction %d %d",
+ io_buf->io_cfg->direction, CAM_BUF_INPUT);
+ return -EINVAL;
+ }
+ CAM_DBG(CAM_LRME,
+ "resource_type %d", io_buf->io_cfg->resource_type);
+
+ switch (io_buf->io_cfg->resource_type) {
+ case CAM_LRME_IO_TYPE_TAR:
+ cam_lrme_hw_util_fill_fe_reg(io_buf, 0, reg_val_pair,
+ &num_cmd, hw_info);
+
+ input_res_mask |= CAM_LRME_INPUT_PORT_TYPE_TAR;
+ break;
+ case CAM_LRME_IO_TYPE_REF:
+ cam_lrme_hw_util_fill_fe_reg(io_buf, 1, reg_val_pair,
+ &num_cmd, hw_info);
+
+ input_res_mask |= CAM_LRME_INPUT_PORT_TYPE_REF;
+ break;
+ default:
+ CAM_ERR(CAM_LRME, "wrong resource_type %d",
+ io_buf->io_cfg->resource_type);
+ return -EINVAL;
+ }
+ }
+
+ for (i = 0; i < CAM_LRME_BUS_RD_MAX_CLIENTS; i++)
+ if (!((input_res_mask >> i) & 0x1))
+ cam_lrme_cdm_write_reg_val_pair(reg_val_pair, &num_cmd,
+ hw_info->bus_rd_reg.bus_client_reg[i].core_cfg,
+ 0x0);
+
+ for (i = 0; i < CAM_LRME_MAX_IO_BUFFER; i++) {
+ io_buf = &config_args->output_buf[i];
+
+ if (io_buf->valid == false)
+ break;
+
+ if (io_buf->io_cfg->direction != CAM_BUF_OUTPUT) {
+ CAM_ERR(CAM_LRME, "Incorrect direction %d %d",
+ io_buf->io_cfg->direction, CAM_BUF_OUTPUT);
+ return -EINVAL;
+ }
+
+ CAM_DBG(CAM_LRME, "resource_type %d",
+ io_buf->io_cfg->resource_type);
+ switch (io_buf->io_cfg->resource_type) {
+ case CAM_LRME_IO_TYPE_DS2:
+ cam_lrme_hw_util_fill_we_reg(io_buf, 0, reg_val_pair,
+ &num_cmd, hw_info);
+
+ output_res_mask |= CAM_LRME_OUTPUT_PORT_TYPE_DS2;
+ break;
+ case CAM_LRME_IO_TYPE_RES:
+ cam_lrme_hw_util_fill_we_reg(io_buf, 1, reg_val_pair,
+ &num_cmd, hw_info);
+
+ output_res_mask |= CAM_LRME_OUTPUT_PORT_TYPE_RES;
+ break;
+
+ default:
+ CAM_ERR(CAM_LRME, "wrong resource_type %d",
+ io_buf->io_cfg->resource_type);
+ return -EINVAL;
+ }
+ }
+
+ for (i = 0; i < CAM_LRME_BUS_WR_MAX_CLIENTS; i++)
+ if (!((output_res_mask >> i) & 0x1))
+ cam_lrme_cdm_write_reg_val_pair(reg_val_pair, &num_cmd,
+ hw_info->bus_wr_reg.bus_client_reg[i].cfg, 0x0);
+
+ if (output_res_mask) {
+ /* write composite mask */
+ cam_lrme_cdm_write_reg_val_pair(reg_val_pair, &num_cmd,
+ hw_info->bus_wr_reg.common_reg.composite_mask_0,
+ output_res_mask);
+ }
+
+ size = hw_cdm_info->cdm_ops->cdm_required_size_changebase();
+ if ((size * 4) > available_size) {
+ CAM_ERR(CAM_LRME, "buf size:%d is not sufficient, expected: %d",
+ available_size, size * 4);
+ return -EINVAL;
+ }
+
+ mem_base = CAM_SOC_GET_REG_MAP_CAM_BASE(soc_info, CAM_LRME_BASE_IDX);
+
+ hw_cdm_info->cdm_ops->cdm_write_changebase(cmd_buf_addr, mem_base);
+ cmd_buf_addr += size;
+ available_size -= (size * 4);
+
+ size = hw_cdm_info->cdm_ops->cdm_required_size_reg_random(
+ num_cmd / 2);
+
+ if ((size * 4) > available_size) {
+ CAM_ERR(CAM_LRME, "buf size:%d is not sufficient, expected: %d",
+ available_size, size * 4);
+ return -ENOMEM;
+ }
+
+ hw_cdm_info->cdm_ops->cdm_write_regrandom(cmd_buf_addr, num_cmd / 2,
+ reg_val_pair);
+ cmd_buf_addr += size;
+ available_size -= (size * 4);
+
+ config_args->config_buf_size =
+ config_args->size - available_size;
+
+ return 0;
+}
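The config path above accumulates register writes as flat (offset, value) pairs in `reg_val_pair` and later hands `num_cmd / 2` pairs to the CDM as one reg-random command. A minimal user-space sketch of that packing, assuming the same pair layout as `cam_lrme_cdm_write_reg_val_pair` (register offsets and buffer sizes below are illustrative, not the real LRME map):

```c
#include <assert.h>
#include <stdint.h>

/* Mirror of the pair-packing helper: each call appends one
 * (register offset, value) pair and advances the index by 2,
 * so num_cmd ends up holding 2x the number of pairs. */
static void write_reg_val_pair(uint32_t *buffer, uint32_t *index,
	uint32_t reg_offset, uint32_t reg_value)
{
	buffer[(*index)++] = reg_offset;
	buffer[(*index)++] = reg_value;
}

/* Pack a buffer-size register the way the FE path does:
 * width in bits [15:0], height in bits [31:16]. */
static uint32_t pack_buffer_size(uint32_t width, uint32_t height)
{
	return width | (height << 16);
}
```

With this layout, pair count for the CDM is simply `num_cmd / 2`, which is why the driver divides before calling `cdm_required_size_reg_random()`.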
+
+static int cam_lrme_hw_util_submit_go(struct cam_hw_info *lrme_hw)
+{
+ struct cam_lrme_core *lrme_core;
+ struct cam_hw_soc_info *soc_info;
+ struct cam_lrme_hw_info *hw_info;
+
+ lrme_core = (struct cam_lrme_core *)lrme_hw->core_info;
+ hw_info = lrme_core->hw_info;
+ soc_info = &lrme_hw->soc_info;
+
+ cam_io_w_mb(0x1, soc_info->reg_map[0].mem_base +
+ hw_info->bus_rd_reg.common_reg.cmd);
+
+ return 0;
+}
+
+static int cam_lrme_hw_util_reset(struct cam_hw_info *lrme_hw,
+ uint32_t reset_type)
+{
+ struct cam_lrme_core *lrme_core;
+ struct cam_hw_soc_info *soc_info = &lrme_hw->soc_info;
+ struct cam_lrme_hw_info *hw_info;
+ long time_left;
+
+ lrme_core = lrme_hw->core_info;
+ hw_info = lrme_core->hw_info;
+
+ switch (reset_type) {
+ case CAM_LRME_HW_RESET_TYPE_HW_RESET:
+ reinit_completion(&lrme_core->reset_complete);
+ cam_io_w_mb(0x1, soc_info->reg_map[0].mem_base +
+ hw_info->titan_reg.top_rst_cmd);
+ time_left = wait_for_completion_timeout(
+ &lrme_core->reset_complete,
+ msecs_to_jiffies(CAM_LRME_HW_RESET_TIMEOUT));
+ if (time_left <= 0) {
+ CAM_ERR(CAM_LRME,
+ "HW reset wait failed time_left=%ld",
+ time_left);
+ return -ETIMEDOUT;
+ }
+ break;
+ case CAM_LRME_HW_RESET_TYPE_SW_RESET:
+ cam_io_w_mb(0x3, soc_info->reg_map[0].mem_base +
+ hw_info->bus_wr_reg.common_reg.sw_reset);
+ cam_io_w_mb(0x3, soc_info->reg_map[0].mem_base +
+ hw_info->bus_rd_reg.common_reg.sw_reset);
+ reinit_completion(&lrme_core->reset_complete);
+ cam_io_w_mb(0x2, soc_info->reg_map[0].mem_base +
+ hw_info->titan_reg.top_rst_cmd);
+ time_left = wait_for_completion_timeout(
+ &lrme_core->reset_complete,
+ msecs_to_jiffies(CAM_LRME_HW_RESET_TIMEOUT));
+ if (time_left <= 0) {
+ CAM_ERR(CAM_LRME,
+ "SW reset wait failed time_left=%ld",
+ time_left);
+ return -ETIMEDOUT;
+ }
+ break;
+ }
+
+ return 0;
+}
+
+int cam_lrme_hw_util_get_caps(struct cam_hw_info *lrme_hw,
+ struct cam_lrme_dev_cap *hw_caps)
+{
+ struct cam_hw_soc_info *soc_info = &lrme_hw->soc_info;
+ struct cam_lrme_hw_info *hw_info =
+ ((struct cam_lrme_core *)lrme_hw->core_info)->hw_info;
+ uint32_t reg_value;
+
+ if (!hw_info) {
+ CAM_ERR(CAM_LRME, "Invalid hw info data");
+ return -EINVAL;
+ }
+
+ reg_value = cam_io_r_mb(soc_info->reg_map[0].mem_base +
+ hw_info->clc_reg.clc_hw_version);
+ hw_caps->clc_hw_version.gen =
+ CAM_BITS_MASK_SHIFT(reg_value, 0xf0000000, 0x1C);
+ hw_caps->clc_hw_version.rev =
+ CAM_BITS_MASK_SHIFT(reg_value, 0xfff0000, 0x10);
+ hw_caps->clc_hw_version.step =
+ CAM_BITS_MASK_SHIFT(reg_value, 0xffff, 0x0);
+
+ reg_value = cam_io_r_mb(soc_info->reg_map[0].mem_base +
+ hw_info->bus_rd_reg.common_reg.hw_version);
+ hw_caps->bus_rd_hw_version.gen =
+ CAM_BITS_MASK_SHIFT(reg_value, 0xf0000000, 0x1C);
+ hw_caps->bus_rd_hw_version.rev =
+ CAM_BITS_MASK_SHIFT(reg_value, 0xfff0000, 0x10);
+ hw_caps->bus_rd_hw_version.step =
+ CAM_BITS_MASK_SHIFT(reg_value, 0xffff, 0x0);
+
+ reg_value = cam_io_r_mb(soc_info->reg_map[0].mem_base +
+ hw_info->bus_wr_reg.common_reg.hw_version);
+ hw_caps->bus_wr_hw_version.gen =
+ CAM_BITS_MASK_SHIFT(reg_value, 0xf0000000, 0x1C);
+ hw_caps->bus_wr_hw_version.rev =
+ CAM_BITS_MASK_SHIFT(reg_value, 0xfff0000, 0x10);
+ hw_caps->bus_wr_hw_version.step =
+ CAM_BITS_MASK_SHIFT(reg_value, 0xffff, 0x0);
+
+ reg_value = cam_io_r_mb(soc_info->reg_map[0].mem_base +
+ hw_info->titan_reg.top_hw_version);
+ hw_caps->top_hw_version.gen =
+ CAM_BITS_MASK_SHIFT(reg_value, 0xf0000000, 0x1C);
+ hw_caps->top_hw_version.rev =
+ CAM_BITS_MASK_SHIFT(reg_value, 0xfff0000, 0x10);
+ hw_caps->top_hw_version.step =
+ CAM_BITS_MASK_SHIFT(reg_value, 0xffff, 0x0);
+
+ reg_value = cam_io_r_mb(soc_info->reg_map[0].mem_base +
+ hw_info->titan_reg.top_titan_version);
+ hw_caps->top_titan_version.gen =
+ CAM_BITS_MASK_SHIFT(reg_value, 0xf0000000, 0x1C);
+ hw_caps->top_titan_version.rev =
+ CAM_BITS_MASK_SHIFT(reg_value, 0xfff0000, 0x10);
+ hw_caps->top_titan_version.step =
+ CAM_BITS_MASK_SHIFT(reg_value, 0xffff, 0x0);
+
+ return 0;
+}
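Each version register above is decoded with the same mask-and-shift pattern. Assuming `CAM_BITS_MASK_SHIFT(x, mask, shift)` expands to `((x & mask) >> shift)` (the macro body is not shown in this diff), the gen/rev/step split can be sketched as:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed expansion of CAM_BITS_MASK_SHIFT: isolate a bit field,
 * then shift it down to bit 0. */
#define BITS_MASK_SHIFT(x, mask, shift) (((x) & (mask)) >> (shift))

struct hw_version {
	uint32_t gen;	/* bits [31:28] */
	uint32_t rev;	/* bits [27:16] */
	uint32_t step;	/* bits [15:0]  */
};

/* Decode one 32-bit version register using the same masks and
 * shifts as cam_lrme_hw_util_get_caps(). */
static struct hw_version decode_hw_version(uint32_t reg)
{
	struct hw_version v;

	v.gen = BITS_MASK_SHIFT(reg, 0xf0000000, 0x1C);
	v.rev = BITS_MASK_SHIFT(reg, 0x0fff0000, 0x10);
	v.step = BITS_MASK_SHIFT(reg, 0x0000ffff, 0x0);
	return v;
}
```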
+
+static int cam_lrme_hw_util_submit_req(struct cam_lrme_core *lrme_core,
+ struct cam_lrme_frame_request *frame_req)
+{
+ struct cam_lrme_cdm_info *hw_cdm_info =
+ lrme_core->hw_cdm_info;
+ struct cam_cdm_bl_request *cdm_cmd = hw_cdm_info->cdm_cmd;
+ struct cam_hw_update_entry *cmd;
+ int i, rc = 0;
+
+ if (frame_req->num_hw_update_entries > 0) {
+ cdm_cmd->cmd_arrary_count = frame_req->num_hw_update_entries;
+ cdm_cmd->type = CAM_CDM_BL_CMD_TYPE_MEM_HANDLE;
+ cdm_cmd->flag = false;
+ cdm_cmd->userdata = NULL;
+ cdm_cmd->cookie = 0;
+
+ for (i = 0; i < frame_req->num_hw_update_entries; i++) {
+ cmd = (frame_req->hw_update_entries + i);
+ cdm_cmd->cmd[i].bl_addr.mem_handle = cmd->handle;
+ cdm_cmd->cmd[i].offset = cmd->offset;
+ cdm_cmd->cmd[i].len = cmd->len;
+ }
+
+ rc = cam_cdm_submit_bls(hw_cdm_info->cdm_handle, cdm_cmd);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "Failed to submit cdm commands");
+ return -EINVAL;
+ }
+ } else {
+ CAM_ERR(CAM_LRME, "No hw update entry");
+ rc = -EINVAL;
+ }
+
+ return rc;
+}
+
+static int cam_lrme_hw_util_process_err(struct cam_hw_info *lrme_hw)
+{
+ struct cam_lrme_core *lrme_core = lrme_hw->core_info;
+ struct cam_lrme_frame_request *req_proc, *req_submit;
+ struct cam_lrme_hw_cb_args cb_args;
+ int rc;
+
+ req_proc = lrme_core->req_proc;
+ req_submit = lrme_core->req_submit;
+ cb_args.cb_type = CAM_LRME_CB_ERROR;
+
+ if ((lrme_core->state != CAM_LRME_CORE_STATE_PROCESSING) &&
+ (lrme_core->state != CAM_LRME_CORE_STATE_REQ_PENDING) &&
+ (lrme_core->state != CAM_LRME_CORE_STATE_REQ_PROC_PEND)) {
+ CAM_ERR(CAM_LRME, "Get error irq in wrong state %d",
+ lrme_core->state);
+ }
+
+ CAM_ERR_RATE_LIMIT(CAM_LRME, "Start recovery");
+ lrme_core->state = CAM_LRME_CORE_STATE_RECOVERY;
+ rc = cam_lrme_hw_util_reset(lrme_hw, CAM_LRME_HW_RESET_TYPE_HW_RESET);
+ if (rc)
+ CAM_ERR(CAM_LRME, "Failed to reset");
+
+ lrme_core->req_proc = NULL;
+ lrme_core->req_submit = NULL;
+ if (!rc)
+ lrme_core->state = CAM_LRME_CORE_STATE_IDLE;
+
+ cb_args.frame_req = req_proc;
+ lrme_core->hw_mgr_cb.cam_lrme_hw_mgr_cb(lrme_core->hw_mgr_cb.data,
+ &cb_args);
+
+ cb_args.frame_req = req_submit;
+ lrme_core->hw_mgr_cb.cam_lrme_hw_mgr_cb(lrme_core->hw_mgr_cb.data,
+ &cb_args);
+
+ return rc;
+}
+
+static int cam_lrme_hw_util_process_reg_update(
+ struct cam_hw_info *lrme_hw, struct cam_lrme_hw_cb_args *cb_args)
+{
+ struct cam_lrme_core *lrme_core = lrme_hw->core_info;
+ int rc = 0;
+
+ cb_args->cb_type |= CAM_LRME_CB_COMP_REG_UPDATE;
+ if (lrme_core->state == CAM_LRME_CORE_STATE_REQ_PENDING) {
+ lrme_core->state = CAM_LRME_CORE_STATE_PROCESSING;
+ } else {
+ CAM_ERR(CAM_LRME, "Reg update in wrong state %d",
+ lrme_core->state);
+ rc = cam_lrme_hw_util_process_err(lrme_hw);
+ if (rc)
+ CAM_ERR(CAM_LRME, "Failed to reset");
+ return -EINVAL;
+ }
+
+ lrme_core->req_proc = lrme_core->req_submit;
+ lrme_core->req_submit = NULL;
+
+ return 0;
+}
+
+static int cam_lrme_hw_util_process_idle(
+ struct cam_hw_info *lrme_hw, struct cam_lrme_hw_cb_args *cb_args)
+{
+ struct cam_lrme_core *lrme_core = lrme_hw->core_info;
+ int rc = 0;
+
+ cb_args->cb_type |= CAM_LRME_CB_BUF_DONE;
+ switch (lrme_core->state) {
+ case CAM_LRME_CORE_STATE_REQ_PROC_PEND:
+ cam_lrme_hw_util_submit_go(lrme_hw);
+ lrme_core->state = CAM_LRME_CORE_STATE_REQ_PENDING;
+ break;
+
+ case CAM_LRME_CORE_STATE_PROCESSING:
+ lrme_core->state = CAM_LRME_CORE_STATE_IDLE;
+ break;
+
+ default:
+ CAM_ERR(CAM_LRME, "Idle in wrong state %d",
+ lrme_core->state);
+ rc = cam_lrme_hw_util_process_err(lrme_hw);
+ return rc;
+ }
+ cb_args->frame_req = lrme_core->req_proc;
+ lrme_core->req_proc = NULL;
+
+ return 0;
+}
+
+void cam_lrme_set_irq(struct cam_hw_info *lrme_hw,
+ enum cam_lrme_irq_set set)
+{
+ struct cam_hw_soc_info *soc_info = &lrme_hw->soc_info;
+ struct cam_lrme_core *lrme_core = lrme_hw->core_info;
+ struct cam_lrme_hw_info *hw_info = lrme_core->hw_info;
+
+ switch (set) {
+ case CAM_LRME_IRQ_ENABLE:
+ cam_io_w_mb(0xFFFF,
+ soc_info->reg_map[0].mem_base +
+ hw_info->titan_reg.top_irq_mask);
+ cam_io_w_mb(0xFFFF,
+ soc_info->reg_map[0].mem_base +
+ hw_info->bus_wr_reg.common_reg.irq_mask_0);
+ cam_io_w_mb(0xFFFF,
+ soc_info->reg_map[0].mem_base +
+ hw_info->bus_wr_reg.common_reg.irq_mask_1);
+ cam_io_w_mb(0xFFFF,
+ soc_info->reg_map[0].mem_base +
+ hw_info->bus_rd_reg.common_reg.irq_mask);
+ break;
+
+ case CAM_LRME_IRQ_DISABLE:
+ cam_io_w_mb(0x0,
+ soc_info->reg_map[0].mem_base +
+ hw_info->titan_reg.top_irq_mask);
+ cam_io_w_mb(0x0,
+ soc_info->reg_map[0].mem_base +
+ hw_info->bus_wr_reg.common_reg.irq_mask_0);
+ cam_io_w_mb(0x0,
+ soc_info->reg_map[0].mem_base +
+ hw_info->bus_wr_reg.common_reg.irq_mask_1);
+ cam_io_w_mb(0x0,
+ soc_info->reg_map[0].mem_base +
+ hw_info->bus_rd_reg.common_reg.irq_mask);
+ break;
+ }
+}
+
+
+int cam_lrme_hw_process_irq(void *priv, void *data)
+{
+ struct cam_lrme_hw_work_data *work_data;
+ struct cam_hw_info *lrme_hw;
+ struct cam_lrme_core *lrme_core;
+ int rc = 0;
+ uint32_t top_irq_status, fe_irq_status;
+ uint32_t *we_irq_status;
+ struct cam_lrme_hw_cb_args cb_args;
+
+ if (!data || !priv) {
+ CAM_ERR(CAM_LRME, "Invalid data %pK %pK", data, priv);
+ return -EINVAL;
+ }
+
+ memset(&cb_args, 0, sizeof(struct cam_lrme_hw_cb_args));
+ lrme_hw = (struct cam_hw_info *)priv;
+ work_data = (struct cam_lrme_hw_work_data *)data;
+ lrme_core = (struct cam_lrme_core *)lrme_hw->core_info;
+ top_irq_status = work_data->top_irq_status;
+ fe_irq_status = work_data->fe_irq_status;
+ we_irq_status = work_data->we_irq_status;
+
+ CAM_DBG(CAM_LRME,
+ "top status %x, fe status %x, we status0 %x, we status1 %x",
+ top_irq_status, fe_irq_status, we_irq_status[0],
+ we_irq_status[1]);
+ CAM_DBG(CAM_LRME, "Current state %d", lrme_core->state);
+
+ mutex_lock(&lrme_hw->hw_mutex);
+
+ if (top_irq_status & (1 << 3)) {
+ CAM_DBG(CAM_LRME, "Error");
+ rc = cam_lrme_hw_util_process_err(lrme_hw);
+ if (rc)
+ CAM_ERR(CAM_LRME, "Process error failed");
+ goto end;
+ }
+
+ if (we_irq_status[0] & (1 << 1)) {
+ CAM_DBG(CAM_LRME, "reg update");
+ rc = cam_lrme_hw_util_process_reg_update(lrme_hw, &cb_args);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "Process reg_update failed");
+ goto end;
+ }
+ }
+
+ if (top_irq_status & (1 << 4)) {
+ CAM_DBG(CAM_LRME, "IDLE");
+
+ rc = cam_lrme_hw_util_process_idle(lrme_hw, &cb_args);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "Process idle failed");
+ goto end;
+ }
+ }
+
+ if (lrme_core->hw_mgr_cb.cam_lrme_hw_mgr_cb) {
+ lrme_core->hw_mgr_cb.cam_lrme_hw_mgr_cb(lrme_core->
+ hw_mgr_cb.data, &cb_args);
+ } else {
+ CAM_ERR(CAM_LRME, "No hw mgr cb");
+ rc = -EINVAL;
+ }
+
+end:
+ mutex_unlock(&lrme_hw->hw_mutex);
+ return rc;
+}
+
+int cam_lrme_hw_start(void *hw_priv, void *hw_start_args, uint32_t arg_size)
+{
+ struct cam_hw_info *lrme_hw = (struct cam_hw_info *)hw_priv;
+ int rc = 0;
+ struct cam_lrme_core *lrme_core;
+
+ if (!lrme_hw) {
+ CAM_ERR(CAM_LRME,
+ "Invalid input params, lrme_hw %pK",
+ lrme_hw);
+ return -EINVAL;
+ }
+
+ lrme_core = (struct cam_lrme_core *)lrme_hw->core_info;
+
+ mutex_lock(&lrme_hw->hw_mutex);
+
+ if (lrme_hw->open_count > 0) {
+ lrme_hw->open_count++;
+ CAM_DBG(CAM_LRME, "This device is already activated");
+ goto unlock;
+ }
+
+ rc = cam_lrme_soc_enable_resources(lrme_hw);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "Failed to enable soc resources");
+ goto unlock;
+ }
+
+ rc = cam_lrme_hw_util_reset(lrme_hw, CAM_LRME_HW_RESET_TYPE_HW_RESET);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "Failed to reset hw");
+ goto disable_soc;
+ }
+
+ if (lrme_core->hw_cdm_info) {
+ struct cam_lrme_cdm_info *hw_cdm_info =
+ lrme_core->hw_cdm_info;
+
+ rc = cam_cdm_stream_on(hw_cdm_info->cdm_handle);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "Failed to stream on cdm");
+ goto disable_soc;
+ }
+ }
+
+ lrme_hw->hw_state = CAM_HW_STATE_POWER_UP;
+ lrme_hw->open_count++;
+ lrme_core->state = CAM_LRME_CORE_STATE_IDLE;
+
+ CAM_DBG(CAM_LRME, "open count %d", lrme_hw->open_count);
+ mutex_unlock(&lrme_hw->hw_mutex);
+ return rc;
+
+disable_soc:
+ if (cam_lrme_soc_disable_resources(lrme_hw))
+ CAM_ERR(CAM_LRME, "Error in disable soc resources");
+unlock:
+ CAM_DBG(CAM_LRME, "open count %d", lrme_hw->open_count);
+ mutex_unlock(&lrme_hw->hw_mutex);
+ return rc;
+}
+
+int cam_lrme_hw_stop(void *hw_priv, void *hw_stop_args, uint32_t arg_size)
+{
+ struct cam_hw_info *lrme_hw = (struct cam_hw_info *)hw_priv;
+ int rc = 0;
+ struct cam_lrme_core *lrme_core;
+
+ if (!lrme_hw) {
+ CAM_ERR(CAM_LRME, "Invalid argument");
+ return -EINVAL;
+ }
+
+ lrme_core = (struct cam_lrme_core *)lrme_hw->core_info;
+
+ mutex_lock(&lrme_hw->hw_mutex);
+
+ if (lrme_hw->open_count == 0) {
+ mutex_unlock(&lrme_hw->hw_mutex);
+ CAM_ERR(CAM_LRME, "Error Unbalanced stop");
+ return -EINVAL;
+ }
+ lrme_hw->open_count--;
+
+ CAM_DBG(CAM_LRME, "open count %d", lrme_hw->open_count);
+
+ if (lrme_hw->open_count)
+ goto unlock;
+
+ lrme_core->req_proc = NULL;
+ lrme_core->req_submit = NULL;
+
+ if (lrme_core->hw_cdm_info) {
+ struct cam_lrme_cdm_info *hw_cdm_info =
+ lrme_core->hw_cdm_info;
+
+ rc = cam_cdm_stream_off(hw_cdm_info->cdm_handle);
+ if (rc) {
+ CAM_ERR(CAM_LRME,
+ "Failed in CDM StreamOff, handle=0x%x, rc=%d",
+ hw_cdm_info->cdm_handle, rc);
+ goto unlock;
+ }
+ }
+
+ rc = cam_lrme_soc_disable_resources(lrme_hw);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "Failed in Disable SOC, rc=%d", rc);
+ goto unlock;
+ }
+
+ lrme_hw->hw_state = CAM_HW_STATE_POWER_DOWN;
+ if (lrme_core->state == CAM_LRME_CORE_STATE_IDLE) {
+ lrme_core->state = CAM_LRME_CORE_STATE_INIT;
+ } else {
+ CAM_ERR(CAM_LRME, "HW in wrong state %d", lrme_core->state);
+ rc = -EINVAL;
+ }
+
+unlock:
+ mutex_unlock(&lrme_hw->hw_mutex);
+ return rc;
+}
+
+int cam_lrme_hw_submit_req(void *hw_priv, void *hw_submit_args,
+ uint32_t arg_size)
+{
+ struct cam_hw_info *lrme_hw = (struct cam_hw_info *)hw_priv;
+ struct cam_lrme_core *lrme_core;
+ struct cam_lrme_hw_submit_args *args =
+ (struct cam_lrme_hw_submit_args *)hw_submit_args;
+ int rc = 0;
+ struct cam_lrme_frame_request *frame_req;
+
+
+ if (!hw_priv || !hw_submit_args) {
+ CAM_ERR(CAM_LRME, "Invalid input");
+ return -EINVAL;
+ }
+
+ if (sizeof(struct cam_lrme_hw_submit_args) != arg_size) {
+ CAM_ERR(CAM_LRME,
+ "size of args %lu, arg_size %d",
+ sizeof(struct cam_lrme_hw_submit_args), arg_size);
+ return -EINVAL;
+ }
+
+ frame_req = args->frame_req;
+
+ mutex_lock(&lrme_hw->hw_mutex);
+
+ if (lrme_hw->open_count == 0) {
+ CAM_ERR(CAM_LRME, "HW is not open");
+ mutex_unlock(&lrme_hw->hw_mutex);
+ return -EINVAL;
+ }
+
+ lrme_core = (struct cam_lrme_core *)lrme_hw->core_info;
+ if (lrme_core->state != CAM_LRME_CORE_STATE_IDLE &&
+ lrme_core->state != CAM_LRME_CORE_STATE_PROCESSING) {
+ mutex_unlock(&lrme_hw->hw_mutex);
+ CAM_DBG(CAM_LRME, "device busy, cannot submit, state %d",
+ lrme_core->state);
+ return -EBUSY;
+ }
+
+ if (lrme_core->req_submit != NULL) {
+ CAM_ERR(CAM_LRME, "req_submit is not NULL");
+ mutex_unlock(&lrme_hw->hw_mutex);
+ return -EBUSY;
+ }
+
+ rc = cam_lrme_hw_util_submit_req(lrme_core, frame_req);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "Submit req failed");
+ goto error;
+ }
+
+ switch (lrme_core->state) {
+ case CAM_LRME_CORE_STATE_PROCESSING:
+ lrme_core->state = CAM_LRME_CORE_STATE_REQ_PROC_PEND;
+ break;
+
+ case CAM_LRME_CORE_STATE_IDLE:
+ cam_lrme_hw_util_submit_go(lrme_hw);
+ lrme_core->state = CAM_LRME_CORE_STATE_REQ_PENDING;
+ break;
+
+ default:
+ CAM_ERR(CAM_LRME, "Wrong hw state");
+ rc = -EINVAL;
+ goto error;
+ }
+
+ lrme_core->req_submit = frame_req;
+ mutex_unlock(&lrme_hw->hw_mutex);
+ CAM_DBG(CAM_LRME, "Release lock, submit done for req %llu",
+ frame_req->req_id);
+
+ return 0;
+
+error:
+ mutex_unlock(&lrme_hw->hw_mutex);
+
+ return rc;
+
+}
+
+int cam_lrme_hw_reset(void *hw_priv, void *reset_core_args, uint32_t arg_size)
+{
+ struct cam_hw_info *lrme_hw = hw_priv;
+ struct cam_lrme_core *lrme_core;
+ struct cam_lrme_hw_reset_args *lrme_reset_args = reset_core_args;
+ int rc;
+
+ if (!hw_priv) {
+ CAM_ERR(CAM_LRME, "Invalid input args");
+ return -EINVAL;
+ }
+
+ if (!reset_core_args ||
+ sizeof(struct cam_lrme_hw_reset_args) != arg_size) {
+ CAM_ERR(CAM_LRME, "Invalid reset args");
+ return -EINVAL;
+ }
+
+ lrme_core = lrme_hw->core_info;
+
+ mutex_lock(&lrme_hw->hw_mutex);
+ if (lrme_core->state == CAM_LRME_CORE_STATE_RECOVERY) {
+ mutex_unlock(&lrme_hw->hw_mutex);
+ CAM_ERR(CAM_LRME, "Reset not allowed in %d state",
+ lrme_core->state);
+ return -EINVAL;
+ }
+
+ lrme_core->state = CAM_LRME_CORE_STATE_RECOVERY;
+
+ rc = cam_lrme_hw_util_reset(lrme_hw, lrme_reset_args->reset_type);
+ if (rc) {
+ mutex_unlock(&lrme_hw->hw_mutex);
+ CAM_ERR(CAM_LRME, "Failed to reset");
+ return rc;
+ }
+
+ lrme_core->state = CAM_LRME_CORE_STATE_IDLE;
+
+ mutex_unlock(&lrme_hw->hw_mutex);
+
+ return 0;
+}
+
+int cam_lrme_hw_get_caps(void *hw_priv, void *get_hw_cap_args,
+ uint32_t arg_size)
+{
+ struct cam_hw_info *lrme_hw;
+ struct cam_lrme_core *lrme_core;
+ struct cam_lrme_dev_cap *lrme_hw_caps =
+ (struct cam_lrme_dev_cap *)get_hw_cap_args;
+
+ if (!hw_priv || !get_hw_cap_args) {
+ CAM_ERR(CAM_LRME, "Invalid input pointers %pK %pK",
+ hw_priv, get_hw_cap_args);
+ return -EINVAL;
+ }
+
+ lrme_hw = (struct cam_hw_info *)hw_priv;
+ lrme_core = (struct cam_lrme_core *)lrme_hw->core_info;
+ *lrme_hw_caps = lrme_core->hw_caps;
+
+ return 0;
+}
+
+irqreturn_t cam_lrme_hw_irq(int irq_num, void *data)
+{
+ struct cam_hw_info *lrme_hw;
+ struct cam_lrme_core *lrme_core;
+ struct cam_hw_soc_info *soc_info;
+ struct cam_lrme_hw_info *hw_info;
+ struct crm_workq_task *task;
+ struct cam_lrme_hw_work_data *work_data;
+ uint32_t top_irq_status, fe_irq_status, we_irq_status0, we_irq_status1;
+ int rc;
+
+ if (!data) {
+ CAM_ERR(CAM_LRME, "Invalid data in IRQ callback");
+ return IRQ_NONE;
+ }
+
+ lrme_hw = (struct cam_hw_info *)data;
+ lrme_core = (struct cam_lrme_core *)lrme_hw->core_info;
+ soc_info = &lrme_hw->soc_info;
+ hw_info = lrme_core->hw_info;
+
+ top_irq_status = cam_io_r_mb(
+ soc_info->reg_map[0].mem_base +
+ hw_info->titan_reg.top_irq_status);
+ CAM_DBG(CAM_LRME, "top_irq_status %x", top_irq_status);
+ cam_io_w_mb(top_irq_status,
+ soc_info->reg_map[0].mem_base +
+ hw_info->titan_reg.top_irq_clear);
+ top_irq_status &= CAM_LRME_TOP_IRQ_MASK;
+
+ fe_irq_status = cam_io_r_mb(
+ soc_info->reg_map[0].mem_base +
+ hw_info->bus_rd_reg.common_reg.irq_status);
+ CAM_DBG(CAM_LRME, "fe_irq_status %x", fe_irq_status);
+ cam_io_w_mb(fe_irq_status,
+ soc_info->reg_map[0].mem_base +
+ hw_info->bus_rd_reg.common_reg.irq_clear);
+ fe_irq_status &= CAM_LRME_FE_IRQ_MASK;
+
+ we_irq_status0 = cam_io_r_mb(
+ soc_info->reg_map[0].mem_base +
+ hw_info->bus_wr_reg.common_reg.irq_status_0);
+ CAM_DBG(CAM_LRME, "we_irq_status[0] %x", we_irq_status0);
+ cam_io_w_mb(we_irq_status0,
+ soc_info->reg_map[0].mem_base +
+ hw_info->bus_wr_reg.common_reg.irq_clear_0);
+ we_irq_status0 &= CAM_LRME_WE_IRQ_MASK_0;
+
+ we_irq_status1 = cam_io_r_mb(
+ soc_info->reg_map[0].mem_base +
+ hw_info->bus_wr_reg.common_reg.irq_status_1);
+ CAM_DBG(CAM_LRME, "we_irq_status[1] %x", we_irq_status1);
+ cam_io_w_mb(we_irq_status1,
+ soc_info->reg_map[0].mem_base +
+ hw_info->bus_wr_reg.common_reg.irq_clear_1);
+ we_irq_status1 &= CAM_LRME_WE_IRQ_MASK_1;
+
+ cam_io_w_mb(0x1, soc_info->reg_map[0].mem_base +
+ hw_info->titan_reg.top_irq_cmd);
+ cam_io_w_mb(0x1, soc_info->reg_map[0].mem_base +
+ hw_info->bus_wr_reg.common_reg.irq_cmd);
+ cam_io_w_mb(0x1, soc_info->reg_map[0].mem_base +
+ hw_info->bus_rd_reg.common_reg.irq_cmd);
+
+ if (top_irq_status & 0x1) {
+ complete(&lrme_core->reset_complete);
+ top_irq_status &= (~0x1);
+ }
+
+ if (top_irq_status || fe_irq_status ||
+ we_irq_status0 || we_irq_status1) {
+ task = cam_req_mgr_workq_get_task(lrme_core->work);
+ if (!task) {
+ CAM_ERR(CAM_LRME, "no empty task available");
+ return IRQ_HANDLED;
+ }
+ work_data = (struct cam_lrme_hw_work_data *)task->payload;
+ work_data->top_irq_status = top_irq_status;
+ work_data->fe_irq_status = fe_irq_status;
+ work_data->we_irq_status[0] = we_irq_status0;
+ work_data->we_irq_status[1] = we_irq_status1;
+ task->process_cb = cam_lrme_hw_process_irq;
+ rc = cam_req_mgr_workq_enqueue_task(task, data,
+ CRM_TASK_PRIORITY_0);
+ if (rc)
+ CAM_ERR(CAM_LRME,
+ "Failed in enqueue work task, rc=%d", rc);
+ }
+
+ return IRQ_HANDLED;
+}
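The top-half above follows a read/write-to-clear sequence for every status register: read the raw status, write it back to the matching clear register, pulse the cmd register so the clears take effect, and only then AND with the IRQ mask of interest before dispatching to the workqueue. A sketch of that sequence over a fake register file (the `regs` array and offsets are illustrative, not driver symbols):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative register offsets (not the real LRME register map). */
enum { REG_IRQ_STATUS, REG_IRQ_CLEAR, REG_IRQ_CMD, REG_COUNT };

#define IRQ_MASK 0x19u	/* bits the handler actually acts on */

/* Read the status, write it back to the clear register, pulse the
 * cmd register to latch the clear, then keep only the masked bits.
 * A real device drops the latched bits itself; we model that by
 * zeroing the status slot. */
static uint32_t read_and_clear_irq(uint32_t *regs)
{
	uint32_t status = regs[REG_IRQ_STATUS];

	regs[REG_IRQ_CLEAR] = status;
	regs[REG_IRQ_CMD] = 0x1;
	regs[REG_IRQ_STATUS] = 0;
	return status & IRQ_MASK;
}
```

Writing the raw (unmasked) status to the clear register, as the driver does, also acknowledges bits the handler ignores, so they cannot re-fire the interrupt line.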
+
+int cam_lrme_hw_process_cmd(void *hw_priv, uint32_t cmd_type,
+ void *cmd_args, uint32_t arg_size)
+{
+ struct cam_hw_info *lrme_hw = (struct cam_hw_info *)hw_priv;
+ int rc = 0;
+
+ switch (cmd_type) {
+ case CAM_LRME_HW_CMD_PREPARE_HW_UPDATE: {
+ struct cam_lrme_hw_cmd_config_args *config_args;
+
+ config_args = (struct cam_lrme_hw_cmd_config_args *)cmd_args;
+ rc = cam_lrme_hw_util_process_config_hw(lrme_hw, config_args);
+ break;
+ }
+
+ case CAM_LRME_HW_CMD_REGISTER_CB: {
+ struct cam_lrme_hw_cmd_set_cb *cb_args;
+ struct cam_lrme_device *hw_device;
+ struct cam_lrme_core *lrme_core =
+ (struct cam_lrme_core *)lrme_hw->core_info;
+ cb_args = (struct cam_lrme_hw_cmd_set_cb *)cmd_args;
+ lrme_core->hw_mgr_cb.cam_lrme_hw_mgr_cb =
+ cb_args->cam_lrme_hw_mgr_cb;
+ lrme_core->hw_mgr_cb.data = cb_args->data;
+ hw_device = cb_args->data;
+ rc = 0;
+ break;
+ }
+
+ case CAM_LRME_HW_CMD_SUBMIT: {
+ struct cam_lrme_hw_submit_args *submit_args;
+
+ submit_args = (struct cam_lrme_hw_submit_args *)cmd_args;
+ rc = cam_lrme_hw_submit_req(hw_priv,
+ submit_args, arg_size);
+ break;
+ }
+
+ default:
+ break;
+ }
+
+ return rc;
+}
diff --git a/drivers/media/platform/msm/camera/cam_lrme/lrme_hw_mgr/lrme_hw/cam_lrme_hw_core.h b/drivers/media/platform/msm/camera/cam_lrme/lrme_hw_mgr/lrme_hw/cam_lrme_hw_core.h
new file mode 100644
index 0000000..bf2f370
--- /dev/null
+++ b/drivers/media/platform/msm/camera/cam_lrme/lrme_hw_mgr/lrme_hw/cam_lrme_hw_core.h
@@ -0,0 +1,457 @@
+/* Copyright (c) 2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _CAM_LRME_HW_CORE_H_
+#define _CAM_LRME_HW_CORE_H_
+
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <media/cam_defs.h>
+#include <media/cam_lrme.h>
+
+#include "cam_common_util.h"
+#include "cam_debug_util.h"
+#include "cam_io_util.h"
+#include "cam_cpas_api.h"
+#include "cam_cdm_intf_api.h"
+#include "cam_lrme_hw_intf.h"
+#include "cam_lrme_hw_soc.h"
+#include "cam_req_mgr_workq.h"
+
+#define CAM_LRME_HW_RESET_TIMEOUT 3000
+
+#define CAM_LRME_BUS_RD_MAX_CLIENTS 2
+#define CAM_LRME_BUS_WR_MAX_CLIENTS 2
+
+#define CAM_LRME_HW_WORKQ_NUM_TASK 30
+
+#define CAM_LRME_TOP_IRQ_MASK 0x19
+#define CAM_LRME_WE_IRQ_MASK_0 0x2
+#define CAM_LRME_WE_IRQ_MASK_1 0x0
+#define CAM_LRME_FE_IRQ_MASK 0x0
+
+#define CAM_LRME_MAX_REG_PAIR_NUM 60
+
+/**
+ * enum cam_lrme_irq_set
+ *
+ * @CAM_LRME_IRQ_ENABLE : Enable irqs
+ * @CAM_LRME_IRQ_DISABLE : Disable irqs
+ */
+enum cam_lrme_irq_set {
+ CAM_LRME_IRQ_ENABLE,
+ CAM_LRME_IRQ_DISABLE,
+};
+
+/**
+ * struct cam_lrme_cdm_info : information used to submit cdm command
+ *
+ * @cdm_handle : CDM handle for this device
+ * @cdm_ops : CDM ops
+ * @cdm_cmd : CDM command pointer
+ */
+struct cam_lrme_cdm_info {
+ uint32_t cdm_handle;
+ struct cam_cdm_utils_ops *cdm_ops;
+ struct cam_cdm_bl_request *cdm_cmd;
+};
+
+/**
+ * struct cam_lrme_hw_work_data : Work data for HW work queue
+ *
+ * @top_irq_status : Top registers irq status
+ * @fe_irq_status : FE engine irq status
+ * @we_irq_status : WE engine irq status
+ */
+struct cam_lrme_hw_work_data {
+ uint32_t top_irq_status;
+ uint32_t fe_irq_status;
+ uint32_t we_irq_status[2];
+};
+
+/**
+ * enum cam_lrme_core_state : LRME core states
+ *
+ * @CAM_LRME_CORE_STATE_UNINIT : LRME is in uninit state
+ * @CAM_LRME_CORE_STATE_INIT : LRME is in init state after probe
+ * @CAM_LRME_CORE_STATE_IDLE : LRME is in idle state. Hardware is in
+ * this state when no frame is processing
+ * or waiting for this core.
+ * @CAM_LRME_CORE_STATE_REQ_PENDING : LRME is in pending state. One frame is
+ * waiting for processing
+ * @CAM_LRME_CORE_STATE_PROCESSING : LRME is in processing state. HW manager
+ * can submit one more frame to HW
+ * @CAM_LRME_CORE_STATE_REQ_PROC_PEND : Indicate two frames are inside HW.
+ * @CAM_LRME_CORE_STATE_RECOVERY : Indicate core is in the process of reset
+ * @CAM_LRME_CORE_STATE_MAX : upper limit of states
+ */
+enum cam_lrme_core_state {
+ CAM_LRME_CORE_STATE_UNINIT,
+ CAM_LRME_CORE_STATE_INIT,
+ CAM_LRME_CORE_STATE_IDLE,
+ CAM_LRME_CORE_STATE_REQ_PENDING,
+ CAM_LRME_CORE_STATE_PROCESSING,
+ CAM_LRME_CORE_STATE_REQ_PROC_PEND,
+ CAM_LRME_CORE_STATE_RECOVERY,
+ CAM_LRME_CORE_STATE_MAX,
+};
+
+/**
+ * struct cam_lrme_core : LRME HW core information
+ *
+ * @hw_info : Pointer to base HW information structure
+ * @device_iommu : Device iommu handle
+ * @cdm_iommu : CDM iommu handle
+ * @hw_caps : Hardware capabilities
+ * @state : Hardware state
+ * @reset_complete : Reset completion
+ * @work : Hardware workqueue to handle irq events
+ * @work_data : Work data used by hardware workqueue
+ * @hw_mgr_cb : Hw manager callback
+ * @req_proc : Pointer to the processing frame request
+ * @req_submit : Pointer to the frame request waiting for processing
+ * @hw_cdm_info : CDM information used by this device
+ * @hw_idx : Hardware index
+ */
+struct cam_lrme_core {
+ struct cam_lrme_hw_info *hw_info;
+ struct cam_iommu_handle device_iommu;
+ struct cam_iommu_handle cdm_iommu;
+ struct cam_lrme_dev_cap hw_caps;
+ enum cam_lrme_core_state state;
+ struct completion reset_complete;
+ struct cam_req_mgr_core_workq *work;
+ struct cam_lrme_hw_work_data work_data[CAM_LRME_HW_WORKQ_NUM_TASK];
+ struct cam_lrme_hw_cmd_set_cb hw_mgr_cb;
+ struct cam_lrme_frame_request *req_proc;
+ struct cam_lrme_frame_request *req_submit;
+ struct cam_lrme_cdm_info *hw_cdm_info;
+ uint32_t hw_idx;
+};
+
+/**
+ * struct cam_lrme_bus_rd_reg_common : Offsets of FE common registers
+ *
+ * @hw_version : Offset of hw_version register
+ * @hw_capability : Offset of hw_capability register
+ * @sw_reset : Offset of sw_reset register
+ * @cgc_override : Offset of cgc_override register
+ * @irq_mask : Offset of irq_mask register
+ * @irq_clear : Offset of irq_clear register
+ * @irq_cmd : Offset of irq_cmd register
+ * @irq_status : Offset of irq_status register
+ * @cmd : Offset of cmd register
+ * @irq_set : Offset of irq_set register
+ * @misr_reset : Offset of misr_reset register
+ * @security_cfg : Offset of security_cfg register
+ * @pwr_iso_cfg : Offset of pwr_iso_cfg register
+ * @pwr_iso_seed : Offset of pwr_iso_seed register
+ * @test_bus_ctrl : Offset of test_bus_ctrl register
+ * @spare : Offset of spare register
+ */
+struct cam_lrme_bus_rd_reg_common {
+ uint32_t hw_version;
+ uint32_t hw_capability;
+ uint32_t sw_reset;
+ uint32_t cgc_override;
+ uint32_t irq_mask;
+ uint32_t irq_clear;
+ uint32_t irq_cmd;
+ uint32_t irq_status;
+ uint32_t cmd;
+ uint32_t irq_set;
+ uint32_t misr_reset;
+ uint32_t security_cfg;
+ uint32_t pwr_iso_cfg;
+ uint32_t pwr_iso_seed;
+ uint32_t test_bus_ctrl;
+ uint32_t spare;
+};
+
+/**
+ * struct cam_lrme_bus_wr_reg_common : Offset of WE common registers
+ *
+ * @hw_version : Offset of hw_version register
+ * @hw_capability : Offset of hw_capability register
+ * @sw_reset : Offset of sw_reset register
+ * @cgc_override : Offset of cgc_override register
+ * @misr_reset : Offset of misr_reset register
+ * @pwr_iso_cfg : Offset of pwr_iso_cfg register
+ * @test_bus_ctrl : Offset of test_bus_ctrl register
+ * @composite_mask_0 : Offset of composite_mask_0 register
+ * @irq_mask_0 : Offset of irq_mask_0 register
+ * @irq_mask_1 : Offset of irq_mask_1 register
+ * @irq_clear_0 : Offset of irq_clear_0 register
+ * @irq_clear_1 : Offset of irq_clear_1 register
+ * @irq_status_0 : Offset of irq_status_0 register
+ * @irq_status_1 : Offset of irq_status_1 register
+ * @irq_cmd : Offset of irq_cmd register
+ * @irq_set_0 : Offset of irq_set_0 register
+ * @irq_set_1 : Offset of irq_set_1 register
+ * @addr_fifo_status : Offset of addr_fifo_status register
+ * @frame_header_cfg0 : Offset of frame_header_cfg0 register
+ * @frame_header_cfg1 : Offset of frame_header_cfg1 register
+ * @spare : Offset of spare register
+ */
+struct cam_lrme_bus_wr_reg_common {
+ uint32_t hw_version;
+ uint32_t hw_capability;
+ uint32_t sw_reset;
+ uint32_t cgc_override;
+ uint32_t misr_reset;
+ uint32_t pwr_iso_cfg;
+ uint32_t test_bus_ctrl;
+ uint32_t composite_mask_0;
+ uint32_t irq_mask_0;
+ uint32_t irq_mask_1;
+ uint32_t irq_clear_0;
+ uint32_t irq_clear_1;
+ uint32_t irq_status_0;
+ uint32_t irq_status_1;
+ uint32_t irq_cmd;
+ uint32_t irq_set_0;
+ uint32_t irq_set_1;
+ uint32_t addr_fifo_status;
+ uint32_t frame_header_cfg0;
+ uint32_t frame_header_cfg1;
+ uint32_t spare;
+};
+
+/**
+ * struct cam_lrme_bus_rd_bus_client : Offset of FE registers
+ *
+ * @core_cfg : Offset of core_cfg register
+ * @ccif_meta_data : Offset of ccif_meta_data register
+ * @addr_image : Offset of addr_image register
+ * @rd_buffer_size : Offset of rd_buffer_size register
+ * @rd_stride : Offset of rd_stride register
+ * @unpack_cfg_0 : Offset of unpack_cfg_0 register
+ * @latency_buff_allocation : Offset of latency_buff_allocation register
+ * @burst_limit_cfg : Offset of burst_limit_cfg register
+ * @misr_cfg_0 : Offset of misr_cfg_0 register
+ * @misr_cfg_1 : Offset of misr_cfg_1 register
+ * @misr_rd_val : Offset of misr_rd_val register
+ * @debug_status_cfg : Offset of debug_status_cfg register
+ * @debug_status_0 : Offset of debug_status_0 register
+ * @debug_status_1 : Offset of debug_status_1 register
+ */
+struct cam_lrme_bus_rd_bus_client {
+ uint32_t core_cfg;
+ uint32_t ccif_meta_data;
+ uint32_t addr_image;
+ uint32_t rd_buffer_size;
+ uint32_t rd_stride;
+ uint32_t unpack_cfg_0;
+ uint32_t latency_buff_allocation;
+ uint32_t burst_limit_cfg;
+ uint32_t misr_cfg_0;
+ uint32_t misr_cfg_1;
+ uint32_t misr_rd_val;
+ uint32_t debug_status_cfg;
+ uint32_t debug_status_0;
+ uint32_t debug_status_1;
+};
+
+/**
+ * struct cam_lrme_bus_wr_bus_client : Offset of WE registers
+ *
+ * @status_0 : Offset of status_0 register
+ * @status_1 : Offset of status_1 register
+ * @cfg : Offset of cfg register
+ * @addr_frame_header : Offset of addr_frame_header register
+ * @frame_header_cfg : Offset of frame_header_cfg register
+ * @addr_image : Offset of addr_image register
+ * @addr_image_offset : Offset of addr_image_offset register
+ * @buffer_width_cfg : Offset of buffer_width_cfg register
+ * @buffer_height_cfg : Offset of buffer_height_cfg register
+ * @packer_cfg : Offset of packer_cfg register
+ * @wr_stride : Offset of wr_stride register
+ * @irq_subsample_cfg_period : Offset of irq_subsample_cfg_period register
+ * @irq_subsample_cfg_pattern : Offset of irq_subsample_cfg_pattern register
+ * @burst_limit_cfg : Offset of burst_limit_cfg register
+ * @misr_cfg : Offset of misr_cfg register
+ * @misr_rd_word_sel : Offset of misr_rd_word_sel register
+ * @misr_val : Offset of misr_val register
+ * @debug_status_cfg : Offset of debug_status_cfg register
+ * @debug_status_0 : Offset of debug_status_0 register
+ * @debug_status_1 : Offset of debug_status_1 register
+ */
+struct cam_lrme_bus_wr_bus_client {
+ uint32_t status_0;
+ uint32_t status_1;
+ uint32_t cfg;
+ uint32_t addr_frame_header;
+ uint32_t frame_header_cfg;
+ uint32_t addr_image;
+ uint32_t addr_image_offset;
+ uint32_t buffer_width_cfg;
+ uint32_t buffer_height_cfg;
+ uint32_t packer_cfg;
+ uint32_t wr_stride;
+ uint32_t irq_subsample_cfg_period;
+ uint32_t irq_subsample_cfg_pattern;
+ uint32_t burst_limit_cfg;
+ uint32_t misr_cfg;
+ uint32_t misr_rd_word_sel;
+ uint32_t misr_val;
+ uint32_t debug_status_cfg;
+ uint32_t debug_status_0;
+ uint32_t debug_status_1;
+};
+
+/**
+ * struct cam_lrme_bus_rd_hw_info : FE registers information
+ *
+ * @common_reg : FE common register
+ * @bus_client_reg : List of FE bus registers information
+ */
+struct cam_lrme_bus_rd_hw_info {
+ struct cam_lrme_bus_rd_reg_common common_reg;
+ struct cam_lrme_bus_rd_bus_client
+ bus_client_reg[CAM_LRME_BUS_RD_MAX_CLIENTS];
+};
+
+/**
+ * struct cam_lrme_bus_wr_hw_info : WE engine registers information
+ *
+ * @common_reg : WE common register
+ * @bus_client_reg : List of WE bus registers information
+ */
+struct cam_lrme_bus_wr_hw_info {
+ struct cam_lrme_bus_wr_reg_common common_reg;
+ struct cam_lrme_bus_wr_bus_client
+ bus_client_reg[CAM_LRME_BUS_WR_MAX_CLIENTS];
+};
+
+/**
+ * struct cam_lrme_clc_reg : Offset of clc registers
+ *
+ * @clc_hw_version : Offset of clc_hw_version register
+ * @clc_hw_status : Offset of clc_hw_status register
+ * @clc_hw_status_dbg : Offset of clc_hw_status_dbg register
+ * @clc_module_cfg : Offset of clc_module_cfg register
+ * @clc_moduleformat : Offset of clc_moduleformat register
+ * @clc_rangestep : Offset of clc_rangestep register
+ * @clc_offset : Offset of clc_offset register
+ * @clc_maxallowedsad : Offset of clc_maxallowedsad register
+ * @clc_minallowedtarmad : Offset of clc_minallowedtarmad register
+ * @clc_meaningfulsaddiff : Offset of clc_meaningfulsaddiff register
+ * @clc_minsaddiffdenom : Offset of clc_minsaddiffdenom register
+ * @clc_robustnessmeasuredistmap_0 : Offset of measuredistmap_0 register
+ * @clc_robustnessmeasuredistmap_1 : Offset of measuredistmap_1 register
+ * @clc_robustnessmeasuredistmap_2 : Offset of measuredistmap_2 register
+ * @clc_robustnessmeasuredistmap_3 : Offset of measuredistmap_3 register
+ * @clc_robustnessmeasuredistmap_4 : Offset of measuredistmap_4 register
+ * @clc_robustnessmeasuredistmap_5 : Offset of measuredistmap_5 register
+ * @clc_robustnessmeasuredistmap_6 : Offset of measuredistmap_6 register
+ * @clc_robustnessmeasuredistmap_7 : Offset of measuredistmap_7 register
+ * @clc_ds_crop_horizontal : Offset of clc_ds_crop_horizontal register
+ * @clc_ds_crop_vertical : Offset of clc_ds_crop_vertical register
+ * @clc_tar_pd_unpacker : Offset of clc_tar_pd_unpacker register
+ * @clc_ref_pd_unpacker : Offset of clc_ref_pd_unpacker register
+ * @clc_sw_override : Offset of clc_sw_override register
+ * @clc_tar_height : Offset of clc_tar_height register
+ * @clc_ref_height                 : Offset of clc_ref_height register
+ * @clc_test_bus_ctrl : Offset of clc_test_bus_ctrl register
+ * @clc_spare : Offset of clc_spare register
+ */
+struct cam_lrme_clc_reg {
+ uint32_t clc_hw_version;
+ uint32_t clc_hw_status;
+ uint32_t clc_hw_status_dbg;
+ uint32_t clc_module_cfg;
+ uint32_t clc_moduleformat;
+ uint32_t clc_rangestep;
+ uint32_t clc_offset;
+ uint32_t clc_maxallowedsad;
+ uint32_t clc_minallowedtarmad;
+ uint32_t clc_meaningfulsaddiff;
+ uint32_t clc_minsaddiffdenom;
+ uint32_t clc_robustnessmeasuredistmap_0;
+ uint32_t clc_robustnessmeasuredistmap_1;
+ uint32_t clc_robustnessmeasuredistmap_2;
+ uint32_t clc_robustnessmeasuredistmap_3;
+ uint32_t clc_robustnessmeasuredistmap_4;
+ uint32_t clc_robustnessmeasuredistmap_5;
+ uint32_t clc_robustnessmeasuredistmap_6;
+ uint32_t clc_robustnessmeasuredistmap_7;
+ uint32_t clc_ds_crop_horizontal;
+ uint32_t clc_ds_crop_vertical;
+ uint32_t clc_tar_pd_unpacker;
+ uint32_t clc_ref_pd_unpacker;
+ uint32_t clc_sw_override;
+ uint32_t clc_tar_height;
+ uint32_t clc_ref_height;
+ uint32_t clc_test_bus_ctrl;
+ uint32_t clc_spare;
+};
+
+/**
+ * struct cam_lrme_titan_reg : Offset of LRME top registers
+ *
+ * @top_hw_version : Offset of top_hw_version register
+ * @top_titan_version : Offset of top_titan_version register
+ * @top_rst_cmd : Offset of top_rst_cmd register
+ * @top_core_clk_cfg : Offset of top_core_clk_cfg register
+ * @top_irq_status : Offset of top_irq_status register
+ * @top_irq_mask : Offset of top_irq_mask register
+ * @top_irq_clear : Offset of top_irq_clear register
+ * @top_irq_set : Offset of top_irq_set register
+ * @top_irq_cmd : Offset of top_irq_cmd register
+ * @top_violation_status : Offset of top_violation_status register
+ * @top_spare : Offset of top_spare register
+ */
+struct cam_lrme_titan_reg {
+ uint32_t top_hw_version;
+ uint32_t top_titan_version;
+ uint32_t top_rst_cmd;
+ uint32_t top_core_clk_cfg;
+ uint32_t top_irq_status;
+ uint32_t top_irq_mask;
+ uint32_t top_irq_clear;
+ uint32_t top_irq_set;
+ uint32_t top_irq_cmd;
+ uint32_t top_violation_status;
+ uint32_t top_spare;
+};
+
+/**
+ * struct cam_lrme_hw_info : LRME registers information
+ *
+ * @clc_reg : LRME CLC registers
+ * @bus_rd_reg : LRME FE registers
+ * @bus_wr_reg : LRME WE registers
+ * @titan_reg : LRME top registers
+ */
+struct cam_lrme_hw_info {
+ struct cam_lrme_clc_reg clc_reg;
+ struct cam_lrme_bus_rd_hw_info bus_rd_reg;
+ struct cam_lrme_bus_wr_hw_info bus_wr_reg;
+ struct cam_lrme_titan_reg titan_reg;
+};
+
+int cam_lrme_hw_process_irq(void *priv, void *data);
+int cam_lrme_hw_submit_req(void *hw_priv, void *hw_submit_args,
+ uint32_t arg_size);
+int cam_lrme_hw_reset(void *hw_priv, void *reset_core_args, uint32_t arg_size);
+int cam_lrme_hw_stop(void *hw_priv, void *stop_args, uint32_t arg_size);
+int cam_lrme_hw_get_caps(void *hw_priv, void *get_hw_cap_args,
+ uint32_t arg_size);
+irqreturn_t cam_lrme_hw_irq(int irq_num, void *data);
+int cam_lrme_hw_process_cmd(void *hw_priv, uint32_t cmd_type,
+ void *cmd_args, uint32_t arg_size);
+int cam_lrme_hw_util_get_caps(struct cam_hw_info *lrme_hw,
+ struct cam_lrme_dev_cap *hw_caps);
+int cam_lrme_hw_start(void *hw_priv, void *hw_init_args, uint32_t arg_size);
+int cam_lrme_hw_flush(void *hw_priv, void *hw_flush_args, uint32_t arg_size);
+void cam_lrme_set_irq(struct cam_hw_info *lrme_hw, enum cam_lrme_irq_set set);
+
+#endif /* _CAM_LRME_HW_CORE_H_ */
diff --git a/drivers/media/platform/msm/camera/cam_lrme/lrme_hw_mgr/lrme_hw/cam_lrme_hw_dev.c b/drivers/media/platform/msm/camera/cam_lrme/lrme_hw_mgr/lrme_hw/cam_lrme_hw_dev.c
new file mode 100644
index 0000000..2e63752
--- /dev/null
+++ b/drivers/media/platform/msm/camera/cam_lrme/lrme_hw_mgr/lrme_hw/cam_lrme_hw_dev.c
@@ -0,0 +1,320 @@
+/* Copyright (c) 2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/platform_device.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <media/cam_req_mgr.h>
+
+#include "cam_subdev.h"
+#include "cam_lrme_hw_intf.h"
+#include "cam_lrme_hw_core.h"
+#include "cam_lrme_hw_soc.h"
+#include "cam_lrme_hw_reg.h"
+#include "cam_req_mgr_workq.h"
+#include "cam_lrme_hw_mgr.h"
+#include "cam_mem_mgr_api.h"
+#include "cam_smmu_api.h"
+
+#define CAM_LRME_HW_WORKQ_NUM_TASK 30
+
+static int cam_lrme_hw_dev_util_cdm_acquire(struct cam_lrme_core *lrme_core,
+ struct cam_hw_info *lrme_hw)
+{
+ int rc, i;
+ struct cam_cdm_bl_request *cdm_cmd;
+ struct cam_cdm_acquire_data cdm_acquire;
+ struct cam_lrme_cdm_info *hw_cdm_info;
+
+ hw_cdm_info = kzalloc(sizeof(struct cam_lrme_cdm_info),
+ GFP_KERNEL);
+ if (!hw_cdm_info) {
+ CAM_ERR(CAM_LRME, "No memory for hw_cdm_info");
+ return -ENOMEM;
+ }
+
+ cdm_cmd = kzalloc((sizeof(struct cam_cdm_bl_request) +
+ ((CAM_LRME_MAX_HW_ENTRIES - 1) *
+ sizeof(struct cam_cdm_bl_cmd))), GFP_KERNEL);
+ if (!cdm_cmd) {
+ CAM_ERR(CAM_LRME, "No memory for cdm_cmd");
+ kfree(hw_cdm_info);
+ return -ENOMEM;
+ }
+
+ memset(&cdm_acquire, 0, sizeof(cdm_acquire));
+ strlcpy(cdm_acquire.identifier, "lrmecdm", sizeof("lrmecdm"));
+ cdm_acquire.cell_index = lrme_hw->soc_info.index;
+ cdm_acquire.handle = 0;
+ cdm_acquire.userdata = hw_cdm_info;
+ cdm_acquire.cam_cdm_callback = NULL;
+ cdm_acquire.id = CAM_CDM_VIRTUAL;
+ cdm_acquire.base_array_cnt = lrme_hw->soc_info.num_reg_map;
+ for (i = 0; i < lrme_hw->soc_info.num_reg_map; i++)
+ cdm_acquire.base_array[i] = &lrme_hw->soc_info.reg_map[i];
+
+ rc = cam_cdm_acquire(&cdm_acquire);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "Can't acquire cdm");
+ goto error;
+ }
+
+ hw_cdm_info->cdm_cmd = cdm_cmd;
+ hw_cdm_info->cdm_ops = cdm_acquire.ops;
+ hw_cdm_info->cdm_handle = cdm_acquire.handle;
+
+ lrme_core->hw_cdm_info = hw_cdm_info;
+ CAM_DBG(CAM_LRME, "cdm acquire done");
+
+ return 0;
+error:
+ kfree(cdm_cmd);
+ kfree(hw_cdm_info);
+ return rc;
+}
+
+static int cam_lrme_hw_dev_probe(struct platform_device *pdev)
+{
+ struct cam_hw_info *lrme_hw;
+ struct cam_hw_intf lrme_hw_intf;
+ struct cam_lrme_core *lrme_core;
+ const struct of_device_id *match_dev = NULL;
+ struct cam_lrme_hw_info *hw_info;
+ int rc, i;
+
+ lrme_hw = kzalloc(sizeof(struct cam_hw_info), GFP_KERNEL);
+ if (!lrme_hw) {
+ CAM_ERR(CAM_LRME, "No memory to create lrme_hw");
+ return -ENOMEM;
+ }
+
+ lrme_core = kzalloc(sizeof(struct cam_lrme_core), GFP_KERNEL);
+ if (!lrme_core) {
+ CAM_ERR(CAM_LRME, "No memory to create lrme_core");
+ kfree(lrme_hw);
+ return -ENOMEM;
+ }
+
+ lrme_hw->core_info = lrme_core;
+ lrme_hw->hw_state = CAM_HW_STATE_POWER_DOWN;
+ lrme_hw->soc_info.pdev = pdev;
+ lrme_hw->soc_info.dev = &pdev->dev;
+ lrme_hw->soc_info.dev_name = pdev->name;
+ lrme_hw->open_count = 0;
+ lrme_core->state = CAM_LRME_CORE_STATE_INIT;
+
+ mutex_init(&lrme_hw->hw_mutex);
+ spin_lock_init(&lrme_hw->hw_lock);
+ init_completion(&lrme_hw->hw_complete);
+ init_completion(&lrme_core->reset_complete);
+
+ rc = cam_req_mgr_workq_create("cam_lrme_hw_worker",
+ CAM_LRME_HW_WORKQ_NUM_TASK,
+ &lrme_core->work, CRM_WORKQ_USAGE_IRQ);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "Unable to create a workq, rc=%d", rc);
+ goto free_memory;
+ }
+
+ for (i = 0; i < CAM_LRME_HW_WORKQ_NUM_TASK; i++)
+ lrme_core->work->task.pool[i].payload =
+ &lrme_core->work_data[i];
+
+ match_dev = of_match_device(pdev->dev.driver->of_match_table,
+ &pdev->dev);
+ if (!match_dev || !match_dev->data) {
+ CAM_ERR(CAM_LRME, "No Of_match data, %pK", match_dev);
+ rc = -EINVAL;
+ goto destroy_workqueue;
+ }
+ hw_info = (struct cam_lrme_hw_info *)match_dev->data;
+ lrme_core->hw_info = hw_info;
+
+ rc = cam_lrme_soc_init_resources(&lrme_hw->soc_info,
+ cam_lrme_hw_irq, lrme_hw);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "Failed to init soc, rc=%d", rc);
+ goto destroy_workqueue;
+ }
+
+ rc = cam_lrme_hw_dev_util_cdm_acquire(lrme_core, lrme_hw);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "Failed to acquire cdm");
+ goto deinit_platform_res;
+ }
+
+ rc = cam_smmu_get_handle("lrme", &lrme_core->device_iommu.non_secure);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "Get iommu handle failed");
+ goto release_cdm;
+ }
+
+ rc = cam_smmu_ops(lrme_core->device_iommu.non_secure, CAM_SMMU_ATTACH);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "LRME attach iommu handle failed, rc=%d", rc);
+ goto destroy_smmu;
+ }
+
+ rc = cam_lrme_hw_start(lrme_hw, NULL, 0);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "Failed to hw init, rc=%d", rc);
+ goto detach_smmu;
+ }
+
+ rc = cam_lrme_hw_util_get_caps(lrme_hw, &lrme_core->hw_caps);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "Failed to get hw caps, rc=%d", rc);
+ if (cam_lrme_hw_stop(lrme_hw, NULL, 0))
+ CAM_ERR(CAM_LRME, "Failed in hw deinit");
+ goto detach_smmu;
+ }
+
+ rc = cam_lrme_hw_stop(lrme_hw, NULL, 0);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "Failed to deinit hw, rc=%d", rc);
+ goto detach_smmu;
+ }
+
+ lrme_core->hw_idx = lrme_hw->soc_info.index;
+ lrme_hw_intf.hw_priv = lrme_hw;
+ lrme_hw_intf.hw_idx = lrme_hw->soc_info.index;
+ lrme_hw_intf.hw_ops.get_hw_caps = cam_lrme_hw_get_caps;
+ lrme_hw_intf.hw_ops.init = NULL;
+ lrme_hw_intf.hw_ops.deinit = NULL;
+ lrme_hw_intf.hw_ops.reset = cam_lrme_hw_reset;
+ lrme_hw_intf.hw_ops.reserve = NULL;
+ lrme_hw_intf.hw_ops.release = NULL;
+ lrme_hw_intf.hw_ops.start = cam_lrme_hw_start;
+ lrme_hw_intf.hw_ops.stop = cam_lrme_hw_stop;
+ lrme_hw_intf.hw_ops.read = NULL;
+ lrme_hw_intf.hw_ops.write = NULL;
+ lrme_hw_intf.hw_ops.process_cmd = cam_lrme_hw_process_cmd;
+ lrme_hw_intf.hw_type = CAM_HW_LRME;
+
+ rc = cam_cdm_get_iommu_handle("lrmecdm", &lrme_core->cdm_iommu);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "Failed to acquire the CDM iommu handles");
+ goto detach_smmu;
+ }
+
+ rc = cam_lrme_mgr_register_device(&lrme_hw_intf,
+ &lrme_core->device_iommu,
+ &lrme_core->cdm_iommu);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "Failed to register device");
+ goto detach_smmu;
+ }
+
+ platform_set_drvdata(pdev, lrme_hw);
+ CAM_DBG(CAM_LRME, "LRME-%d probe successful", lrme_hw_intf.hw_idx);
+
+ return rc;
+
+detach_smmu:
+ cam_smmu_ops(lrme_core->device_iommu.non_secure, CAM_SMMU_DETACH);
+destroy_smmu:
+ cam_smmu_destroy_handle(lrme_core->device_iommu.non_secure);
+release_cdm:
+ cam_cdm_release(lrme_core->hw_cdm_info->cdm_handle);
+ kfree(lrme_core->hw_cdm_info->cdm_cmd);
+ kfree(lrme_core->hw_cdm_info);
+deinit_platform_res:
+ if (cam_lrme_soc_deinit_resources(&lrme_hw->soc_info))
+ CAM_ERR(CAM_LRME, "Failed in soc deinit");
+destroy_workqueue:
+ cam_req_mgr_workq_destroy(&lrme_core->work);
+free_memory:
+ mutex_destroy(&lrme_hw->hw_mutex);
+ kfree(lrme_hw);
+ kfree(lrme_core);
+
+ return rc;
+}
+
+static int cam_lrme_hw_dev_remove(struct platform_device *pdev)
+{
+ int rc = 0;
+ struct cam_hw_info *lrme_hw;
+ struct cam_lrme_core *lrme_core;
+
+ lrme_hw = platform_get_drvdata(pdev);
+ if (!lrme_hw) {
+ CAM_ERR(CAM_LRME, "Invalid lrme_hw from fd_hw_intf");
+ rc = -ENODEV;
+ goto deinit_platform_res;
+ }
+
+ lrme_core = (struct cam_lrme_core *)lrme_hw->core_info;
+ if (!lrme_core) {
+		CAM_ERR(CAM_LRME, "Invalid lrme_core from lrme_hw");
+ rc = -EINVAL;
+ goto deinit_platform_res;
+ }
+
+ cam_smmu_ops(lrme_core->device_iommu.non_secure, CAM_SMMU_DETACH);
+ cam_smmu_destroy_handle(lrme_core->device_iommu.non_secure);
+ cam_cdm_release(lrme_core->hw_cdm_info->cdm_handle);
+ cam_lrme_mgr_deregister_device(lrme_core->hw_idx);
+
+ kfree(lrme_core->hw_cdm_info->cdm_cmd);
+ kfree(lrme_core->hw_cdm_info);
+ kfree(lrme_core);
+
+deinit_platform_res:
+ rc = cam_lrme_soc_deinit_resources(&lrme_hw->soc_info);
+ if (rc)
+ CAM_ERR(CAM_LRME, "Error in LRME soc deinit, rc=%d", rc);
+
+ mutex_destroy(&lrme_hw->hw_mutex);
+ kfree(lrme_hw);
+
+ return rc;
+}
+
+static const struct of_device_id cam_lrme_hw_dt_match[] = {
+ {
+ .compatible = "qcom,lrme",
+ .data = &cam_lrme10_hw_info,
+ },
+ {}
+};
+
+MODULE_DEVICE_TABLE(of, cam_lrme_hw_dt_match);
+
+static struct platform_driver cam_lrme_hw_driver = {
+ .probe = cam_lrme_hw_dev_probe,
+ .remove = cam_lrme_hw_dev_remove,
+ .driver = {
+ .name = "cam_lrme_hw",
+ .owner = THIS_MODULE,
+ .of_match_table = cam_lrme_hw_dt_match,
+ },
+};
+
+static int __init cam_lrme_hw_init_module(void)
+{
+ return platform_driver_register(&cam_lrme_hw_driver);
+}
+
+static void __exit cam_lrme_hw_exit_module(void)
+{
+ platform_driver_unregister(&cam_lrme_hw_driver);
+}
+
+module_init(cam_lrme_hw_init_module);
+module_exit(cam_lrme_hw_exit_module);
+MODULE_DESCRIPTION("CAM LRME HW driver");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/media/platform/msm/camera/cam_lrme/lrme_hw_mgr/lrme_hw/cam_lrme_hw_intf.h b/drivers/media/platform/msm/camera/cam_lrme/lrme_hw_mgr/lrme_hw/cam_lrme_hw_intf.h
new file mode 100644
index 0000000..d16b174
--- /dev/null
+++ b/drivers/media/platform/msm/camera/cam_lrme/lrme_hw_mgr/lrme_hw/cam_lrme_hw_intf.h
@@ -0,0 +1,200 @@
+/* Copyright (c) 2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _CAM_LRME_HW_INTF_H_
+#define _CAM_LRME_HW_INTF_H_
+
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <media/cam_cpas.h>
+#include <media/cam_req_mgr.h>
+#include <media/cam_lrme.h>
+
+#include "cam_io_util.h"
+#include "cam_soc_util.h"
+#include "cam_hw.h"
+#include "cam_hw_intf.h"
+#include "cam_subdev.h"
+#include "cam_cpas_api.h"
+#include "cam_hw_mgr_intf.h"
+#include "cam_debug_util.h"
+
+
+#define CAM_LRME_MAX_IO_BUFFER 2
+#define CAM_LRME_MAX_HW_ENTRIES 5
+
+#define CAM_LRME_BASE_IDX 0
+
+/**
+ * enum cam_lrme_hw_type : Enum for LRME HW type
+ *
+ * @CAM_HW_LRME : LRME HW type
+ */
+enum cam_lrme_hw_type {
+ CAM_HW_LRME,
+};
+
+/**
+ * enum cam_lrme_cb_type : HW manager call back type
+ *
+ * @CAM_LRME_CB_BUF_DONE : Indicate buf done has been generated
+ * @CAM_LRME_CB_COMP_REG_UPDATE : Indicate receiving WE comp reg update
+ * @CAM_LRME_CB_PUT_FRAME : Request HW manager to put back the frame
+ * @CAM_LRME_CB_ERROR : Indicate error irq has been generated
+ */
+enum cam_lrme_cb_type {
+ CAM_LRME_CB_BUF_DONE = 1,
+ CAM_LRME_CB_COMP_REG_UPDATE = 1 << 1,
+ CAM_LRME_CB_PUT_FRAME = 1 << 2,
+ CAM_LRME_CB_ERROR = 1 << 3,
+};
+
+/**
+ * enum cam_lrme_hw_cmd_type : HW CMD type
+ *
+ * @CAM_LRME_HW_CMD_PREPARE_HW_UPDATE : Prepare HW update
+ * @CAM_LRME_HW_CMD_REGISTER_CB : register HW manager callback
+ * @CAM_LRME_HW_CMD_SUBMIT : Submit frame to HW
+ */
+enum cam_lrme_hw_cmd_type {
+ CAM_LRME_HW_CMD_PREPARE_HW_UPDATE,
+ CAM_LRME_HW_CMD_REGISTER_CB,
+ CAM_LRME_HW_CMD_SUBMIT,
+};
+
+/**
+ * enum cam_lrme_hw_reset_type : Type of reset
+ *
+ * @CAM_LRME_HW_RESET_TYPE_HW_RESET : HW reset
+ * @CAM_LRME_HW_RESET_TYPE_SW_RESET : SW reset
+ */
+enum cam_lrme_hw_reset_type {
+ CAM_LRME_HW_RESET_TYPE_HW_RESET,
+ CAM_LRME_HW_RESET_TYPE_SW_RESET,
+};
+
+/**
+ * struct cam_lrme_frame_request : LRME frame request
+ *
+ * @frame_list : List head
+ * @req_id : Request ID
+ * @ctxt_to_hw_map : Information about context id, priority and device id
+ * @hw_device : Pointer to HW device
+ * @hw_update_entries : List of hw_update_entries
+ * @num_hw_update_entries : number of hw_update_entries
+ */
+struct cam_lrme_frame_request {
+ struct list_head frame_list;
+ uint64_t req_id;
+ void *ctxt_to_hw_map;
+ struct cam_lrme_device *hw_device;
+ struct cam_hw_update_entry hw_update_entries[CAM_LRME_MAX_HW_ENTRIES];
+ uint32_t num_hw_update_entries;
+};
+
+/**
+ * struct cam_lrme_hw_io_buffer : IO buffer information
+ *
+ * @valid : Indicate whether this IO config is valid
+ * @io_cfg : Pointer to IO configuration
+ * @num_buf : Number of buffers
+ * @num_plane : Number of planes
+ * @io_addr : List of IO address
+ */
+struct cam_lrme_hw_io_buffer {
+ bool valid;
+ struct cam_buf_io_cfg *io_cfg;
+ uint32_t num_buf;
+ uint32_t num_plane;
+ uint64_t io_addr[CAM_PACKET_MAX_PLANES];
+};
+
+/**
+ * struct cam_lrme_hw_cmd_config_args : Args for prepare HW update
+ *
+ * @hw_device : Pointer to HW device
+ * @input_buf : List of input buffers
+ * @output_buf : List of output buffers
+ * @cmd_buf_addr : Pointer to available KMD buffer
+ * @size : Available KMD buffer size
+ * @config_buf_size : Size used to prepare update
+ */
+struct cam_lrme_hw_cmd_config_args {
+ struct cam_lrme_device *hw_device;
+ struct cam_lrme_hw_io_buffer input_buf[CAM_LRME_MAX_IO_BUFFER];
+ struct cam_lrme_hw_io_buffer output_buf[CAM_LRME_MAX_IO_BUFFER];
+ uint32_t *cmd_buf_addr;
+ uint32_t size;
+ uint32_t config_buf_size;
+};
+
+/**
+ * struct cam_lrme_hw_flush_args : Args for flush HW
+ *
+ * @ctxt_to_hw_map : Identity of context
+ * @req_to_flush    : Pointer to the frame to be flushed in
+ *                    case of a single-frame flush
+ * @flush_type : Flush type
+ */
+struct cam_lrme_hw_flush_args {
+ void *ctxt_to_hw_map;
+ struct cam_lrme_frame_request *req_to_flush;
+ uint32_t flush_type;
+};
+
+/**
+ * struct cam_lrme_hw_reset_args : Args for reset HW
+ *
+ * @reset_type : Enum cam_lrme_hw_reset_type
+ */
+struct cam_lrme_hw_reset_args {
+ uint32_t reset_type;
+};
+
+/**
+ * struct cam_lrme_hw_cb_args : HW manager callback args
+ *
+ * @cb_type : Callback event type
+ * @frame_req : Pointer to the frame associated with the cb
+ */
+struct cam_lrme_hw_cb_args {
+ uint32_t cb_type;
+ struct cam_lrme_frame_request *frame_req;
+};
+
+/**
+ * struct cam_lrme_hw_cmd_set_cb : Args for set callback function
+ *
+ * @cam_lrme_hw_mgr_cb : Callback function pointer
+ * @data : Data sent along with callback function
+ */
+struct cam_lrme_hw_cmd_set_cb {
+ int (*cam_lrme_hw_mgr_cb)(void *data,
+ struct cam_lrme_hw_cb_args *args);
+ void *data;
+};
+
+/**
+ * struct cam_lrme_hw_submit_args : Args for submit request
+ *
+ * @hw_update_entries : List of hw update entries used to program registers
+ * @num_hw_update_entries : Number of hw update entries
+ * @frame_req : Pointer to the frame request
+ */
+struct cam_lrme_hw_submit_args {
+ struct cam_hw_update_entry *hw_update_entries;
+ uint32_t num_hw_update_entries;
+ struct cam_lrme_frame_request *frame_req;
+};
+
+#endif /* _CAM_LRME_HW_INTF_H_ */
diff --git a/drivers/media/platform/msm/camera/cam_lrme/lrme_hw_mgr/lrme_hw/cam_lrme_hw_reg.h b/drivers/media/platform/msm/camera/cam_lrme/lrme_hw_mgr/lrme_hw/cam_lrme_hw_reg.h
new file mode 100644
index 0000000..39cfde7
--- /dev/null
+++ b/drivers/media/platform/msm/camera/cam_lrme/lrme_hw_mgr/lrme_hw/cam_lrme_hw_reg.h
@@ -0,0 +1,193 @@
+/* Copyright (c) 2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _CAM_LRME_HW_REG_H_
+#define _CAM_LRME_HW_REG_H_
+
+#include "cam_lrme_hw_core.h"
+
+static struct cam_lrme_hw_info cam_lrme10_hw_info = {
+ .clc_reg = {
+ .clc_hw_version = 0x00000000,
+ .clc_hw_status = 0x00000004,
+ .clc_hw_status_dbg = 0x00000008,
+ .clc_module_cfg = 0x00000060,
+ .clc_moduleformat = 0x000000A8,
+ .clc_rangestep = 0x00000068,
+ .clc_offset = 0x0000006C,
+ .clc_maxallowedsad = 0x00000070,
+ .clc_minallowedtarmad = 0x00000074,
+ .clc_meaningfulsaddiff = 0x00000078,
+ .clc_minsaddiffdenom = 0x0000007C,
+ .clc_robustnessmeasuredistmap_0 = 0x00000080,
+ .clc_robustnessmeasuredistmap_1 = 0x00000084,
+ .clc_robustnessmeasuredistmap_2 = 0x00000088,
+ .clc_robustnessmeasuredistmap_3 = 0x0000008C,
+ .clc_robustnessmeasuredistmap_4 = 0x00000090,
+ .clc_robustnessmeasuredistmap_5 = 0x00000094,
+ .clc_robustnessmeasuredistmap_6 = 0x00000098,
+ .clc_robustnessmeasuredistmap_7 = 0x0000009C,
+ .clc_ds_crop_horizontal = 0x000000A0,
+ .clc_ds_crop_vertical = 0x000000A4,
+ .clc_tar_pd_unpacker = 0x000000AC,
+ .clc_ref_pd_unpacker = 0x000000B0,
+ .clc_sw_override = 0x000000B4,
+ .clc_tar_height = 0x000000B8,
+ .clc_ref_height = 0x000000BC,
+ .clc_test_bus_ctrl = 0x000001F8,
+ .clc_spare = 0x000001FC,
+ },
+ .bus_rd_reg = {
+ .common_reg = {
+ .hw_version = 0x00000200,
+ .hw_capability = 0x00000204,
+ .sw_reset = 0x00000208,
+ .cgc_override = 0x0000020C,
+ .irq_mask = 0x00000210,
+ .irq_clear = 0x00000214,
+ .irq_cmd = 0x00000218,
+ .irq_status = 0x0000021C,
+ .cmd = 0x00000220,
+ .irq_set = 0x00000224,
+ .misr_reset = 0x0000022C,
+ .security_cfg = 0x00000230,
+ .pwr_iso_cfg = 0x00000234,
+ .pwr_iso_seed = 0x00000238,
+ .test_bus_ctrl = 0x00000248,
+ .spare = 0x0000024C,
+ },
+ .bus_client_reg = {
+ /* bus client 0 */
+ {
+ .core_cfg = 0x00000250,
+ .ccif_meta_data = 0x00000254,
+ .addr_image = 0x00000258,
+ .rd_buffer_size = 0x0000025C,
+ .rd_stride = 0x00000260,
+ .unpack_cfg_0 = 0x00000264,
+ .latency_buff_allocation = 0x00000278,
+ .burst_limit_cfg = 0x00000280,
+ .misr_cfg_0 = 0x00000284,
+ .misr_cfg_1 = 0x00000288,
+ .misr_rd_val = 0x0000028C,
+ .debug_status_cfg = 0x00000290,
+ .debug_status_0 = 0x00000294,
+ .debug_status_1 = 0x00000298,
+ },
+ /* bus client 1 */
+ {
+ .core_cfg = 0x000002F0,
+ .ccif_meta_data = 0x000002F4,
+ .addr_image = 0x000002F8,
+ .rd_buffer_size = 0x000002FC,
+ .rd_stride = 0x00000300,
+ .unpack_cfg_0 = 0x00000304,
+ .latency_buff_allocation = 0x00000318,
+ .burst_limit_cfg = 0x00000320,
+ .misr_cfg_0 = 0x00000324,
+ .misr_cfg_1 = 0x00000328,
+ .misr_rd_val = 0x0000032C,
+ .debug_status_cfg = 0x00000330,
+ .debug_status_0 = 0x00000334,
+ .debug_status_1 = 0x00000338,
+ },
+ },
+ },
+ .bus_wr_reg = {
+ .common_reg = {
+ .hw_version = 0x00000500,
+ .hw_capability = 0x00000504,
+ .sw_reset = 0x00000508,
+ .cgc_override = 0x0000050C,
+ .misr_reset = 0x000005C8,
+ .pwr_iso_cfg = 0x000005CC,
+ .test_bus_ctrl = 0x0000061C,
+ .composite_mask_0 = 0x00000510,
+ .irq_mask_0 = 0x00000544,
+ .irq_mask_1 = 0x00000548,
+ .irq_clear_0 = 0x00000550,
+ .irq_clear_1 = 0x00000554,
+ .irq_status_0 = 0x0000055C,
+ .irq_status_1 = 0x00000560,
+ .irq_cmd = 0x00000568,
+ .irq_set_0 = 0x000005BC,
+ .irq_set_1 = 0x000005C0,
+ .addr_fifo_status = 0x000005A8,
+ .frame_header_cfg0 = 0x000005AC,
+ .frame_header_cfg1 = 0x000005B0,
+ .spare = 0x00000620,
+ },
+ .bus_client_reg = {
+ /* bus client 0 */
+ {
+ .status_0 = 0x00000700,
+ .status_1 = 0x00000704,
+ .cfg = 0x00000708,
+ .addr_frame_header = 0x0000070C,
+ .frame_header_cfg = 0x00000710,
+ .addr_image = 0x00000714,
+ .addr_image_offset = 0x00000718,
+ .buffer_width_cfg = 0x0000071C,
+ .buffer_height_cfg = 0x00000720,
+ .packer_cfg = 0x00000724,
+ .wr_stride = 0x00000728,
+ .irq_subsample_cfg_period = 0x00000748,
+ .irq_subsample_cfg_pattern = 0x0000074C,
+ .burst_limit_cfg = 0x0000075C,
+ .misr_cfg = 0x00000760,
+ .misr_rd_word_sel = 0x00000764,
+ .misr_val = 0x00000768,
+ .debug_status_cfg = 0x0000076C,
+ .debug_status_0 = 0x00000770,
+ .debug_status_1 = 0x00000774,
+ },
+ /* bus client 1 */
+ {
+ .status_0 = 0x00000800,
+ .status_1 = 0x00000804,
+ .cfg = 0x00000808,
+ .addr_frame_header = 0x0000080C,
+ .frame_header_cfg = 0x00000810,
+ .addr_image = 0x00000814,
+ .addr_image_offset = 0x00000818,
+ .buffer_width_cfg = 0x0000081C,
+ .buffer_height_cfg = 0x00000820,
+ .packer_cfg = 0x00000824,
+ .wr_stride = 0x00000828,
+ .irq_subsample_cfg_period = 0x00000848,
+ .irq_subsample_cfg_pattern = 0x0000084C,
+ .burst_limit_cfg = 0x0000085C,
+ .misr_cfg = 0x00000860,
+ .misr_rd_word_sel = 0x00000864,
+ .misr_val = 0x00000868,
+ .debug_status_cfg = 0x0000086C,
+ .debug_status_0 = 0x00000870,
+ .debug_status_1 = 0x00000874,
+ },
+ },
+ },
+ .titan_reg = {
+ .top_hw_version = 0x00000900,
+ .top_titan_version = 0x00000904,
+ .top_rst_cmd = 0x00000908,
+ .top_core_clk_cfg = 0x00000920,
+ .top_irq_status = 0x0000090C,
+ .top_irq_mask = 0x00000910,
+ .top_irq_clear = 0x00000914,
+ .top_irq_set = 0x00000918,
+ .top_irq_cmd = 0x0000091C,
+ .top_violation_status = 0x00000924,
+ .top_spare = 0x000009FC,
+ },
+};
+
+#endif /* _CAM_LRME_HW_REG_H_ */
diff --git a/drivers/media/platform/msm/camera/cam_lrme/lrme_hw_mgr/lrme_hw/cam_lrme_hw_soc.c b/drivers/media/platform/msm/camera/cam_lrme/lrme_hw_mgr/lrme_hw/cam_lrme_hw_soc.c
new file mode 100644
index 0000000..75de0dd
--- /dev/null
+++ b/drivers/media/platform/msm/camera/cam_lrme/lrme_hw_mgr/lrme_hw/cam_lrme_hw_soc.c
@@ -0,0 +1,158 @@
+/* Copyright (c) 2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/device.h>
+#include <linux/platform_device.h>
+#include <linux/of.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+
+#include "cam_lrme_hw_core.h"
+#include "cam_lrme_hw_soc.h"
+
+
+int cam_lrme_soc_enable_resources(struct cam_hw_info *lrme_hw)
+{
+ struct cam_hw_soc_info *soc_info = &lrme_hw->soc_info;
+ struct cam_lrme_soc_private *soc_private =
+ (struct cam_lrme_soc_private *)soc_info->soc_private;
+ struct cam_ahb_vote ahb_vote;
+ struct cam_axi_vote axi_vote;
+ int rc = 0;
+
+ ahb_vote.type = CAM_VOTE_ABSOLUTE;
+ ahb_vote.vote.level = CAM_SVS_VOTE;
+ axi_vote.compressed_bw = 7200000;
+ axi_vote.uncompressed_bw = 7200000;
+ rc = cam_cpas_start(soc_private->cpas_handle, &ahb_vote, &axi_vote);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "Failed to start cpas, rc %d", rc);
+ return -EFAULT;
+ }
+
+ rc = cam_soc_util_enable_platform_resource(soc_info, true, CAM_SVS_VOTE,
+ true);
+ if (rc) {
+ CAM_ERR(CAM_LRME,
+ "Failed to enable platform resource, rc %d", rc);
+ goto stop_cpas;
+ }
+
+ cam_lrme_set_irq(lrme_hw, CAM_LRME_IRQ_ENABLE);
+
+ return rc;
+
+stop_cpas:
+ if (cam_cpas_stop(soc_private->cpas_handle))
+ CAM_ERR(CAM_LRME, "Failed to stop cpas");
+
+ return rc;
+}
+
+int cam_lrme_soc_disable_resources(struct cam_hw_info *lrme_hw)
+{
+ struct cam_hw_soc_info *soc_info = &lrme_hw->soc_info;
+ struct cam_lrme_soc_private *soc_private;
+ int rc = 0;
+
+ soc_private = soc_info->soc_private;
+
+ cam_lrme_set_irq(lrme_hw, CAM_LRME_IRQ_DISABLE);
+
+ rc = cam_soc_util_disable_platform_resource(soc_info, true, true);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "Failed to disable platform resource");
+ return rc;
+ }
+ rc = cam_cpas_stop(soc_private->cpas_handle);
+ if (rc)
+ CAM_ERR(CAM_LRME, "Failed to stop cpas");
+
+ return rc;
+}
+
+int cam_lrme_soc_init_resources(struct cam_hw_soc_info *soc_info,
+ irq_handler_t irq_handler, void *private_data)
+{
+ struct cam_lrme_soc_private *soc_private;
+ struct cam_cpas_register_params cpas_register_param;
+ int rc;
+
+ rc = cam_soc_util_get_dt_properties(soc_info);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "Failed in get_dt_properties, rc=%d", rc);
+ return rc;
+ }
+
+ rc = cam_soc_util_request_platform_resource(soc_info, irq_handler,
+ private_data);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "Failed in request_platform_resource rc=%d",
+ rc);
+ return rc;
+ }
+
+ soc_private = kzalloc(sizeof(struct cam_lrme_soc_private), GFP_KERNEL);
+ if (!soc_private) {
+ rc = -ENOMEM;
+ goto release_res;
+ }
+ soc_info->soc_private = soc_private;
+
+ memset(&cpas_register_param, 0, sizeof(cpas_register_param));
+ strlcpy(cpas_register_param.identifier,
+ "lrmecpas", CAM_HW_IDENTIFIER_LENGTH);
+ cpas_register_param.cell_index = soc_info->index;
+ cpas_register_param.dev = &soc_info->pdev->dev;
+ cpas_register_param.userdata = private_data;
+ cpas_register_param.cam_cpas_client_cb = NULL;
+
+ rc = cam_cpas_register_client(&cpas_register_param);
+ if (rc) {
+ CAM_ERR(CAM_LRME, "CPAS registration failed");
+ goto free_soc_private;
+ }
+ soc_private->cpas_handle = cpas_register_param.client_handle;
+ CAM_DBG(CAM_LRME, "CPAS handle=%d", soc_private->cpas_handle);
+
+ return rc;
+
+free_soc_private:
+ kfree(soc_info->soc_private);
+ soc_info->soc_private = NULL;
+release_res:
+ cam_soc_util_release_platform_resource(soc_info);
+
+ return rc;
+}
+
+int cam_lrme_soc_deinit_resources(struct cam_hw_soc_info *soc_info)
+{
+ struct cam_lrme_soc_private *soc_private =
+ (struct cam_lrme_soc_private *)soc_info->soc_private;
+ int rc;
+
+ rc = cam_cpas_unregister_client(soc_private->cpas_handle);
+ if (rc)
+ CAM_ERR(CAM_LRME, "Unregister cpas failed, handle=%d, rc=%d",
+ soc_private->cpas_handle, rc);
+
+ rc = cam_soc_util_release_platform_resource(soc_info);
+ if (rc)
+ CAM_ERR(CAM_LRME, "release platform failed, rc=%d", rc);
+
+ kfree(soc_info->soc_private);
+ soc_info->soc_private = NULL;
+
+ return rc;
+}
diff --git a/drivers/media/platform/msm/camera/cam_lrme/lrme_hw_mgr/lrme_hw/cam_lrme_hw_soc.h b/drivers/media/platform/msm/camera/cam_lrme/lrme_hw_mgr/lrme_hw/cam_lrme_hw_soc.h
new file mode 100644
index 0000000..44e8486
--- /dev/null
+++ b/drivers/media/platform/msm/camera/cam_lrme/lrme_hw_mgr/lrme_hw/cam_lrme_hw_soc.h
@@ -0,0 +1,28 @@
+/* Copyright (c) 2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _CAM_LRME_HW_SOC_H_
+#define _CAM_LRME_HW_SOC_H_
+
+#include "cam_soc_util.h"
+
+struct cam_lrme_soc_private {
+ uint32_t cpas_handle;
+};
+
+int cam_lrme_soc_enable_resources(struct cam_hw_info *lrme_hw);
+int cam_lrme_soc_disable_resources(struct cam_hw_info *lrme_hw);
+int cam_lrme_soc_init_resources(struct cam_hw_soc_info *soc_info,
+ irq_handler_t irq_handler, void *private_data);
+int cam_lrme_soc_deinit_resources(struct cam_hw_soc_info *soc_info);
+
+#endif /* _CAM_LRME_HW_SOC_H_ */
diff --git a/drivers/media/platform/msm/camera/cam_req_mgr/cam_req_mgr_core.c b/drivers/media/platform/msm/camera/cam_req_mgr/cam_req_mgr_core.c
index f38af7d..d7a382f 100644
--- a/drivers/media/platform/msm/camera/cam_req_mgr/cam_req_mgr_core.c
+++ b/drivers/media/platform/msm/camera/cam_req_mgr/cam_req_mgr_core.c
@@ -194,11 +194,13 @@ static int __cam_req_mgr_traverse(struct cam_req_mgr_traverse *traverse_data)
tbl = traverse_data->tbl;
apply_data = traverse_data->apply_data;
- CAM_DBG(CAM_CRM, "Enter pd %d idx %d state %d skip %d status %d",
+ CAM_DBG(CAM_CRM,
+ "Enter pd %d idx %d state %d skip %d status %d skip_idx %d",
tbl->pd, curr_idx, tbl->slot[curr_idx].state,
- tbl->skip_traverse, traverse_data->in_q->slot[curr_idx].status);
+ tbl->skip_traverse, traverse_data->in_q->slot[curr_idx].status,
+ traverse_data->in_q->slot[curr_idx].skip_idx);
- if (tbl->inject_delay > 0) {
+ if (tbl->inject_delay > 0 && (traverse_data->validate_only == false)) {
CAM_DBG(CAM_CRM, "Injecting Delay of one frame");
apply_data[tbl->pd].req_id = -1;
tbl->inject_delay--;
@@ -220,15 +222,18 @@ static int __cam_req_mgr_traverse(struct cam_req_mgr_traverse *traverse_data)
}
if (rc >= 0) {
SET_SUCCESS_BIT(traverse_data->result, tbl->pd);
- apply_data[tbl->pd].pd = tbl->pd;
- apply_data[tbl->pd].req_id =
- CRM_GET_REQ_ID(traverse_data->in_q, curr_idx);
- apply_data[tbl->pd].idx = curr_idx;
+ if (traverse_data->validate_only == false) {
+ apply_data[tbl->pd].pd = tbl->pd;
+ apply_data[tbl->pd].req_id =
+ CRM_GET_REQ_ID(traverse_data->in_q,
+ curr_idx);
+ apply_data[tbl->pd].idx = curr_idx;
- /* If traverse is sucessful decrement traverse skip */
- if (tbl->skip_traverse > 0) {
- apply_data[tbl->pd].req_id = -1;
- tbl->skip_traverse--;
+ /* If traverse is successful, decrement skip */
+ if (tbl->skip_traverse > 0) {
+ apply_data[tbl->pd].req_id = -1;
+ tbl->skip_traverse--;
+ }
}
} else {
/* linked pd table is not ready for this traverse yet */
@@ -338,6 +343,7 @@ static void __cam_req_mgr_reset_req_slot(struct cam_req_mgr_core_link *link,
slot->req_id = -1;
slot->skip_idx = 0;
slot->recover = 0;
+ slot->sync_mode = CAM_REQ_MGR_SYNC_MODE_NO_SYNC;
slot->status = CRM_SLOT_STATUS_NO_REQ;
/* Reset all pd table slot */
@@ -460,17 +466,18 @@ static int __cam_req_mgr_send_req(struct cam_req_mgr_core_link *link,
/**
* __cam_req_mgr_check_link_is_ready()
*
- * @brief : traverse through all request tables and see if all devices are
- * ready to apply request settings.
- * @link : pointer to link whose input queue and req tbl are
- * traversed through
- * @idx : index within input request queue
+ * @brief : traverse through all request tables and see if
+ * all devices are ready to apply request settings
+ * @link : pointer to link whose input queue and req tbl are
+ * traversed through
+ * @idx : index within input request queue
+ * @validate_only : true to only validate readiness, false to also update apply settings
*
* @return : 0 for success, negative for failure
*
*/
static int __cam_req_mgr_check_link_is_ready(struct cam_req_mgr_core_link *link,
- int32_t idx)
+ int32_t idx, bool validate_only)
{
int rc;
struct cam_req_mgr_traverse traverse_data;
@@ -480,14 +487,18 @@ static int __cam_req_mgr_check_link_is_ready(struct cam_req_mgr_core_link *link,
in_q = link->req.in_q;
apply_data = link->req.apply_data;
- memset(apply_data, 0,
- sizeof(struct cam_req_mgr_apply) * CAM_PIPELINE_DELAY_MAX);
+ if (validate_only == false) {
+ memset(apply_data, 0,
+ sizeof(struct cam_req_mgr_apply) *
+ CAM_PIPELINE_DELAY_MAX);
+ }
traverse_data.apply_data = apply_data;
traverse_data.idx = idx;
traverse_data.tbl = link->req.l_tbl;
traverse_data.in_q = in_q;
traverse_data.result = 0;
+ traverse_data.validate_only = validate_only;
/*
* Traverse through all pd tables, if result is success,
* apply the settings
@@ -510,6 +521,209 @@ static int __cam_req_mgr_check_link_is_ready(struct cam_req_mgr_core_link *link,
}
/**
+ * __cam_req_mgr_find_slot_for_req()
+ *
+ * @brief : Find idx from input queue at which req id is enqueued
+ * @in_q : input request queue pointer
+ * @req_id : request id which needs to be searched in input queue
+ *
+ * @return : slot index where passed request id is stored, -1 for failure
+ *
+ */
+static int32_t __cam_req_mgr_find_slot_for_req(
+ struct cam_req_mgr_req_queue *in_q, int64_t req_id)
+{
+ int32_t idx, i;
+ struct cam_req_mgr_slot *slot;
+
+ idx = in_q->rd_idx;
+ for (i = 0; i < in_q->num_slots; i++) {
+ slot = &in_q->slot[idx];
+ if (slot->req_id == req_id) {
+ CAM_DBG(CAM_CRM,
+ "req: %lld found at idx: %d status: %d sync_mode: %d",
+ req_id, idx, slot->status, slot->sync_mode);
+ break;
+ }
+ __cam_req_mgr_dec_idx(&idx, 1, in_q->num_slots);
+ }
+ if (i >= in_q->num_slots)
+ idx = -1;
+
+ return idx;
+}
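The slot search above walks the input queue backwards from the read index with wrap-around. A standalone sketch of that ring-buffer scan (the modulo decrement stands in for `__cam_req_mgr_dec_idx()`; the flat array is an assumption, not the real slot struct):

```c
#include <assert.h>

/* Starting at rd_idx, walk backwards (with wrap-around) through a
 * fixed-size ring of request ids; return the slot index holding
 * req_id, or -1 if it is not enqueued. */
static int find_slot_for_req(const long long *req_ids, int num_slots,
			     int rd_idx, long long req_id)
{
	int idx = rd_idx;
	int i;

	for (i = 0; i < num_slots; i++) {
		if (req_ids[idx] == req_id)
			return idx;
		/* decrement with wrap, as __cam_req_mgr_dec_idx() does */
		idx = (idx + num_slots - 1) % num_slots;
	}
	return -1;
}
```

Scanning backwards from the read index finds recently enqueued requests quickly, since the request being synced is normally at or just behind `rd_idx`.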
+
+/**
+ * __cam_req_mgr_reset_sof_cnt()
+ *
+ * @brief : resets the sof_counter for both the links
+ * @link : pointer to link whose input queue and req tbl are
+ * traversed through
+ *
+ */
+static void __cam_req_mgr_reset_sof_cnt(
+ struct cam_req_mgr_core_link *link)
+{
+ link->sof_counter = -1;
+ link->sync_link->sof_counter = -1;
+ link->frame_skip_flag = false;
+}
+
+/**
+ * __cam_req_mgr_sof_cnt_initialize()
+ *
+ * @brief : when the sof count is initially -1, increments the
+ * count and computes the sync_self_ref (the expected
+ * counter offset) for this link
+ * @link : pointer to link whose input queue and req tbl are
+ * traversed through
+ *
+ */
+static void __cam_req_mgr_sof_cnt_initialize(
+ struct cam_req_mgr_core_link *link)
+{
+ link->sof_counter++;
+ link->sync_self_ref = link->sof_counter -
+ link->sync_link->sof_counter;
+}
+
+/**
+ * __cam_req_mgr_wrap_sof_cnt()
+ *
+ * @brief : once the sof count reaches a predefined maximum
+ * the count needs to be wrapped back starting from 0
+ * @link : pointer to link whose input queue and req tbl are
+ * traversed through
+ *
+ */
+static void __cam_req_mgr_wrap_sof_cnt(
+ struct cam_req_mgr_core_link *link)
+{
+ link->sof_counter = (MAX_SYNC_COUNT -
+ (link->sync_link->sof_counter));
+ link->sync_link->sof_counter = 0;
+}
+
+/**
+ * __cam_req_mgr_validate_sof_cnt()
+ *
+ * @brief : validates sof count difference for a given link
+ * @link : pointer to link whose input queue and req tbl are
+ * traversed through
+ * @sync_link : pointer to the sync link
+ * @return : 0 for success, negative for failure
+ *
+ */
+static int __cam_req_mgr_validate_sof_cnt(
+ struct cam_req_mgr_core_link *link,
+ struct cam_req_mgr_core_link *sync_link)
+{
+ int64_t sync_diff = 0;
+ int rc = 0;
+
+ if (link->sof_counter == MAX_SYNC_COUNT)
+ __cam_req_mgr_wrap_sof_cnt(link);
+
+ sync_diff = link->sof_counter - sync_link->sof_counter;
+ if (sync_diff != link->sync_self_ref) {
+ link->sync_link->frame_skip_flag = true;
+ CAM_WARN(CAM_CRM,
+ "Detected anomaly, skip link:%d, self=%lld, other=%lld",
+ link->link_hdl, link->sof_counter,
+ sync_link->sof_counter);
+ rc = -EPERM;
+ }
+
+ return rc;
+}
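A property worth noting in the wrap/validate pair above: the wrap rebases both counters while preserving their difference, so `sync_self_ref` stays comparable across wraps. A standalone model (the `MAX_SYNC_COUNT` value is assumed for illustration; the real constant lives in the CRM headers):

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_SYNC_COUNT 65536LL  /* assumed value for this sketch */

struct demo_link {
	long long sof_counter;
	long long sync_self_ref;     /* expected (self - other) offset */
	struct demo_link *sync_link;
};

/* Mirror of __cam_req_mgr_wrap_sof_cnt(): rebase both counters.
 * Before: diff = MAX - other.  After: (MAX - other) - 0 = same diff. */
static void wrap_sof_cnt(struct demo_link *link)
{
	link->sof_counter = MAX_SYNC_COUNT - link->sync_link->sof_counter;
	link->sync_link->sof_counter = 0;
}

/* Mirror of __cam_req_mgr_validate_sof_cnt(): the links are in sync
 * when the counter difference still equals the reference captured at
 * sync setup. */
static bool sof_cnt_in_sync(struct demo_link *link)
{
	if (link->sof_counter == MAX_SYNC_COUNT)
		wrap_sof_cnt(link);
	return (link->sof_counter - link->sync_link->sof_counter) ==
		link->sync_self_ref;
}
```

Any drift from the reference offset marks the sync link for a frame skip, which is how a missed SOF on one sensor is detected and recovered.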
+
+
+/**
+ * __cam_req_mgr_process_sync_req()
+ *
+ * @brief : processes requests during sync mode
+ * @link : pointer to link whose input queue and req tbl are
+ * traversed through
+ * @slot : pointer to the current slot being processed
+ * @return : 0 for success, negative for failure
+ *
+ */
+static int __cam_req_mgr_process_sync_req(
+ struct cam_req_mgr_core_link *link,
+ struct cam_req_mgr_slot *slot)
+{
+ struct cam_req_mgr_core_link *sync_link = NULL;
+ int64_t req_id = 0;
+ int sync_slot_idx = 0, rc = 0;
+
+ if (!link->sync_link) {
+ CAM_ERR(CAM_CRM, "Sync link null");
+ return -EINVAL;
+ }
+
+ sync_link = link->sync_link;
+ req_id = slot->req_id;
+ if (link->sof_counter == -1) {
+ __cam_req_mgr_sof_cnt_initialize(link);
+ } else if (link->frame_skip_flag &&
+ (sync_link->sync_self_ref != -1)) {
+ CAM_DBG(CAM_CRM, "Link[%x] Req[%lld] Resetting values",
+ link->link_hdl, req_id);
+ __cam_req_mgr_reset_sof_cnt(link);
+ __cam_req_mgr_sof_cnt_initialize(link);
+ } else {
+ link->sof_counter++;
+ }
+
+ rc = __cam_req_mgr_check_link_is_ready(link, slot->idx, true);
+ if (rc) {
+ CAM_DBG(CAM_CRM,
+ "Req: %lld [My link] not available on link: %x, rc=%d",
+ req_id, link->link_hdl, rc);
+ goto failure;
+ }
+
+ sync_slot_idx = __cam_req_mgr_find_slot_for_req(
+ sync_link->req.in_q, req_id);
+ if (sync_slot_idx != -1) {
+ rc = __cam_req_mgr_check_link_is_ready(
+ sync_link, sync_slot_idx, true);
+ CAM_DBG(CAM_CRM, "sync_slot_idx=%d, status=%d, rc=%d",
+ sync_slot_idx,
+ sync_link->req.in_q->slot[sync_slot_idx].status,
+ rc);
+ } else {
+ CAM_DBG(CAM_CRM, "sync_slot_idx=%d, rc=%d",
+ sync_slot_idx, rc);
+ }
+
+ if ((sync_slot_idx != -1) &&
+ ((sync_link->req.in_q->slot[sync_slot_idx].status ==
+ CRM_SLOT_STATUS_REQ_APPLIED) || (rc == 0))) {
+ rc = __cam_req_mgr_validate_sof_cnt(link, sync_link);
+ if (rc) {
+ CAM_DBG(CAM_CRM,
+ "Req: %lld validate failed: %x",
+ req_id, sync_link->link_hdl);
+ goto failure;
+ }
+ __cam_req_mgr_check_link_is_ready(link, slot->idx, false);
+ } else {
+ CAM_DBG(CAM_CRM,
+ "Req: %lld [Other link] not ready to apply on link: %x",
+ req_id, sync_link->link_hdl);
+ rc = -EPERM;
+ goto failure;
+ }
+
+ return rc;
+
+failure:
+ link->sof_counter--;
+ return rc;
+}
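The function above is built around the new `validate_only` flag: readiness is first checked on both links without touching `apply_data`, and only when both pass is the traversal re-run on our link to record settings for real. A simplified model of that two-phase flow (`check_ready()` stands in for `__cam_req_mgr_check_link_is_ready()`; all names are illustrative):

```c
#include <assert.h>
#include <stdbool.h>

static int applied_count;  /* counts real (non-validate) applies */

/* Stand-in for the readiness traversal: in validate-only mode nothing
 * is recorded; otherwise a successful traverse updates apply_data. */
static int check_ready(bool link_ok, bool validate_only)
{
	if (!link_ok)
		return -1;
	if (!validate_only)
		applied_count++;  /* apply_data would be written here */
	return 0;
}

/* Two-phase sync apply: validate self, validate the sync link, and
 * only then apply on self.  Either validation failing means nothing
 * is applied on either link this SOF. */
static int process_sync_req(bool self_ok, bool other_ok)
{
	if (check_ready(self_ok, true))
		return -1;      /* our link not ready: bail early */
	if (check_ready(other_ok, true))
		return -1;      /* sync link not ready */
	return check_ready(self_ok, false); /* both ready: apply */
}
```

The point of the validate-only pass is atomicity across links: neither link commits its `apply_data` unless both are known to be ready for the same request id.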
+
+/**
* __cam_req_mgr_process_req()
*
* @brief : processes read index in request queue and traverse through table
@@ -529,7 +743,7 @@ static int __cam_req_mgr_process_req(struct cam_req_mgr_core_link *link,
in_q = link->req.in_q;
session = (struct cam_req_mgr_core_session *)link->parent;
-
+ mutex_lock(&session->lock);
/*
* Check if new read index,
* - if in pending state, traverse again to complete
@@ -537,32 +751,48 @@ static int __cam_req_mgr_process_req(struct cam_req_mgr_core_link *link,
* - if in applied_state, something is wrong.
* - if in no_req state, no new req
*/
- CAM_DBG(CAM_CRM, "idx %d req_status %d",
- in_q->rd_idx, in_q->slot[in_q->rd_idx].status);
+ CAM_DBG(CAM_CRM, "SOF Req[%lld] idx %d req_status %d",
+ in_q->slot[in_q->rd_idx].req_id, in_q->rd_idx,
+ in_q->slot[in_q->rd_idx].status);
slot = &in_q->slot[in_q->rd_idx];
if (slot->status == CRM_SLOT_STATUS_NO_REQ) {
CAM_DBG(CAM_CRM, "No Pending req");
- return 0;
+ rc = 0;
+ goto error;
}
if (trigger != CAM_TRIGGER_POINT_SOF &&
trigger != CAM_TRIGGER_POINT_EOF)
- return rc;
+ goto error;
+
+ if ((trigger == CAM_TRIGGER_POINT_EOF) &&
+ (!(link->trigger_mask & CAM_TRIGGER_POINT_SOF))) {
+ CAM_DBG(CAM_CRM, "Applying for last SOF fails");
+ rc = -EINVAL;
+ goto error;
+ }
if (trigger == CAM_TRIGGER_POINT_SOF) {
if (link->trigger_mask) {
CAM_ERR_RATE_LIMIT(CAM_CRM,
"Applying for last EOF fails");
- return -EINVAL;
+ rc = -EINVAL;
+ goto error;
}
- rc = __cam_req_mgr_check_link_is_ready(link, slot->idx);
- if (rc < 0) {
- /* If traverse result is not success, then some devices
- * are not ready with packet for the asked request id,
- * hence try again in next sof
- */
+ if (slot->sync_mode == CAM_REQ_MGR_SYNC_MODE_SYNC)
+ rc = __cam_req_mgr_process_sync_req(link, slot);
+ else
+ rc = __cam_req_mgr_check_link_is_ready(link,
+ slot->idx, false);
+
+ if (rc < 0) {
+ /*
+ * If traverse result is not success, then some devices
+ * are not ready with packet for the asked request id,
+ * hence try again in next sof
+ */
slot->status = CRM_SLOT_STATUS_REQ_PENDING;
spin_lock_bh(&link->link_state_spin_lock);
if (link->state == CAM_CRM_LINK_STATE_ERR) {
@@ -578,20 +808,17 @@ static int __cam_req_mgr_process_req(struct cam_req_mgr_core_link *link,
rc = -EPERM;
}
spin_unlock_bh(&link->link_state_spin_lock);
- return rc;
+ goto error;
}
}
- if (trigger == CAM_TRIGGER_POINT_EOF &&
- (!(link->trigger_mask & CAM_TRIGGER_POINT_SOF))) {
- CAM_DBG(CAM_CRM, "Applying for last SOF fails");
- return -EINVAL;
- }
rc = __cam_req_mgr_send_req(link, link->req.in_q, trigger);
if (rc < 0) {
/* Apply req failed retry at next sof */
slot->status = CRM_SLOT_STATUS_REQ_PENDING;
} else {
+ CAM_DBG(CAM_CRM, "Applied req[%lld] on link[%x] success",
+ slot->req_id, link->link_hdl);
link->trigger_mask |= trigger;
spin_lock_bh(&link->link_state_spin_lock);
@@ -605,8 +832,9 @@ static int __cam_req_mgr_process_req(struct cam_req_mgr_core_link *link,
if (link->trigger_mask == link->subscribe_event) {
slot->status = CRM_SLOT_STATUS_REQ_APPLIED;
link->trigger_mask = 0;
- CAM_DBG(CAM_CRM, "req is applied\n");
-
+ CAM_DBG(CAM_CRM, "req %lld is applied on link %x",
+ slot->req_id,
+ link->link_hdl);
idx = in_q->rd_idx;
__cam_req_mgr_dec_idx(
&idx, link->max_delay + 1,
@@ -614,7 +842,11 @@ static int __cam_req_mgr_process_req(struct cam_req_mgr_core_link *link,
__cam_req_mgr_reset_req_slot(link, idx);
}
}
+ mutex_unlock(&session->lock);
+ return rc;
+error:
+ mutex_unlock(&session->lock);
return rc;
}
@@ -703,39 +935,6 @@ static void __cam_req_mgr_destroy_all_tbl(struct cam_req_mgr_req_tbl **l_tbl)
}
/**
- * __cam_req_mgr_find_slot_for_req()
- *
- * @brief : Find idx from input queue at which req id is enqueued
- * @in_q : input request queue pointer
- * @req_id : request id which needs to be searched in input queue
- *
- * @return : slot index where passed request id is stored, -1 for failure
- *
- */
-static int32_t __cam_req_mgr_find_slot_for_req(
- struct cam_req_mgr_req_queue *in_q, int64_t req_id)
-{
- int32_t idx, i;
- struct cam_req_mgr_slot *slot;
-
- idx = in_q->wr_idx;
- for (i = 0; i < in_q->num_slots; i++) {
- slot = &in_q->slot[idx];
- if (slot->req_id == req_id) {
- CAM_DBG(CAM_CRM, "req %lld found at %d %d status %d",
- req_id, idx, slot->idx,
- slot->status);
- break;
- }
- __cam_req_mgr_dec_idx(&idx, 1, in_q->num_slots);
- }
- if (i >= in_q->num_slots)
- idx = -1;
-
- return idx;
-}
-
-/**
* __cam_req_mgr_setup_in_q()
*
* @brief : Initialize req table data
@@ -810,15 +1009,34 @@ static int __cam_req_mgr_reset_in_q(struct cam_req_mgr_req_data *req)
*/
static void __cam_req_mgr_sof_freeze(unsigned long data)
{
- struct cam_req_mgr_timer *timer = (struct cam_req_mgr_timer *)data;
- struct cam_req_mgr_core_link *link = NULL;
+ struct cam_req_mgr_timer *timer = (struct cam_req_mgr_timer *)data;
+ struct cam_req_mgr_core_link *link = NULL;
+ struct cam_req_mgr_core_session *session = NULL;
+ struct cam_req_mgr_message msg;
if (!timer) {
CAM_ERR(CAM_CRM, "NULL timer");
return;
}
link = (struct cam_req_mgr_core_link *)timer->parent;
- CAM_ERR(CAM_CRM, "SOF freeze for link %x", link->link_hdl);
+ session = (struct cam_req_mgr_core_session *)link->parent;
+
+ CAM_ERR(CAM_CRM, "SOF freeze for session %d link 0x%x",
+ session->session_hdl, link->link_hdl);
+
+ memset(&msg, 0, sizeof(msg));
+
+ msg.session_hdl = session->session_hdl;
+ msg.u.err_msg.error_type = CAM_REQ_MGR_ERROR_TYPE_DEVICE;
+ msg.u.err_msg.request_id = 0;
+ msg.u.err_msg.link_hdl = link->link_hdl;
+
+
+ if (cam_req_mgr_notify_message(&msg,
+ V4L_EVENT_CAM_REQ_MGR_ERROR, V4L_EVENT_CAM_REQ_MGR_EVENT))
+ CAM_ERR(CAM_CRM,
+ "Error notifying SOF freeze for session %d link 0x%x",
+ session->session_hdl, link->link_hdl);
}
/**
@@ -863,12 +1081,14 @@ static void __cam_req_mgr_destroy_subdev(
* @brief : Cleans up the mem allocated while linking
* @link : pointer to link, mem associated with this link is freed
*
+ * @return : returns if unlink for any device was success or failure
*/
-static void __cam_req_mgr_destroy_link_info(struct cam_req_mgr_core_link *link)
+static int __cam_req_mgr_destroy_link_info(struct cam_req_mgr_core_link *link)
{
int32_t i = 0;
struct cam_req_mgr_connected_device *dev;
struct cam_req_mgr_core_dev_link_setup link_data;
+ int rc = 0;
link_data.link_enable = 0;
link_data.link_hdl = link->link_hdl;
@@ -881,7 +1101,11 @@ static void __cam_req_mgr_destroy_link_info(struct cam_req_mgr_core_link *link)
if (dev != NULL) {
link_data.dev_hdl = dev->dev_hdl;
if (dev->ops && dev->ops->link_setup)
- dev->ops->link_setup(&link_data);
+ rc = dev->ops->link_setup(&link_data);
+ if (rc)
+ CAM_ERR(CAM_CRM,
+ "Unlink failed dev_hdl %d",
+ dev->dev_hdl);
dev->dev_hdl = 0;
dev->parent = NULL;
dev->ops = NULL;
@@ -896,6 +1120,7 @@ static void __cam_req_mgr_destroy_link_info(struct cam_req_mgr_core_link *link)
link->num_devs = 0;
link->max_delay = 0;
+ return rc;
}
/**
@@ -912,6 +1137,7 @@ static struct cam_req_mgr_core_link *__cam_req_mgr_reserve_link(
{
struct cam_req_mgr_core_link *link;
struct cam_req_mgr_req_queue *in_q;
+ int i;
if (!session || !g_crm_core_dev) {
CAM_ERR(CAM_CRM, "NULL session/core_dev ptr");
@@ -950,16 +1176,34 @@ static struct cam_req_mgr_core_link *__cam_req_mgr_reserve_link(
in_q->num_slots = 0;
link->state = CAM_CRM_LINK_STATE_IDLE;
link->parent = (void *)session;
+ link->sync_link = NULL;
mutex_unlock(&link->lock);
mutex_lock(&session->lock);
- session->links[session->num_links] = link;
+ /* Loop through and find a free index */
+ for (i = 0; i < MAX_LINKS_PER_SESSION; i++) {
+ if (!session->links[i]) {
+ session->links[i] = link;
+ break;
+ }
+ }
+
+ if (i == MAX_LINKS_PER_SESSION) {
+ CAM_ERR(CAM_CRM, "Free link index not found");
+ goto error;
+ }
+
session->num_links++;
CAM_DBG(CAM_CRM, "Active session links (%d)",
session->num_links);
mutex_unlock(&session->lock);
return link;
+error:
+ mutex_unlock(&session->lock);
+ kfree(link);
+ kfree(in_q);
+ return NULL;
}
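The hunk above replaces append-at-`num_links` with a scan for the first free entry, which stays correct when links are unreserved out of order. A minimal model of that fixed-slot allocation (the two-slot limit mirrors the dual-camera assumption; names are illustrative):

```c
#include <assert.h>

#define MAX_LINKS_PER_SESSION 2

/* Place link into the first free (0) entry of a fixed-size table;
 * return the index used, or -1 when the table is full so the caller
 * can unwind, as __cam_req_mgr_reserve_link() now does. */
static int reserve_slot(void *links[], void *link)
{
	int i;

	for (i = 0; i < MAX_LINKS_PER_SESSION; i++) {
		if (!links[i]) {
			links[i] = link;
			return i;
		}
	}
	return -1;
}
```

With the old append scheme, unreserving slot 0 while slot 1 was live left `num_links` pointing at an occupied entry; scanning for a NULL entry (and iterating over `MAX_LINKS_PER_SESSION` in the unreserve path) avoids that.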
/**
@@ -987,7 +1231,7 @@ static void __cam_req_mgr_unreserve_link(
CAM_WARN(CAM_CRM, "No active link or invalid state %d",
session->num_links);
else {
- for (i = 0; i < session->num_links; i++) {
+ for (i = 0; i < MAX_LINKS_PER_SESSION; i++) {
if (session->links[i] == *link)
session->links[i] = NULL;
}
@@ -1079,6 +1323,7 @@ int cam_req_mgr_process_flush_req(void *priv, void *data)
for (i = 0; i < in_q->num_slots; i++) {
slot = &in_q->slot[i];
slot->req_id = -1;
+ slot->sync_mode = CAM_REQ_MGR_SYNC_MODE_NO_SYNC;
slot->skip_idx = 1;
slot->status = CRM_SLOT_STATUS_NO_REQ;
}
@@ -1150,9 +1395,10 @@ int cam_req_mgr_process_sched_req(void *priv, void *data)
link = (struct cam_req_mgr_core_link *)priv;
task_data = (struct crm_task_payload *)data;
sched_req = (struct cam_req_mgr_sched_request *)&task_data->u;
- CAM_DBG(CAM_CRM, "link_hdl %x req_id %lld",
+ CAM_DBG(CAM_CRM, "link_hdl %x req_id %lld sync_mode %d",
sched_req->link_hdl,
- sched_req->req_id);
+ sched_req->req_id,
+ sched_req->sync_mode);
in_q = link->req.in_q;
@@ -1163,11 +1409,12 @@ int cam_req_mgr_process_sched_req(void *priv, void *data)
slot->status != CRM_SLOT_STATUS_REQ_APPLIED)
CAM_WARN(CAM_CRM, "in_q overwrite %d", slot->status);
- CAM_DBG(CAM_CRM, "sched_req %lld at slot %d",
- sched_req->req_id, in_q->wr_idx);
+ CAM_DBG(CAM_CRM, "sched_req %lld at slot %d sync_mode %d",
+ sched_req->req_id, in_q->wr_idx, sched_req->sync_mode);
slot->status = CRM_SLOT_STATUS_REQ_ADDED;
slot->req_id = sched_req->req_id;
+ slot->sync_mode = sched_req->sync_mode;
slot->skip_idx = 0;
slot->recover = sched_req->bubble_enable;
__cam_req_mgr_inc_idx(&in_q->wr_idx, 1, in_q->num_slots);
@@ -1418,6 +1665,8 @@ static int cam_req_mgr_process_trigger(void *priv, void *data)
* Check if any new req is pending in slot, if not finish the
* lower pipeline delay device with available req ids.
*/
+ CAM_DBG(CAM_CRM, "link[%x] Req[%lld] invalidating slot",
+ link->link_hdl, in_q->slot[in_q->rd_idx].req_id);
__cam_req_mgr_check_next_req_slot(in_q);
__cam_req_mgr_inc_idx(&in_q->rd_idx, 1, in_q->num_slots);
}
@@ -1815,6 +2064,7 @@ int cam_req_mgr_create_session(
mutex_lock(&cam_session->lock);
cam_session->session_hdl = session_hdl;
cam_session->num_links = 0;
+ cam_session->sync_mode = CAM_REQ_MGR_SYNC_MODE_NO_SYNC;
list_add(&cam_session->entry, &g_crm_core_dev->session_head);
mutex_unlock(&cam_session->lock);
end:
@@ -1953,16 +2203,6 @@ int cam_req_mgr_link(struct cam_req_mgr_link_info *link_info)
goto setup_failed;
}
- /* Start watchdong timer to detect if camera hw goes into bad state */
- rc = crm_timer_init(&link->watchdog, CAM_REQ_MGR_WATCHDOG_TIMEOUT,
- link, &__cam_req_mgr_sof_freeze);
- if (rc < 0) {
- kfree(link->workq->task.pool[0].payload);
- __cam_req_mgr_destroy_link_info(link);
- cam_req_mgr_workq_destroy(&link->workq);
- goto setup_failed;
- }
-
mutex_unlock(&link->lock);
mutex_unlock(&g_crm_core_dev->crm_lock);
return rc;
@@ -1983,6 +2223,7 @@ int cam_req_mgr_unlink(struct cam_req_mgr_unlink_info *unlink_info)
int rc = 0;
struct cam_req_mgr_core_session *cam_session;
struct cam_req_mgr_core_link *link;
+ int i;
if (!unlink_info) {
CAM_ERR(CAM_CRM, "NULL pointer");
@@ -2015,6 +2256,19 @@ int cam_req_mgr_unlink(struct cam_req_mgr_unlink_info *unlink_info)
spin_unlock_bh(&link->link_state_spin_lock);
__cam_req_mgr_print_req_tbl(&link->req);
+ if ((cam_session->sync_mode != CAM_REQ_MGR_SYNC_MODE_NO_SYNC) &&
+ (link->sync_link)) {
+ /*
+ * make sure to unlink sync setup under the assumption
+ * of only having 2 links in a given session
+ */
+ cam_session->sync_mode = CAM_REQ_MGR_SYNC_MODE_NO_SYNC;
+ for (i = 0; i < MAX_LINKS_PER_SESSION; i++) {
+ if (cam_session->links[i])
+ cam_session->links[i]->sync_link = NULL;
+ }
+ }
+
/* Destroy workq payload data */
kfree(link->workq->task.pool[0].payload);
link->workq->task.pool[0].payload = NULL;
@@ -2024,8 +2278,12 @@ int cam_req_mgr_unlink(struct cam_req_mgr_unlink_info *unlink_info)
cam_req_mgr_workq_destroy(&link->workq);
- /* Cleanuprequest tables */
- __cam_req_mgr_destroy_link_info(link);
+ /* Cleanup request tables and unlink devices */
+ rc = __cam_req_mgr_destroy_link_info(link);
+ if (rc) {
+ CAM_ERR(CAM_CORE, "Unlink failed. Cannot proceed");
+ return rc;
+ }
/* Free memory holding data of linked devs */
__cam_req_mgr_destroy_subdev(link->l_dev);
@@ -2066,17 +2324,20 @@ int cam_req_mgr_schedule_request(
CAM_DBG(CAM_CRM, "link ptr NULL %x", sched_req->link_hdl);
return -EINVAL;
}
+
session = (struct cam_req_mgr_core_session *)link->parent;
if (!session) {
CAM_WARN(CAM_CRM, "session ptr NULL %x", sched_req->link_hdl);
return -EINVAL;
}
- CAM_DBG(CAM_CRM, "link %x req %lld",
- sched_req->link_hdl, sched_req->req_id);
+
+ CAM_DBG(CAM_CRM, "link %x req %lld, sync_mode %d",
+ sched_req->link_hdl, sched_req->req_id, sched_req->sync_mode);
task_data.type = CRM_WORKQ_TASK_SCHED_REQ;
sched = (struct cam_req_mgr_sched_request *)&task_data.u;
sched->req_id = sched_req->req_id;
+ sched->sync_mode = sched_req->sync_mode;
sched->link_hdl = sched_req->link_hdl;
if (session->force_err_recovery == AUTO_RECOVERY) {
sched->bubble_enable = sched_req->bubble_enable;
@@ -2087,22 +2348,73 @@ int cam_req_mgr_schedule_request(
rc = cam_req_mgr_process_sched_req(link, &task_data);
- CAM_DBG(CAM_CRM, "DONE dev %x req %lld",
- sched_req->link_hdl, sched_req->req_id);
+ CAM_DBG(CAM_CRM, "DONE dev %x req %lld sync_mode %d",
+ sched_req->link_hdl, sched_req->req_id, sched_req->sync_mode);
end:
return rc;
}
-int cam_req_mgr_sync_link(
- struct cam_req_mgr_sync_mode *sync_links)
+int cam_req_mgr_sync_config(
+ struct cam_req_mgr_sync_mode *sync_info)
{
- if (!sync_links) {
+ int rc = 0;
+ struct cam_req_mgr_core_session *cam_session;
+ struct cam_req_mgr_core_link *link1 = NULL;
+ struct cam_req_mgr_core_link *link2 = NULL;
+
+ if (!sync_info) {
CAM_ERR(CAM_CRM, "NULL pointer");
return -EINVAL;
}
- /* This function handles ioctl, implementation pending */
- return 0;
+ if ((sync_info->num_links < 0) ||
+ (sync_info->num_links > MAX_LINKS_PER_SESSION)) {
+ CAM_ERR(CAM_CRM, "Invalid num links %d", sync_info->num_links);
+ return -EINVAL;
+ }
+
+ /* session hdl's priv data is cam session struct */
+ cam_session = (struct cam_req_mgr_core_session *)
+ cam_get_device_priv(sync_info->session_hdl);
+ if (!cam_session) {
+ CAM_ERR(CAM_CRM, "NULL pointer");
+ return -EINVAL;
+ }
+
+ mutex_lock(&cam_session->lock);
+ CAM_DBG(CAM_CRM, "link handles %x %x",
+ sync_info->link_hdls[0], sync_info->link_hdls[1]);
+
+ /* Only two links exist per session in the dual camera use case */
+ link1 = cam_get_device_priv(sync_info->link_hdls[0]);
+ if (!link1) {
+ CAM_ERR(CAM_CRM, "link1 NULL pointer");
+ rc = -EINVAL;
+ goto done;
+ }
+
+ link2 = cam_get_device_priv(sync_info->link_hdls[1]);
+ if (!link2) {
+ CAM_ERR(CAM_CRM, "link2 NULL pointer");
+ rc = -EINVAL;
+ goto done;
+ }
+
+ link1->sof_counter = -1;
+ link1->sync_self_ref = -1;
+ link1->frame_skip_flag = false;
+ link1->sync_link = link2;
+
+ link2->sof_counter = -1;
+ link2->sync_self_ref = -1;
+ link2->frame_skip_flag = false;
+ link2->sync_link = link1;
+
+ cam_session->sync_mode = sync_info->sync_mode;
+
+done:
+ mutex_unlock(&cam_session->lock);
+ return rc;
}
int cam_req_mgr_flush_requests(
@@ -2173,6 +2485,55 @@ int cam_req_mgr_flush_requests(
return rc;
}
+int cam_req_mgr_link_control(struct cam_req_mgr_link_control *control)
+{
+ int rc = 0;
+ int i;
+ struct cam_req_mgr_core_link *link = NULL;
+
+ if (!control) {
+ CAM_ERR(CAM_CRM, "Control command is NULL");
+ rc = -EINVAL;
+ goto end;
+ }
+
+ mutex_lock(&g_crm_core_dev->crm_lock);
+ for (i = 0; i < control->num_links; i++) {
+ link = (struct cam_req_mgr_core_link *)
+ cam_get_device_priv(control->link_hdls[i]);
+ if (!link) {
+ CAM_ERR(CAM_CRM, "Link(%d) is NULL on session 0x%x",
+ i, control->session_hdl);
+ rc = -EINVAL;
+ break;
+ }
+
+ mutex_lock(&link->lock);
+ if (control->ops == CAM_REQ_MGR_LINK_ACTIVATE) {
+ /* Start SOF watchdog timer */
+ rc = crm_timer_init(&link->watchdog,
+ CAM_REQ_MGR_WATCHDOG_TIMEOUT, link,
+ &__cam_req_mgr_sof_freeze);
+ if (rc < 0) {
+ CAM_ERR(CAM_CRM,
+ "SOF timer start failed: link=0x%x",
+ link->link_hdl);
+ rc = -EFAULT;
+ }
+ } else if (control->ops == CAM_REQ_MGR_LINK_DEACTIVATE) {
+ /* Destroy SOF watchdog timer */
+ crm_timer_exit(&link->watchdog);
+ } else {
+ CAM_ERR(CAM_CRM, "Invalid link control command");
+ rc = -EINVAL;
+ }
+ mutex_unlock(&link->lock);
+ }
+ mutex_unlock(&g_crm_core_dev->crm_lock);
+end:
+ return rc;
+}
+
int cam_req_mgr_core_device_init(void)
{
diff --git a/drivers/media/platform/msm/camera/cam_req_mgr/cam_req_mgr_core.h b/drivers/media/platform/msm/camera/cam_req_mgr/cam_req_mgr_core.h
index e17047d..42f8c77 100644
--- a/drivers/media/platform/msm/camera/cam_req_mgr/cam_req_mgr_core.h
+++ b/drivers/media/platform/msm/camera/cam_req_mgr/cam_req_mgr_core.h
@@ -30,6 +30,8 @@
#define CRM_WORKQ_NUM_TASKS 60
+#define MAX_SYNC_COUNT 65535
+
/**
* enum crm_workq_task_type
* @codes: to identify which type of task is present
@@ -122,11 +124,12 @@ enum cam_req_mgr_link_state {
/**
* struct cam_req_mgr_traverse
- * @idx : slot index
- * @result : contains which all tables were able to apply successfully
- * @tbl : pointer of pipeline delay based request table
- * @apply_data : pointer which various tables will update during traverse
- * @in_q : input request queue pointer
+ * @idx : slot index
+ * @result : contains which all tables were able to apply successfully
+ * @tbl : pointer of pipeline delay based request table
+ * @apply_data : pointer which various tables will update during traverse
+ * @in_q : input request queue pointer
+ * @validate_only : Whether to only validate, without updating settings
*/
struct cam_req_mgr_traverse {
int32_t idx;
@@ -134,6 +137,7 @@ struct cam_req_mgr_traverse {
struct cam_req_mgr_req_tbl *tbl;
struct cam_req_mgr_apply *apply_data;
struct cam_req_mgr_req_queue *in_q;
+ bool validate_only;
};
/**
@@ -198,6 +202,7 @@ struct cam_req_mgr_req_tbl {
* - members updated due to external events
* @recover : if user enabled recovery for this request.
* @req_id : mask tracking which all devices have request ready
+ * @sync_mode : Sync mode in which the req id in this slot has to be applied
*/
struct cam_req_mgr_slot {
int32_t idx;
@@ -205,6 +210,7 @@ struct cam_req_mgr_slot {
enum crm_slot_status status;
int32_t recover;
int64_t req_id;
+ int32_t sync_mode;
};
/**
@@ -282,6 +288,12 @@ struct cam_req_mgr_connected_device {
* @subscribe_event : irqs that link subscribes, IFE should send
* notification to CRM at those hw events.
* @trigger_mask : mask on which irq the req is already applied
+ * @sync_link : pointer to the sync link for synchronization
+ * @sof_counter : sof counter during sync_mode
+ * @sync_self_ref : reference sync count against which the difference
+ * between sync_counts for a given link is checked
+ * @frame_skip_flag : flag that determines if a frame needs to be skipped
+ *
*/
struct cam_req_mgr_core_link {
int32_t link_hdl;
@@ -299,6 +311,10 @@ struct cam_req_mgr_core_link {
spinlock_t link_state_spin_lock;
uint32_t subscribe_event;
uint32_t trigger_mask;
+ struct cam_req_mgr_core_link *sync_link;
+ int64_t sof_counter;
+ int64_t sync_self_ref;
+ bool frame_skip_flag;
};
/**
@@ -315,6 +331,7 @@ struct cam_req_mgr_core_link {
* - Debug data
* @force_err_recovery : For debugging, we can force bubble recovery
* to be always ON or always OFF using debugfs.
+ * @sync_mode : Sync mode for this session links
*/
struct cam_req_mgr_core_session {
int32_t session_hdl;
@@ -323,6 +340,7 @@ struct cam_req_mgr_core_session {
struct list_head entry;
struct mutex lock;
int32_t force_err_recovery;
+ int32_t sync_mode;
};
/**
@@ -384,11 +402,11 @@ int cam_req_mgr_schedule_request(
struct cam_req_mgr_sched_request *sched_req);
/**
- * cam_req_mgr_sync_link()
+ * cam_req_mgr_sync_config()
* @brief: sync for links in a session
- * @sync_links: session, links info and master link info
+ * @sync_info: session, links info and master link info
*/
-int cam_req_mgr_sync_link(struct cam_req_mgr_sync_mode *sync_links);
+int cam_req_mgr_sync_config(struct cam_req_mgr_sync_mode *sync_info);
/**
* cam_req_mgr_flush_requests()
@@ -415,5 +433,13 @@ int cam_req_mgr_core_device_deinit(void);
* @brief: Handles camera close
*/
void cam_req_mgr_handle_core_shutdown(void);
+
+/**
+ * cam_req_mgr_link_control()
+ * @brief: Handles link control operations
+ * @control: Link control command
+ */
+int cam_req_mgr_link_control(struct cam_req_mgr_link_control *control);
+
#endif
diff --git a/drivers/media/platform/msm/camera/cam_req_mgr/cam_req_mgr_dev.c b/drivers/media/platform/msm/camera/cam_req_mgr/cam_req_mgr_dev.c
index c316dbb..e4944d0 100644
--- a/drivers/media/platform/msm/camera/cam_req_mgr/cam_req_mgr_dev.c
+++ b/drivers/media/platform/msm/camera/cam_req_mgr/cam_req_mgr_dev.c
@@ -316,18 +316,18 @@ static long cam_private_ioctl(struct file *file, void *fh,
break;
case CAM_REQ_MGR_SYNC_MODE: {
- struct cam_req_mgr_sync_mode sync_mode;
+ struct cam_req_mgr_sync_mode sync_info;
- if (k_ioctl->size != sizeof(sync_mode))
+ if (k_ioctl->size != sizeof(sync_info))
return -EINVAL;
- if (copy_from_user(&sync_mode,
+ if (copy_from_user(&sync_info,
(void *)k_ioctl->handle,
k_ioctl->size)) {
return -EFAULT;
}
- rc = cam_req_mgr_sync_link(&sync_mode);
+ rc = cam_req_mgr_sync_config(&sync_info);
}
break;
case CAM_REQ_MGR_ALLOC_BUF: {
@@ -408,6 +408,24 @@ static long cam_private_ioctl(struct file *file, void *fh,
rc = -EINVAL;
}
break;
+ case CAM_REQ_MGR_LINK_CONTROL: {
+ struct cam_req_mgr_link_control cmd;
+
+ if (k_ioctl->size != sizeof(cmd))
+ return -EINVAL;
+
+ if (copy_from_user(&cmd,
+ (void __user *)k_ioctl->handle,
+ k_ioctl->size)) {
+ rc = -EFAULT;
+ break;
+ }
+
+ rc = cam_req_mgr_link_control(&cmd);
+ if (rc)
+ rc = -EINVAL;
+ }
+ break;
default:
return -ENOIOCTLCMD;
}
@@ -462,7 +480,7 @@ static int cam_video_device_setup(void)
return rc;
}
-int cam_req_mgr_notify_frame_message(struct cam_req_mgr_message *msg,
+int cam_req_mgr_notify_message(struct cam_req_mgr_message *msg,
uint32_t id,
uint32_t type)
{
@@ -481,7 +499,7 @@ int cam_req_mgr_notify_frame_message(struct cam_req_mgr_message *msg,
return 0;
}
-EXPORT_SYMBOL(cam_req_mgr_notify_frame_message);
+EXPORT_SYMBOL(cam_req_mgr_notify_message);
void cam_video_device_cleanup(void)
{
diff --git a/drivers/media/platform/msm/camera/cam_req_mgr/cam_req_mgr_dev.h b/drivers/media/platform/msm/camera/cam_req_mgr/cam_req_mgr_dev.h
index 77faed9..93278b8 100644
--- a/drivers/media/platform/msm/camera/cam_req_mgr/cam_req_mgr_dev.h
+++ b/drivers/media/platform/msm/camera/cam_req_mgr/cam_req_mgr_dev.h
@@ -43,7 +43,7 @@ struct cam_req_mgr_device {
#define CAM_REQ_MGR_GET_PAYLOAD_PTR(ev, type) \
(type *)((char *)ev.u.data)
-int cam_req_mgr_notify_frame_message(struct cam_req_mgr_message *msg,
+int cam_req_mgr_notify_message(struct cam_req_mgr_message *msg,
uint32_t id,
uint32_t type);
diff --git a/drivers/media/platform/msm/camera/cam_req_mgr/cam_req_mgr_util.c b/drivers/media/platform/msm/camera/cam_req_mgr/cam_req_mgr_util.c
index 1d2169b..f357941 100644
--- a/drivers/media/platform/msm/camera/cam_req_mgr/cam_req_mgr_util.c
+++ b/drivers/media/platform/msm/camera/cam_req_mgr/cam_req_mgr_util.c
@@ -317,6 +317,8 @@ static int cam_destroy_hdl(int32_t dev_hdl, int dev_hdl_type)
}
hdl_tbl->hdl[idx].state = HDL_FREE;
+ hdl_tbl->hdl[idx].ops = NULL;
+ hdl_tbl->hdl[idx].priv = NULL;
clear_bit(idx, hdl_tbl->bitmap);
spin_unlock_bh(&hdl_tbl_lock);
diff --git a/drivers/media/platform/msm/camera/cam_sensor_module/cam_actuator/cam_actuator_core.c b/drivers/media/platform/msm/camera/cam_sensor_module/cam_actuator/cam_actuator_core.c
index abfc190..febf922 100644
--- a/drivers/media/platform/msm/camera/cam_sensor_module/cam_actuator/cam_actuator_core.c
+++ b/drivers/media/platform/msm/camera/cam_sensor_module/cam_actuator/cam_actuator_core.c
@@ -44,9 +44,9 @@ int32_t cam_actuator_construct_default_power_setting(
goto free_power_settings;
}
- power_info->power_setting[0].seq_type = SENSOR_VAF;
- power_info->power_setting[0].seq_val = CAM_VAF;
- power_info->power_setting[0].config_val = 0;
+ power_info->power_down_setting[0].seq_type = SENSOR_VAF;
+ power_info->power_down_setting[0].seq_val = CAM_VAF;
+ power_info->power_down_setting[0].config_val = 0;
return rc;
diff --git a/drivers/media/platform/msm/camera/cam_sensor_module/cam_actuator/cam_actuator_dev.c b/drivers/media/platform/msm/camera/cam_sensor_module/cam_actuator/cam_actuator_dev.c
index 64acea7..c5c9b0a 100644
--- a/drivers/media/platform/msm/camera/cam_sensor_module/cam_actuator/cam_actuator_dev.c
+++ b/drivers/media/platform/msm/camera/cam_sensor_module/cam_actuator/cam_actuator_dev.c
@@ -186,13 +186,6 @@ static int32_t cam_actuator_driver_i2c_probe(struct i2c_client *client,
goto free_ctrl;
}
- soc_private = (struct cam_actuator_soc_private *)(id->driver_data);
- if (!soc_private) {
- CAM_ERR(CAM_EEPROM, "board info NULL");
- rc = -EINVAL;
- goto free_ctrl;
- }
-
rc = cam_actuator_init_subdev(a_ctrl);
if (rc)
goto free_soc;
@@ -249,8 +242,10 @@ static int32_t cam_actuator_driver_i2c_probe(struct i2c_client *client,
static int32_t cam_actuator_platform_remove(struct platform_device *pdev)
{
- struct cam_actuator_ctrl_t *a_ctrl;
int32_t rc = 0;
+ struct cam_actuator_ctrl_t *a_ctrl;
+ struct cam_actuator_soc_private *soc_private;
+ struct cam_sensor_power_ctrl_t *power_info;
a_ctrl = platform_get_drvdata(pdev);
if (!a_ctrl) {
@@ -258,8 +253,15 @@ static int32_t cam_actuator_platform_remove(struct platform_device *pdev)
return 0;
}
+ soc_private =
+ (struct cam_actuator_soc_private *)a_ctrl->soc_info.soc_private;
+ power_info = &soc_private->power_info;
+
kfree(a_ctrl->io_master_info.cci_client);
a_ctrl->io_master_info.cci_client = NULL;
+ kfree(power_info->power_setting);
+ kfree(power_info->power_down_setting);
+ kfree(a_ctrl->soc_info.soc_private);
kfree(a_ctrl->i2c_data.per_frame);
a_ctrl->i2c_data.per_frame = NULL;
devm_kfree(&pdev->dev, a_ctrl);
@@ -269,17 +271,29 @@ static int32_t cam_actuator_platform_remove(struct platform_device *pdev)
static int32_t cam_actuator_driver_i2c_remove(struct i2c_client *client)
{
- struct cam_actuator_ctrl_t *a_ctrl = i2c_get_clientdata(client);
int32_t rc = 0;
+ struct cam_actuator_ctrl_t *a_ctrl =
+ i2c_get_clientdata(client);
+ struct cam_actuator_soc_private *soc_private;
+ struct cam_sensor_power_ctrl_t *power_info;
/* Handle I2C Devices */
if (!a_ctrl) {
CAM_ERR(CAM_ACTUATOR, "Actuator device is NULL");
return -EINVAL;
}
+
+ soc_private =
+ (struct cam_actuator_soc_private *)a_ctrl->soc_info.soc_private;
+ power_info = &soc_private->power_info;
+
/*Free Allocated Mem */
kfree(a_ctrl->i2c_data.per_frame);
a_ctrl->i2c_data.per_frame = NULL;
+ kfree(power_info->power_setting);
+ kfree(power_info->power_down_setting);
+ kfree(a_ctrl->soc_info.soc_private);
+ a_ctrl->soc_info.soc_private = NULL;
kfree(a_ctrl);
return rc;
}
diff --git a/drivers/media/platform/msm/camera/cam_sensor_module/cam_cci/cam_cci_core.c b/drivers/media/platform/msm/camera/cam_sensor_module/cam_cci/cam_cci_core.c
index f151b9b..d7a6504 100644
--- a/drivers/media/platform/msm/camera/cam_sensor_module/cam_cci/cam_cci_core.c
+++ b/drivers/media/platform/msm/camera/cam_sensor_module/cam_cci/cam_cci_core.c
@@ -730,17 +730,30 @@ static int32_t cam_cci_data_queue(struct cci_device *cci_dev,
reg_addr++;
} else {
if ((i + 1) <= cci_dev->payload_size) {
- if (i2c_msg->data_type ==
- CAMERA_SENSOR_I2C_TYPE_DWORD) {
+ switch (i2c_msg->data_type) {
+ case CAMERA_SENSOR_I2C_TYPE_DWORD:
data[i++] = (i2c_cmd->reg_data &
0xFF000000) >> 24;
+ /* fallthrough */
+ case CAMERA_SENSOR_I2C_TYPE_3B:
data[i++] = (i2c_cmd->reg_data &
0x00FF0000) >> 16;
+ /* fallthrough */
+ case CAMERA_SENSOR_I2C_TYPE_WORD:
+ data[i++] = (i2c_cmd->reg_data &
+ 0x0000FF00) >> 8;
+ /* fallthrough */
+ case CAMERA_SENSOR_I2C_TYPE_BYTE:
+ data[i++] = i2c_cmd->reg_data &
+ 0x000000FF;
+ break;
+ default:
+ CAM_ERR(CAM_CCI,
+ "invalid data type: %d",
+ i2c_msg->data_type);
+ return -EINVAL;
}
- data[i++] = (i2c_cmd->reg_data &
- 0x0000FF00) >> 8; /* MSB */
- data[i++] = i2c_cmd->reg_data &
- 0x000000FF; /* LSB */
+
if (c_ctrl->cmd ==
MSM_CCI_I2C_WRITE_SEQ)
reg_addr++;
diff --git a/drivers/media/platform/msm/camera/cam_sensor_module/cam_csiphy/cam_csiphy_core.c b/drivers/media/platform/msm/camera/cam_sensor_module/cam_csiphy/cam_csiphy_core.c
index cb44cb8..2adca66 100644
--- a/drivers/media/platform/msm/camera/cam_sensor_module/cam_csiphy/cam_csiphy_core.c
+++ b/drivers/media/platform/msm/camera/cam_sensor_module/cam_csiphy/cam_csiphy_core.c
@@ -224,14 +224,13 @@ int32_t cam_csiphy_config_dev(struct csiphy_device *csiphy_dev)
{
int32_t rc = 0;
uint32_t lane_enable = 0, mask = 1, size = 0;
- uint16_t lane_mask = 0, i = 0, cfg_size = 0;
+ uint16_t lane_mask = 0, i = 0, cfg_size = 0, temp = 0;
uint8_t lane_cnt, lane_pos = 0;
uint16_t settle_cnt = 0;
void __iomem *csiphybase;
struct csiphy_reg_t (*reg_array)[MAX_SETTINGS_PER_LANE];
lane_cnt = csiphy_dev->csiphy_info.lane_cnt;
- lane_mask = csiphy_dev->csiphy_info.lane_mask & 0x1f;
csiphybase = csiphy_dev->soc_info.reg_map[0].mem_base;
if (!csiphybase) {
@@ -239,17 +238,6 @@ int32_t cam_csiphy_config_dev(struct csiphy_device *csiphy_dev)
return -EINVAL;
}
- for (i = 0; i < MAX_DPHY_DATA_LN; i++) {
- if (mask == 0x2) {
- if (lane_mask & mask)
- lane_enable |= 0x80;
- i--;
- } else if (lane_mask & mask) {
- lane_enable |= 0x1 << (i<<1);
- }
- mask <<= 1;
- }
-
if (!csiphy_dev->csiphy_info.csiphy_3phase) {
if (csiphy_dev->csiphy_info.combo_mode == 1)
reg_array =
@@ -260,6 +248,18 @@ int32_t cam_csiphy_config_dev(struct csiphy_device *csiphy_dev)
csiphy_dev->num_irq_registers = 11;
cfg_size = csiphy_dev->ctrl_reg->csiphy_reg.
csiphy_2ph_config_array_size;
+
+ lane_mask = csiphy_dev->csiphy_info.lane_mask & 0x1f;
+ for (i = 0; i < MAX_DPHY_DATA_LN; i++) {
+ if (mask == 0x2) {
+ if (lane_mask & mask)
+ lane_enable |= 0x80;
+ i--;
+ } else if (lane_mask & mask) {
+ lane_enable |= 0x1 << (i<<1);
+ }
+ mask <<= 1;
+ }
} else {
if (csiphy_dev->csiphy_info.combo_mode == 1)
reg_array =
@@ -267,9 +267,18 @@ int32_t cam_csiphy_config_dev(struct csiphy_device *csiphy_dev)
else
reg_array =
csiphy_dev->ctrl_reg->csiphy_3ph_reg;
- csiphy_dev->num_irq_registers = 20;
+ csiphy_dev->num_irq_registers = 11;
cfg_size = csiphy_dev->ctrl_reg->csiphy_reg.
csiphy_3ph_config_array_size;
+
+ lane_mask = csiphy_dev->csiphy_info.lane_mask & 0x7;
+ mask = lane_mask;
+ while (mask != 0) {
+ temp = (i << 1) + 1;
+ lane_enable |= ((mask & 0x1) << temp);
+ mask >>= 1;
+ i++;
+ }
}
size = csiphy_dev->ctrl_reg->csiphy_reg.csiphy_common_array_size;
@@ -295,7 +304,7 @@ int32_t cam_csiphy_config_dev(struct csiphy_device *csiphy_dev)
}
}
- while (lane_mask & 0x1f) {
+ while (lane_mask) {
if (!(lane_mask & 0x1)) {
lane_pos++;
lane_mask >>= 1;
diff --git a/drivers/media/platform/msm/camera/cam_sensor_module/cam_eeprom/cam_eeprom_core.c b/drivers/media/platform/msm/camera/cam_sensor_module/cam_eeprom/cam_eeprom_core.c
index 4c69afb..72b1779 100644
--- a/drivers/media/platform/msm/camera/cam_sensor_module/cam_eeprom/cam_eeprom_core.c
+++ b/drivers/media/platform/msm/camera/cam_sensor_module/cam_eeprom/cam_eeprom_core.c
@@ -657,6 +657,7 @@ static int32_t cam_eeprom_pkt_parse(struct cam_eeprom_ctrl_t *e_ctrl, void *arg)
struct cam_packet *csl_packet = NULL;
struct cam_eeprom_soc_private *soc_private =
(struct cam_eeprom_soc_private *)e_ctrl->soc_info.soc_private;
+ struct cam_sensor_power_ctrl_t *power_info = &soc_private->power_info;
ioctl_ctrl = (struct cam_control *)arg;
@@ -701,7 +702,7 @@ static int32_t cam_eeprom_pkt_parse(struct cam_eeprom_ctrl_t *e_ctrl, void *arg)
e_ctrl->cal_data.num_map = 0;
CAM_DBG(CAM_EEPROM,
"Returning the data using kernel probe");
- break;
+ break;
}
rc = cam_eeprom_init_pkt_parser(e_ctrl, csl_packet);
if (rc) {
@@ -750,16 +751,21 @@ static int32_t cam_eeprom_pkt_parse(struct cam_eeprom_ctrl_t *e_ctrl, void *arg)
memdata_free:
kfree(e_ctrl->cal_data.mapdata);
error:
+ kfree(power_info->power_setting);
+ kfree(power_info->power_down_setting);
kfree(e_ctrl->cal_data.map);
e_ctrl->cal_data.num_data = 0;
e_ctrl->cal_data.num_map = 0;
- e_ctrl->cam_eeprom_state = CAM_EEPROM_ACQUIRE;
+ e_ctrl->cam_eeprom_state = CAM_EEPROM_INIT;
return rc;
}
void cam_eeprom_shutdown(struct cam_eeprom_ctrl_t *e_ctrl)
{
int rc;
+ struct cam_eeprom_soc_private *soc_private =
+ (struct cam_eeprom_soc_private *)e_ctrl->soc_info.soc_private;
+ struct cam_sensor_power_ctrl_t *power_info = &soc_private->power_info;
if (e_ctrl->cam_eeprom_state == CAM_EEPROM_INIT)
return;
@@ -779,6 +785,9 @@ void cam_eeprom_shutdown(struct cam_eeprom_ctrl_t *e_ctrl)
e_ctrl->bridge_intf.device_hdl = -1;
e_ctrl->bridge_intf.link_hdl = -1;
e_ctrl->bridge_intf.session_hdl = -1;
+
+ kfree(power_info->power_setting);
+ kfree(power_info->power_down_setting);
}
e_ctrl->cam_eeprom_state = CAM_EEPROM_INIT;
diff --git a/drivers/media/platform/msm/camera/cam_sensor_module/cam_eeprom/cam_eeprom_dev.c b/drivers/media/platform/msm/camera/cam_sensor_module/cam_eeprom/cam_eeprom_dev.c
index d667cf4..f3c4811 100644
--- a/drivers/media/platform/msm/camera/cam_sensor_module/cam_eeprom/cam_eeprom_dev.c
+++ b/drivers/media/platform/msm/camera/cam_sensor_module/cam_eeprom/cam_eeprom_dev.c
@@ -201,13 +201,6 @@ static int cam_eeprom_i2c_driver_probe(struct i2c_client *client,
goto free_soc;
}
- soc_private = (struct cam_eeprom_soc_private *)(id->driver_data);
- if (!soc_private) {
- CAM_ERR(CAM_EEPROM, "board info NULL");
- rc = -EINVAL;
- goto ectrl_free;
- }
-
rc = cam_eeprom_init_subdev(e_ctrl);
if (rc)
goto free_soc;
@@ -238,9 +231,11 @@ static int cam_eeprom_i2c_driver_probe(struct i2c_client *client,
static int cam_eeprom_i2c_driver_remove(struct i2c_client *client)
{
+ int i;
struct v4l2_subdev *sd = i2c_get_clientdata(client);
struct cam_eeprom_ctrl_t *e_ctrl;
struct cam_eeprom_soc_private *soc_private;
+ struct cam_hw_soc_info *soc_info;
if (!sd) {
CAM_ERR(CAM_EEPROM, "Subdevice is NULL");
@@ -260,10 +255,13 @@ static int cam_eeprom_i2c_driver_remove(struct i2c_client *client)
return -EINVAL;
}
- if (soc_private) {
- kfree(soc_private->power_info.gpio_num_info);
+ soc_info = &e_ctrl->soc_info;
+ for (i = 0; i < soc_info->num_clk; i++)
+ devm_clk_put(soc_info->dev, soc_info->clk[i]);
+
+ if (soc_private)
kfree(soc_private);
- }
+
kfree(e_ctrl);
return 0;
@@ -366,9 +364,11 @@ static int cam_eeprom_spi_driver_probe(struct spi_device *spi)
static int cam_eeprom_spi_driver_remove(struct spi_device *sdev)
{
+ int i;
struct v4l2_subdev *sd = spi_get_drvdata(sdev);
struct cam_eeprom_ctrl_t *e_ctrl;
struct cam_eeprom_soc_private *soc_private;
+ struct cam_hw_soc_info *soc_info;
if (!sd) {
CAM_ERR(CAM_EEPROM, "Subdevice is NULL");
@@ -381,6 +381,10 @@ static int cam_eeprom_spi_driver_remove(struct spi_device *sdev)
return -EINVAL;
}
+ soc_info = &e_ctrl->soc_info;
+ for (i = 0; i < soc_info->num_clk; i++)
+ devm_clk_put(soc_info->dev, soc_info->clk[i]);
+
kfree(e_ctrl->io_master_info.spi_client);
soc_private =
(struct cam_eeprom_soc_private *)e_ctrl->soc_info.soc_private;
@@ -451,6 +455,9 @@ static int32_t cam_eeprom_platform_driver_probe(
platform_set_drvdata(pdev, e_ctrl);
v4l2_set_subdevdata(&e_ctrl->v4l2_dev_str.sd, e_ctrl);
+
+ e_ctrl->cam_eeprom_state = CAM_EEPROM_INIT;
+
return rc;
free_soc:
kfree(soc_private);
@@ -463,7 +470,9 @@ static int32_t cam_eeprom_platform_driver_probe(
static int cam_eeprom_platform_driver_remove(struct platform_device *pdev)
{
+ int i;
struct cam_eeprom_ctrl_t *e_ctrl;
+ struct cam_hw_soc_info *soc_info;
e_ctrl = platform_get_drvdata(pdev);
if (!e_ctrl) {
@@ -471,7 +480,12 @@ static int cam_eeprom_platform_driver_remove(struct platform_device *pdev)
return -EINVAL;
}
- kfree(e_ctrl->soc_info.soc_private);
+ soc_info = &e_ctrl->soc_info;
+
+ for (i = 0; i < soc_info->num_clk; i++)
+ devm_clk_put(soc_info->dev, soc_info->clk[i]);
+
+ kfree(soc_info->soc_private);
kfree(e_ctrl->io_master_info.cci_client);
kfree(e_ctrl);
return 0;
diff --git a/drivers/media/platform/msm/camera/cam_sensor_module/cam_eeprom/cam_eeprom_soc.c b/drivers/media/platform/msm/camera/cam_sensor_module/cam_eeprom/cam_eeprom_soc.c
index 70c40fd..c250045 100644
--- a/drivers/media/platform/msm/camera/cam_sensor_module/cam_eeprom/cam_eeprom_soc.c
+++ b/drivers/media/platform/msm/camera/cam_sensor_module/cam_eeprom/cam_eeprom_soc.c
@@ -288,7 +288,7 @@ static int cam_eeprom_cmm_dts(struct cam_eeprom_soc_private *eb_info,
*/
int cam_eeprom_parse_dt(struct cam_eeprom_ctrl_t *e_ctrl)
{
- int rc = 0;
+ int i, rc = 0;
struct cam_hw_soc_info *soc_info = &e_ctrl->soc_info;
struct device_node *of_node = NULL;
struct cam_eeprom_soc_private *soc_private =
@@ -358,5 +358,16 @@ int cam_eeprom_parse_dt(struct cam_eeprom_ctrl_t *e_ctrl)
soc_private->i2c_info.slave_addr);
}
+ for (i = 0; i < soc_info->num_clk; i++) {
+ soc_info->clk[i] = devm_clk_get(soc_info->dev,
+ soc_info->clk_name[i]);
+ if (IS_ERR(soc_info->clk[i])) {
+ CAM_ERR(CAM_EEPROM, "get failed for %s",
+ soc_info->clk_name[i]);
+ rc = -ENOENT;
+ return rc;
+ }
+ }
+
return rc;
}
diff --git a/drivers/media/platform/msm/camera/cam_sensor_module/cam_ois/cam_ois_core.c b/drivers/media/platform/msm/camera/cam_sensor_module/cam_ois/cam_ois_core.c
index d825f5e..4e4b112 100644
--- a/drivers/media/platform/msm/camera/cam_sensor_module/cam_ois/cam_ois_core.c
+++ b/drivers/media/platform/msm/camera/cam_sensor_module/cam_ois/cam_ois_core.c
@@ -46,9 +46,9 @@ int32_t cam_ois_construct_default_power_setting(
goto free_power_settings;
}
- power_info->power_setting[0].seq_type = SENSOR_VAF;
- power_info->power_setting[0].seq_val = CAM_VAF;
- power_info->power_setting[0].config_val = 0;
+ power_info->power_down_setting[0].seq_type = SENSOR_VAF;
+ power_info->power_down_setting[0].seq_val = CAM_VAF;
+ power_info->power_down_setting[0].config_val = 0;
return rc;
diff --git a/drivers/media/platform/msm/camera/cam_sensor_module/cam_ois/cam_ois_dev.c b/drivers/media/platform/msm/camera/cam_sensor_module/cam_ois/cam_ois_dev.c
index 97fede2..5e1d719 100644
--- a/drivers/media/platform/msm/camera/cam_sensor_module/cam_ois/cam_ois_dev.c
+++ b/drivers/media/platform/msm/camera/cam_sensor_module/cam_ois/cam_ois_dev.c
@@ -226,13 +226,28 @@ static int cam_ois_i2c_driver_probe(struct i2c_client *client,
static int cam_ois_i2c_driver_remove(struct i2c_client *client)
{
- struct cam_ois_ctrl_t *o_ctrl = i2c_get_clientdata(client);
+ int i;
+ struct cam_ois_ctrl_t *o_ctrl = i2c_get_clientdata(client);
+ struct cam_hw_soc_info *soc_info;
+ struct cam_ois_soc_private *soc_private;
+ struct cam_sensor_power_ctrl_t *power_info;
if (!o_ctrl) {
CAM_ERR(CAM_OIS, "ois device is NULL");
return -EINVAL;
}
+ soc_info = &o_ctrl->soc_info;
+
+ for (i = 0; i < soc_info->num_clk; i++)
+ devm_clk_put(soc_info->dev, soc_info->clk[i]);
+
+ soc_private =
+ (struct cam_ois_soc_private *)soc_info->soc_private;
+ power_info = &soc_private->power_info;
+
+ kfree(power_info->power_setting);
+ kfree(power_info->power_down_setting);
kfree(o_ctrl->soc_info.soc_private);
kfree(o_ctrl);
@@ -251,6 +266,7 @@ static int32_t cam_ois_platform_driver_probe(
return -ENOMEM;
o_ctrl->soc_info.pdev = pdev;
+ o_ctrl->pdev = pdev;
o_ctrl->soc_info.dev = &pdev->dev;
o_ctrl->soc_info.dev_name = pdev->name;
@@ -302,6 +318,8 @@ static int32_t cam_ois_platform_driver_probe(
platform_set_drvdata(pdev, o_ctrl);
v4l2_set_subdevdata(&o_ctrl->v4l2_dev_str.sd, o_ctrl);
+ o_ctrl->cam_ois_state = CAM_OIS_INIT;
+
return rc;
unreg_subdev:
cam_unregister_subdev(&(o_ctrl->v4l2_dev_str));
@@ -316,7 +334,11 @@ static int32_t cam_ois_platform_driver_probe(
static int cam_ois_platform_driver_remove(struct platform_device *pdev)
{
- struct cam_ois_ctrl_t *o_ctrl;
+ int i;
+ struct cam_ois_ctrl_t *o_ctrl;
+ struct cam_ois_soc_private *soc_private;
+ struct cam_sensor_power_ctrl_t *power_info;
+ struct cam_hw_soc_info *soc_info;
o_ctrl = platform_get_drvdata(pdev);
if (!o_ctrl) {
@@ -324,6 +346,16 @@ static int cam_ois_platform_driver_remove(struct platform_device *pdev)
return -EINVAL;
}
+ soc_info = &o_ctrl->soc_info;
+ for (i = 0; i < soc_info->num_clk; i++)
+ devm_clk_put(soc_info->dev, soc_info->clk[i]);
+
+ soc_private =
+ (struct cam_ois_soc_private *)o_ctrl->soc_info.soc_private;
+ power_info = &soc_private->power_info;
+
+ kfree(power_info->power_setting);
+ kfree(power_info->power_down_setting);
kfree(o_ctrl->soc_info.soc_private);
kfree(o_ctrl->io_master_info.cci_client);
kfree(o_ctrl);
diff --git a/drivers/media/platform/msm/camera/cam_sensor_module/cam_ois/cam_ois_soc.c b/drivers/media/platform/msm/camera/cam_sensor_module/cam_ois/cam_ois_soc.c
index 5886413..c8b3448 100644
--- a/drivers/media/platform/msm/camera/cam_sensor_module/cam_ois/cam_ois_soc.c
+++ b/drivers/media/platform/msm/camera/cam_sensor_module/cam_ois/cam_ois_soc.c
@@ -27,7 +27,7 @@
*/
static int cam_ois_get_dt_data(struct cam_ois_ctrl_t *o_ctrl)
{
- int rc = 0;
+ int i, rc = 0;
struct cam_hw_soc_info *soc_info = &o_ctrl->soc_info;
struct cam_ois_soc_private *soc_private =
(struct cam_ois_soc_private *)o_ctrl->soc_info.soc_private;
@@ -65,6 +65,17 @@ static int cam_ois_get_dt_data(struct cam_ois_ctrl_t *o_ctrl)
return -EINVAL;
}
+ for (i = 0; i < soc_info->num_clk; i++) {
+ soc_info->clk[i] = devm_clk_get(soc_info->dev,
+ soc_info->clk_name[i]);
+ if (IS_ERR(soc_info->clk[i])) {
+ CAM_ERR(CAM_OIS, "get failed for %s",
+ soc_info->clk_name[i]);
+ rc = -ENOENT;
+ return rc;
+ }
+ }
+
return rc;
}
/**
diff --git a/drivers/media/platform/msm/camera/cam_sensor_module/cam_res_mgr/cam_res_mgr.c b/drivers/media/platform/msm/camera/cam_sensor_module/cam_res_mgr/cam_res_mgr.c
index 949f902..bb3789b 100644
--- a/drivers/media/platform/msm/camera/cam_sensor_module/cam_res_mgr/cam_res_mgr.c
+++ b/drivers/media/platform/msm/camera/cam_sensor_module/cam_res_mgr/cam_res_mgr.c
@@ -51,6 +51,10 @@ static void cam_res_mgr_free_res(void)
kfree(flash_res);
}
mutex_unlock(&cam_res->flash_res_lock);
+
+ mutex_lock(&cam_res->clk_res_lock);
+ cam_res->shared_clk_ref_count = 0;
+ mutex_unlock(&cam_res->clk_res_lock);
}
void cam_res_mgr_led_trigger_register(const char *name, struct led_trigger **tp)
@@ -243,6 +247,9 @@ static bool cam_res_mgr_shared_pinctrl_check_hold(void)
}
}
+ if (cam_res->shared_clk_ref_count > 1)
+ hold = true;
+
return hold;
}
@@ -258,11 +265,13 @@ void cam_res_mgr_shared_pinctrl_put(void)
mutex_lock(&cam_res->gpio_res_lock);
if (cam_res->pstatus == PINCTRL_STATUS_PUT) {
CAM_DBG(CAM_RES, "The shared pinctrl already been put");
+ mutex_unlock(&cam_res->gpio_res_lock);
return;
}
if (cam_res_mgr_shared_pinctrl_check_hold()) {
CAM_INFO(CAM_RES, "Need hold put this pinctrl");
+ mutex_unlock(&cam_res->gpio_res_lock);
return;
}
@@ -330,10 +339,12 @@ int cam_res_mgr_shared_pinctrl_post_init(void)
pinctrl_info = &cam_res->dt.pinctrl_info;
/*
- * If no gpio resource in gpio_res_list, it means
- * this device don't have shared gpio
+ * If there is no gpio resource in gpio_res_list and
+ * no shared clk is in use, this device does not
+ * have a shared gpio.
*/
- if (list_empty(&cam_res->gpio_res_list)) {
+ if (list_empty(&cam_res->gpio_res_list) &&
+ cam_res->shared_clk_ref_count < 1) {
ret = pinctrl_select_state(pinctrl_info->pinctrl,
pinctrl_info->gpio_state_suspend);
devm_pinctrl_put(pinctrl_info->pinctrl);
@@ -555,16 +566,20 @@ int cam_res_mgr_gpio_set_value(unsigned int gpio, int value)
if (!found) {
gpio_set_value_cansleep(gpio, value);
} else {
- if (value)
+ if (value) {
gpio_res->power_on_count++;
- else
- gpio_res->power_on_count--;
-
- if (gpio_res->power_on_count > 0) {
- gpio_set_value_cansleep(gpio, value);
+ if (gpio_res->power_on_count < 2) {
+ gpio_set_value_cansleep(gpio, value);
+ CAM_DBG(CAM_RES,
+ "Shared GPIO(%d) : HIGH", gpio);
+ }
} else {
- gpio_res->power_on_count = 0;
- gpio_set_value_cansleep(gpio, 0);
+ gpio_res->power_on_count--;
+ if (gpio_res->power_on_count < 1) {
+ gpio_set_value_cansleep(gpio, value);
+ CAM_DBG(CAM_RES,
+ "Shared GPIO(%d) : LOW", gpio);
+ }
}
}
@@ -572,6 +587,20 @@ int cam_res_mgr_gpio_set_value(unsigned int gpio, int value)
}
EXPORT_SYMBOL(cam_res_mgr_gpio_set_value);
+void cam_res_mgr_shared_clk_config(bool value)
+{
+ if (!cam_res)
+ return;
+
+ mutex_lock(&cam_res->clk_res_lock);
+ if (value)
+ cam_res->shared_clk_ref_count++;
+ else
+ cam_res->shared_clk_ref_count--;
+ mutex_unlock(&cam_res->clk_res_lock);
+}
+EXPORT_SYMBOL(cam_res_mgr_shared_clk_config);
+
static int cam_res_mgr_parse_dt(struct device *dev)
{
int rc = 0;
@@ -645,6 +674,7 @@ static int cam_res_mgr_probe(struct platform_device *pdev)
cam_res->dev = &pdev->dev;
mutex_init(&cam_res->flash_res_lock);
mutex_init(&cam_res->gpio_res_lock);
+ mutex_init(&cam_res->clk_res_lock);
rc = cam_res_mgr_parse_dt(&pdev->dev);
if (rc) {
@@ -655,6 +685,7 @@ static int cam_res_mgr_probe(struct platform_device *pdev)
cam_res->shared_gpio_enabled = true;
}
+ cam_res->shared_clk_ref_count = 0;
cam_res->pstatus = PINCTRL_STATUS_PUT;
INIT_LIST_HEAD(&cam_res->gpio_res_list);
diff --git a/drivers/media/platform/msm/camera/cam_sensor_module/cam_res_mgr/cam_res_mgr_api.h b/drivers/media/platform/msm/camera/cam_sensor_module/cam_res_mgr/cam_res_mgr_api.h
index 1c4c6c8..7fb13ba 100644
--- a/drivers/media/platform/msm/camera/cam_sensor_module/cam_res_mgr/cam_res_mgr_api.h
+++ b/drivers/media/platform/msm/camera/cam_sensor_module/cam_res_mgr/cam_res_mgr_api.h
@@ -134,4 +134,15 @@ void cam_res_mgr_gpio_free_arry(struct device *dev,
*/
int cam_res_mgr_gpio_set_value(unsigned int gpio, int value);
+/**
+ * @brief: Config the shared clk ref count
+ *
+ * Increase or decrease the shared clk reference count.
+ *
+ * @value : true to get (increment), false to put (decrement) the shared clk.
+ *
+ * @return None
+ */
+void cam_res_mgr_shared_clk_config(bool value);
+
#endif /* __CAM_RES_MGR_API_H__ */
diff --git a/drivers/media/platform/msm/camera/cam_sensor_module/cam_res_mgr/cam_res_mgr_private.h b/drivers/media/platform/msm/camera/cam_sensor_module/cam_res_mgr/cam_res_mgr_private.h
index 4d46c8e..53a8778 100644
--- a/drivers/media/platform/msm/camera/cam_sensor_module/cam_res_mgr/cam_res_mgr_private.h
+++ b/drivers/media/platform/msm/camera/cam_sensor_module/cam_res_mgr/cam_res_mgr_private.h
@@ -96,6 +96,7 @@ struct cam_res_mgr_dt {
* @flash_res_list : List head of the flash resource
* @gpio_res_lock : GPIO resource lock
* @flash_res_lock : Flash resource lock
+ * @clk_res_lock : Clk resource lock
*/
struct cam_res_mgr {
struct device *dev;
@@ -104,10 +105,13 @@ struct cam_res_mgr {
bool shared_gpio_enabled;
enum pinctrl_status pstatus;
+ uint shared_clk_ref_count;
+
struct list_head gpio_res_list;
struct list_head flash_res_list;
struct mutex gpio_res_lock;
struct mutex flash_res_lock;
+ struct mutex clk_res_lock;
};
#endif /* __CAM_RES_MGR_PRIVATE_H__ */
diff --git a/drivers/media/platform/msm/camera/cam_sensor_module/cam_sensor/cam_sensor_core.c b/drivers/media/platform/msm/camera/cam_sensor_module/cam_sensor/cam_sensor_core.c
index bc92d7e..97158e4 100644
--- a/drivers/media/platform/msm/camera/cam_sensor_module/cam_sensor/cam_sensor_core.c
+++ b/drivers/media/platform/msm/camera/cam_sensor_module/cam_sensor/cam_sensor_core.c
@@ -182,12 +182,7 @@ static int32_t cam_sensor_i2c_pkt_parse(struct cam_sensor_ctrl_t *s_ctrl,
return rc;
}
}
-
- i2c_reg_settings->request_id =
- csl_packet->header.request_id;
- i2c_reg_settings->is_settings_valid = 1;
- cam_sensor_update_req_mgr(s_ctrl, csl_packet);
- break;
+ break;
}
case CAM_SENSOR_PACKET_OPCODE_SENSOR_NOP: {
cam_sensor_update_req_mgr(s_ctrl, csl_packet);
@@ -207,6 +202,14 @@ static int32_t cam_sensor_i2c_pkt_parse(struct cam_sensor_ctrl_t *s_ctrl,
CAM_ERR(CAM_SENSOR, "Fail parsing I2C Pkt: %d", rc);
return rc;
}
+
+ if ((csl_packet->header.op_code & 0xFFFFFF) ==
+ CAM_SENSOR_PACKET_OPCODE_SENSOR_UPDATE) {
+ i2c_reg_settings->request_id =
+ csl_packet->header.request_id;
+ cam_sensor_update_req_mgr(s_ctrl, csl_packet);
+ }
+
return rc;
}
diff --git a/drivers/media/platform/msm/camera/cam_sensor_module/cam_sensor/cam_sensor_dev.c b/drivers/media/platform/msm/camera/cam_sensor_module/cam_sensor/cam_sensor_dev.c
index 8ea767f..b60111a 100644
--- a/drivers/media/platform/msm/camera/cam_sensor_module/cam_sensor/cam_sensor_dev.c
+++ b/drivers/media/platform/msm/camera/cam_sensor_module/cam_sensor/cam_sensor_dev.c
@@ -208,7 +208,9 @@ static int32_t cam_sensor_driver_i2c_probe(struct i2c_client *client,
static int cam_sensor_platform_remove(struct platform_device *pdev)
{
+ int i;
struct cam_sensor_ctrl_t *s_ctrl;
+ struct cam_hw_soc_info *soc_info;
s_ctrl = platform_get_drvdata(pdev);
if (!s_ctrl) {
@@ -216,6 +218,10 @@ static int cam_sensor_platform_remove(struct platform_device *pdev)
return 0;
}
+ soc_info = &s_ctrl->soc_info;
+ for (i = 0; i < soc_info->num_clk; i++)
+ devm_clk_put(soc_info->dev, soc_info->clk[i]);
+
kfree(s_ctrl->i2c_data.per_frame);
devm_kfree(&pdev->dev, s_ctrl);
@@ -224,13 +230,19 @@ static int cam_sensor_platform_remove(struct platform_device *pdev)
static int cam_sensor_driver_i2c_remove(struct i2c_client *client)
{
+ int i;
struct cam_sensor_ctrl_t *s_ctrl = i2c_get_clientdata(client);
+ struct cam_hw_soc_info *soc_info;
if (!s_ctrl) {
CAM_ERR(CAM_SENSOR, "sensor device is NULL");
return 0;
}
+ soc_info = &s_ctrl->soc_info;
+ for (i = 0; i < soc_info->num_clk; i++)
+ devm_clk_put(soc_info->dev, soc_info->clk[i]);
+
kfree(s_ctrl->i2c_data.per_frame);
kfree(s_ctrl);
diff --git a/drivers/media/platform/msm/camera/cam_sensor_module/cam_sensor_io/cam_sensor_i2c.h b/drivers/media/platform/msm/camera/cam_sensor_module/cam_sensor_io/cam_sensor_i2c.h
index 10b07ec..7cddcf9 100644
--- a/drivers/media/platform/msm/camera/cam_sensor_module/cam_sensor_io/cam_sensor_i2c.h
+++ b/drivers/media/platform/msm/camera/cam_sensor_module/cam_sensor_io/cam_sensor_i2c.h
@@ -163,4 +163,18 @@ int32_t cam_qup_i2c_poll(struct i2c_client *client,
int32_t cam_qup_i2c_write_table(
struct camera_io_master *client,
struct cam_sensor_i2c_reg_setting *write_setting);
+
+/**
+ * cam_qup_i2c_write_continuous_table: QUP based continuous (burst/seq) I2C write
+ * @client: QUP I2C client structure
+ * @write_setting: I2C register setting
+ * @cam_sensor_i2c_write_flag: burst or seq write
+ *
+ * This API handles QUP continuous write
+ */
+int32_t cam_qup_i2c_write_continuous_table(
+ struct camera_io_master *client,
+ struct cam_sensor_i2c_reg_setting *write_setting,
+ uint8_t cam_sensor_i2c_write_flag);
+
#endif /*_CAM_SENSOR_I2C_H*/
diff --git a/drivers/media/platform/msm/camera/cam_sensor_module/cam_sensor_io/cam_sensor_io.c b/drivers/media/platform/msm/camera/cam_sensor_module/cam_sensor_io/cam_sensor_io.c
index 8eb04ec..d69ff47 100644
--- a/drivers/media/platform/msm/camera/cam_sensor_module/cam_sensor_io/cam_sensor_io.c
+++ b/drivers/media/platform/msm/camera/cam_sensor_module/cam_sensor_io/cam_sensor_io.c
@@ -129,8 +129,8 @@ int32_t camera_io_dev_write_continuous(struct camera_io_master *io_master_info,
return cam_cci_i2c_write_continuous_table(io_master_info,
write_setting, cam_sensor_i2c_write_flag);
} else if (io_master_info->master_type == I2C_MASTER) {
- return cam_qup_i2c_write_table(io_master_info,
- write_setting);
+ return cam_qup_i2c_write_continuous_table(io_master_info,
+ write_setting, cam_sensor_i2c_write_flag);
} else if (io_master_info->master_type == SPI_MASTER) {
return cam_spi_write_table(io_master_info,
write_setting);
diff --git a/drivers/media/platform/msm/camera/cam_sensor_module/cam_sensor_io/cam_sensor_qup_i2c.c b/drivers/media/platform/msm/camera/cam_sensor_module/cam_sensor_io/cam_sensor_qup_i2c.c
index 72e51ee..1c6ab0b 100644
--- a/drivers/media/platform/msm/camera/cam_sensor_module/cam_sensor_io/cam_sensor_qup_i2c.c
+++ b/drivers/media/platform/msm/camera/cam_sensor_module/cam_sensor_io/cam_sensor_qup_i2c.c
@@ -353,3 +353,174 @@ int32_t cam_qup_i2c_write_table(struct camera_io_master *client,
return rc;
}
+
+static int32_t cam_qup_i2c_write_seq(struct camera_io_master *client,
+ struct cam_sensor_i2c_reg_setting *write_setting)
+{
+ int i;
+ int32_t rc = 0;
+ struct cam_sensor_i2c_reg_array *reg_setting;
+
+ reg_setting = write_setting->reg_setting;
+
+ for (i = 0; i < write_setting->size; i++) {
+ reg_setting->reg_addr += i;
+ rc = cam_qup_i2c_write(client, reg_setting,
+ write_setting->addr_type, write_setting->data_type);
+ if (rc < 0) {
+ CAM_ERR(CAM_SENSOR,
+ "Sequential i2c write failed: rc: %d", rc);
+ break;
+ }
+ reg_setting++;
+ }
+
+ if (write_setting->delay > 20)
+ msleep(write_setting->delay);
+ else if (write_setting->delay)
+ usleep_range(write_setting->delay * 1000, (write_setting->delay
+ * 1000) + 1000);
+
+ return rc;
+}
+
+static int32_t cam_qup_i2c_write_burst(struct camera_io_master *client,
+ struct cam_sensor_i2c_reg_setting *write_setting)
+{
+ int i;
+ int32_t rc = 0;
+ uint32_t len = 0;
+ unsigned char *buf = NULL;
+ struct cam_sensor_i2c_reg_array *reg_setting;
+ enum camera_sensor_i2c_type addr_type;
+ enum camera_sensor_i2c_type data_type;
+
+ buf = kzalloc((write_setting->addr_type +
+ (write_setting->size * write_setting->data_type)),
+ GFP_KERNEL);
+
+ if (!buf) {
+ CAM_ERR(CAM_SENSOR, "BUF is NULL");
+ return -ENOMEM;
+ }
+
+ reg_setting = write_setting->reg_setting;
+ addr_type = write_setting->addr_type;
+ data_type = write_setting->data_type;
+
+ CAM_DBG(CAM_SENSOR, "reg addr = 0x%x data type: %d",
+ reg_setting->reg_addr, data_type);
+ if (addr_type == CAMERA_SENSOR_I2C_TYPE_BYTE) {
+ buf[0] = reg_setting->reg_addr;
+ CAM_DBG(CAM_SENSOR, "byte %d: 0x%x", len, buf[len]);
+ len = 1;
+ } else if (addr_type == CAMERA_SENSOR_I2C_TYPE_WORD) {
+ buf[0] = reg_setting->reg_addr >> 8;
+ buf[1] = reg_setting->reg_addr;
+ CAM_DBG(CAM_SENSOR, "byte %d: 0x%x", len, buf[len]);
+ CAM_DBG(CAM_SENSOR, "byte %d: 0x%x", len+1, buf[len+1]);
+ len = 2;
+ } else if (addr_type == CAMERA_SENSOR_I2C_TYPE_3B) {
+ buf[0] = reg_setting->reg_addr >> 16;
+ buf[1] = reg_setting->reg_addr >> 8;
+ buf[2] = reg_setting->reg_addr;
+ len = 3;
+ } else if (addr_type == CAMERA_SENSOR_I2C_TYPE_DWORD) {
+ buf[0] = reg_setting->reg_addr >> 24;
+ buf[1] = reg_setting->reg_addr >> 16;
+ buf[2] = reg_setting->reg_addr >> 8;
+ buf[3] = reg_setting->reg_addr;
+ len = 4;
+ } else {
+ CAM_ERR(CAM_SENSOR, "Invalid I2C addr type");
+ rc = -EINVAL;
+ goto free_res;
+ }
+
+ for (i = 0; i < write_setting->size; i++) {
+ if (data_type == CAMERA_SENSOR_I2C_TYPE_BYTE) {
+ buf[len] = reg_setting->reg_data;
+ CAM_DBG(CAM_SENSOR,
+ "Byte %d: 0x%x", len, buf[len]);
+ len += 1;
+ } else if (data_type == CAMERA_SENSOR_I2C_TYPE_WORD) {
+ buf[len] = reg_setting->reg_data >> 8;
+ buf[len+1] = reg_setting->reg_data;
+ CAM_DBG(CAM_SENSOR,
+ "Byte %d: 0x%x", len, buf[len]);
+ CAM_DBG(CAM_SENSOR,
+ "Byte %d: 0x%x", len+1, buf[len+1]);
+ len += 2;
+ } else if (data_type == CAMERA_SENSOR_I2C_TYPE_3B) {
+ buf[len] = reg_setting->reg_data >> 16;
+ buf[len + 1] = reg_setting->reg_data >> 8;
+ buf[len + 2] = reg_setting->reg_data;
+ CAM_DBG(CAM_SENSOR,
+ "Byte %d: 0x%x", len, buf[len]);
+ CAM_DBG(CAM_SENSOR,
+ "Byte %d: 0x%x", len+1, buf[len+1]);
+ CAM_DBG(CAM_SENSOR,
+ "Byte %d: 0x%x", len+2, buf[len+2]);
+ len += 3;
+ } else if (data_type == CAMERA_SENSOR_I2C_TYPE_DWORD) {
+ buf[len] = reg_setting->reg_data >> 24;
+ buf[len + 1] = reg_setting->reg_data >> 16;
+ buf[len + 2] = reg_setting->reg_data >> 8;
+ buf[len + 3] = reg_setting->reg_data;
+ CAM_DBG(CAM_SENSOR,
+ "Byte %d: 0x%x", len, buf[len]);
+ CAM_DBG(CAM_SENSOR,
+ "Byte %d: 0x%x", len+1, buf[len+1]);
+ CAM_DBG(CAM_SENSOR,
+ "Byte %d: 0x%x", len+2, buf[len+2]);
+ CAM_DBG(CAM_SENSOR,
+ "Byte %d: 0x%x", len+3, buf[len+3]);
+ len += 4;
+ } else {
+ CAM_ERR(CAM_SENSOR, "Invalid Data Type");
+ rc = -EINVAL;
+ goto free_res;
+ }
+ reg_setting++;
+ }
+
+ if (len > (write_setting->addr_type +
+ (write_setting->size * write_setting->data_type))) {
+ CAM_ERR(CAM_SENSOR, "Invalid Length: %u | Expected length: %u",
+ len, (write_setting->addr_type +
+ (write_setting->size * write_setting->data_type)));
+ rc = -EINVAL;
+ goto free_res;
+ }
+
+ rc = cam_qup_i2c_txdata(client, buf, len);
+ if (rc < 0)
+ CAM_ERR(CAM_SENSOR, "failed rc: %d", rc);
+
+free_res:
+ kfree(buf);
+ return rc;
+}
+
+int32_t cam_qup_i2c_write_continuous_table(struct camera_io_master *client,
+ struct cam_sensor_i2c_reg_setting *write_settings,
+ uint8_t cam_sensor_i2c_write_flag)
+{
+ int32_t rc = 0;
+
+ if (!client || !write_settings)
+ return -EINVAL;
+
+ if ((write_settings->addr_type <= CAMERA_SENSOR_I2C_TYPE_INVALID
+ || write_settings->addr_type >= CAMERA_SENSOR_I2C_TYPE_MAX
+ || (write_settings->data_type <= CAMERA_SENSOR_I2C_TYPE_INVALID
+ || write_settings->data_type >= CAMERA_SENSOR_I2C_TYPE_MAX)))
+ return -EINVAL;
+
+ if (cam_sensor_i2c_write_flag == CAM_SENSOR_I2C_WRITE_BURST)
+ rc = cam_qup_i2c_write_burst(client, write_settings);
+ else if (cam_sensor_i2c_write_flag == CAM_SENSOR_I2C_WRITE_SEQ)
+ rc = cam_qup_i2c_write_seq(client, write_settings);
+
+ return rc;
+}
diff --git a/drivers/media/platform/msm/camera/cam_sensor_module/cam_sensor_utils/cam_sensor_util.c b/drivers/media/platform/msm/camera/cam_sensor_module/cam_sensor_utils/cam_sensor_util.c
index b3de092..37784b4 100644
--- a/drivers/media/platform/msm/camera/cam_sensor_module/cam_sensor_utils/cam_sensor_util.c
+++ b/drivers/media/platform/msm/camera/cam_sensor_module/cam_sensor_utils/cam_sensor_util.c
@@ -231,6 +231,8 @@ static int32_t cam_sensor_handle_continuous_write(
cam_cmd_i2c_continuous_wr->header.addr_type;
i2c_list->i2c_settings.data_type =
cam_cmd_i2c_continuous_wr->header.data_type;
+ i2c_list->i2c_settings.size =
+ cam_cmd_i2c_continuous_wr->header.count;
for (cnt = 0; cnt < (cam_cmd_i2c_continuous_wr->header.count);
cnt++) {
@@ -1179,12 +1181,6 @@ int msm_camera_pinctrl_init(
return -EINVAL;
}
- if (cam_res_mgr_shared_pinctrl_init()) {
- CAM_ERR(CAM_SENSOR,
- "Failed to init shared pinctrl");
- return -EINVAL;
- }
-
return 0;
}
@@ -1235,6 +1231,9 @@ int cam_sensor_core_power_up(struct cam_sensor_power_ctrl_t *ctrl,
return -EINVAL;
}
+ if (soc_info->use_shared_clk)
+ cam_res_mgr_shared_clk_config(true);
+
ret = msm_camera_pinctrl_init(&(ctrl->pinctrl_info), ctrl->dev);
if (ret < 0) {
/* Some sensor subdev no pinctrl. */
@@ -1244,6 +1243,12 @@ int cam_sensor_core_power_up(struct cam_sensor_power_ctrl_t *ctrl,
ctrl->cam_pinctrl_status = 1;
}
+ if (cam_res_mgr_shared_pinctrl_init()) {
+ CAM_ERR(CAM_SENSOR,
+ "Failed to init shared pinctrl");
+ return -EINVAL;
+ }
+
rc = cam_sensor_util_request_gpio_table(soc_info, 1);
if (rc < 0)
no_gpio = rc;
@@ -1254,18 +1259,13 @@ int cam_sensor_core_power_up(struct cam_sensor_power_ctrl_t *ctrl,
ctrl->pinctrl_info.gpio_state_active);
if (ret)
CAM_ERR(CAM_SENSOR, "cannot set pin to active state");
-
- ret = cam_res_mgr_shared_pinctrl_select_state(true);
- if (ret)
- CAM_ERR(CAM_SENSOR,
- "Cannot set shared pin to active state");
-
- ret = cam_res_mgr_shared_pinctrl_post_init();
- if (ret)
- CAM_ERR(CAM_SENSOR,
- "Failed to post init shared pinctrl");
}
+ ret = cam_res_mgr_shared_pinctrl_select_state(true);
+ if (ret)
+ CAM_ERR(CAM_SENSOR,
+ "Cannot set shared pin to active state");
+
for (index = 0; index < ctrl->power_setting_size; index++) {
CAM_DBG(CAM_SENSOR, "index: %d", index);
power_setting = &ctrl->power_setting[index];
@@ -1427,6 +1427,11 @@ int cam_sensor_core_power_up(struct cam_sensor_power_ctrl_t *ctrl,
(power_setting->delay * 1000) + 1000);
}
+ ret = cam_res_mgr_shared_pinctrl_post_init();
+ if (ret)
+ CAM_ERR(CAM_SENSOR,
+ "Failed to post init shared pinctrl");
+
return 0;
power_up_failed:
CAM_ERR(CAM_SENSOR, "failed");
@@ -1492,6 +1497,7 @@ int cam_sensor_core_power_up(struct cam_sensor_power_ctrl_t *ctrl,
(power_setting->delay * 1000) + 1000);
}
}
+
if (ctrl->cam_pinctrl_status) {
ret = pinctrl_select_state(
ctrl->pinctrl_info.pinctrl,
@@ -1502,6 +1508,10 @@ int cam_sensor_core_power_up(struct cam_sensor_power_ctrl_t *ctrl,
pinctrl_put(ctrl->pinctrl_info.pinctrl);
cam_res_mgr_shared_pinctrl_put();
}
+
+ if (soc_info->use_shared_clk)
+ cam_res_mgr_shared_clk_config(false);
+
ctrl->cam_pinctrl_status = 0;
cam_sensor_util_request_gpio_table(soc_info, 0);
@@ -1698,6 +1708,9 @@ int msm_camera_power_down(struct cam_sensor_power_ctrl_t *ctrl,
cam_res_mgr_shared_pinctrl_put();
}
+ if (soc_info->use_shared_clk)
+ cam_res_mgr_shared_clk_config(false);
+
ctrl->cam_pinctrl_status = 0;
cam_sensor_util_request_gpio_table(soc_info, 0);
diff --git a/drivers/media/platform/msm/camera/cam_sync/cam_sync.c b/drivers/media/platform/msm/camera/cam_sync/cam_sync.c
index 2422016..ae9f74c 100644
--- a/drivers/media/platform/msm/camera/cam_sync/cam_sync.c
+++ b/drivers/media/platform/msm/camera/cam_sync/cam_sync.c
@@ -55,7 +55,6 @@ int cam_sync_register_callback(sync_callback cb_func,
{
struct sync_callback_info *sync_cb;
struct sync_callback_info *cb_info;
- struct sync_callback_info *temp_cb;
struct sync_table_row *row = NULL;
if (sync_obj >= CAM_SYNC_MAX_OBJS || sync_obj <= 0 || !cb_func)
@@ -72,6 +71,17 @@ int cam_sync_register_callback(sync_callback cb_func,
return -EINVAL;
}
+ /* Don't register if callback was registered earlier */
+ list_for_each_entry(cb_info, &row->callback_list, list) {
+ if (cb_info->callback_func == cb_func &&
+ cb_info->cb_data == userdata) {
+ CAM_ERR(CAM_SYNC, "Duplicate register for sync_obj %d",
+ sync_obj);
+ spin_unlock_bh(&sync_dev->row_spinlocks[sync_obj]);
+ return -EALREADY;
+ }
+ }
+
sync_cb = kzalloc(sizeof(*sync_cb), GFP_ATOMIC);
if (!sync_cb) {
spin_unlock_bh(&sync_dev->row_spinlocks[sync_obj]);
@@ -86,7 +96,6 @@ int cam_sync_register_callback(sync_callback cb_func,
sync_cb->sync_obj = sync_obj;
INIT_WORK(&sync_cb->cb_dispatch_work,
cam_sync_util_cb_dispatch);
- list_add_tail(&sync_cb->list, &row->callback_list);
sync_cb->status = row->state;
queue_work(sync_dev->work_queue,
&sync_cb->cb_dispatch_work);
@@ -95,16 +104,6 @@ int cam_sync_register_callback(sync_callback cb_func,
return 0;
}
- /* Don't register if callback was registered earlier */
- list_for_each_entry_safe(cb_info, temp_cb, &row->callback_list, list) {
- if (cb_info->callback_func == cb_func &&
- cb_info->cb_data == userdata) {
- kfree(sync_cb);
- spin_unlock_bh(&sync_dev->row_spinlocks[sync_obj]);
- return -EALREADY;
- }
- }
-
sync_cb->callback_func = cb_func;
sync_cb->cb_data = userdata;
sync_cb->sync_obj = sync_obj;
@@ -230,12 +229,16 @@ int cam_sync_signal(int32_t sync_obj, uint32_t status)
spin_unlock_bh(
&sync_dev->row_spinlocks[
parent_info->sync_id]);
+ spin_unlock_bh(
+ &sync_dev->row_spinlocks[sync_obj]);
return rc;
}
}
spin_unlock_bh(&sync_dev->row_spinlocks[parent_info->sync_id]);
}
+ spin_unlock_bh(&sync_dev->row_spinlocks[sync_obj]);
+
/*
* Now dispatch the various sync objects collected so far, in our
* list
@@ -249,17 +252,20 @@ int cam_sync_signal(int32_t sync_obj, uint32_t status)
struct sync_user_payload *temp_payload_info;
signalable_row = sync_dev->sync_table + list_info->sync_obj;
+
+ spin_lock_bh(&sync_dev->row_spinlocks[list_info->sync_obj]);
/* Dispatch kernel callbacks if any were registered earlier */
list_for_each_entry_safe(sync_cb,
- temp_sync_cb, &signalable_row->callback_list, list) {
+ temp_sync_cb, &signalable_row->callback_list, list) {
sync_cb->status = list_info->status;
+ list_del_init(&sync_cb->list);
queue_work(sync_dev->work_queue,
&sync_cb->cb_dispatch_work);
}
/* Dispatch user payloads if any were registered earlier */
list_for_each_entry_safe(payload_info, temp_payload_info,
- &signalable_row->user_payload_list, list) {
+ &signalable_row->user_payload_list, list) {
spin_lock_bh(&sync_dev->cam_sync_eventq_lock);
if (!sync_dev->cam_sync_eventq) {
spin_unlock_bh(
@@ -289,12 +295,12 @@ int cam_sync_signal(int32_t sync_obj, uint32_t status)
*/
complete_all(&signalable_row->signaled);
+ spin_unlock_bh(&sync_dev->row_spinlocks[list_info->sync_obj]);
+
list_del_init(&list_info->list);
kfree(list_info);
}
- spin_unlock_bh(&sync_dev->row_spinlocks[sync_obj]);
-
return rc;
}
@@ -344,25 +350,7 @@ int cam_sync_merge(int32_t *sync_obj, uint32_t num_objs, int32_t *merged_obj)
int cam_sync_destroy(int32_t sync_obj)
{
- struct sync_table_row *row = NULL;
-
- if (sync_obj >= CAM_SYNC_MAX_OBJS || sync_obj <= 0)
- return -EINVAL;
-
- spin_lock_bh(&sync_dev->row_spinlocks[sync_obj]);
- row = sync_dev->sync_table + sync_obj;
- if (row->state == CAM_SYNC_STATE_INVALID) {
- CAM_ERR(CAM_SYNC,
- "Error: accessing an uninitialized sync obj: idx = %d",
- sync_obj);
- spin_unlock_bh(&sync_dev->row_spinlocks[sync_obj]);
- return -EINVAL;
- }
-
- cam_sync_deinit_object(sync_dev->sync_table, sync_obj);
- spin_unlock_bh(&sync_dev->row_spinlocks[sync_obj]);
-
- return 0;
+ return cam_sync_deinit_object(sync_dev->sync_table, sync_obj);
}
int cam_sync_wait(int32_t sync_obj, uint64_t timeout_ms)
diff --git a/drivers/media/platform/msm/camera/cam_sync/cam_sync_private.h b/drivers/media/platform/msm/camera/cam_sync/cam_sync_private.h
index ba9bef4..5ae707a 100644
--- a/drivers/media/platform/msm/camera/cam_sync/cam_sync_private.h
+++ b/drivers/media/platform/msm/camera/cam_sync/cam_sync_private.h
@@ -55,6 +55,18 @@ enum sync_type {
};
/**
+ * enum sync_list_clean_type - Enum to indicate the type of list clean action
+ * to be performed, i.e. clean a specific sync ID or all sync IDs in the list.
+ *
+ * @SYNC_LIST_CLEAN_ONE : Clean a specific object in the list
+ * @SYNC_LIST_CLEAN_ALL : Clean all objects in the list
+ */
+enum sync_list_clean_type {
+ SYNC_LIST_CLEAN_ONE,
+ SYNC_LIST_CLEAN_ALL
+};
+
+/**
* struct sync_parent_info - Single node of information about a parent
* of a sync object, usually part of the parents linked list
*
diff --git a/drivers/media/platform/msm/camera/cam_sync/cam_sync_util.c b/drivers/media/platform/msm/camera/cam_sync/cam_sync_util.c
index f66b882..826253c 100644
--- a/drivers/media/platform/msm/camera/cam_sync/cam_sync_util.c
+++ b/drivers/media/platform/msm/camera/cam_sync/cam_sync_util.c
@@ -144,8 +144,8 @@ int cam_sync_init_group_object(struct sync_table_row *table,
child_info = kzalloc(sizeof(*child_info), GFP_ATOMIC);
if (!child_info) {
- cam_sync_util_cleanup_children_list(
- &row->children_list);
+ cam_sync_util_cleanup_children_list(row,
+ SYNC_LIST_CLEAN_ALL, 0);
return -ENOMEM;
}
@@ -159,10 +159,10 @@ int cam_sync_init_group_object(struct sync_table_row *table,
spin_lock_bh(&sync_dev->row_spinlocks[sync_objs[i]]);
parent_info = kzalloc(sizeof(*parent_info), GFP_ATOMIC);
if (!parent_info) {
- cam_sync_util_cleanup_parents_list(
- &child_row->parents_list);
- cam_sync_util_cleanup_children_list(
- &row->children_list);
+ cam_sync_util_cleanup_parents_list(child_row,
+ SYNC_LIST_CLEAN_ALL, 0);
+ cam_sync_util_cleanup_children_list(row,
+ SYNC_LIST_CLEAN_ALL, 0);
spin_unlock_bh(&sync_dev->row_spinlocks[sync_objs[i]]);
return -ENOMEM;
}
@@ -196,42 +196,117 @@ int cam_sync_init_group_object(struct sync_table_row *table,
int cam_sync_deinit_object(struct sync_table_row *table, uint32_t idx)
{
- struct sync_table_row *row = table + idx;
- struct sync_child_info *child_info, *temp_child;
- struct sync_callback_info *sync_cb, *temp_cb;
- struct sync_parent_info *parent_info, *temp_parent;
- struct sync_user_payload *upayload_info, *temp_upayload;
+ struct sync_table_row *row = table + idx;
+ struct sync_child_info *child_info, *temp_child;
+ struct sync_callback_info *sync_cb, *temp_cb;
+ struct sync_parent_info *parent_info, *temp_parent;
+ struct sync_user_payload *upayload_info, *temp_upayload;
+ struct sync_table_row *child_row = NULL, *parent_row = NULL;
+ struct list_head temp_child_list, temp_parent_list;
if (!table || idx <= 0 || idx >= CAM_SYNC_MAX_OBJS)
return -EINVAL;
- clear_bit(idx, sync_dev->bitmap);
- list_for_each_entry_safe(child_info, temp_child,
- &row->children_list, list) {
+ spin_lock_bh(&sync_dev->row_spinlocks[idx]);
+ if (row->state == CAM_SYNC_STATE_INVALID) {
+ CAM_ERR(CAM_SYNC,
+ "Error: accessing an uninitialized sync obj: idx = %d",
+ idx);
+ spin_unlock_bh(&sync_dev->row_spinlocks[idx]);
+ return -EINVAL;
+ }
+
+ /* The object's child and parent entries will be moved onto these lists */
+ INIT_LIST_HEAD(&temp_child_list);
+ INIT_LIST_HEAD(&temp_parent_list);
+
+ list_for_each_entry_safe(child_info, temp_child, &row->children_list,
+ list) {
+ if (child_info->sync_id <= 0)
+ continue;
+
+ list_del_init(&child_info->list);
+ list_add_tail(&child_info->list, &temp_child_list);
+ }
+
+ list_for_each_entry_safe(parent_info, temp_parent, &row->parents_list,
+ list) {
+ if (parent_info->sync_id <= 0)
+ continue;
+
+ list_del_init(&parent_info->list);
+ list_add_tail(&parent_info->list, &temp_parent_list);
+ }
+
+ spin_unlock_bh(&sync_dev->row_spinlocks[idx]);
+
+ /* Cleanup the child to parent link from child list */
+ while (!list_empty(&temp_child_list)) {
+ child_info = list_first_entry(&temp_child_list,
+ struct sync_child_info, list);
+ child_row = sync_dev->sync_table + child_info->sync_id;
+
+ spin_lock_bh(&sync_dev->row_spinlocks[child_info->sync_id]);
+
+ if (child_row->state == CAM_SYNC_STATE_INVALID) {
+ spin_unlock_bh(&sync_dev->row_spinlocks[
+ child_info->sync_id]);
+ list_del_init(&child_info->list);
+ kfree(child_info);
+ continue;
+ }
+
+ cam_sync_util_cleanup_parents_list(child_row,
+ SYNC_LIST_CLEAN_ONE, idx);
+
+ spin_unlock_bh(&sync_dev->row_spinlocks[child_info->sync_id]);
+
list_del_init(&child_info->list);
kfree(child_info);
}
- list_for_each_entry_safe(parent_info, temp_parent,
- &row->parents_list, list) {
+ /* Cleanup the parent to child link */
+ while (!list_empty(&temp_parent_list)) {
+ parent_info = list_first_entry(&temp_parent_list,
+ struct sync_parent_info, list);
+ parent_row = sync_dev->sync_table + parent_info->sync_id;
+
+ spin_lock_bh(&sync_dev->row_spinlocks[parent_info->sync_id]);
+
+ if (parent_row->state == CAM_SYNC_STATE_INVALID) {
+ spin_unlock_bh(&sync_dev->row_spinlocks[
+ parent_info->sync_id]);
+ list_del_init(&parent_info->list);
+ kfree(parent_info);
+ continue;
+ }
+
+ cam_sync_util_cleanup_children_list(parent_row,
+ SYNC_LIST_CLEAN_ONE, idx);
+
+ spin_unlock_bh(&sync_dev->row_spinlocks[parent_info->sync_id]);
+
list_del_init(&parent_info->list);
kfree(parent_info);
}
+ spin_lock_bh(&sync_dev->row_spinlocks[idx]);
list_for_each_entry_safe(upayload_info, temp_upayload,
- &row->user_payload_list, list) {
+ &row->user_payload_list, list) {
list_del_init(&upayload_info->list);
kfree(upayload_info);
}
list_for_each_entry_safe(sync_cb, temp_cb,
- &row->callback_list, list) {
+ &row->callback_list, list) {
list_del_init(&sync_cb->list);
kfree(sync_cb);
}
row->state = CAM_SYNC_STATE_INVALID;
memset(row, 0, sizeof(*row));
+ clear_bit(idx, sync_dev->bitmap);
+ spin_unlock_bh(&sync_dev->row_spinlocks[idx]);
return 0;
}
@@ -242,10 +317,6 @@ void cam_sync_util_cb_dispatch(struct work_struct *cb_dispatch_work)
struct sync_callback_info,
cb_dispatch_work);
- spin_lock_bh(&sync_dev->row_spinlocks[cb_info->sync_obj]);
- list_del_init(&cb_info->list);
- spin_unlock_bh(&sync_dev->row_spinlocks[cb_info->sync_obj]);
-
cb_info->callback_func(cb_info->sync_obj,
cb_info->status,
cb_info->cb_data);
@@ -350,26 +421,48 @@ int cam_sync_util_get_state(int current_state,
return result;
}
-void cam_sync_util_cleanup_children_list(struct list_head *list_to_clean)
+void cam_sync_util_cleanup_children_list(struct sync_table_row *row,
+ uint32_t list_clean_type, uint32_t sync_obj)
{
struct sync_child_info *child_info = NULL;
struct sync_child_info *temp_child_info = NULL;
+ uint32_t curr_sync_obj;
list_for_each_entry_safe(child_info,
- temp_child_info, list_to_clean, list) {
+ temp_child_info, &row->children_list, list) {
+ if ((list_clean_type == SYNC_LIST_CLEAN_ONE) &&
+ (child_info->sync_id != sync_obj))
+ continue;
+
+ curr_sync_obj = child_info->sync_id;
list_del_init(&child_info->list);
kfree(child_info);
+
+ if ((list_clean_type == SYNC_LIST_CLEAN_ONE) &&
+ (curr_sync_obj == sync_obj))
+ break;
}
}
-void cam_sync_util_cleanup_parents_list(struct list_head *list_to_clean)
+void cam_sync_util_cleanup_parents_list(struct sync_table_row *row,
+ uint32_t list_clean_type, uint32_t sync_obj)
{
struct sync_parent_info *parent_info = NULL;
struct sync_parent_info *temp_parent_info = NULL;
+ uint32_t curr_sync_obj;
list_for_each_entry_safe(parent_info,
- temp_parent_info, list_to_clean, list) {
+ temp_parent_info, &row->parents_list, list) {
+ if ((list_clean_type == SYNC_LIST_CLEAN_ONE) &&
+ (parent_info->sync_id != sync_obj))
+ continue;
+
+ curr_sync_obj = parent_info->sync_id;
list_del_init(&parent_info->list);
kfree(parent_info);
+
+ if ((list_clean_type == SYNC_LIST_CLEAN_ONE) &&
+ (curr_sync_obj == sync_obj))
+ break;
}
}
diff --git a/drivers/media/platform/msm/camera/cam_sync/cam_sync_util.h b/drivers/media/platform/msm/camera/cam_sync/cam_sync_util.h
index 8b60ce1..ae7d542 100644
--- a/drivers/media/platform/msm/camera/cam_sync/cam_sync_util.h
+++ b/drivers/media/platform/msm/camera/cam_sync/cam_sync_util.h
@@ -140,18 +140,26 @@ int cam_sync_util_get_state(int current_state,
/**
* @brief: Function to clean up the children of a sync object
- * @param list_to_clean : List to clean up
+ * @row : Row whose child list to clean
+ * @list_clean_type : Clean specific object or clean all objects
+ * @sync_obj : Sync object to be cleaned when list_clean_type is
+ * SYNC_LIST_CLEAN_ONE
*
* @return None
*/
-void cam_sync_util_cleanup_children_list(struct list_head *list_to_clean);
+void cam_sync_util_cleanup_children_list(struct sync_table_row *row,
+ uint32_t list_clean_type, uint32_t sync_obj);
/**
* @brief: Function to clean up the parents of a sync object
- * @param list_to_clean : List to clean up
+ * @row : Row whose parent list to clean
+ * @list_clean_type : Clean specific object or clean all objects
+ * @sync_obj : Sync object to be cleaned when list_clean_type is
+ * SYNC_LIST_CLEAN_ONE
*
* @return None
*/
-void cam_sync_util_cleanup_parents_list(struct list_head *list_to_clean);
+void cam_sync_util_cleanup_parents_list(struct sync_table_row *row,
+ uint32_t list_clean_type, uint32_t sync_obj);
#endif /* __CAM_SYNC_UTIL_H__ */
diff --git a/drivers/media/platform/msm/camera/cam_utils/cam_soc_util.c b/drivers/media/platform/msm/camera/cam_utils/cam_soc_util.c
index 611c4e9..07fb944 100644
--- a/drivers/media/platform/msm/camera/cam_utils/cam_soc_util.c
+++ b/drivers/media/platform/msm/camera/cam_utils/cam_soc_util.c
@@ -410,6 +410,13 @@ static int cam_soc_util_get_dt_clk_info(struct cam_hw_soc_info *soc_info)
of_node = soc_info->dev->of_node;
+ if (!of_property_read_bool(of_node, "use-shared-clk")) {
+ CAM_DBG(CAM_UTIL, "No shared clk parameter defined");
+ soc_info->use_shared_clk = false;
+ } else {
+ soc_info->use_shared_clk = true;
+ }
+
count = of_property_count_strings(of_node, "clock-names");
CAM_DBG(CAM_UTIL, "count = %d", count);
diff --git a/drivers/media/platform/msm/camera/cam_utils/cam_soc_util.h b/drivers/media/platform/msm/camera/cam_utils/cam_soc_util.h
index 5123ec4..4a87d50 100644
--- a/drivers/media/platform/msm/camera/cam_utils/cam_soc_util.h
+++ b/drivers/media/platform/msm/camera/cam_utils/cam_soc_util.h
@@ -180,6 +180,7 @@ struct cam_hw_soc_info {
struct regulator *rgltr[CAM_SOC_MAX_REGULATOR];
uint32_t rgltr_delay[CAM_SOC_MAX_REGULATOR];
+ uint32_t use_shared_clk;
uint32_t num_clk;
const char *clk_name[CAM_SOC_MAX_CLK];
struct clk *clk[CAM_SOC_MAX_CLK];
diff --git a/drivers/media/platform/msm/sde/rotator/sde_rotator_base.c b/drivers/media/platform/msm/sde/rotator/sde_rotator_base.c
index dc041a7..749aa7f 100644
--- a/drivers/media/platform/msm/sde/rotator/sde_rotator_base.c
+++ b/drivers/media/platform/msm/sde/rotator/sde_rotator_base.c
@@ -182,7 +182,7 @@ u32 sde_mdp_get_ot_limit(u32 width, u32 height, u32 pixfmt, u32 fps, u32 is_rd)
struct sde_mdp_format_params *fmt;
u32 ot_lim;
u32 is_yuv;
- u32 res;
+ u64 res;
ot_lim = (is_rd) ? mdata->default_ot_rd_limit :
mdata->default_ot_wr_limit;
@@ -198,7 +198,11 @@ u32 sde_mdp_get_ot_limit(u32 width, u32 height, u32 pixfmt, u32 fps, u32 is_rd)
if (false == test_bit(SDE_QOS_OTLIM, mdata->sde_qos_map))
goto exit;
+ width = min_t(u32, width, SDE_ROT_MAX_IMG_WIDTH);
+ height = min_t(u32, height, SDE_ROT_MAX_IMG_HEIGHT);
+
res = width * height;
+ res = res * fps;
fmt = sde_get_format_params(pixfmt);
@@ -209,17 +213,14 @@ u32 sde_mdp_get_ot_limit(u32 width, u32 height, u32 pixfmt, u32 fps, u32 is_rd)
is_yuv = sde_mdp_is_yuv_format(fmt);
- SDEROT_DBG("w:%d h:%d fps:%d pixfmt:%8.8x yuv:%d res:%d rd:%d\n",
+ SDEROT_DBG("w:%d h:%d fps:%d pixfmt:%8.8x yuv:%d res:%llu rd:%d\n",
width, height, fps, pixfmt, is_yuv, res, is_rd);
- if (!is_yuv)
- goto exit;
-
- if ((res <= RES_1080p) && (fps <= 30))
+ if (res <= (RES_1080p * 30))
ot_lim = 2;
- else if ((res <= RES_1080p) && (fps <= 60))
+ else if (res <= (RES_1080p * 60))
ot_lim = 4;
- else if ((res <= RES_UHD) && (fps <= 30))
+ else if (res <= (RES_UHD * 30))
ot_lim = 8;
exit:
diff --git a/drivers/media/platform/msm/sde/rotator/sde_rotator_core.c b/drivers/media/platform/msm/sde/rotator/sde_rotator_core.c
index c7d1074..a455357 100644
--- a/drivers/media/platform/msm/sde/rotator/sde_rotator_core.c
+++ b/drivers/media/platform/msm/sde/rotator/sde_rotator_core.c
@@ -54,7 +54,7 @@
#define ROT_HW_ACQUIRE_TIMEOUT_IN_MS 100
/* waiting for inline hw start */
-#define ROT_INLINE_START_TIMEOUT_IN_MS 2000
+#define ROT_INLINE_START_TIMEOUT_IN_MS (10000 + 500)
/* default pixel per clock ratio */
#define ROT_PIXEL_PER_CLK_NUMERATOR 36
diff --git a/drivers/media/platform/msm/sde/rotator/sde_rotator_core.h b/drivers/media/platform/msm/sde/rotator/sde_rotator_core.h
index e23ed7a..8421873 100644
--- a/drivers/media/platform/msm/sde/rotator/sde_rotator_core.h
+++ b/drivers/media/platform/msm/sde/rotator/sde_rotator_core.h
@@ -761,6 +761,15 @@ int sde_rotator_validate_request(struct sde_rot_mgr *rot_dev,
*/
int sde_rotator_clk_ctrl(struct sde_rot_mgr *mgr, int enable);
+/*
+ * sde_rotator_resource_ctrl_enabled - check if resource control is enabled
+ * @mgr: Pointer to rotator manager
+ * Return: true if enabled; false otherwise
+ */
+static inline int sde_rotator_resource_ctrl_enabled(struct sde_rot_mgr *mgr)
+{
+ return mgr->regulator_enable;
+}
+
/*
* sde_rotator_cancel_all_requests - cancel all outstanding requests
* @mgr: Pointer to rotator manager
diff --git a/drivers/media/platform/msm/sde/rotator/sde_rotator_debug.c b/drivers/media/platform/msm/sde/rotator/sde_rotator_debug.c
index b9158e1..fb74dab 100644
--- a/drivers/media/platform/msm/sde/rotator/sde_rotator_debug.c
+++ b/drivers/media/platform/msm/sde/rotator/sde_rotator_debug.c
@@ -1208,18 +1208,29 @@ static ssize_t sde_rotator_debug_base_reg_write(struct file *file,
mutex_lock(&dbg->buflock);
/* Enable Clock for register access */
+ sde_rot_mgr_lock(dbg->mgr);
+ if (!sde_rotator_resource_ctrl_enabled(dbg->mgr)) {
+ SDEROT_WARN("resource ctrl is not enabled\n");
+ sde_rot_mgr_unlock(dbg->mgr);
+ goto debug_write_error;
+ }
sde_rotator_clk_ctrl(dbg->mgr, true);
writel_relaxed(data, dbg->base + off);
/* Disable Clock after register access */
sde_rotator_clk_ctrl(dbg->mgr, false);
+ sde_rot_mgr_unlock(dbg->mgr);
mutex_unlock(&dbg->buflock);
SDEROT_DBG("addr=%zx data=%x\n", off, data);
return count;
+
+debug_write_error:
+ mutex_unlock(&dbg->buflock);
+ return 0;
}
static ssize_t sde_rotator_debug_base_reg_read(struct file *file,
@@ -1257,6 +1268,12 @@ static ssize_t sde_rotator_debug_base_reg_read(struct file *file,
tot = 0;
/* Enable clock for register access */
+ sde_rot_mgr_lock(dbg->mgr);
+ if (!sde_rotator_resource_ctrl_enabled(dbg->mgr)) {
+ SDEROT_WARN("resource ctrl is not enabled\n");
+ sde_rot_mgr_unlock(dbg->mgr);
+ goto debug_read_error;
+ }
sde_rotator_clk_ctrl(dbg->mgr, true);
for (cnt = dbg->cnt; cnt > 0; cnt -= ROW_BYTES) {
@@ -1276,6 +1293,7 @@ static ssize_t sde_rotator_debug_base_reg_read(struct file *file,
}
/* Disable clock after register access */
sde_rotator_clk_ctrl(dbg->mgr, false);
+ sde_rot_mgr_unlock(dbg->mgr);
dbg->buf_len = tot;
}
diff --git a/drivers/media/platform/msm/sde/rotator/sde_rotator_dev.c b/drivers/media/platform/msm/sde/rotator/sde_rotator_dev.c
index 22d46f5..dd0c04d 100644
--- a/drivers/media/platform/msm/sde/rotator/sde_rotator_dev.c
+++ b/drivers/media/platform/msm/sde/rotator/sde_rotator_dev.c
@@ -1201,6 +1201,8 @@ static void sde_rotator_retire_request(struct sde_rotator_request *request)
list_add_tail(&request->list, &ctx->retired_list);
spin_unlock(&ctx->list_lock);
+ wake_up(&ctx->wait_queue);
+
SDEROT_DBG("retire request s:%d.%d\n",
ctx->session_id, ctx->retired_sequence_id);
}
@@ -1440,6 +1442,61 @@ int sde_rotator_inline_get_pixfmt_caps(struct platform_device *pdev,
EXPORT_SYMBOL(sde_rotator_inline_get_pixfmt_caps);
/*
+ * _sde_rotator_inline_cleanup - perform inline related request cleanup
+ * This function assumes rot_dev->mgr lock has been taken when called.
+ * @handle: Pointer to rotator context
+ * @request: Pointer to rotation request
+ * Return: 0 on success; -EAGAIN if cleanup should be retried
+ */
+static int _sde_rotator_inline_cleanup(void *handle,
+ struct sde_rotator_request *request)
+{
+ struct sde_rotator_ctx *ctx;
+ struct sde_rotator_device *rot_dev;
+ int ret;
+
+ if (!handle || !request) {
+ SDEROT_ERR("invalid rotator handle/request\n");
+ return -EINVAL;
+ }
+
+ ctx = handle;
+ rot_dev = ctx->rot_dev;
+
+ if (!rot_dev || !rot_dev->mgr) {
+ SDEROT_ERR("invalid rotator device\n");
+ return -EINVAL;
+ }
+
+ if (request->committed) {
+ /* wait until request is finished */
+ sde_rot_mgr_unlock(rot_dev->mgr);
+ mutex_unlock(&rot_dev->lock);
+ ret = wait_event_timeout(ctx->wait_queue,
+ sde_rotator_is_request_retired(request),
+ msecs_to_jiffies(rot_dev->streamoff_timeout));
+ mutex_lock(&rot_dev->lock);
+ sde_rot_mgr_lock(rot_dev->mgr);
+
+ if (!ret) {
+ SDEROT_ERR("timeout w/o retire s:%d\n",
+ ctx->session_id);
+ SDEROT_EVTLOG(ctx->session_id, SDE_ROT_EVTLOG_ERROR);
+ sde_rotator_abort_inline_request(rot_dev->mgr,
+ ctx->private, request->req);
+ return -EAGAIN;
+ } else if (ret == 1) {
+ SDEROT_ERR("timeout w/ retire s:%d\n", ctx->session_id);
+ SDEROT_EVTLOG(ctx->session_id, SDE_ROT_EVTLOG_ERROR);
+ }
+ }
+
+ sde_rotator_req_finish(rot_dev->mgr, ctx->private, request->req);
+ sde_rotator_retire_request(request);
+ return 0;
+}
+
+/*
* sde_rotator_inline_commit - commit given rotator command
* @handle: Pointer to rotator context
* @cmd: Pointer to rotator command
@@ -1466,7 +1523,7 @@ int sde_rotator_inline_commit(void *handle, struct sde_rotator_inline_cmd *cmd,
ctx = handle;
rot_dev = ctx->rot_dev;
- if (!rot_dev) {
+ if (!rot_dev || !rot_dev->mgr) {
SDEROT_ERR("invalid rotator device\n");
return -EINVAL;
}
@@ -1498,6 +1555,7 @@ int sde_rotator_inline_commit(void *handle, struct sde_rotator_inline_cmd *cmd,
(cmd->video_mode << 5) |
(cmd_type << 24));
+ mutex_lock(&rot_dev->lock);
sde_rot_mgr_lock(rot_dev->mgr);
if (cmd_type == SDE_ROTATOR_INLINE_CMD_VALIDATE ||
@@ -1707,30 +1765,12 @@ int sde_rotator_inline_commit(void *handle, struct sde_rotator_inline_cmd *cmd,
}
request = cmd->priv_handle;
- req = request->req;
- if (request->committed) {
- /* wait until request is finished */
- sde_rot_mgr_unlock(rot_dev->mgr);
- ret = wait_event_timeout(ctx->wait_queue,
- sde_rotator_is_request_retired(request),
- msecs_to_jiffies(rot_dev->streamoff_timeout));
- if (!ret) {
- SDEROT_ERR("timeout w/o retire s:%d\n",
- ctx->session_id);
- SDEROT_EVTLOG(ctx->session_id,
- SDE_ROT_EVTLOG_ERROR);
- } else if (ret == 1) {
- SDEROT_ERR("timeout w/ retire s:%d\n",
- ctx->session_id);
- SDEROT_EVTLOG(ctx->session_id,
- SDE_ROT_EVTLOG_ERROR);
- }
- sde_rot_mgr_lock(rot_dev->mgr);
- }
+ /* attempt single retry if first cleanup attempt failed */
+ if (_sde_rotator_inline_cleanup(handle, request) == -EAGAIN)
+ _sde_rotator_inline_cleanup(handle, request);
- sde_rotator_req_finish(rot_dev->mgr, ctx->private, req);
- sde_rotator_retire_request(request);
+ cmd->priv_handle = NULL;
} else if (cmd_type == SDE_ROTATOR_INLINE_CMD_ABORT) {
if (!cmd->priv_handle) {
ret = -EINVAL;
@@ -1739,11 +1779,13 @@ int sde_rotator_inline_commit(void *handle, struct sde_rotator_inline_cmd *cmd,
}
request = cmd->priv_handle;
- sde_rotator_abort_inline_request(rot_dev->mgr,
- ctx->private, request->req);
+ if (!sde_rotator_is_request_retired(request))
+ sde_rotator_abort_inline_request(rot_dev->mgr,
+ ctx->private, request->req);
}
sde_rot_mgr_unlock(rot_dev->mgr);
+ mutex_unlock(&rot_dev->lock);
return 0;
error_handle_request:
@@ -1756,6 +1798,7 @@ int sde_rotator_inline_commit(void *handle, struct sde_rotator_inline_cmd *cmd,
error_invalid_handle:
error_init_request:
sde_rot_mgr_unlock(rot_dev->mgr);
+ mutex_unlock(&rot_dev->lock);
return ret;
}
EXPORT_SYMBOL(sde_rotator_inline_commit);
diff --git a/drivers/media/platform/msm/sde/rotator/sde_rotator_formats.c b/drivers/media/platform/msm/sde/rotator/sde_rotator_formats.c
index 7585a6b..86e63c6 100644
--- a/drivers/media/platform/msm/sde/rotator/sde_rotator_formats.c
+++ b/drivers/media/platform/msm/sde/rotator/sde_rotator_formats.c
@@ -84,6 +84,16 @@
.is_ubwc = isubwc, \
}
+#define FMT_YUV10_COMMON(fmt) \
+ .format = (fmt), \
+ .is_yuv = 1, \
+ .bits = { \
+ [C2_R_Cr] = SDE_COLOR_8BIT, \
+ [C0_G_Y] = SDE_COLOR_8BIT, \
+ [C1_B_Cb] = SDE_COLOR_8BIT, \
+ }, \
+ .alpha_enable = 0
+
#define FMT_YUV_COMMON(fmt) \
.format = (fmt), \
.is_yuv = 1, \
@@ -643,7 +653,7 @@ static struct sde_mdp_format_params sde_mdp_format_map[] = {
0, C2_R_Cr, C1_B_Cb, SDE_MDP_COMPRESS_NONE),
{
- FMT_YUV_COMMON(SDE_PIX_FMT_Y_CBCR_H2V2_P010),
+ FMT_YUV10_COMMON(SDE_PIX_FMT_Y_CBCR_H2V2_P010),
.description = "SDE/Y_CBCR_H2V2_P010",
.flag = 0,
.fetch_planes = SDE_MDP_PLANE_PSEUDO_PLANAR,
@@ -658,6 +668,21 @@ static struct sde_mdp_format_params sde_mdp_format_map[] = {
.is_ubwc = SDE_MDP_COMPRESS_NONE,
},
{
+ FMT_YUV10_COMMON(SDE_PIX_FMT_Y_CBCR_H2V2_P010_VENUS),
+ .description = "SDE/Y_CBCR_H2V2_P010_VENUS",
+ .flag = 0,
+ .fetch_planes = SDE_MDP_PLANE_PSEUDO_PLANAR,
+ .chroma_sample = SDE_MDP_CHROMA_420,
+ .unpack_count = 2,
+ .bpp = 2,
+ .frame_format = SDE_MDP_FMT_LINEAR,
+ .pixel_mode = SDE_MDP_PIXEL_10BIT,
+ .element = { C1_B_Cb, C2_R_Cr },
+ .unpack_tight = 0,
+ .unpack_align_msb = 1,
+ .is_ubwc = SDE_MDP_COMPRESS_NONE,
+ },
+ {
FMT_YUV_COMMON(SDE_PIX_FMT_Y_CBCR_H2V2_TP10),
.description = "SDE/Y_CBCR_H2V2_TP10",
.flag = 0,
diff --git a/drivers/media/platform/msm/sde/rotator/sde_rotator_r3.c b/drivers/media/platform/msm/sde/rotator/sde_rotator_r3.c
index f950de2..01aa1e4 100644
--- a/drivers/media/platform/msm/sde/rotator/sde_rotator_r3.c
+++ b/drivers/media/platform/msm/sde/rotator/sde_rotator_r3.c
@@ -133,6 +133,9 @@
#define SDE_ROTREG_READ(base, off) \
readl_relaxed(base + (off))
+#define SDE_ROTTOP_IN_OFFLINE_MODE(_rottop_op_mode_) \
+ (((_rottop_op_mode_) & ROTTOP_OP_MODE_ROT_OUT_MASK) == 0)
+
static const u32 sde_hw_rotator_v3_inpixfmts[] = {
SDE_PIX_FMT_XRGB_8888,
SDE_PIX_FMT_ARGB_8888,
@@ -309,6 +312,7 @@ static const u32 sde_hw_rotator_v4_inpixfmts[] = {
SDE_PIX_FMT_RGBA_1010102_UBWC,
SDE_PIX_FMT_RGBX_1010102_UBWC,
SDE_PIX_FMT_Y_CBCR_H2V2_P010,
+ SDE_PIX_FMT_Y_CBCR_H2V2_P010_VENUS,
SDE_PIX_FMT_Y_CBCR_H2V2_TP10,
SDE_PIX_FMT_Y_CBCR_H2V2_TP10_UBWC,
SDE_PIX_FMT_Y_CBCR_H2V2_P010_UBWC,
@@ -389,6 +393,7 @@ static const u32 sde_hw_rotator_v4_outpixfmts[] = {
SDE_PIX_FMT_RGBA_1010102_UBWC,
SDE_PIX_FMT_RGBX_1010102_UBWC,
SDE_PIX_FMT_Y_CBCR_H2V2_P010,
+ SDE_PIX_FMT_Y_CBCR_H2V2_P010_VENUS,
SDE_PIX_FMT_Y_CBCR_H2V2_TP10,
SDE_PIX_FMT_Y_CBCR_H2V2_TP10_UBWC,
SDE_PIX_FMT_Y_CBCR_H2V2_P010_UBWC,
@@ -696,7 +701,7 @@ static void sde_hw_rotator_halt_vbif_xin_client(void)
/**
* sde_hw_rotator_reset - Reset rotator hardware
* @rot: pointer to hw rotator
- * @ctx: pointer to current rotator context during the hw hang
+ * @ctx: pointer to current rotator context during the hw hang (optional)
*/
static int sde_hw_rotator_reset(struct sde_hw_rotator *rot,
struct sde_hw_rotator_context *ctx)
@@ -710,13 +715,8 @@ static int sde_hw_rotator_reset(struct sde_hw_rotator *rot,
int i, j;
unsigned long flags;
- if (!rot || !ctx) {
- SDEROT_ERR("NULL rotator context\n");
- return -EINVAL;
- }
-
- if (ctx->q_id >= ROT_QUEUE_MAX) {
- SDEROT_ERR("context q_id out of range: %d\n", ctx->q_id);
+ if (!rot) {
+ SDEROT_ERR("NULL rotator\n");
return -EINVAL;
}
@@ -728,6 +728,15 @@ static int sde_hw_rotator_reset(struct sde_hw_rotator *rot,
/* halt vbif xin client to ensure no pending transaction */
sde_hw_rotator_halt_vbif_xin_client();
+ /* if no ctx is specified, skip ctx wake up */
+ if (!ctx)
+ return 0;
+
+ if (ctx->q_id >= ROT_QUEUE_MAX) {
+ SDEROT_ERR("context q_id out of range: %d\n", ctx->q_id);
+ return -EINVAL;
+ }
+
spin_lock_irqsave(&rot->rotisr_lock, flags);
/* update timestamp register with current context */
@@ -819,6 +828,11 @@ static void _sde_hw_rotator_dump_status(struct sde_hw_rotator *rot,
SDE_ROTREG_READ(rot->mdss_base,
REGDMA_CSR_REGDMA_FSM_STATE));
+ SDEROT_ERR("rottop: op_mode = %x, status = %x, clk_status = %x\n",
+ SDE_ROTREG_READ(rot->mdss_base, ROTTOP_OP_MODE),
+ SDE_ROTREG_READ(rot->mdss_base, ROTTOP_STATUS),
+ SDE_ROTREG_READ(rot->mdss_base, ROTTOP_CLK_STATUS));
+
reg = SDE_ROTREG_READ(rot->mdss_base, ROT_SSPP_UBWC_ERROR_STATUS);
if (ubwcerr)
*ubwcerr = reg;
@@ -1622,7 +1636,7 @@ static void sde_hw_rotator_setup_wbengine(struct sde_hw_rotator_context *ctx,
/* use prefill bandwidth instead if specified */
if (cfg->prefill_bw)
- bw = DIV_ROUND_UP(cfg->prefill_bw,
+ bw = DIV_ROUND_UP_SECTOR_T(cfg->prefill_bw,
TRAFFIC_SHAPE_VSYNC_CLK);
if (bw > 0xFF)
@@ -2191,7 +2205,7 @@ void sde_hw_rotator_pre_pmevent(struct sde_rot_mgr *mgr, bool pmon)
{
struct sde_hw_rotator *rot;
u32 l_ts, h_ts, swts, hwts;
- u32 rotsts, regdmasts;
+ u32 rotsts, regdmasts, rotopmode;
/*
* Check last HW timestamp with SW timestamp before power off event.
@@ -2216,19 +2230,37 @@ void sde_hw_rotator_pre_pmevent(struct sde_rot_mgr *mgr, bool pmon)
regdmasts = SDE_ROTREG_READ(rot->mdss_base,
REGDMA_CSR_REGDMA_BLOCK_STATUS);
rotsts = SDE_ROTREG_READ(rot->mdss_base, ROTTOP_STATUS);
+ rotopmode = SDE_ROTREG_READ(rot->mdss_base, ROTTOP_OP_MODE);
SDEROT_DBG(
- "swts:0x%x, hwts:0x%x, regdma-sts:0x%x, rottop-sts:0x%x\n",
- swts, hwts, regdmasts, rotsts);
- SDEROT_EVTLOG(swts, hwts, regdmasts, rotsts);
+ "swts:0x%x, hwts:0x%x, regdma-sts:0x%x, rottop-sts:0x%x, rottop-opmode:0x%x\n",
+ swts, hwts, regdmasts, rotsts, rotopmode);
+ SDEROT_EVTLOG(swts, hwts, regdmasts, rotsts, rotopmode);
if ((swts != hwts) && ((regdmasts & REGDMA_BUSY) ||
(rotsts & ROT_STATUS_MASK))) {
SDEROT_ERR(
"Mismatch SWTS with HWTS: swts:0x%x, hwts:0x%x, regdma-sts:0x%x, rottop-sts:0x%x\n",
swts, hwts, regdmasts, rotsts);
+ _sde_hw_rotator_dump_status(rot, NULL);
SDEROT_EVTLOG_TOUT_HANDLER("rot", "rot_dbg_bus",
"vbif_dbg_bus", "panic");
+ } else if (!SDE_ROTTOP_IN_OFFLINE_MODE(rotopmode) &&
+ ((regdmasts & REGDMA_BUSY) ||
+ (rotsts & ROT_BUSY_BIT))) {
+ /*
+ * rotator can get stuck in inline mode while mdp is detached
+ */
+ SDEROT_WARN(
+ "Inline Rot busy: regdma-sts:0x%x, rottop-sts:0x%x, rottop-opmode:0x%x\n",
+ regdmasts, rotsts, rotopmode);
+ sde_hw_rotator_reset(rot, NULL);
+ } else if ((regdmasts & REGDMA_BUSY) ||
+ (rotsts & ROT_BUSY_BIT)) {
+ _sde_hw_rotator_dump_status(rot, NULL);
+ SDEROT_EVTLOG_TOUT_HANDLER("rot", "rot_dbg_bus",
+ "vbif_dbg_bus", "panic");
+ sde_hw_rotator_reset(rot, NULL);
}
/* Turn off rotator clock after checking rotator registers */
diff --git a/drivers/media/platform/msm/sde/rotator/sde_rotator_r3_hwio.h b/drivers/media/platform/msm/sde/rotator/sde_rotator_r3_hwio.h
index 2afd032..aaaa28c 100644
--- a/drivers/media/platform/msm/sde/rotator/sde_rotator_r3_hwio.h
+++ b/drivers/media/platform/msm/sde/rotator/sde_rotator_r3_hwio.h
@@ -50,6 +50,8 @@
#define ROTTOP_START_CTRL_TRIG_SEL_REGDMA 2
#define ROTTOP_START_CTRL_TRIG_SEL_MDP 3
+#define ROTTOP_OP_MODE_ROT_OUT_MASK (0x3 << 4)
+
/* SDE_ROT_SSPP:
* OFFSET=0x0A8900
*/
diff --git a/drivers/media/platform/msm/sde/rotator/sde_rotator_util.c b/drivers/media/platform/msm/sde/rotator/sde_rotator_util.c
index ac4ab54..6eb2ab2 100644
--- a/drivers/media/platform/msm/sde/rotator/sde_rotator_util.c
+++ b/drivers/media/platform/msm/sde/rotator/sde_rotator_util.c
@@ -350,10 +350,27 @@ int sde_mdp_get_plane_sizes(struct sde_mdp_format_params *fmt, u32 w, u32 h,
ps->plane_size[0] = w * h * bpp;
ps->ystride[0] = w * bpp;
} else if (fmt->format == SDE_PIX_FMT_Y_CBCR_H2V2_VENUS ||
- fmt->format == SDE_PIX_FMT_Y_CRCB_H2V2_VENUS) {
+ fmt->format == SDE_PIX_FMT_Y_CRCB_H2V2_VENUS ||
+ fmt->format == SDE_PIX_FMT_Y_CBCR_H2V2_P010_VENUS) {
- int cf = (fmt->format == SDE_PIX_FMT_Y_CBCR_H2V2_VENUS)
- ? COLOR_FMT_NV12 : COLOR_FMT_NV21;
+ int cf;
+
+ switch (fmt->format) {
+ case SDE_PIX_FMT_Y_CBCR_H2V2_VENUS:
+ cf = COLOR_FMT_NV12;
+ break;
+ case SDE_PIX_FMT_Y_CRCB_H2V2_VENUS:
+ cf = COLOR_FMT_NV21;
+ break;
+ case SDE_PIX_FMT_Y_CBCR_H2V2_P010_VENUS:
+ cf = COLOR_FMT_P010;
+ break;
+ default:
+ SDEROT_ERR("unknown color format %d\n",
+ fmt->format);
+ return -EINVAL;
+ }
+
ps->num_planes = 2;
ps->ystride[0] = VENUS_Y_STRIDE(cf, w);
ps->ystride[1] = VENUS_UV_STRIDE(cf, w);
diff --git a/drivers/media/platform/msm/vidc/governors/msm_vidc_dyn_gov.c b/drivers/media/platform/msm/vidc/governors/msm_vidc_dyn_gov.c
index 83b80d7..cdcfa96 100644
--- a/drivers/media/platform/msm/vidc/governors/msm_vidc_dyn_gov.c
+++ b/drivers/media/platform/msm/vidc/governors/msm_vidc_dyn_gov.c
@@ -312,6 +312,7 @@ static int __bpp(enum hal_uncompressed_format f)
case HAL_COLOR_FORMAT_NV12_UBWC:
return 8;
case HAL_COLOR_FORMAT_NV12_TP10_UBWC:
+ case HAL_COLOR_FORMAT_P010:
return 10;
default:
dprintk(VIDC_ERR,
diff --git a/drivers/media/platform/msm/vidc/msm_vdec.c b/drivers/media/platform/msm/vidc/msm_vdec.c
index 9238176..1c9c91d 100644
--- a/drivers/media/platform/msm/vidc/msm_vdec.c
+++ b/drivers/media/platform/msm/vidc/msm_vdec.c
@@ -491,7 +491,7 @@ struct msm_vidc_format vdec_formats[] = {
{
.name = "YCbCr Semiplanar 4:2:0 10bit",
.description = "Y/CbCr 4:2:0 10bit",
- .fourcc = V4L2_PIX_FMT_SDE_Y_CBCR_H2V2_P010,
+ .fourcc = V4L2_PIX_FMT_SDE_Y_CBCR_H2V2_P010_VENUS,
.get_frame_size = get_frame_size_p010,
.type = CAPTURE_PORT,
},
diff --git a/drivers/media/platform/msm/vidc/msm_venc.c b/drivers/media/platform/msm/vidc/msm_venc.c
index dd62fb7..ba49f24 100644
--- a/drivers/media/platform/msm/vidc/msm_venc.c
+++ b/drivers/media/platform/msm/vidc/msm_venc.c
@@ -1275,7 +1275,7 @@ static struct msm_vidc_format venc_formats[] = {
{
.name = "YCbCr Semiplanar 4:2:0 10bit",
.description = "Y/CbCr 4:2:0 10bit",
- .fourcc = V4L2_PIX_FMT_SDE_Y_CBCR_H2V2_P010,
+ .fourcc = V4L2_PIX_FMT_SDE_Y_CBCR_H2V2_P010_VENUS,
.get_frame_size = get_frame_size_p010,
.type = OUTPUT_PORT,
},
@@ -1868,7 +1868,7 @@ int msm_venc_s_ctrl(struct msm_vidc_inst *inst, struct v4l2_ctrl *ctrl)
break;
case V4L2_CID_MPEG_VIDC_VIDEO_USELTRFRAME:
property_id = HAL_CONFIG_VENC_USELTRFRAME;
- use_ltr.ref_ltr = 0x1 << ctrl->val;
+ use_ltr.ref_ltr = ctrl->val;
use_ltr.use_constraint = false;
use_ltr.frames = 0;
pdata = &use_ltr;
diff --git a/drivers/media/platform/msm/vidc/msm_vidc.c b/drivers/media/platform/msm/vidc/msm_vidc.c
index dabe667..349b982 100644
--- a/drivers/media/platform/msm/vidc/msm_vidc.c
+++ b/drivers/media/platform/msm/vidc/msm_vidc.c
@@ -281,7 +281,7 @@ int msm_vidc_g_fmt(void *instance, struct v4l2_format *f)
case V4L2_PIX_FMT_NV12_TP10_UBWC:
color_format = COLOR_FMT_NV12_BPP10_UBWC;
break;
- case V4L2_PIX_FMT_SDE_Y_CBCR_H2V2_P010:
+ case V4L2_PIX_FMT_SDE_Y_CBCR_H2V2_P010_VENUS:
color_format = COLOR_FMT_P010;
break;
default:
diff --git a/drivers/media/platform/msm/vidc/msm_vidc_clocks.c b/drivers/media/platform/msm/vidc/msm_vidc_clocks.c
index 32b548a..1d22077 100644
--- a/drivers/media/platform/msm/vidc/msm_vidc_clocks.c
+++ b/drivers/media/platform/msm/vidc/msm_vidc_clocks.c
@@ -304,11 +304,18 @@ static inline int get_bufs_outside_fw(struct msm_vidc_inst *inst)
*/
if (inst->session_type == MSM_VIDC_DECODER) {
+ struct vb2_v4l2_buffer *vbuf = NULL;
+
q = &inst->bufq[CAPTURE_PORT].vb2_bufq;
for (i = 0; i < q->num_buffers; i++) {
vb = q->bufs[i];
- if (vb && vb->state != VB2_BUF_STATE_ACTIVE &&
- vb->planes[0].bytesused)
+ if (!vb)
+ continue;
+ vbuf = to_vb2_v4l2_buffer(vb);
+ if (vbuf &&
+ vb->state != VB2_BUF_STATE_ACTIVE &&
+ !(vbuf->flags &
+ V4L2_QCOM_BUF_FLAG_DECODEONLY))
fw_out_qsize++;
}
} else {
diff --git a/drivers/media/platform/msm/vidc/msm_vidc_common.c b/drivers/media/platform/msm/vidc/msm_vidc_common.c
index 9dce3f9..4c000b7 100644
--- a/drivers/media/platform/msm/vidc/msm_vidc_common.c
+++ b/drivers/media/platform/msm/vidc/msm_vidc_common.c
@@ -961,7 +961,7 @@ enum hal_uncompressed_format msm_comm_get_hal_uncompressed(int fourcc)
case V4L2_PIX_FMT_NV12_TP10_UBWC:
format = HAL_COLOR_FORMAT_NV12_TP10_UBWC;
break;
- case V4L2_PIX_FMT_SDE_Y_CBCR_H2V2_P010:
+ case V4L2_PIX_FMT_SDE_Y_CBCR_H2V2_P010_VENUS:
format = HAL_COLOR_FORMAT_P010;
break;
default:
@@ -5761,8 +5761,9 @@ int msm_comm_dqbuf_cache_operations(struct msm_vidc_inst *inst,
skip = true;
} else if (b->type ==
V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE) {
- if (!i) /* yuv */
- skip = true;
+ if (!i) { /* yuv */
+ /* all values are correct */
+ }
}
} else if (inst->session_type == MSM_VIDC_ENCODER) {
if (b->type == V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE) {
diff --git a/drivers/media/platform/msm/vidc/msm_vidc_platform.c b/drivers/media/platform/msm/vidc/msm_vidc_platform.c
index 5e5d030..d7641c3 100644
--- a/drivers/media/platform/msm/vidc/msm_vidc_platform.c
+++ b/drivers/media/platform/msm/vidc/msm_vidc_platform.c
@@ -170,6 +170,10 @@ static struct msm_vidc_common_data sdm670_common_data_v0[] = {
.value = 1,
},
{
+ .key = "qcom,domain-attr-cache-pagetables",
+ .value = 1,
+ },
+ {
.key = "qcom,max-secure-instances",
.value = 5,
},
@@ -217,6 +221,10 @@ static struct msm_vidc_common_data sdm670_common_data_v1[] = {
.value = 1,
},
{
+ .key = "qcom,domain-attr-cache-pagetables",
+ .value = 1,
+ },
+ {
.key = "qcom,max-secure-instances",
.value = 5,
},
diff --git a/drivers/media/rc/imon.c b/drivers/media/rc/imon.c
index 2d4b836..f072bf2 100644
--- a/drivers/media/rc/imon.c
+++ b/drivers/media/rc/imon.c
@@ -2412,6 +2412,11 @@ static int imon_probe(struct usb_interface *interface,
mutex_lock(&driver_lock);
first_if = usb_ifnum_to_if(usbdev, 0);
+ if (!first_if) {
+ ret = -ENODEV;
+ goto fail;
+ }
+
first_if_ctx = usb_get_intfdata(first_if);
if (ifnum == 0) {
diff --git a/drivers/media/usb/cx231xx/cx231xx-core.c b/drivers/media/usb/cx231xx/cx231xx-core.c
index 8b099fe..71b65ab 100644
--- a/drivers/media/usb/cx231xx/cx231xx-core.c
+++ b/drivers/media/usb/cx231xx/cx231xx-core.c
@@ -356,7 +356,12 @@ int cx231xx_send_vendor_cmd(struct cx231xx *dev,
*/
if ((ven_req->wLength > 4) && ((ven_req->bRequest == 0x4) ||
(ven_req->bRequest == 0x5) ||
- (ven_req->bRequest == 0x6))) {
+ (ven_req->bRequest == 0x6) ||
+
+ /* Internal Master 3 Bus can send
+ * and receive only 4 bytes at a time
+ */
+ (ven_req->bRequest == 0x2))) {
unsend_size = 0;
pdata = ven_req->pBuff;
diff --git a/drivers/media/usb/dvb-usb/dib0700_devices.c b/drivers/media/usb/dvb-usb/dib0700_devices.c
index ef1b8ee..caa5540 100644
--- a/drivers/media/usb/dvb-usb/dib0700_devices.c
+++ b/drivers/media/usb/dvb-usb/dib0700_devices.c
@@ -292,7 +292,7 @@ static int stk7700P2_frontend_attach(struct dvb_usb_adapter *adap)
stk7700d_dib7000p_mt2266_config)
!= 0) {
err("%s: state->dib7000p_ops.i2c_enumeration failed. Cannot continue\n", __func__);
- dvb_detach(&state->dib7000p_ops);
+ dvb_detach(state->dib7000p_ops.set_wbd_ref);
return -ENODEV;
}
}
@@ -326,7 +326,7 @@ static int stk7700d_frontend_attach(struct dvb_usb_adapter *adap)
stk7700d_dib7000p_mt2266_config)
!= 0) {
err("%s: state->dib7000p_ops.i2c_enumeration failed. Cannot continue\n", __func__);
- dvb_detach(&state->dib7000p_ops);
+ dvb_detach(state->dib7000p_ops.set_wbd_ref);
return -ENODEV;
}
}
@@ -479,7 +479,7 @@ static int stk7700ph_frontend_attach(struct dvb_usb_adapter *adap)
&stk7700ph_dib7700_xc3028_config) != 0) {
err("%s: state->dib7000p_ops.i2c_enumeration failed. Cannot continue\n",
__func__);
- dvb_detach(&state->dib7000p_ops);
+ dvb_detach(state->dib7000p_ops.set_wbd_ref);
return -ENODEV;
}
@@ -1011,7 +1011,7 @@ static int stk7070p_frontend_attach(struct dvb_usb_adapter *adap)
&dib7070p_dib7000p_config) != 0) {
err("%s: state->dib7000p_ops.i2c_enumeration failed. Cannot continue\n",
__func__);
- dvb_detach(&state->dib7000p_ops);
+ dvb_detach(state->dib7000p_ops.set_wbd_ref);
return -ENODEV;
}
@@ -1069,7 +1069,7 @@ static int stk7770p_frontend_attach(struct dvb_usb_adapter *adap)
&dib7770p_dib7000p_config) != 0) {
err("%s: state->dib7000p_ops.i2c_enumeration failed. Cannot continue\n",
__func__);
- dvb_detach(&state->dib7000p_ops);
+ dvb_detach(state->dib7000p_ops.set_wbd_ref);
return -ENODEV;
}
@@ -3056,7 +3056,7 @@ static int nim7090_frontend_attach(struct dvb_usb_adapter *adap)
if (state->dib7000p_ops.i2c_enumeration(&adap->dev->i2c_adap, 1, 0x10, &nim7090_dib7000p_config) != 0) {
err("%s: state->dib7000p_ops.i2c_enumeration failed. Cannot continue\n", __func__);
- dvb_detach(&state->dib7000p_ops);
+ dvb_detach(state->dib7000p_ops.set_wbd_ref);
return -ENODEV;
}
adap->fe_adap[0].fe = state->dib7000p_ops.init(&adap->dev->i2c_adap, 0x80, &nim7090_dib7000p_config);
@@ -3109,7 +3109,7 @@ static int tfe7090pvr_frontend0_attach(struct dvb_usb_adapter *adap)
/* initialize IC 0 */
if (state->dib7000p_ops.i2c_enumeration(&adap->dev->i2c_adap, 1, 0x20, &tfe7090pvr_dib7000p_config[0]) != 0) {
err("%s: state->dib7000p_ops.i2c_enumeration failed. Cannot continue\n", __func__);
- dvb_detach(&state->dib7000p_ops);
+ dvb_detach(state->dib7000p_ops.set_wbd_ref);
return -ENODEV;
}
@@ -3139,7 +3139,7 @@ static int tfe7090pvr_frontend1_attach(struct dvb_usb_adapter *adap)
i2c = state->dib7000p_ops.get_i2c_master(adap->dev->adapter[0].fe_adap[0].fe, DIBX000_I2C_INTERFACE_GPIO_6_7, 1);
if (state->dib7000p_ops.i2c_enumeration(i2c, 1, 0x10, &tfe7090pvr_dib7000p_config[1]) != 0) {
err("%s: state->dib7000p_ops.i2c_enumeration failed. Cannot continue\n", __func__);
- dvb_detach(&state->dib7000p_ops);
+ dvb_detach(state->dib7000p_ops.set_wbd_ref);
return -ENODEV;
}
@@ -3214,7 +3214,7 @@ static int tfe7790p_frontend_attach(struct dvb_usb_adapter *adap)
1, 0x10, &tfe7790p_dib7000p_config) != 0) {
err("%s: state->dib7000p_ops.i2c_enumeration failed. Cannot continue\n",
__func__);
- dvb_detach(&state->dib7000p_ops);
+ dvb_detach(state->dib7000p_ops.set_wbd_ref);
return -ENODEV;
}
adap->fe_adap[0].fe = state->dib7000p_ops.init(&adap->dev->i2c_adap,
@@ -3309,7 +3309,7 @@ static int stk7070pd_frontend_attach0(struct dvb_usb_adapter *adap)
stk7070pd_dib7000p_config) != 0) {
err("%s: state->dib7000p_ops.i2c_enumeration failed. Cannot continue\n",
__func__);
- dvb_detach(&state->dib7000p_ops);
+ dvb_detach(state->dib7000p_ops.set_wbd_ref);
return -ENODEV;
}
@@ -3384,7 +3384,7 @@ static int novatd_frontend_attach(struct dvb_usb_adapter *adap)
stk7070pd_dib7000p_config) != 0) {
err("%s: state->dib7000p_ops.i2c_enumeration failed. Cannot continue\n",
__func__);
- dvb_detach(&state->dib7000p_ops);
+ dvb_detach(state->dib7000p_ops.set_wbd_ref);
return -ENODEV;
}
}
@@ -3620,7 +3620,7 @@ static int pctv340e_frontend_attach(struct dvb_usb_adapter *adap)
if (state->dib7000p_ops.dib7000pc_detection(&adap->dev->i2c_adap) == 0) {
/* Demodulator not found for some reason? */
- dvb_detach(&state->dib7000p_ops);
+ dvb_detach(state->dib7000p_ops.set_wbd_ref);
return -ENODEV;
}
diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c b/drivers/media/v4l2-core/v4l2-ioctl.c
index 5e7595c..0e48938 100644
--- a/drivers/media/v4l2-core/v4l2-ioctl.c
+++ b/drivers/media/v4l2-core/v4l2-ioctl.c
@@ -1320,6 +1320,8 @@ static void v4l_fill_fmtdesc(struct v4l2_fmtdesc *fmt)
descr = "Y/CbCr 4:2:0 TP10"; break;
case V4L2_PIX_FMT_SDE_Y_CBCR_H2V2_P010:
descr = "Y/CbCr 4:2:0 P10"; break;
+ case V4L2_PIX_FMT_SDE_Y_CBCR_H2V2_P010_VENUS:
+ descr = "Y/CbCr 4:2:0 P10 Venus"; break;
case V4L2_PIX_FMT_NV12_TP10_UBWC:
descr = "Y/CbCr 4:2:0 TP10 UBWC"; break;
case V4L2_PIX_FMT_NV12_P010_UBWC:
diff --git a/drivers/mfd/ab8500-sysctrl.c b/drivers/mfd/ab8500-sysctrl.c
index 207cc49..8062d37 100644
--- a/drivers/mfd/ab8500-sysctrl.c
+++ b/drivers/mfd/ab8500-sysctrl.c
@@ -98,7 +98,7 @@ int ab8500_sysctrl_read(u16 reg, u8 *value)
u8 bank;
if (sysctrl_dev == NULL)
- return -EINVAL;
+ return -EPROBE_DEFER;
bank = (reg >> 8);
if (!valid_bank(bank))
@@ -114,11 +114,13 @@ int ab8500_sysctrl_write(u16 reg, u8 mask, u8 value)
u8 bank;
if (sysctrl_dev == NULL)
- return -EINVAL;
+ return -EPROBE_DEFER;
bank = (reg >> 8);
- if (!valid_bank(bank))
+ if (!valid_bank(bank)) {
+ pr_err("invalid bank\n");
return -EINVAL;
+ }
return abx500_mask_and_set_register_interruptible(sysctrl_dev, bank,
(u8)(reg & 0xFF), mask, value);
@@ -145,9 +147,15 @@ static int ab8500_sysctrl_remove(struct platform_device *pdev)
return 0;
}
+static const struct of_device_id ab8500_sysctrl_match[] = {
+ { .compatible = "stericsson,ab8500-sysctrl", },
+ {}
+};
+
static struct platform_driver ab8500_sysctrl_driver = {
.driver = {
.name = "ab8500-sysctrl",
+ .of_match_table = ab8500_sysctrl_match,
},
.probe = ab8500_sysctrl_probe,
.remove = ab8500_sysctrl_remove,
diff --git a/drivers/mfd/axp20x.c b/drivers/mfd/axp20x.c
index ba130be..9617fc3 100644
--- a/drivers/mfd/axp20x.c
+++ b/drivers/mfd/axp20x.c
@@ -205,14 +205,14 @@ static struct resource axp22x_pek_resources[] = {
static struct resource axp288_power_button_resources[] = {
{
.name = "PEK_DBR",
- .start = AXP288_IRQ_POKN,
- .end = AXP288_IRQ_POKN,
+ .start = AXP288_IRQ_POKP,
+ .end = AXP288_IRQ_POKP,
.flags = IORESOURCE_IRQ,
},
{
.name = "PEK_DBF",
- .start = AXP288_IRQ_POKP,
- .end = AXP288_IRQ_POKP,
+ .start = AXP288_IRQ_POKN,
+ .end = AXP288_IRQ_POKN,
.flags = IORESOURCE_IRQ,
},
};
diff --git a/drivers/misc/cxl/pci.c b/drivers/misc/cxl/pci.c
index fa4fe02..eef202d 100644
--- a/drivers/misc/cxl/pci.c
+++ b/drivers/misc/cxl/pci.c
@@ -1620,6 +1620,9 @@ static void cxl_pci_remove_adapter(struct cxl *adapter)
cxl_sysfs_adapter_remove(adapter);
cxl_debugfs_adapter_remove(adapter);
+ /* Flush adapter datacache as it's about to be removed */
+ cxl_data_cache_flush(adapter);
+
cxl_deconfigure_adapter(adapter);
device_unregister(&adapter->dev);
diff --git a/drivers/misc/mei/client.c b/drivers/misc/mei/client.c
index e2af61f..451d417 100644
--- a/drivers/misc/mei/client.c
+++ b/drivers/misc/mei/client.c
@@ -1320,6 +1320,9 @@ int mei_cl_notify_request(struct mei_cl *cl,
return -EOPNOTSUPP;
}
+ if (!mei_cl_is_connected(cl))
+ return -ENODEV;
+
rets = pm_runtime_get(dev->dev);
if (rets < 0 && rets != -EINPROGRESS) {
pm_runtime_put_noidle(dev->dev);
diff --git a/drivers/misc/panel.c b/drivers/misc/panel.c
index 6030ac5..a9fa4c0 100644
--- a/drivers/misc/panel.c
+++ b/drivers/misc/panel.c
@@ -1423,17 +1423,25 @@ static ssize_t lcd_write(struct file *file,
static int lcd_open(struct inode *inode, struct file *file)
{
- if (!atomic_dec_and_test(&lcd_available))
- return -EBUSY; /* open only once at a time */
+ int ret;
+ ret = -EBUSY;
+ if (!atomic_dec_and_test(&lcd_available))
+ goto fail; /* open only once at a time */
+
+ ret = -EPERM;
if (file->f_mode & FMODE_READ) /* device is write-only */
- return -EPERM;
+ goto fail;
if (lcd.must_clear) {
lcd_clear_display();
lcd.must_clear = false;
}
return nonseekable_open(inode, file);
+
+ fail:
+ atomic_inc(&lcd_available);
+ return ret;
}
static int lcd_release(struct inode *inode, struct file *file)
@@ -1696,14 +1704,21 @@ static ssize_t keypad_read(struct file *file,
static int keypad_open(struct inode *inode, struct file *file)
{
- if (!atomic_dec_and_test(&keypad_available))
- return -EBUSY; /* open only once at a time */
+ int ret;
+ ret = -EBUSY;
+ if (!atomic_dec_and_test(&keypad_available))
+ goto fail; /* open only once at a time */
+
+ ret = -EPERM;
if (file->f_mode & FMODE_WRITE) /* device is read-only */
- return -EPERM;
+ goto fail;
keypad_buflen = 0; /* flush the buffer on opening */
return 0;
+ fail:
+ atomic_inc(&keypad_available);
+ return ret;
}
static int keypad_release(struct inode *inode, struct file *file)
diff --git a/drivers/misc/qseecom.c b/drivers/misc/qseecom.c
index afb8d72..4c4835d 100644
--- a/drivers/misc/qseecom.c
+++ b/drivers/misc/qseecom.c
@@ -1900,20 +1900,22 @@ static int __qseecom_process_blocked_on_listener_legacy(
ptr_app->blocked_on_listener_id = resp->data;
/* sleep until listener is available */
- qseecom.app_block_ref_cnt++;
- ptr_app->app_blocked = true;
- mutex_unlock(&app_access_lock);
- if (wait_event_freezable(
+ do {
+ qseecom.app_block_ref_cnt++;
+ ptr_app->app_blocked = true;
+ mutex_unlock(&app_access_lock);
+ if (wait_event_freezable(
list_ptr->listener_block_app_wq,
!list_ptr->listener_in_use)) {
- pr_err("Interrupted: listener_id %d, app_id %d\n",
+ pr_err("Interrupted: listener_id %d, app_id %d\n",
resp->data, ptr_app->app_id);
- ret = -ERESTARTSYS;
- goto exit;
- }
- mutex_lock(&app_access_lock);
- ptr_app->app_blocked = false;
- qseecom.app_block_ref_cnt--;
+ ret = -ERESTARTSYS;
+ goto exit;
+ }
+ mutex_lock(&app_access_lock);
+ ptr_app->app_blocked = false;
+ qseecom.app_block_ref_cnt--;
+ } while (list_ptr->listener_in_use);
ptr_app->blocked_on_listener_id = 0;
/* notify the blocked app that listener is available */
@@ -1964,18 +1966,20 @@ static int __qseecom_process_blocked_on_listener_smcinvoke(
pr_debug("lsntr %d in_use = %d\n",
resp->data, list_ptr->listener_in_use);
/* sleep until listener is available */
- qseecom.app_block_ref_cnt++;
- mutex_unlock(&app_access_lock);
- if (wait_event_freezable(
+ do {
+ qseecom.app_block_ref_cnt++;
+ mutex_unlock(&app_access_lock);
+ if (wait_event_freezable(
list_ptr->listener_block_app_wq,
!list_ptr->listener_in_use)) {
- pr_err("Interrupted: listener_id %d, session_id %d\n",
+ pr_err("Interrupted: listener_id %d, session_id %d\n",
resp->data, session_id);
- ret = -ERESTARTSYS;
- goto exit;
- }
- mutex_lock(&app_access_lock);
- qseecom.app_block_ref_cnt--;
+ ret = -ERESTARTSYS;
+ goto exit;
+ }
+ mutex_lock(&app_access_lock);
+ qseecom.app_block_ref_cnt--;
+ } while (list_ptr->listener_in_use);
/* notify TZ that listener is available */
pr_warn("Lsntr %d is available, unblock session(%d) in TZ\n",
diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
index 538a8d9..e4af5c3 100644
--- a/drivers/mmc/card/block.c
+++ b/drivers/mmc/card/block.c
@@ -3184,6 +3184,16 @@ static struct mmc_cmdq_req *mmc_blk_cmdq_rw_prep(
return &mqrq->cmdq_req;
}
+static void mmc_blk_cmdq_requeue_rw_rq(struct mmc_queue *mq,
+ struct request *req)
+{
+ struct mmc_card *card = mq->card;
+ struct mmc_host *host = card->host;
+
+ blk_requeue_request(req->q, req);
+ mmc_put_card(host->card);
+}
+
static int mmc_blk_cmdq_issue_rw_rq(struct mmc_queue *mq, struct request *req)
{
struct mmc_queue_req *active_mqrq;
@@ -3231,6 +3241,15 @@ static int mmc_blk_cmdq_issue_rw_rq(struct mmc_queue *mq, struct request *req)
wait_event_interruptible(ctx->queue_empty_wq,
(!ctx->active_reqs));
+ if (ret) {
+ /* clear pending request */
+ WARN_ON(!test_and_clear_bit(req->tag,
+ &host->cmdq_ctx.data_active_reqs));
+ WARN_ON(!test_and_clear_bit(req->tag,
+ &host->cmdq_ctx.active_reqs));
+ mmc_cmdq_clk_scaling_stop_busy(host, true, false);
+ }
+
return ret;
}
@@ -4058,6 +4077,13 @@ static int mmc_blk_cmdq_issue_rq(struct mmc_queue *mq, struct request *req)
ret = mmc_blk_cmdq_issue_flush_rq(mq, req);
} else {
ret = mmc_blk_cmdq_issue_rw_rq(mq, req);
+			/*
+			 * If issuing the request fails with either an EBUSY
+			 * or an EAGAIN error, re-queue the request.
+			 * This case would occur with ICE calls.
+			 */
+ if (ret == -EBUSY || ret == -EAGAIN)
+ mmc_blk_cmdq_requeue_rw_rq(mq, req);
}
}
diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index 300e9e1c..c172be9 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -1209,9 +1209,51 @@ static int mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
return 0;
}
-static void mmc_start_cmdq_request(struct mmc_host *host,
+static int mmc_cmdq_check_retune(struct mmc_host *host)
+{
+ bool cmdq_mode;
+ int err = 0;
+
+ if (!host->need_retune || host->doing_retune || !host->card ||
+ mmc_card_hs400es(host->card) ||
+ (host->ios.clock <= MMC_HIGH_DDR_MAX_DTR))
+ return 0;
+
+ cmdq_mode = mmc_card_cmdq(host->card);
+ if (cmdq_mode) {
+ err = mmc_cmdq_halt(host, true);
+ if (err) {
+ pr_err("%s: %s: failed halting queue (%d)\n",
+ mmc_hostname(host), __func__, err);
+ host->cmdq_ops->dumpstate(host);
+ goto halt_failed;
+ }
+ }
+
+ mmc_retune_hold(host);
+ err = mmc_retune(host);
+ mmc_retune_release(host);
+
+ if (cmdq_mode) {
+ if (mmc_cmdq_halt(host, false)) {
+ pr_err("%s: %s: cmdq unhalt failed\n",
+ mmc_hostname(host), __func__);
+ host->cmdq_ops->dumpstate(host);
+ }
+ }
+
+halt_failed:
+ pr_debug("%s: %s: Retuning done err: %d\n",
+ mmc_hostname(host), __func__, err);
+
+ return err;
+}
+
+static int mmc_start_cmdq_request(struct mmc_host *host,
struct mmc_request *mrq)
{
+ int ret = 0;
+
if (mrq->data) {
pr_debug("%s: blksz %d blocks %d flags %08x tsac %lu ms nsac %d\n",
mmc_hostname(host), mrq->data->blksz,
@@ -1233,11 +1275,22 @@ static void mmc_start_cmdq_request(struct mmc_host *host,
}
mmc_host_clk_hold(host);
- if (likely(host->cmdq_ops->request))
- host->cmdq_ops->request(host, mrq);
- else
- pr_err("%s: %s: issue request failed\n", mmc_hostname(host),
- __func__);
+ mmc_cmdq_check_retune(host);
+ if (likely(host->cmdq_ops->request)) {
+ ret = host->cmdq_ops->request(host, mrq);
+ } else {
+ ret = -ENOENT;
+ pr_err("%s: %s: cmdq request host op is not available\n",
+ mmc_hostname(host), __func__);
+ }
+
+ if (ret) {
+ mmc_host_clk_release(host);
+ pr_err("%s: %s: issue request failed, err=%d\n",
+ mmc_hostname(host), __func__, ret);
+ }
+
+ return ret;
}
/**
@@ -1769,8 +1822,7 @@ int mmc_cmdq_start_req(struct mmc_host *host, struct mmc_cmdq_req *cmdq_req)
mrq->cmd->error = -ENOMEDIUM;
return -ENOMEDIUM;
}
- mmc_start_cmdq_request(host, mrq);
- return 0;
+ return mmc_start_cmdq_request(host, mrq);
}
EXPORT_SYMBOL(mmc_cmdq_start_req);
@@ -3420,6 +3472,9 @@ static void _mmc_detect_change(struct mmc_host *host, unsigned long delay,
if (cd_irq && mmc_bus_manual_resume(host))
host->ignore_bus_resume_flags = true;
+ if (delayed_work_pending(&host->detect))
+ cancel_delayed_work(&host->detect);
+
mmc_schedule_delayed_work(&host->detect, delay);
}
diff --git a/drivers/mmc/core/host.c b/drivers/mmc/core/host.c
index 64c8743..3e0ba75 100644
--- a/drivers/mmc/core/host.c
+++ b/drivers/mmc/core/host.c
@@ -434,7 +434,8 @@ int mmc_retune(struct mmc_host *host)
else
return 0;
- if (!host->need_retune || host->doing_retune || !host->card)
+ if (!host->need_retune || host->doing_retune || !host->card ||
+ mmc_card_hs400es(host->card))
return 0;
host->need_retune = 0;
@@ -736,19 +737,19 @@ static ssize_t store_enable(struct device *dev,
mmc_get_card(host->card);
if (!value) {
- /*turning off clock scaling*/
- mmc_exit_clk_scaling(host);
+ /* Suspend the clock scaling and mask host capability */
+ if (host->clk_scaling.enable)
+ mmc_suspend_clk_scaling(host);
host->caps2 &= ~MMC_CAP2_CLK_SCALE;
host->clk_scaling.state = MMC_LOAD_HIGH;
/* Set to max. frequency when disabling */
mmc_clk_update_freq(host, host->card->clk_scaling_highest,
host->clk_scaling.state);
} else if (value) {
- /* starting clock scaling, will restart in case started */
+ /* Unmask host capability and resume scaling */
host->caps2 |= MMC_CAP2_CLK_SCALE;
- if (host->clk_scaling.enable)
- mmc_exit_clk_scaling(host);
- mmc_init_clk_scaling(host);
+ if (!host->clk_scaling.enable)
+ mmc_resume_clk_scaling(host);
}
mmc_put_card(host->card);
diff --git a/drivers/mmc/core/sd.c b/drivers/mmc/core/sd.c
index 10d55b8..e3bbc2c 100644
--- a/drivers/mmc/core/sd.c
+++ b/drivers/mmc/core/sd.c
@@ -1309,7 +1309,7 @@ static int _mmc_sd_resume(struct mmc_host *host)
while (retries) {
err = mmc_sd_init_card(host, host->card->ocr, host->card);
- if (err) {
+ if (err && err != -ENOENT) {
printk(KERN_ERR "%s: Re-init card rc = %d (retries = %d)\n",
mmc_hostname(host), err, retries);
retries--;
@@ -1324,6 +1324,12 @@ static int _mmc_sd_resume(struct mmc_host *host)
#else
err = mmc_sd_init_card(host, host->card->ocr, host->card);
#endif
+ if (err == -ENOENT) {
+ pr_debug("%s: %s: found a different card(%d), do detect change\n",
+ mmc_hostname(host), __func__, err);
+ mmc_card_set_removed(host->card);
+ mmc_detect_change(host, msecs_to_jiffies(200));
+ }
mmc_card_clr_suspended(host->card);
if (host->card->sdr104_blocked)
diff --git a/drivers/mmc/host/cmdq_hci.c b/drivers/mmc/host/cmdq_hci.c
index f16a999..55ce946 100644
--- a/drivers/mmc/host/cmdq_hci.c
+++ b/drivers/mmc/host/cmdq_hci.c
@@ -806,7 +806,7 @@ static int cmdq_request(struct mmc_host *mmc, struct mmc_request *mrq)
mmc->err_stats[MMC_ERR_ICE_CFG]++;
pr_err("%s: failed to configure crypto: err %d tag %d\n",
mmc_hostname(mmc), err, tag);
- goto out;
+ goto ice_err;
}
}
@@ -824,7 +824,7 @@ static int cmdq_request(struct mmc_host *mmc, struct mmc_request *mrq)
if (err) {
pr_err("%s: %s: failed to setup tx desc: %d\n",
mmc_hostname(mmc), __func__, err);
- goto out;
+ goto desc_err;
}
cq_host->mrq_slot[tag] = mrq;
@@ -844,6 +844,22 @@ static int cmdq_request(struct mmc_host *mmc, struct mmc_request *mrq)
/* Commit the doorbell write immediately */
wmb();
+ return err;
+
+desc_err:
+ if (cq_host->ops->crypto_cfg_end) {
+ err = cq_host->ops->crypto_cfg_end(mmc, mrq);
+ if (err) {
+ pr_err("%s: failed to end ice config: err %d tag %d\n",
+ mmc_hostname(mmc), err, tag);
+ }
+ }
+ if (!(cq_host->caps & CMDQ_CAP_CRYPTO_SUPPORT) &&
+ cq_host->ops->crypto_cfg_reset)
+ cq_host->ops->crypto_cfg_reset(mmc, tag);
+ice_err:
+ if (err)
+ cmdq_runtime_pm_put(cq_host);
out:
return err;
}
diff --git a/drivers/mmc/host/s3cmci.c b/drivers/mmc/host/s3cmci.c
index c531dee..8f27fe3 100644
--- a/drivers/mmc/host/s3cmci.c
+++ b/drivers/mmc/host/s3cmci.c
@@ -21,6 +21,7 @@
#include <linux/debugfs.h>
#include <linux/seq_file.h>
#include <linux/gpio.h>
+#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/io.h>
diff --git a/drivers/mmc/host/sdhci-msm.c b/drivers/mmc/host/sdhci-msm.c
index e817a02..7880405 100644
--- a/drivers/mmc/host/sdhci-msm.c
+++ b/drivers/mmc/host/sdhci-msm.c
@@ -1900,6 +1900,8 @@ struct sdhci_msm_pltfm_data *sdhci_msm_populate_pdata(struct device *dev,
u32 *ice_clk_table = NULL;
enum of_gpio_flags flags = OF_GPIO_ACTIVE_LOW;
const char *lower_bus_speed = NULL;
+ int bus_clk_table_len;
+ u32 *bus_clk_table = NULL;
pdata = devm_kzalloc(dev, sizeof(*pdata), GFP_KERNEL);
if (!pdata) {
@@ -1955,6 +1957,14 @@ struct sdhci_msm_pltfm_data *sdhci_msm_populate_pdata(struct device *dev,
pdata->sup_clk_table = clk_table;
pdata->sup_clk_cnt = clk_table_len;
+ if (!sdhci_msm_dt_get_array(dev, "qcom,bus-aggr-clk-rates",
+ &bus_clk_table, &bus_clk_table_len, 0)) {
+ if (bus_clk_table && bus_clk_table_len) {
+ pdata->bus_clk_table = bus_clk_table;
+ pdata->bus_clk_cnt = bus_clk_table_len;
+ }
+ }
+
if (msm_host->ice.pdev) {
if (sdhci_msm_dt_get_array(dev, "qcom,ice-clk-rates",
&ice_clk_table, &ice_clk_table_len, 0)) {
@@ -2962,6 +2972,34 @@ static unsigned int sdhci_msm_get_sup_clk_rate(struct sdhci_host *host,
return sel_clk;
}
+static long sdhci_msm_get_bus_aggr_clk_rate(struct sdhci_host *host,
+ u32 apps_clk)
+{
+ struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+ struct sdhci_msm_host *msm_host = pltfm_host->priv;
+ long sel_clk = -1;
+ unsigned char cnt;
+
+ if (msm_host->pdata->bus_clk_cnt != msm_host->pdata->sup_clk_cnt) {
+ pr_err("%s: %s: mismatch between bus_clk_cnt(%u) and apps_clk_cnt(%u)\n",
+ mmc_hostname(host->mmc), __func__,
+ (unsigned int)msm_host->pdata->bus_clk_cnt,
+ (unsigned int)msm_host->pdata->sup_clk_cnt);
+ return msm_host->pdata->bus_clk_table[0];
+ }
+ if (apps_clk == sdhci_msm_get_min_clock(host)) {
+ sel_clk = msm_host->pdata->bus_clk_table[0];
+ return sel_clk;
+ }
+
+ for (cnt = 0; cnt < msm_host->pdata->bus_clk_cnt; cnt++) {
+ if (msm_host->pdata->sup_clk_table[cnt] > apps_clk)
+ break;
+ sel_clk = msm_host->pdata->bus_clk_table[cnt];
+ }
+ return sel_clk;
+}
+
static void sdhci_msm_registers_save(struct sdhci_host *host)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
@@ -3251,6 +3289,7 @@ static void sdhci_msm_set_clock(struct sdhci_host *host, unsigned int clock)
struct mmc_card *card = host->mmc->card;
struct mmc_ios curr_ios = host->mmc->ios;
u32 sup_clock, ddr_clock, dll_lock;
+ long bus_clk_rate;
bool curr_pwrsave;
if (!clock) {
@@ -3405,6 +3444,26 @@ static void sdhci_msm_set_clock(struct sdhci_host *host, unsigned int clock)
msm_host->clk_rate = sup_clock;
host->clock = clock;
+ if (!IS_ERR(msm_host->bus_aggr_clk) &&
+ msm_host->pdata->bus_clk_cnt) {
+ bus_clk_rate = sdhci_msm_get_bus_aggr_clk_rate(host,
+ sup_clock);
+ if (bus_clk_rate >= 0) {
+ rc = clk_set_rate(msm_host->bus_aggr_clk,
+ bus_clk_rate);
+ if (rc) {
+ pr_err("%s: %s: Failed to set rate %ld for bus-aggr-clk : %d\n",
+ mmc_hostname(host->mmc),
+ __func__, bus_clk_rate, rc);
+ goto out;
+ }
+ } else {
+ pr_err("%s: %s: Unsupported apps clk rate %u for bus-aggr-clk, err: %ld\n",
+ mmc_hostname(host->mmc), __func__,
+ sup_clock, bus_clk_rate);
+ }
+ }
+
/* Configure pinctrl drive type according to
* current clock rate
*/
diff --git a/drivers/mmc/host/sdhci-msm.h b/drivers/mmc/host/sdhci-msm.h
index 6e15a73..7c737cc 100644
--- a/drivers/mmc/host/sdhci-msm.h
+++ b/drivers/mmc/host/sdhci-msm.h
@@ -159,6 +159,8 @@ struct sdhci_msm_pltfm_data {
u32 ice_clk_min;
u32 ddr_config;
bool rclk_wa;
+ u32 *bus_clk_table;
+ unsigned char bus_clk_cnt;
};
struct sdhci_msm_bus_vote {
diff --git a/drivers/mtd/nand/nand_ids.c b/drivers/mtd/nand/nand_ids.c
index c821cca..d6ca73b 100644
--- a/drivers/mtd/nand/nand_ids.c
+++ b/drivers/mtd/nand/nand_ids.c
@@ -58,6 +58,10 @@ struct nand_flash_dev nand_flash_ids[] = {
{"TC58NYG2S0HBAI4 4G 1.8V 8-bit",
{ .id = {0x98, 0xac, 0x90, 0x26, 0x76, 0x00, 0x00, 0x00} },
SZ_4K, SZ_512, SZ_256K, 0, 5, 256, NAND_ECC_INFO(8, SZ_512) },
+ {"MT29F4G08ABBFA3W 4G 1.8V 8-bit",
+ { .id = {0x2c, 0xac, 0x80, 0x26, 0x00, 0x00, 0x00, 0x00} },
+ SZ_4K, SZ_512, SZ_256K, 0, 4, 256, NAND_ECC_INFO(8, SZ_512) },
+
LEGACY_ID_NAND("NAND 4MiB 5V 8-bit", 0x6B, 4, SZ_8K, SP_OPTIONS),
LEGACY_ID_NAND("NAND 4MiB 3,3V 8-bit", 0xE3, 4, SZ_8K, SP_OPTIONS),
diff --git a/drivers/mtd/nand/sunxi_nand.c b/drivers/mtd/nand/sunxi_nand.c
index 8b8470c..f9b2a77 100644
--- a/drivers/mtd/nand/sunxi_nand.c
+++ b/drivers/mtd/nand/sunxi_nand.c
@@ -320,6 +320,10 @@ static int sunxi_nfc_wait_events(struct sunxi_nfc *nfc, u32 events,
ret = wait_for_completion_timeout(&nfc->complete,
msecs_to_jiffies(timeout_ms));
+ if (!ret)
+ ret = -ETIMEDOUT;
+ else
+ ret = 0;
writel(0, nfc->regs + NFC_REG_INT);
} else {
diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index 5fa36eb..63d61c0 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -3217,7 +3217,7 @@ u32 bond_xmit_hash(struct bonding *bond, struct sk_buff *skb)
hash ^= (hash >> 16);
hash ^= (hash >> 8);
- return hash;
+ return hash >> 1;
}
/*-------------------------- Device entry points ----------------------------*/
diff --git a/drivers/net/can/c_can/c_can_pci.c b/drivers/net/can/c_can/c_can_pci.c
index cf7c189..d065c0e 100644
--- a/drivers/net/can/c_can/c_can_pci.c
+++ b/drivers/net/can/c_can/c_can_pci.c
@@ -178,7 +178,6 @@ static int c_can_pci_probe(struct pci_dev *pdev,
break;
case BOSCH_D_CAN:
priv->regs = reg_map_d_can;
- priv->can.ctrlmode_supported |= CAN_CTRLMODE_3_SAMPLES;
break;
default:
ret = -EINVAL;
diff --git a/drivers/net/can/c_can/c_can_platform.c b/drivers/net/can/c_can/c_can_platform.c
index e36d105..717530e 100644
--- a/drivers/net/can/c_can/c_can_platform.c
+++ b/drivers/net/can/c_can/c_can_platform.c
@@ -320,7 +320,6 @@ static int c_can_plat_probe(struct platform_device *pdev)
break;
case BOSCH_D_CAN:
priv->regs = reg_map_d_can;
- priv->can.ctrlmode_supported |= CAN_CTRLMODE_3_SAMPLES;
priv->read_reg = c_can_plat_read_reg_aligned_to_16bit;
priv->write_reg = c_can_plat_write_reg_aligned_to_16bit;
priv->read_reg32 = d_can_plat_read_reg32;
diff --git a/drivers/net/can/ifi_canfd/ifi_canfd.c b/drivers/net/can/ifi_canfd/ifi_canfd.c
index 481895b..c06ef43 100644
--- a/drivers/net/can/ifi_canfd/ifi_canfd.c
+++ b/drivers/net/can/ifi_canfd/ifi_canfd.c
@@ -670,9 +670,9 @@ static void ifi_canfd_set_bittiming(struct net_device *ndev)
priv->base + IFI_CANFD_FTIME);
/* Configure transmitter delay */
- tdc = (dbt->brp * (dbt->phase_seg1 + 1)) & IFI_CANFD_TDELAY_MASK;
- writel(IFI_CANFD_TDELAY_EN | IFI_CANFD_TDELAY_ABS | tdc,
- priv->base + IFI_CANFD_TDELAY);
+ tdc = dbt->brp * (dbt->prop_seg + dbt->phase_seg1);
+ tdc &= IFI_CANFD_TDELAY_MASK;
+ writel(IFI_CANFD_TDELAY_EN | tdc, priv->base + IFI_CANFD_TDELAY);
}
static void ifi_canfd_set_filter(struct net_device *ndev, const u32 id,
diff --git a/drivers/net/can/sun4i_can.c b/drivers/net/can/sun4i_can.c
index b0c8085..1ac2090 100644
--- a/drivers/net/can/sun4i_can.c
+++ b/drivers/net/can/sun4i_can.c
@@ -539,6 +539,13 @@ static int sun4i_can_err(struct net_device *dev, u8 isrc, u8 status)
}
stats->rx_over_errors++;
stats->rx_errors++;
+
+		/* reset the CAN controller by entering reset mode,
+		 * ignoring any timeout error
+		 */
+ set_reset_mode(dev);
+ set_normal_mode(dev);
+
/* clear bit */
sun4i_can_write_cmdreg(priv, SUN4I_CMD_CLEAR_OR_FLAG);
}
@@ -653,8 +660,9 @@ static irqreturn_t sun4i_can_interrupt(int irq, void *dev_id)
netif_wake_queue(dev);
can_led_event(dev, CAN_LED_EVENT_TX);
}
- if (isrc & SUN4I_INT_RBUF_VLD) {
- /* receive interrupt */
+ if ((isrc & SUN4I_INT_RBUF_VLD) &&
+ !(isrc & SUN4I_INT_DATA_OR)) {
+ /* receive interrupt - don't read if overrun occurred */
while (status & SUN4I_STA_RBUF_RDY) {
/* RX buffer is not empty */
sun4i_can_rx(dev);
diff --git a/drivers/net/ethernet/amazon/ena/ena_com.c b/drivers/net/ethernet/amazon/ena/ena_com.c
index 3066d9c..e2512ab 100644
--- a/drivers/net/ethernet/amazon/ena/ena_com.c
+++ b/drivers/net/ethernet/amazon/ena/ena_com.c
@@ -36,9 +36,9 @@
/*****************************************************************************/
/* Timeout in micro-sec */
-#define ADMIN_CMD_TIMEOUT_US (1000000)
+#define ADMIN_CMD_TIMEOUT_US (3000000)
-#define ENA_ASYNC_QUEUE_DEPTH 4
+#define ENA_ASYNC_QUEUE_DEPTH 16
#define ENA_ADMIN_QUEUE_DEPTH 32
#define MIN_ENA_VER (((ENA_COMMON_SPEC_VERSION_MAJOR) << \
diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.h b/drivers/net/ethernet/amazon/ena/ena_netdev.h
index 69d7e9e..c5eaf76 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.h
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.h
@@ -100,7 +100,7 @@
/* Number of queues to check for missing queues per timer service */
#define ENA_MONITORED_TX_QUEUES 4
/* Max timeout packets before device reset */
-#define MAX_NUM_OF_TIMEOUTED_PACKETS 32
+#define MAX_NUM_OF_TIMEOUTED_PACKETS 128
#define ENA_TX_RING_IDX_NEXT(idx, ring_size) (((idx) + 1) & ((ring_size) - 1))
@@ -116,9 +116,9 @@
#define ENA_IO_IRQ_IDX(q) (ENA_IO_IRQ_FIRST_IDX + (q))
/* ENA device should send keep alive msg every 1 sec.
- * We wait for 3 sec just to be on the safe side.
+ * We wait for 6 sec just to be on the safe side.
*/
-#define ENA_DEVICE_KALIVE_TIMEOUT (3 * HZ)
+#define ENA_DEVICE_KALIVE_TIMEOUT (6 * HZ)
#define ENA_MMIO_DISABLE_REG_READ BIT(0)
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 20e569b..333df54 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -97,6 +97,8 @@ enum board_idx {
BCM57407_NPAR,
BCM57414_NPAR,
BCM57416_NPAR,
+ BCM57452,
+ BCM57454,
NETXTREME_E_VF,
NETXTREME_C_VF,
};
@@ -131,6 +133,8 @@ static const struct {
{ "Broadcom BCM57407 NetXtreme-E Ethernet Partition" },
{ "Broadcom BCM57414 NetXtreme-E Ethernet Partition" },
{ "Broadcom BCM57416 NetXtreme-E Ethernet Partition" },
+ { "Broadcom BCM57452 NetXtreme-E 10Gb/25Gb/40Gb/50Gb Ethernet" },
+ { "Broadcom BCM57454 NetXtreme-E 10Gb/25Gb/40Gb/50Gb/100Gb Ethernet" },
{ "Broadcom NetXtreme-E Ethernet Virtual Function" },
{ "Broadcom NetXtreme-C Ethernet Virtual Function" },
};
@@ -166,6 +170,8 @@ static const struct pci_device_id bnxt_pci_tbl[] = {
{ PCI_VDEVICE(BROADCOM, 0x16ed), .driver_data = BCM57414_NPAR },
{ PCI_VDEVICE(BROADCOM, 0x16ee), .driver_data = BCM57416_NPAR },
{ PCI_VDEVICE(BROADCOM, 0x16ef), .driver_data = BCM57416_NPAR },
+ { PCI_VDEVICE(BROADCOM, 0x16f1), .driver_data = BCM57452 },
+ { PCI_VDEVICE(BROADCOM, 0x1614), .driver_data = BCM57454 },
#ifdef CONFIG_BNXT_SRIOV
{ PCI_VDEVICE(BROADCOM, 0x16c1), .driver_data = NETXTREME_E_VF },
{ PCI_VDEVICE(BROADCOM, 0x16cb), .driver_data = NETXTREME_C_VF },
diff --git a/drivers/net/ethernet/fealnx.c b/drivers/net/ethernet/fealnx.c
index c08bd76..a300ed4 100644
--- a/drivers/net/ethernet/fealnx.c
+++ b/drivers/net/ethernet/fealnx.c
@@ -257,8 +257,8 @@ enum rx_desc_status_bits {
RXFSD = 0x00000800, /* first descriptor */
RXLSD = 0x00000400, /* last descriptor */
ErrorSummary = 0x80, /* error summary */
- RUNT = 0x40, /* runt packet received */
- LONG = 0x20, /* long packet received */
+ RUNTPKT = 0x40, /* runt packet received */
+ LONGPKT = 0x20, /* long packet received */
FAE = 0x10, /* frame align error */
CRC = 0x08, /* crc error */
RXER = 0x04, /* receive error */
@@ -1633,7 +1633,7 @@ static int netdev_rx(struct net_device *dev)
dev->name, rx_status);
dev->stats.rx_errors++; /* end of a packet. */
- if (rx_status & (LONG | RUNT))
+ if (rx_status & (LONGPKT | RUNTPKT))
dev->stats.rx_length_errors++;
if (rx_status & RXER)
dev->stats.rx_frame_errors++;
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_mbx.c b/drivers/net/ethernet/intel/fm10k/fm10k_mbx.c
index c9dfa65..334088a 100644
--- a/drivers/net/ethernet/intel/fm10k/fm10k_mbx.c
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_mbx.c
@@ -2011,9 +2011,10 @@ static void fm10k_sm_mbx_create_reply(struct fm10k_hw *hw,
* function can also be used to respond to an error as the connection
* resetting would also be a means of dealing with errors.
**/
-static void fm10k_sm_mbx_process_reset(struct fm10k_hw *hw,
- struct fm10k_mbx_info *mbx)
+static s32 fm10k_sm_mbx_process_reset(struct fm10k_hw *hw,
+ struct fm10k_mbx_info *mbx)
{
+ s32 err = 0;
const enum fm10k_mbx_state state = mbx->state;
switch (state) {
@@ -2026,6 +2027,7 @@ static void fm10k_sm_mbx_process_reset(struct fm10k_hw *hw,
case FM10K_STATE_OPEN:
/* flush any incomplete work */
fm10k_sm_mbx_connect_reset(mbx);
+ err = FM10K_ERR_RESET_REQUESTED;
break;
case FM10K_STATE_CONNECT:
/* Update remote value to match local value */
@@ -2035,6 +2037,8 @@ static void fm10k_sm_mbx_process_reset(struct fm10k_hw *hw,
}
fm10k_sm_mbx_create_reply(hw, mbx, mbx->tail);
+
+ return err;
}
/**
@@ -2115,7 +2119,7 @@ static s32 fm10k_sm_mbx_process(struct fm10k_hw *hw,
switch (FM10K_MSG_HDR_FIELD_GET(mbx->mbx_hdr, SM_VER)) {
case 0:
- fm10k_sm_mbx_process_reset(hw, mbx);
+ err = fm10k_sm_mbx_process_reset(hw, mbx);
break;
case FM10K_SM_MBX_VERSION:
err = fm10k_sm_mbx_process_version_1(hw, mbx);
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_pci.c b/drivers/net/ethernet/intel/fm10k/fm10k_pci.c
index b1a2f84..e372a58 100644
--- a/drivers/net/ethernet/intel/fm10k/fm10k_pci.c
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_pci.c
@@ -1144,6 +1144,7 @@ static irqreturn_t fm10k_msix_mbx_pf(int __always_unused irq, void *data)
struct fm10k_hw *hw = &interface->hw;
struct fm10k_mbx_info *mbx = &hw->mbx;
u32 eicr;
+ s32 err = 0;
/* unmask any set bits related to this interrupt */
eicr = fm10k_read_reg(hw, FM10K_EICR);
@@ -1159,12 +1160,15 @@ static irqreturn_t fm10k_msix_mbx_pf(int __always_unused irq, void *data)
/* service mailboxes */
if (fm10k_mbx_trylock(interface)) {
- mbx->ops.process(hw, mbx);
+ err = mbx->ops.process(hw, mbx);
/* handle VFLRE events */
fm10k_iov_event(interface);
fm10k_mbx_unlock(interface);
}
+ if (err == FM10K_ERR_RESET_REQUESTED)
+ interface->flags |= FM10K_FLAG_RESET_REQUESTED;
+
/* if switch toggled state we should reset GLORTs */
if (eicr & FM10K_EICR_SWITCHNOTREADY) {
/* force link down for at least 4 seconds */
diff --git a/drivers/net/ethernet/intel/igb/e1000_82575.c b/drivers/net/ethernet/intel/igb/e1000_82575.c
index 1264a36..4a50870 100644
--- a/drivers/net/ethernet/intel/igb/e1000_82575.c
+++ b/drivers/net/ethernet/intel/igb/e1000_82575.c
@@ -245,6 +245,17 @@ static s32 igb_init_phy_params_82575(struct e1000_hw *hw)
hw->bus.func = (rd32(E1000_STATUS) & E1000_STATUS_FUNC_MASK) >>
E1000_STATUS_FUNC_SHIFT;
+ /* Make sure the PHY is in a good state. Several people have reported
+ * firmware leaving the PHY's page select register set to something
+ * other than the default of zero, which causes the PHY ID read to
+ * access something other than the intended register.
+ */
+ ret_val = hw->phy.ops.reset(hw);
+ if (ret_val) {
+ hw_dbg("Error resetting the PHY.\n");
+ goto out;
+ }
+
/* Set phy->phy_addr and phy->id. */
igb_write_phy_reg_82580(hw, I347AT4_PAGE_SELECT, 0);
ret_val = igb_get_phy_id_82575(hw);
diff --git a/drivers/net/ethernet/intel/igb/e1000_i210.c b/drivers/net/ethernet/intel/igb/e1000_i210.c
index 8aa7987..07d48f2 100644
--- a/drivers/net/ethernet/intel/igb/e1000_i210.c
+++ b/drivers/net/ethernet/intel/igb/e1000_i210.c
@@ -699,9 +699,9 @@ static s32 igb_update_flash_i210(struct e1000_hw *hw)
ret_val = igb_pool_flash_update_done_i210(hw);
if (ret_val)
- hw_dbg("Flash update complete\n");
- else
hw_dbg("Flash update time out\n");
+ else
+ hw_dbg("Flash update complete\n");
out:
return ret_val;
diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
index 6a62447..c6c2562 100644
--- a/drivers/net/ethernet/intel/igb/igb_main.c
+++ b/drivers/net/ethernet/intel/igb/igb_main.c
@@ -3271,7 +3271,9 @@ static int __igb_close(struct net_device *netdev, bool suspending)
int igb_close(struct net_device *netdev)
{
- return __igb_close(netdev, false);
+ if (netif_device_present(netdev))
+ return __igb_close(netdev, false);
+ return 0;
}
/**
@@ -7548,6 +7550,7 @@ static int __igb_shutdown(struct pci_dev *pdev, bool *enable_wake,
int retval = 0;
#endif
+ rtnl_lock();
netif_device_detach(netdev);
if (netif_running(netdev))
@@ -7556,6 +7559,7 @@ static int __igb_shutdown(struct pci_dev *pdev, bool *enable_wake,
igb_ptp_suspend(adapter);
igb_clear_interrupt_scheme(adapter);
+ rtnl_unlock();
#ifdef CONFIG_PM
retval = pci_save_state(pdev);
@@ -7674,16 +7678,15 @@ static int igb_resume(struct device *dev)
wr32(E1000_WUS, ~0);
- if (netdev->flags & IFF_UP) {
- rtnl_lock();
+ rtnl_lock();
+ if (!err && netif_running(netdev))
err = __igb_open(netdev, true);
- rtnl_unlock();
- if (err)
- return err;
- }
- netif_device_attach(netdev);
- return 0;
+ if (!err)
+ netif_device_attach(netdev);
+ rtnl_unlock();
+
+ return err;
}
static int igb_runtime_idle(struct device *dev)
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
index f49f803..a137e06 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
@@ -199,7 +199,7 @@ static int ixgbe_get_settings(struct net_device *netdev,
if (supported_link & IXGBE_LINK_SPEED_100_FULL)
ecmd->supported |= ixgbe_isbackplane(hw->phy.media_type) ?
SUPPORTED_1000baseKX_Full :
- SUPPORTED_1000baseT_Full;
+ SUPPORTED_100baseT_Full;
/* default advertised speed if phy.autoneg_advertised isn't set */
ecmd->advertising = ecmd->supported;
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
index 15ab337..10d2967 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
@@ -308,6 +308,7 @@ static void ixgbe_cache_ring_register(struct ixgbe_adapter *adapter)
ixgbe_cache_ring_rss(adapter);
}
+#define IXGBE_RSS_64Q_MASK 0x3F
#define IXGBE_RSS_16Q_MASK 0xF
#define IXGBE_RSS_8Q_MASK 0x7
#define IXGBE_RSS_4Q_MASK 0x3
@@ -604,6 +605,7 @@ static bool ixgbe_set_sriov_queues(struct ixgbe_adapter *adapter)
**/
static bool ixgbe_set_rss_queues(struct ixgbe_adapter *adapter)
{
+ struct ixgbe_hw *hw = &adapter->hw;
struct ixgbe_ring_feature *f;
u16 rss_i;
@@ -612,7 +614,11 @@ static bool ixgbe_set_rss_queues(struct ixgbe_adapter *adapter)
rss_i = f->limit;
f->indices = rss_i;
- f->mask = IXGBE_RSS_16Q_MASK;
+
+ if (hw->mac.type < ixgbe_mac_X550)
+ f->mask = IXGBE_RSS_16Q_MASK;
+ else
+ f->mask = IXGBE_RSS_64Q_MASK;
/* disable ATR by default, it will be configured below */
adapter->flags &= ~IXGBE_FLAG_FDIR_HASH_CAPABLE;
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index fee1f29..334eb96 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -6194,7 +6194,8 @@ int ixgbe_close(struct net_device *netdev)
ixgbe_ptp_stop(adapter);
- ixgbe_close_suspend(adapter);
+ if (netif_device_present(netdev))
+ ixgbe_close_suspend(adapter);
ixgbe_fdir_filter_exit(adapter);
@@ -6239,14 +6240,12 @@ static int ixgbe_resume(struct pci_dev *pdev)
if (!err && netif_running(netdev))
err = ixgbe_open(netdev);
+
+ if (!err)
+ netif_device_attach(netdev);
rtnl_unlock();
- if (err)
- return err;
-
- netif_device_attach(netdev);
-
- return 0;
+ return err;
}
#endif /* CONFIG_PM */
@@ -6261,14 +6260,14 @@ static int __ixgbe_shutdown(struct pci_dev *pdev, bool *enable_wake)
int retval = 0;
#endif
+ rtnl_lock();
netif_device_detach(netdev);
- rtnl_lock();
if (netif_running(netdev))
ixgbe_close_suspend(adapter);
- rtnl_unlock();
ixgbe_clear_interrupt_scheme(adapter);
+ rtnl_unlock();
#ifdef CONFIG_PM
retval = pci_save_state(pdev);
@@ -10027,7 +10026,7 @@ static pci_ers_result_t ixgbe_io_error_detected(struct pci_dev *pdev,
}
if (netif_running(netdev))
- ixgbe_down(adapter);
+ ixgbe_close_suspend(adapter);
if (!test_and_set_bit(__IXGBE_DISABLED, &adapter->state))
pci_disable_device(pdev);
@@ -10097,10 +10096,12 @@ static void ixgbe_io_resume(struct pci_dev *pdev)
}
#endif
+ rtnl_lock();
if (netif_running(netdev))
- ixgbe_up(adapter);
+ ixgbe_open(netdev);
netif_device_attach(netdev);
+ rtnl_unlock();
}
static const struct pci_error_handlers ixgbe_err_handler = {
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c
index 021ab9b..b17464e 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c
@@ -113,7 +113,7 @@ static s32 ixgbe_read_i2c_combined_generic_int(struct ixgbe_hw *hw, u8 addr,
u16 reg, u16 *val, bool lock)
{
u32 swfw_mask = hw->phy.phy_semaphore_mask;
- int max_retry = 10;
+ int max_retry = 3;
int retry = 0;
u8 csum_byte;
u8 high_bits;
@@ -1764,6 +1764,8 @@ static s32 ixgbe_read_i2c_byte_generic_int(struct ixgbe_hw *hw, u8 byte_offset,
u32 swfw_mask = hw->phy.phy_semaphore_mask;
bool nack = true;
+ if (hw->mac.type >= ixgbe_mac_X550)
+ max_retry = 3;
if (ixgbe_is_sfp_probe(hw, byte_offset, dev_addr))
max_retry = IXGBE_SFP_DETECT_RETRIES;
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c
index 7e6b926..60f0bf7 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c
@@ -1932,8 +1932,6 @@ static s32 ixgbe_setup_kr_speed_x550em(struct ixgbe_hw *hw,
return status;
reg_val |= IXGBE_KRM_LINK_CTRL_1_TETH_AN_ENABLE;
- reg_val &= ~(IXGBE_KRM_LINK_CTRL_1_TETH_AN_FEC_REQ |
- IXGBE_KRM_LINK_CTRL_1_TETH_AN_CAP_FEC);
reg_val &= ~(IXGBE_KRM_LINK_CTRL_1_TETH_AN_CAP_KR |
IXGBE_KRM_LINK_CTRL_1_TETH_AN_CAP_KX);
@@ -1995,12 +1993,11 @@ static s32 ixgbe_setup_kx4_x550em(struct ixgbe_hw *hw)
/**
* ixgbe_setup_kr_x550em - Configure the KR PHY
* @hw: pointer to hardware structure
- *
- * Configures the integrated KR PHY for X550EM_x.
**/
static s32 ixgbe_setup_kr_x550em(struct ixgbe_hw *hw)
{
- if (hw->mac.type != ixgbe_mac_X550EM_x)
+ /* leave link alone for 2.5G */
+ if (hw->phy.autoneg_advertised & IXGBE_LINK_SPEED_2_5GB_FULL)
return 0;
return ixgbe_setup_kr_speed_x550em(hw, hw->phy.autoneg_advertised);
diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 707bc46..6ea10a9 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -28,6 +28,7 @@
#include <linux/of_mdio.h>
#include <linux/of_net.h>
#include <linux/phy.h>
+#include <linux/phy_fixed.h>
#include <linux/platform_device.h>
#include <linux/skbuff.h>
#include <net/hwbm.h>
diff --git a/drivers/net/macvtap.c b/drivers/net/macvtap.c
index adea6f5..9da9db1 100644
--- a/drivers/net/macvtap.c
+++ b/drivers/net/macvtap.c
@@ -559,6 +559,10 @@ static int macvtap_open(struct inode *inode, struct file *file)
&macvtap_proto, 0);
if (!q)
goto err;
+ if (skb_array_init(&q->skb_array, dev->tx_queue_len, GFP_KERNEL)) {
+ sk_free(&q->sk);
+ goto err;
+ }
RCU_INIT_POINTER(q->sock.wq, &q->wq);
init_waitqueue_head(&q->wq.wait);
@@ -582,22 +586,18 @@ static int macvtap_open(struct inode *inode, struct file *file)
if ((dev->features & NETIF_F_HIGHDMA) && (dev->features & NETIF_F_SG))
sock_set_flag(&q->sk, SOCK_ZEROCOPY);
- err = -ENOMEM;
- if (skb_array_init(&q->skb_array, dev->tx_queue_len, GFP_KERNEL))
- goto err_array;
-
err = macvtap_set_queue(dev, file, q);
- if (err)
- goto err_queue;
+ if (err) {
+ /* macvtap_sock_destruct() will take care of freeing skb_array */
+ goto err_put;
+ }
dev_put(dev);
rtnl_unlock();
return err;
-err_queue:
- skb_array_cleanup(&q->skb_array);
-err_array:
+err_put:
sock_put(&q->sk);
err:
if (dev)
@@ -1077,6 +1077,8 @@ static long macvtap_ioctl(struct file *file, unsigned int cmd,
case TUNSETSNDBUF:
if (get_user(s, sp))
return -EFAULT;
+ if (s <= 0)
+ return -EINVAL;
q->sk.sk_sndbuf = s;
return 0;
diff --git a/drivers/net/phy/dp83867.c b/drivers/net/phy/dp83867.c
index 01cf094..8f84961 100644
--- a/drivers/net/phy/dp83867.c
+++ b/drivers/net/phy/dp83867.c
@@ -33,6 +33,7 @@
/* Extended Registers */
#define DP83867_RGMIICTL 0x0032
+#define DP83867_STRAP_STS1 0x006E
#define DP83867_RGMIIDCTL 0x0086
#define DP83867_SW_RESET BIT(15)
@@ -56,9 +57,13 @@
#define DP83867_RGMII_TX_CLK_DELAY_EN BIT(1)
#define DP83867_RGMII_RX_CLK_DELAY_EN BIT(0)
+/* STRAP_STS1 bits */
+#define DP83867_STRAP_STS1_RESERVED BIT(11)
+
/* PHY CTRL bits */
#define DP83867_PHYCR_FIFO_DEPTH_SHIFT 14
#define DP83867_PHYCR_FIFO_DEPTH_MASK (3 << 14)
+#define DP83867_PHYCR_RESERVED_MASK BIT(11)
/* RGMIIDCTL bits */
#define DP83867_RGMII_TX_CLK_DELAY_SHIFT 4
@@ -141,7 +146,7 @@ static int dp83867_of_init(struct phy_device *phydev)
static int dp83867_config_init(struct phy_device *phydev)
{
struct dp83867_private *dp83867;
- int ret, val;
+ int ret, val, bs;
u16 delay;
if (!phydev->priv) {
@@ -164,6 +169,22 @@ static int dp83867_config_init(struct phy_device *phydev)
return val;
val &= ~DP83867_PHYCR_FIFO_DEPTH_MASK;
val |= (dp83867->fifo_depth << DP83867_PHYCR_FIFO_DEPTH_SHIFT);
+
+ /* The code below checks if "port mirroring" N/A MODE4 has been
+ * enabled during power on bootstrap.
+ *
+ * If such an N/A mode has been enabled by mistake, the PHY IC can be
+ * put into an internal testing mode that disables RGMII transmission.
+ *
+ * In that case, bit 11 (marked as RESERVED) of the STRAP_STS1
+ * register must be checked.
+ */
+
+ bs = phy_read_mmd_indirect(phydev, DP83867_STRAP_STS1,
+ DP83867_DEVADDR);
+ if (bs & DP83867_STRAP_STS1_RESERVED)
+ val &= ~DP83867_PHYCR_RESERVED_MASK;
+
ret = phy_write(phydev, MII_DP83867_PHYCTRL, val);
if (ret)
return ret;
diff --git a/drivers/net/ppp/ppp_generic.c b/drivers/net/ppp/ppp_generic.c
index 96fa0e6..440d5f4 100644
--- a/drivers/net/ppp/ppp_generic.c
+++ b/drivers/net/ppp/ppp_generic.c
@@ -1338,7 +1338,17 @@ ppp_get_stats64(struct net_device *dev, struct rtnl_link_stats64 *stats64)
static int ppp_dev_init(struct net_device *dev)
{
+ struct ppp *ppp;
+
netdev_lockdep_set_classes(dev);
+
+ ppp = netdev_priv(dev);
+ /* Let the netdevice take a reference on the ppp file. This ensures
+ * that ppp_destroy_interface() won't run before the device gets
+ * unregistered.
+ */
+ atomic_inc(&ppp->file.refcnt);
+
return 0;
}
@@ -1361,6 +1371,15 @@ static void ppp_dev_uninit(struct net_device *dev)
wake_up_interruptible(&ppp->file.rwait);
}
+static void ppp_dev_priv_destructor(struct net_device *dev)
+{
+ struct ppp *ppp;
+
+ ppp = netdev_priv(dev);
+ if (atomic_dec_and_test(&ppp->file.refcnt))
+ ppp_destroy_interface(ppp);
+}
+
static const struct net_device_ops ppp_netdev_ops = {
.ndo_init = ppp_dev_init,
.ndo_uninit = ppp_dev_uninit,
@@ -1386,6 +1405,7 @@ static void ppp_setup(struct net_device *dev)
dev->tx_queue_len = 3;
dev->type = ARPHRD_PPP;
dev->flags = IFF_POINTOPOINT | IFF_NOARP | IFF_MULTICAST;
+ dev->destructor = ppp_dev_priv_destructor;
netif_keep_dst(dev);
}
diff --git a/drivers/net/tun.c b/drivers/net/tun.c
index 7e5ae26..6f9fc27 100644
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -1791,6 +1791,9 @@ static int tun_set_iff(struct net *net, struct file *file, struct ifreq *ifr)
if (!dev)
return -ENOMEM;
+ err = dev_get_valid_name(net, dev, name);
+ if (err < 0)
+ goto err_free_dev;
dev_net_set(dev, net);
dev->rtnl_link_ops = &tun_link_ops;
@@ -2190,6 +2193,10 @@ static long __tun_chr_ioctl(struct file *file, unsigned int cmd,
ret = -EFAULT;
break;
}
+ if (sndbuf <= 0) {
+ ret = -EINVAL;
+ break;
+ }
tun->sndbuf = sndbuf;
tun_set_sndbuf(tun);
diff --git a/drivers/net/usb/asix_devices.c b/drivers/net/usb/asix_devices.c
index 50737de..32e9ec8 100644
--- a/drivers/net/usb/asix_devices.c
+++ b/drivers/net/usb/asix_devices.c
@@ -624,7 +624,7 @@ static int asix_suspend(struct usb_interface *intf, pm_message_t message)
struct usbnet *dev = usb_get_intfdata(intf);
struct asix_common_private *priv = dev->driver_priv;
- if (priv->suspend)
+ if (priv && priv->suspend)
priv->suspend(dev);
return usbnet_suspend(intf, message);
@@ -676,7 +676,7 @@ static int asix_resume(struct usb_interface *intf)
struct usbnet *dev = usb_get_intfdata(intf);
struct asix_common_private *priv = dev->driver_priv;
- if (priv->resume)
+ if (priv && priv->resume)
priv->resume(dev);
return usbnet_resume(intf);
diff --git a/drivers/net/usb/cdc_ether.c b/drivers/net/usb/cdc_ether.c
index b82be81..1fca002 100644
--- a/drivers/net/usb/cdc_ether.c
+++ b/drivers/net/usb/cdc_ether.c
@@ -221,7 +221,7 @@ int usbnet_generic_cdc_bind(struct usbnet *dev, struct usb_interface *intf)
goto bad_desc;
}
- if (header.usb_cdc_ether_desc) {
+ if (header.usb_cdc_ether_desc && info->ether->wMaxSegmentSize) {
dev->hard_mtu = le16_to_cpu(info->ether->wMaxSegmentSize);
/* because of Zaurus, we may be ignoring the host
* side link address we were given.
diff --git a/drivers/net/usb/cdc_ncm.c b/drivers/net/usb/cdc_ncm.c
index afbfc0f..dc6d3b0 100644
--- a/drivers/net/usb/cdc_ncm.c
+++ b/drivers/net/usb/cdc_ncm.c
@@ -769,8 +769,10 @@ int cdc_ncm_bind_common(struct usbnet *dev, struct usb_interface *intf, u8 data_
u8 *buf;
int len;
int temp;
+ int err;
u8 iface_no;
struct usb_cdc_parsed_header hdr;
+ u16 curr_ntb_format;
ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
if (!ctx)
@@ -875,6 +877,32 @@ int cdc_ncm_bind_common(struct usbnet *dev, struct usb_interface *intf, u8 data_
goto error2;
}
+ /*
+ * Some Huawei devices have been observed to come out of reset in
+ * NTB32 format mode. Check whether this is the case, and if so set
+ * the device back to NTB16 format.
+ */
+ if (ctx->drvflags & CDC_NCM_FLAG_RESET_NTB16) {
+ err = usbnet_read_cmd(dev, USB_CDC_GET_NTB_FORMAT,
+ USB_TYPE_CLASS | USB_DIR_IN | USB_RECIP_INTERFACE,
+ 0, iface_no, &curr_ntb_format, 2);
+ if (err < 0) {
+ goto error2;
+ }
+
+ if (curr_ntb_format == USB_CDC_NCM_NTB32_FORMAT) {
+ dev_info(&intf->dev, "resetting NTB format to 16-bit\n");
+ err = usbnet_write_cmd(dev, USB_CDC_SET_NTB_FORMAT,
+ USB_TYPE_CLASS | USB_DIR_OUT
+ | USB_RECIP_INTERFACE,
+ USB_CDC_NCM_NTB16_FORMAT,
+ iface_no, NULL, 0);
+
+ if (err < 0)
+ goto error2;
+ }
+ }
+
cdc_ncm_find_endpoints(dev, ctx->data);
cdc_ncm_find_endpoints(dev, ctx->control);
if (!dev->in || !dev->out || !dev->status) {
diff --git a/drivers/net/usb/huawei_cdc_ncm.c b/drivers/net/usb/huawei_cdc_ncm.c
index 2680a65..63f28908 100644
--- a/drivers/net/usb/huawei_cdc_ncm.c
+++ b/drivers/net/usb/huawei_cdc_ncm.c
@@ -80,6 +80,12 @@ static int huawei_cdc_ncm_bind(struct usbnet *usbnet_dev,
* be at the end of the frame.
*/
drvflags |= CDC_NCM_FLAG_NDP_TO_END;
+
+ /* Additionally, some Huawei E3372H devices, with firmware version
+ * 21.318.01.00.541, have been reported to come out of reset in NTB32
+ * format mode and therefore need to be switched back to NTB16.
+ */
+ drvflags |= CDC_NCM_FLAG_RESET_NTB16;
ret = cdc_ncm_bind_common(usbnet_dev, intf, 1, drvflags);
if (ret)
goto err;
diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
index 49a27dc..9cf11c8 100644
--- a/drivers/net/usb/qmi_wwan.c
+++ b/drivers/net/usb/qmi_wwan.c
@@ -205,6 +205,7 @@ static int qmi_wwan_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
return 1;
}
if (rawip) {
+ skb_reset_mac_header(skb);
skb->dev = dev->net; /* normally set by eth_type_trans */
skb->protocol = proto;
return 1;
@@ -386,7 +387,7 @@ static int qmi_wwan_bind(struct usbnet *dev, struct usb_interface *intf)
}
/* errors aren't fatal - we can live with the dynamic address */
- if (cdc_ether) {
+ if (cdc_ether && cdc_ether->wMaxSegmentSize) {
dev->hard_mtu = le16_to_cpu(cdc_ether->wMaxSegmentSize);
usbnet_get_ethernet_addr(dev, cdc_ether->iMACAddress);
}
diff --git a/drivers/net/vrf.c b/drivers/net/vrf.c
index 578bd50..346e486 100644
--- a/drivers/net/vrf.c
+++ b/drivers/net/vrf.c
@@ -1129,7 +1129,7 @@ static int vrf_fib_rule(const struct net_device *dev, __u8 family, bool add_it)
frh->family = family;
frh->action = FR_ACT_TO_TBL;
- if (nla_put_u32(skb, FRA_L3MDEV, 1))
+ if (nla_put_u8(skb, FRA_L3MDEV, 1))
goto nla_put_failure;
if (nla_put_u32(skb, FRA_PRIORITY, FIB_RULE_PREF))
diff --git a/drivers/net/wireless/ath/ath10k/ahb.c b/drivers/net/wireless/ath/ath10k/ahb.c
index 766c63b..45226db 100644
--- a/drivers/net/wireless/ath/ath10k/ahb.c
+++ b/drivers/net/wireless/ath/ath10k/ahb.c
@@ -33,6 +33,9 @@ static const struct of_device_id ath10k_ahb_of_match[] = {
MODULE_DEVICE_TABLE(of, ath10k_ahb_of_match);
+#define QCA4019_SRAM_ADDR 0x000C0000
+#define QCA4019_SRAM_LEN 0x00040000 /* 256 KiB */
+
static inline struct ath10k_ahb *ath10k_ahb_priv(struct ath10k *ar)
{
return &((struct ath10k_pci *)ar->drv_priv)->ahb[0];
@@ -699,6 +702,25 @@ static int ath10k_ahb_hif_power_up(struct ath10k *ar)
return ret;
}
+static u32 ath10k_ahb_qca4019_targ_cpu_to_ce_addr(struct ath10k *ar, u32 addr)
+{
+ u32 val = 0, region = addr & 0xfffff;
+
+ val = ath10k_pci_read32(ar, PCIE_BAR_REG_ADDRESS);
+
+ if (region >= QCA4019_SRAM_ADDR && region <=
+ (QCA4019_SRAM_ADDR + QCA4019_SRAM_LEN)) {
+ /* SRAM contents for QCA4019 can be directly accessed and
+ * no conversions are required
+ */
+ val |= region;
+ } else {
+ val |= 0x100000 | region;
+ }
+
+ return val;
+}
+
static const struct ath10k_hif_ops ath10k_ahb_hif_ops = {
.tx_sg = ath10k_pci_hif_tx_sg,
.diag_read = ath10k_pci_hif_diag_read,
@@ -766,6 +788,7 @@ static int ath10k_ahb_probe(struct platform_device *pdev)
ar_pci->mem_len = ar_ahb->mem_len;
ar_pci->ar = ar;
ar_pci->bus_ops = &ath10k_ahb_bus_ops;
+ ar_pci->targ_cpu_to_ce_addr = ath10k_ahb_qca4019_targ_cpu_to_ce_addr;
ret = ath10k_pci_setup_resource(ar);
if (ret) {
diff --git a/drivers/net/wireless/ath/ath10k/pci.c b/drivers/net/wireless/ath/ath10k/pci.c
index 410bcda..25b8d50 100644
--- a/drivers/net/wireless/ath/ath10k/pci.c
+++ b/drivers/net/wireless/ath/ath10k/pci.c
@@ -840,29 +840,33 @@ void ath10k_pci_rx_replenish_retry(unsigned long ptr)
ath10k_pci_rx_post(ar);
}
+static u32 ath10k_pci_qca988x_targ_cpu_to_ce_addr(struct ath10k *ar, u32 addr)
+{
+ u32 val = 0, region = addr & 0xfffff;
+
+ val = (ath10k_pci_read32(ar, SOC_CORE_BASE_ADDRESS + CORE_CTRL_ADDRESS)
+ & 0x7ff) << 21;
+ val |= 0x100000 | region;
+ return val;
+}
+
+static u32 ath10k_pci_qca99x0_targ_cpu_to_ce_addr(struct ath10k *ar, u32 addr)
+{
+ u32 val = 0, region = addr & 0xfffff;
+
+ val = ath10k_pci_read32(ar, PCIE_BAR_REG_ADDRESS);
+ val |= 0x100000 | region;
+ return val;
+}
+
static u32 ath10k_pci_targ_cpu_to_ce_addr(struct ath10k *ar, u32 addr)
{
- u32 val = 0;
+ struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
- switch (ar->hw_rev) {
- case ATH10K_HW_QCA988X:
- case ATH10K_HW_QCA9887:
- case ATH10K_HW_QCA6174:
- case ATH10K_HW_QCA9377:
- val = (ath10k_pci_read32(ar, SOC_CORE_BASE_ADDRESS +
- CORE_CTRL_ADDRESS) &
- 0x7ff) << 21;
- break;
- case ATH10K_HW_QCA9888:
- case ATH10K_HW_QCA99X0:
- case ATH10K_HW_QCA9984:
- case ATH10K_HW_QCA4019:
- val = ath10k_pci_read32(ar, PCIE_BAR_REG_ADDRESS);
- break;
- }
+ if (WARN_ON_ONCE(!ar_pci->targ_cpu_to_ce_addr))
+ return -ENOTSUPP;
- val |= 0x100000 | (addr & 0xfffff);
- return val;
+ return ar_pci->targ_cpu_to_ce_addr(ar, addr);
}
/*
@@ -3171,6 +3175,7 @@ static int ath10k_pci_probe(struct pci_dev *pdev,
bool pci_ps;
int (*pci_soft_reset)(struct ath10k *ar);
int (*pci_hard_reset)(struct ath10k *ar);
+ u32 (*targ_cpu_to_ce_addr)(struct ath10k *ar, u32 addr);
switch (pci_dev->device) {
case QCA988X_2_0_DEVICE_ID:
@@ -3178,12 +3183,14 @@ static int ath10k_pci_probe(struct pci_dev *pdev,
pci_ps = false;
pci_soft_reset = ath10k_pci_warm_reset;
pci_hard_reset = ath10k_pci_qca988x_chip_reset;
+ targ_cpu_to_ce_addr = ath10k_pci_qca988x_targ_cpu_to_ce_addr;
break;
case QCA9887_1_0_DEVICE_ID:
hw_rev = ATH10K_HW_QCA9887;
pci_ps = false;
pci_soft_reset = ath10k_pci_warm_reset;
pci_hard_reset = ath10k_pci_qca988x_chip_reset;
+ targ_cpu_to_ce_addr = ath10k_pci_qca988x_targ_cpu_to_ce_addr;
break;
case QCA6164_2_1_DEVICE_ID:
case QCA6174_2_1_DEVICE_ID:
@@ -3191,30 +3198,35 @@ static int ath10k_pci_probe(struct pci_dev *pdev,
pci_ps = true;
pci_soft_reset = ath10k_pci_warm_reset;
pci_hard_reset = ath10k_pci_qca6174_chip_reset;
+ targ_cpu_to_ce_addr = ath10k_pci_qca988x_targ_cpu_to_ce_addr;
break;
case QCA99X0_2_0_DEVICE_ID:
hw_rev = ATH10K_HW_QCA99X0;
pci_ps = false;
pci_soft_reset = ath10k_pci_qca99x0_soft_chip_reset;
pci_hard_reset = ath10k_pci_qca99x0_chip_reset;
+ targ_cpu_to_ce_addr = ath10k_pci_qca99x0_targ_cpu_to_ce_addr;
break;
case QCA9984_1_0_DEVICE_ID:
hw_rev = ATH10K_HW_QCA9984;
pci_ps = false;
pci_soft_reset = ath10k_pci_qca99x0_soft_chip_reset;
pci_hard_reset = ath10k_pci_qca99x0_chip_reset;
+ targ_cpu_to_ce_addr = ath10k_pci_qca99x0_targ_cpu_to_ce_addr;
break;
case QCA9888_2_0_DEVICE_ID:
hw_rev = ATH10K_HW_QCA9888;
pci_ps = false;
pci_soft_reset = ath10k_pci_qca99x0_soft_chip_reset;
pci_hard_reset = ath10k_pci_qca99x0_chip_reset;
+ targ_cpu_to_ce_addr = ath10k_pci_qca99x0_targ_cpu_to_ce_addr;
break;
case QCA9377_1_0_DEVICE_ID:
hw_rev = ATH10K_HW_QCA9377;
pci_ps = true;
pci_soft_reset = NULL;
pci_hard_reset = ath10k_pci_qca6174_chip_reset;
+ targ_cpu_to_ce_addr = ath10k_pci_qca988x_targ_cpu_to_ce_addr;
break;
default:
WARN_ON(1);
@@ -3241,6 +3253,7 @@ static int ath10k_pci_probe(struct pci_dev *pdev,
ar_pci->bus_ops = &ath10k_pci_bus_ops;
ar_pci->pci_soft_reset = pci_soft_reset;
ar_pci->pci_hard_reset = pci_hard_reset;
+ ar_pci->targ_cpu_to_ce_addr = targ_cpu_to_ce_addr;
ar->id.vendor = pdev->vendor;
ar->id.device = pdev->device;
diff --git a/drivers/net/wireless/ath/ath10k/pci.h b/drivers/net/wireless/ath/ath10k/pci.h
index 9854ad5..577bb87 100644
--- a/drivers/net/wireless/ath/ath10k/pci.h
+++ b/drivers/net/wireless/ath/ath10k/pci.h
@@ -238,6 +238,11 @@ struct ath10k_pci {
/* Chip specific pci full reset function */
int (*pci_hard_reset)(struct ath10k *ar);
+ /* chip specific methods for converting target CPU virtual address
+ * space to CE address space
+ */
+ u32 (*targ_cpu_to_ce_addr)(struct ath10k *ar, u32 addr);
+
/* Keep this entry in the last, memory for struct ath10k_ahb is
* allocated (ahb support enabled case) in the continuation of
* this struct.
diff --git a/drivers/net/wireless/ath/wcn36xx/main.c b/drivers/net/wireless/ath/wcn36xx/main.c
index e1d59da..ca8797c 100644
--- a/drivers/net/wireless/ath/wcn36xx/main.c
+++ b/drivers/net/wireless/ath/wcn36xx/main.c
@@ -1165,11 +1165,12 @@ static int wcn36xx_remove(struct platform_device *pdev)
wcn36xx_dbg(WCN36XX_DBG_MAC, "platform remove\n");
release_firmware(wcn->nv);
- mutex_destroy(&wcn->hal_mutex);
ieee80211_unregister_hw(hw);
iounmap(wcn->dxe_base);
iounmap(wcn->ccu_base);
+
+ mutex_destroy(&wcn->hal_mutex);
ieee80211_free_hw(hw);
return 0;
diff --git a/drivers/net/wireless/ath/wil6210/main.c b/drivers/net/wireless/ath/wil6210/main.c
index cadb36a..ae5a1b6 100644
--- a/drivers/net/wireless/ath/wil6210/main.c
+++ b/drivers/net/wireless/ath/wil6210/main.c
@@ -1143,6 +1143,10 @@ int wil_reset(struct wil6210_priv *wil, bool load_fw)
if (wil->tt_data_set)
wmi_set_tt_cfg(wil, &wil->tt_data);
+ if (wil->snr_thresh.enabled)
+ wmi_set_snr_thresh(wil, wil->snr_thresh.omni,
+ wil->snr_thresh.direct);
+
if (wil->platform_ops.notify) {
rc = wil->platform_ops.notify(wil->platform_handle,
WIL_PLATFORM_EVT_FW_RDY);
diff --git a/drivers/net/wireless/ath/wil6210/sysfs.c b/drivers/net/wireless/ath/wil6210/sysfs.c
index b91bf51..7c9a790 100644
--- a/drivers/net/wireless/ath/wil6210/sysfs.c
+++ b/drivers/net/wireless/ath/wil6210/sysfs.c
@@ -268,10 +268,49 @@ static DEVICE_ATTR(fst_link_loss, 0644,
wil_fst_link_loss_sysfs_show,
wil_fst_link_loss_sysfs_store);
+static ssize_t
+wil_snr_thresh_sysfs_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct wil6210_priv *wil = dev_get_drvdata(dev);
+ ssize_t len = 0;
+
+ if (wil->snr_thresh.enabled)
+ len = snprintf(buf, PAGE_SIZE, "omni=%d, direct=%d\n",
+ wil->snr_thresh.omni, wil->snr_thresh.direct);
+
+ return len;
+}
+
+static ssize_t
+wil_snr_thresh_sysfs_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct wil6210_priv *wil = dev_get_drvdata(dev);
+ int rc;
+ short omni, direct;
+
+ /* to disable snr threshold, set both omni and direct to 0 */
+ if (sscanf(buf, "%hd %hd", &omni, &direct) != 2)
+ return -EINVAL;
+
+ rc = wmi_set_snr_thresh(wil, omni, direct);
+ if (!rc)
+ rc = count;
+
+ return rc;
+}
+
+static DEVICE_ATTR(snr_thresh, 0644,
+ wil_snr_thresh_sysfs_show,
+ wil_snr_thresh_sysfs_store);
+
static struct attribute *wil6210_sysfs_entries[] = {
&dev_attr_ftm_txrx_offset.attr,
&dev_attr_thermal_throttling.attr,
&dev_attr_fst_link_loss.attr,
+ &dev_attr_snr_thresh.attr,
NULL
};
diff --git a/drivers/net/wireless/ath/wil6210/wil6210.h b/drivers/net/wireless/ath/wil6210/wil6210.h
index 52321f4..bb43f3f 100644
--- a/drivers/net/wireless/ath/wil6210/wil6210.h
+++ b/drivers/net/wireless/ath/wil6210/wil6210.h
@@ -751,6 +751,11 @@ struct wil6210_priv {
struct wil_ftm_priv ftm;
bool tt_data_set;
struct wmi_tt_data tt_data;
+ struct {
+ bool enabled;
+ short omni;
+ short direct;
+ } snr_thresh;
int fw_calib_result;
@@ -1070,4 +1075,5 @@ int wmi_link_maintain_cfg_write(struct wil6210_priv *wil,
const u8 *addr,
bool fst_link_loss);
+int wmi_set_snr_thresh(struct wil6210_priv *wil, short omni, short direct);
#endif /* __WIL6210_H__ */
diff --git a/drivers/net/wireless/ath/wil6210/wmi.c b/drivers/net/wireless/ath/wil6210/wmi.c
index 205c3ab..9520c39 100644
--- a/drivers/net/wireless/ath/wil6210/wmi.c
+++ b/drivers/net/wireless/ath/wil6210/wmi.c
@@ -378,7 +378,7 @@ static void wmi_evt_rx_mgmt(struct wil6210_priv *wil, int id, void *d, int len)
s32 signal;
__le16 fc;
u32 d_len;
- u16 d_status;
+ s16 snr;
if (flen < 0) {
wil_err(wil, "MGMT Rx: short event, len %d\n", len);
@@ -400,13 +400,13 @@ static void wmi_evt_rx_mgmt(struct wil6210_priv *wil, int id, void *d, int len)
signal = 100 * data->info.rssi;
else
signal = data->info.sqi;
- d_status = le16_to_cpu(data->info.status);
+ snr = le16_to_cpu(data->info.snr); /* 1/4 dB units */
fc = rx_mgmt_frame->frame_control;
wil_dbg_wmi(wil, "MGMT Rx: channel %d MCS %d RSSI %d SQI %d%%\n",
data->info.channel, data->info.mcs, data->info.rssi,
data->info.sqi);
- wil_dbg_wmi(wil, "status 0x%04x len %d fc 0x%04x\n", d_status, d_len,
+ wil_dbg_wmi(wil, "snr %ddB len %d fc 0x%04x\n", snr / 4, d_len,
le16_to_cpu(fc));
wil_dbg_wmi(wil, "qid %d mid %d cid %d\n",
data->info.qid, data->info.mid, data->info.cid);
@@ -434,6 +434,11 @@ static void wmi_evt_rx_mgmt(struct wil6210_priv *wil, int id, void *d, int len)
wil_dbg_wmi(wil, "Capability info : 0x%04x\n", cap);
+ if (wil->snr_thresh.enabled && snr < wil->snr_thresh.omni) {
+ wil_dbg_wmi(wil, "snr below threshold. dropping\n");
+ return;
+ }
+
bss = cfg80211_inform_bss_frame(wiphy, channel, rx_mgmt_frame,
d_len, signal, GFP_KERNEL);
if (bss) {
@@ -2165,3 +2170,32 @@ bool wil_is_wmi_idle(struct wil6210_priv *wil)
spin_unlock_irqrestore(&wil->wmi_ev_lock, flags);
return rc;
}
+
+int wmi_set_snr_thresh(struct wil6210_priv *wil, short omni, short direct)
+{
+ int rc;
+ struct wmi_set_connect_snr_thr_cmd cmd = {
+ .enable = true,
+ .omni_snr_thr = cpu_to_le16(omni),
+ .direct_snr_thr = cpu_to_le16(direct),
+ };
+
+ if (!test_bit(WMI_FW_CAPABILITY_CONNECT_SNR_THR, wil->fw_capabilities))
+ return -ENOTSUPP;
+
+ if (omni == 0 && direct == 0)
+ cmd.enable = false;
+
+ wil_dbg_wmi(wil, "%s snr thresh omni=%d, direct=%d (1/4 dB units)\n",
+ cmd.enable ? "enable" : "disable", omni, direct);
+
+ rc = wmi_send(wil, WMI_SET_CONNECT_SNR_THR_CMDID, &cmd, sizeof(cmd));
+ if (rc)
+ return rc;
+
+ wil->snr_thresh.enabled = cmd.enable;
+ wil->snr_thresh.omni = omni;
+ wil->snr_thresh.direct = direct;
+
+ return 0;
+}
diff --git a/drivers/net/wireless/ath/wil6210/wmi.h b/drivers/net/wireless/ath/wil6210/wmi.h
index fcefdd1..809e320 100644
--- a/drivers/net/wireless/ath/wil6210/wmi.h
+++ b/drivers/net/wireless/ath/wil6210/wmi.h
@@ -71,6 +71,7 @@ enum wmi_fw_capability {
WMI_FW_CAPABILITY_RSSI_REPORTING = 12,
WMI_FW_CAPABILITY_SET_SILENT_RSSI_TABLE = 13,
WMI_FW_CAPABILITY_LO_POWER_CALIB_FROM_OTP = 14,
+ WMI_FW_CAPABILITY_CONNECT_SNR_THR = 16,
WMI_FW_CAPABILITY_REF_CLOCK_CONTROL = 18,
WMI_FW_CAPABILITY_MAX,
};
@@ -1822,7 +1823,7 @@ struct wmi_rx_mgmt_info {
u8 range;
u8 sqi;
__le16 stype;
- __le16 status;
+ __le16 snr;
__le32 len;
/* Not resolved when == 0xFFFFFFFF == > Broadcast to all MIDS */
u8 qid;
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
index 1082f66..54354a3 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
@@ -147,7 +147,6 @@ static struct ieee80211_rate __wl_rates[] = {
.band = NL80211_BAND_2GHZ, \
.center_freq = (_freq), \
.hw_value = (_channel), \
- .flags = IEEE80211_CHAN_DISABLED, \
.max_antenna_gain = 0, \
.max_power = 30, \
}
@@ -156,7 +155,6 @@ static struct ieee80211_rate __wl_rates[] = {
.band = NL80211_BAND_5GHZ, \
.center_freq = 5000 + (5 * (_channel)), \
.hw_value = (_channel), \
- .flags = IEEE80211_CHAN_DISABLED, \
.max_antenna_gain = 0, \
.max_power = 30, \
}
@@ -4756,9 +4754,6 @@ static int brcmf_cfg80211_stop_ap(struct wiphy *wiphy, struct net_device *ndev)
err = brcmf_fil_cmd_int_set(ifp, BRCMF_C_SET_AP, 0);
if (err < 0)
brcmf_err("setting AP mode failed %d\n", err);
- err = brcmf_fil_cmd_int_set(ifp, BRCMF_C_SET_INFRA, 0);
- if (err < 0)
- brcmf_err("setting INFRA mode failed %d\n", err);
if (brcmf_feat_is_enabled(ifp, BRCMF_FEAT_MBSS))
brcmf_fil_iovar_int_set(ifp, "mbss", 0);
brcmf_fil_cmd_int_set(ifp, BRCMF_C_SET_REGULATORY,
@@ -6581,8 +6576,7 @@ static int brcmf_setup_wiphy(struct wiphy *wiphy, struct brcmf_if *ifp)
wiphy->bands[NL80211_BAND_5GHZ] = band;
}
}
- err = brcmf_setup_wiphybands(wiphy);
- return err;
+ return 0;
}
static s32 brcmf_config_dongle(struct brcmf_cfg80211_info *cfg)
@@ -6947,6 +6941,12 @@ struct brcmf_cfg80211_info *brcmf_cfg80211_attach(struct brcmf_pub *drvr,
goto priv_out;
}
+ err = brcmf_setup_wiphybands(wiphy);
+ if (err) {
+ brcmf_err("Setting wiphy bands failed (%d)\n", err);
+ goto wiphy_unreg_out;
+ }
+
/* If cfg80211 didn't disable 40MHz HT CAP in wiphy_register(),
* setup 40MHz in 2GHz band and enable OBSS scanning.
*/
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/debug.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/debug.c
index e64557c..6f8a4b0 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/debug.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/debug.c
@@ -32,16 +32,25 @@ static int brcmf_debug_create_memdump(struct brcmf_bus *bus, const void *data,
{
void *dump;
size_t ramsize;
+ int err;
ramsize = brcmf_bus_get_ramsize(bus);
- if (ramsize) {
- dump = vzalloc(len + ramsize);
- if (!dump)
- return -ENOMEM;
- memcpy(dump, data, len);
- brcmf_bus_get_memdump(bus, dump + len, ramsize);
- dev_coredumpv(bus->dev, dump, len + ramsize, GFP_KERNEL);
+ if (!ramsize)
+ return -ENOTSUPP;
+
+ dump = vzalloc(len + ramsize);
+ if (!dump)
+ return -ENOMEM;
+
+ memcpy(dump, data, len);
+ err = brcmf_bus_get_memdump(bus, dump + len, ramsize);
+ if (err) {
+ vfree(dump);
+ return err;
}
+
+ dev_coredumpv(bus->dev, dump, len + ramsize, GFP_KERNEL);
+
return 0;
}
diff --git a/drivers/net/wireless/cnss_utils/cnss_utils.c b/drivers/net/wireless/cnss_utils/cnss_utils.c
index d73846e..4955130 100644
--- a/drivers/net/wireless/cnss_utils/cnss_utils.c
+++ b/drivers/net/wireless/cnss_utils/cnss_utils.c
@@ -34,6 +34,11 @@ struct cnss_wlan_mac_addr {
u32 no_of_mac_addr_set;
};
+enum mac_type {
+ CNSS_MAC_PROVISIONED,
+ CNSS_MAC_DERIVED,
+};
+
static struct cnss_utils_priv {
struct cnss_unsafe_channel_list unsafe_channel_list;
struct cnss_dfs_nol_info dfs_nol_info;
@@ -42,8 +47,8 @@ static struct cnss_utils_priv {
/* generic spin-lock for dfs_nol info */
spinlock_t dfs_nol_info_lock;
int driver_load_cnt;
- bool is_wlan_mac_set;
struct cnss_wlan_mac_addr wlan_mac_addr;
+ struct cnss_wlan_mac_addr wlan_der_mac_addr;
enum cnss_utils_cc_src cc_source;
} *cnss_utils_priv;
@@ -189,7 +194,8 @@ int cnss_utils_get_driver_load_cnt(struct device *dev)
}
EXPORT_SYMBOL(cnss_utils_get_driver_load_cnt);
-int cnss_utils_set_wlan_mac_address(const u8 *in, const uint32_t len)
+static int set_wlan_mac_address(const u8 *mac_list, const uint32_t len,
+ enum mac_type type)
{
struct cnss_utils_priv *priv = cnss_utils_priv;
u32 no_of_mac_addr;
@@ -200,11 +206,6 @@ int cnss_utils_set_wlan_mac_address(const u8 *in, const uint32_t len)
if (!priv)
return -EINVAL;
- if (priv->is_wlan_mac_set) {
- pr_debug("WLAN MAC address is already set\n");
- return 0;
- }
-
if (len == 0 || (len % ETH_ALEN) != 0) {
pr_err("Invalid length %d\n", len);
return -EINVAL;
@@ -217,24 +218,45 @@ int cnss_utils_set_wlan_mac_address(const u8 *in, const uint32_t len)
return -EINVAL;
}
- priv->is_wlan_mac_set = true;
- addr = &priv->wlan_mac_addr;
+ if (type == CNSS_MAC_PROVISIONED)
+ addr = &priv->wlan_mac_addr;
+ else
+ addr = &priv->wlan_der_mac_addr;
+
+ if (addr->no_of_mac_addr_set) {
+ pr_err("WLAN MAC address is already set, num %d type %d\n",
+ addr->no_of_mac_addr_set, type);
+ return 0;
+ }
+
addr->no_of_mac_addr_set = no_of_mac_addr;
temp = &addr->mac_addr[0][0];
for (iter = 0; iter < no_of_mac_addr;
- ++iter, temp += ETH_ALEN, in += ETH_ALEN) {
- ether_addr_copy(temp, in);
+ ++iter, temp += ETH_ALEN, mac_list += ETH_ALEN) {
+ ether_addr_copy(temp, mac_list);
pr_debug("MAC_ADDR:%02x:%02x:%02x:%02x:%02x:%02x\n",
temp[0], temp[1], temp[2],
temp[3], temp[4], temp[5]);
}
-
return 0;
}
+
+int cnss_utils_set_wlan_mac_address(const u8 *mac_list, const uint32_t len)
+{
+ return set_wlan_mac_address(mac_list, len, CNSS_MAC_PROVISIONED);
+}
EXPORT_SYMBOL(cnss_utils_set_wlan_mac_address);
-u8 *cnss_utils_get_wlan_mac_address(struct device *dev, uint32_t *num)
+int cnss_utils_set_wlan_derived_mac_address(
+ const u8 *mac_list, const uint32_t len)
+{
+ return set_wlan_mac_address(mac_list, len, CNSS_MAC_DERIVED);
+}
+EXPORT_SYMBOL(cnss_utils_set_wlan_derived_mac_address);
+
+static u8 *get_wlan_mac_address(struct device *dev,
+ u32 *num, enum mac_type type)
{
struct cnss_utils_priv *priv = cnss_utils_priv;
struct cnss_wlan_mac_addr *addr = NULL;
@@ -242,20 +264,36 @@ u8 *cnss_utils_get_wlan_mac_address(struct device *dev, uint32_t *num)
if (!priv)
goto out;
- if (!priv->is_wlan_mac_set) {
- pr_debug("WLAN MAC address is not set\n");
+ if (type == CNSS_MAC_PROVISIONED)
+ addr = &priv->wlan_mac_addr;
+ else
+ addr = &priv->wlan_der_mac_addr;
+
+ if (!addr->no_of_mac_addr_set) {
+ pr_err("WLAN MAC address is not set, type %d\n", type);
goto out;
}
-
- addr = &priv->wlan_mac_addr;
*num = addr->no_of_mac_addr_set;
return &addr->mac_addr[0][0];
+
out:
*num = 0;
return NULL;
}
+
+u8 *cnss_utils_get_wlan_mac_address(struct device *dev, uint32_t *num)
+{
+ return get_wlan_mac_address(dev, num, CNSS_MAC_PROVISIONED);
+}
EXPORT_SYMBOL(cnss_utils_get_wlan_mac_address);
+u8 *cnss_utils_get_wlan_derived_mac_address(
+ struct device *dev, uint32_t *num)
+{
+ return get_wlan_mac_address(dev, num, CNSS_MAC_DERIVED);
+}
+EXPORT_SYMBOL(cnss_utils_get_wlan_derived_mac_address);
+
void cnss_utils_set_cc_source(struct device *dev,
enum cnss_utils_cc_src cc_source)
{
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
index 0556d13..092ae00 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
@@ -499,15 +499,17 @@ static int iwl_mvm_get_ctrl_vif_queue(struct iwl_mvm *mvm,
switch (info->control.vif->type) {
case NL80211_IFTYPE_AP:
/*
- * handle legacy hostapd as well, where station may be added
- * only after assoc.
+ * Handle legacy hostapd as well, where station may be added
+ * only after assoc. Take care of the case where we send a
+ * deauth to a station that we don't have.
*/
- if (ieee80211_is_probe_resp(fc) || ieee80211_is_auth(fc))
+ if (ieee80211_is_probe_resp(fc) || ieee80211_is_auth(fc) ||
+ ieee80211_is_deauth(fc))
return IWL_MVM_DQA_AP_PROBE_RESP_QUEUE;
if (info->hw_queue == info->control.vif->cab_queue)
return info->hw_queue;
- WARN_ON_ONCE(1);
+ WARN_ONCE(1, "fc=0x%02x", le16_to_cpu(fc));
return IWL_MVM_DQA_AP_PROBE_RESP_QUEUE;
case NL80211_IFTYPE_P2P_DEVICE:
if (ieee80211_is_mgmt(fc))
diff --git a/drivers/net/wireless/marvell/libertas/cmd.c b/drivers/net/wireless/marvell/libertas/cmd.c
index 301170c..033ff88 100644
--- a/drivers/net/wireless/marvell/libertas/cmd.c
+++ b/drivers/net/wireless/marvell/libertas/cmd.c
@@ -305,7 +305,7 @@ int lbs_cmd_802_11_sleep_params(struct lbs_private *priv, uint16_t cmd_action,
}
lbs_deb_leave_args(LBS_DEB_CMD, "ret %d", ret);
- return 0;
+ return ret;
}
static int lbs_wait_for_ds_awake(struct lbs_private *priv)
diff --git a/drivers/net/wireless/ralink/rt2x00/rt2800usb.c b/drivers/net/wireless/ralink/rt2x00/rt2800usb.c
index 4b0bb6b..c636e60 100644
--- a/drivers/net/wireless/ralink/rt2x00/rt2800usb.c
+++ b/drivers/net/wireless/ralink/rt2x00/rt2800usb.c
@@ -646,10 +646,9 @@ static void rt2800usb_txdone_nostatus(struct rt2x00_dev *rt2x00dev)
!test_bit(ENTRY_DATA_STATUS_PENDING, &entry->flags))
break;
- if (test_bit(ENTRY_DATA_IO_FAILED, &entry->flags))
+ if (test_bit(ENTRY_DATA_IO_FAILED, &entry->flags) ||
+ rt2800usb_entry_txstatus_timeout(entry))
rt2x00lib_txdone_noinfo(entry, TXDONE_FAILURE);
- else if (rt2800usb_entry_txstatus_timeout(entry))
- rt2x00lib_txdone_noinfo(entry, TXDONE_UNKNOWN);
else
break;
}
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index d9b5b73..a7bdb1f 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -67,6 +67,7 @@ module_param(rx_drain_timeout_msecs, uint, 0444);
unsigned int rx_stall_timeout_msecs = 60000;
module_param(rx_stall_timeout_msecs, uint, 0444);
+#define MAX_QUEUES_DEFAULT 8
unsigned int xenvif_max_queues;
module_param_named(max_queues, xenvif_max_queues, uint, 0644);
MODULE_PARM_DESC(max_queues,
@@ -1626,11 +1627,12 @@ static int __init netback_init(void)
if (!xen_domain())
return -ENODEV;
- /* Allow as many queues as there are CPUs if user has not
+ /* Allow as many queues as there are CPUs, but at most 8, if the user has not
* specified a value.
*/
if (xenvif_max_queues == 0)
- xenvif_max_queues = num_online_cpus();
+ xenvif_max_queues = min_t(unsigned int, MAX_QUEUES_DEFAULT,
+ num_online_cpus());
if (fatal_skb_slots < XEN_NETBK_LEGACY_SLOTS_MAX) {
pr_info("fatal_skb_slots too small (%d), bump it to XEN_NETBK_LEGACY_SLOTS_MAX (%d)\n",
diff --git a/drivers/pci/access.c b/drivers/pci/access.c
index d11cdbb..7b5cf6d 100644
--- a/drivers/pci/access.c
+++ b/drivers/pci/access.c
@@ -672,8 +672,9 @@ void pci_cfg_access_unlock(struct pci_dev *dev)
WARN_ON(!dev->block_cfg_access);
dev->block_cfg_access = 0;
- wake_up_all(&pci_cfg_wait);
raw_spin_unlock_irqrestore(&pci_lock, flags);
+
+ wake_up_all(&pci_cfg_wait);
}
EXPORT_SYMBOL_GPL(pci_cfg_access_unlock);
diff --git a/drivers/pci/host/pci-mvebu.c b/drivers/pci/host/pci-mvebu.c
index 45a89d9..90e0b6f 100644
--- a/drivers/pci/host/pci-mvebu.c
+++ b/drivers/pci/host/pci-mvebu.c
@@ -133,6 +133,12 @@ struct mvebu_pcie {
int nports;
};
+struct mvebu_pcie_window {
+ phys_addr_t base;
+ phys_addr_t remap;
+ size_t size;
+};
+
/* Structure representing one PCIe interface */
struct mvebu_pcie_port {
char *name;
@@ -150,10 +156,8 @@ struct mvebu_pcie_port {
struct mvebu_sw_pci_bridge bridge;
struct device_node *dn;
struct mvebu_pcie *pcie;
- phys_addr_t memwin_base;
- size_t memwin_size;
- phys_addr_t iowin_base;
- size_t iowin_size;
+ struct mvebu_pcie_window memwin;
+ struct mvebu_pcie_window iowin;
u32 saved_pcie_stat;
};
@@ -379,23 +383,45 @@ static void mvebu_pcie_add_windows(struct mvebu_pcie_port *port,
}
}
+static void mvebu_pcie_set_window(struct mvebu_pcie_port *port,
+ unsigned int target, unsigned int attribute,
+ const struct mvebu_pcie_window *desired,
+ struct mvebu_pcie_window *cur)
+{
+ if (desired->base == cur->base && desired->remap == cur->remap &&
+ desired->size == cur->size)
+ return;
+
+ if (cur->size != 0) {
+ mvebu_pcie_del_windows(port, cur->base, cur->size);
+ cur->size = 0;
+ cur->base = 0;
+
+ /*
+ * If something tries to change the window while it is enabled
+ * the change will not be done atomically. That would be
+ * difficult to do in the general case.
+ */
+ }
+
+ if (desired->size == 0)
+ return;
+
+ mvebu_pcie_add_windows(port, target, attribute, desired->base,
+ desired->size, desired->remap);
+ *cur = *desired;
+}
+
static void mvebu_pcie_handle_iobase_change(struct mvebu_pcie_port *port)
{
- phys_addr_t iobase;
+ struct mvebu_pcie_window desired = {};
/* Are the new iobase/iolimit values invalid? */
if (port->bridge.iolimit < port->bridge.iobase ||
port->bridge.iolimitupper < port->bridge.iobaseupper ||
!(port->bridge.command & PCI_COMMAND_IO)) {
-
- /* If a window was configured, remove it */
- if (port->iowin_base) {
- mvebu_pcie_del_windows(port, port->iowin_base,
- port->iowin_size);
- port->iowin_base = 0;
- port->iowin_size = 0;
- }
-
+ mvebu_pcie_set_window(port, port->io_target, port->io_attr,
+ &desired, &port->iowin);
return;
}
@@ -412,32 +438,27 @@ static void mvebu_pcie_handle_iobase_change(struct mvebu_pcie_port *port)
* specifications. iobase is the bus address, port->iowin_base
* is the CPU address.
*/
- iobase = ((port->bridge.iobase & 0xF0) << 8) |
- (port->bridge.iobaseupper << 16);
- port->iowin_base = port->pcie->io.start + iobase;
- port->iowin_size = ((0xFFF | ((port->bridge.iolimit & 0xF0) << 8) |
- (port->bridge.iolimitupper << 16)) -
- iobase) + 1;
+ desired.remap = ((port->bridge.iobase & 0xF0) << 8) |
+ (port->bridge.iobaseupper << 16);
+ desired.base = port->pcie->io.start + desired.remap;
+ desired.size = ((0xFFF | ((port->bridge.iolimit & 0xF0) << 8) |
+ (port->bridge.iolimitupper << 16)) -
+ desired.remap) +
+ 1;
- mvebu_pcie_add_windows(port, port->io_target, port->io_attr,
- port->iowin_base, port->iowin_size,
- iobase);
+ mvebu_pcie_set_window(port, port->io_target, port->io_attr, &desired,
+ &port->iowin);
}
static void mvebu_pcie_handle_membase_change(struct mvebu_pcie_port *port)
{
+ struct mvebu_pcie_window desired = {.remap = MVEBU_MBUS_NO_REMAP};
+
/* Are the new membase/memlimit values invalid? */
if (port->bridge.memlimit < port->bridge.membase ||
!(port->bridge.command & PCI_COMMAND_MEMORY)) {
-
- /* If a window was configured, remove it */
- if (port->memwin_base) {
- mvebu_pcie_del_windows(port, port->memwin_base,
- port->memwin_size);
- port->memwin_base = 0;
- port->memwin_size = 0;
- }
-
+ mvebu_pcie_set_window(port, port->mem_target, port->mem_attr,
+ &desired, &port->memwin);
return;
}
@@ -447,14 +468,12 @@ static void mvebu_pcie_handle_membase_change(struct mvebu_pcie_port *port)
* window to setup, according to the PCI-to-PCI bridge
* specifications.
*/
- port->memwin_base = ((port->bridge.membase & 0xFFF0) << 16);
- port->memwin_size =
- (((port->bridge.memlimit & 0xFFF0) << 16) | 0xFFFFF) -
- port->memwin_base + 1;
+ desired.base = ((port->bridge.membase & 0xFFF0) << 16);
+ desired.size = (((port->bridge.memlimit & 0xFFF0) << 16) | 0xFFFFF) -
+ desired.base + 1;
- mvebu_pcie_add_windows(port, port->mem_target, port->mem_attr,
- port->memwin_base, port->memwin_size,
- MVEBU_MBUS_NO_REMAP);
+ mvebu_pcie_set_window(port, port->mem_target, port->mem_attr, &desired,
+ &port->memwin);
}
/*
diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
index 3455f75..0e9a9db 100644
--- a/drivers/pci/msi.c
+++ b/drivers/pci/msi.c
@@ -730,7 +730,7 @@ static int msix_setup_entries(struct pci_dev *dev, void __iomem *base,
ret = 0;
out:
kfree(masks);
- return 0;
+ return ret;
}
static void msix_program_entries(struct pci_dev *dev,
diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index ad3e1e7..ffcd001 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -743,6 +743,15 @@ static void cpu_pm_pmu_setup(struct arm_pmu *armpmu, unsigned long cmd)
continue;
event = hw_events->events[idx];
+ if (!event)
+ continue;
+
+ /*
+ * Check whether an attempt was made to free this event while
+ * the CPU was offline.
+ */
+ if (event->state == PERF_EVENT_STATE_ZOMBIE)
+ continue;
switch (cmd) {
case CPU_PM_ENTER:
diff --git a/drivers/pinctrl/intel/pinctrl-baytrail.c b/drivers/pinctrl/intel/pinctrl-baytrail.c
index 5419de8..0a96502 100644
--- a/drivers/pinctrl/intel/pinctrl-baytrail.c
+++ b/drivers/pinctrl/intel/pinctrl-baytrail.c
@@ -1466,7 +1466,7 @@ static void byt_gpio_dbg_show(struct seq_file *s, struct gpio_chip *chip)
val & BYT_INPUT_EN ? " " : "in",
val & BYT_OUTPUT_EN ? " " : "out",
val & BYT_LEVEL ? "hi" : "lo",
- comm->pad_map[i], comm->pad_map[i] * 32,
+ comm->pad_map[i], comm->pad_map[i] * 16,
conf0 & 0x7,
conf0 & BYT_TRIG_NEG ? " fall" : " ",
conf0 & BYT_TRIG_POS ? " rise" : " ",
diff --git a/drivers/pinctrl/qcom/pinctrl-msm.c b/drivers/pinctrl/qcom/pinctrl-msm.c
index e63f1a0..c8f8813 100644
--- a/drivers/pinctrl/qcom/pinctrl-msm.c
+++ b/drivers/pinctrl/qcom/pinctrl-msm.c
@@ -818,6 +818,9 @@ static void msm_dirconn_irq_ack(struct irq_data *d)
struct irq_desc *desc = irq_data_to_desc(d);
struct irq_data *parent_data = irq_get_irq_data(desc->parent_irq);
+ if (!parent_data)
+ return;
+
if (parent_data->chip->irq_ack)
parent_data->chip->irq_ack(parent_data);
}
@@ -827,6 +830,9 @@ static void msm_dirconn_irq_eoi(struct irq_data *d)
struct irq_desc *desc = irq_data_to_desc(d);
struct irq_data *parent_data = irq_get_irq_data(desc->parent_irq);
+ if (!parent_data)
+ return;
+
if (parent_data->chip->irq_eoi)
parent_data->chip->irq_eoi(parent_data);
}
@@ -852,6 +858,9 @@ static int msm_dirconn_irq_set_vcpu_affinity(struct irq_data *d,
struct irq_desc *desc = irq_data_to_desc(d);
struct irq_data *parent_data = irq_get_irq_data(desc->parent_irq);
+ if (!parent_data)
+ return 0;
+
if (parent_data->chip->irq_set_vcpu_affinity)
return parent_data->chip->irq_set_vcpu_affinity(parent_data,
vcpu_info);
diff --git a/drivers/pinctrl/qcom/pinctrl-sdm670.c b/drivers/pinctrl/qcom/pinctrl-sdm670.c
index 6145c75..f7af6da 100644
--- a/drivers/pinctrl/qcom/pinctrl-sdm670.c
+++ b/drivers/pinctrl/qcom/pinctrl-sdm670.c
@@ -54,6 +54,8 @@
.intr_cfg_reg = base + 0x8 + REG_SIZE * id, \
.intr_status_reg = base + 0xc + REG_SIZE * id, \
.intr_target_reg = base + 0x8 + REG_SIZE * id, \
+ .dir_conn_reg = (base == NORTH) ? base + 0xa3000 : \
+ ((base == SOUTH) ? base + 0xa6000 : base + 0xa4000), \
.mux_bit = 2, \
.pull_bit = 0, \
.drv_bit = 6, \
@@ -68,6 +70,7 @@
.intr_polarity_bit = 1, \
.intr_detection_bit = 2, \
.intr_detection_width = 2, \
+ .dir_conn_en_bit = 8, \
}
#define SDC_QDSD_PINGROUP(pg_name, ctl, pull, drv) \
@@ -1651,6 +1654,14 @@ static const struct msm_dir_conn sdm670_dir_conn[] = {
{132, 621},
{133, 622},
{145, 623},
+ {0, 216},
+ {0, 215},
+ {0, 214},
+ {0, 213},
+ {0, 212},
+ {0, 211},
+ {0, 210},
+ {0, 209},
};
static const struct msm_pinctrl_soc_data sdm670_pinctrl = {
@@ -1663,6 +1674,7 @@ static const struct msm_pinctrl_soc_data sdm670_pinctrl = {
.ngpios = 150,
.dir_conn = sdm670_dir_conn,
.n_dir_conns = ARRAY_SIZE(sdm670_dir_conn),
+ .dir_conn_irq_base = 216,
};
static int sdm670_pinctrl_probe(struct platform_device *pdev)
diff --git a/drivers/platform/msm/Kconfig b/drivers/platform/msm/Kconfig
index aef0db2..6117d4d 100644
--- a/drivers/platform/msm/Kconfig
+++ b/drivers/platform/msm/Kconfig
@@ -112,6 +112,27 @@
help
No-Data-Path BAM is used to improve BAM performance.
+config EP_PCIE
+ bool "PCIe Endpoint mode support"
+ select GENERIC_ALLOCATOR
+ help
+ PCIe controller is in endpoint mode.
+ It supports the APIs to clients as a service layer, and allows
+ clients to enable/disable PCIe link, configure the address
+ mapping for the access to host memory, trigger wake interrupt
+ on host side to wake up host, and trigger MSI to host side.
+
+config EP_PCIE_HW
+ bool "PCIe Endpoint HW driver"
+ depends on EP_PCIE
+ help
+ PCIe endpoint HW specific implementation.
+ It supports:
+ 1. link training with Root Complex.
+ 2. Address mapping.
+ 3. Sideband signaling.
+ 4. Power management.
+
config QPNP_COINCELL
tristate "QPNP coincell charger support"
depends on SPMI
diff --git a/drivers/platform/msm/Makefile b/drivers/platform/msm/Makefile
index 27179b9..bee32c2 100644
--- a/drivers/platform/msm/Makefile
+++ b/drivers/platform/msm/Makefile
@@ -7,6 +7,7 @@
obj-$(CONFIG_SPS) += sps/
obj-$(CONFIG_QPNP_COINCELL) += qpnp-coincell.o
obj-$(CONFIG_QPNP_REVID) += qpnp-revid.o
+obj-$(CONFIG_EP_PCIE) += ep_pcie/
obj-$(CONFIG_MSM_MHI_DEV) += mhi_dev/
obj-$(CONFIG_USB_BAM) += usb_bam.o
obj-$(CONFIG_MSM_11AD) += msm_11ad/
diff --git a/drivers/platform/msm/ep_pcie/Makefile b/drivers/platform/msm/ep_pcie/Makefile
new file mode 100644
index 0000000..0567e15
--- /dev/null
+++ b/drivers/platform/msm/ep_pcie/Makefile
@@ -0,0 +1,2 @@
+obj-$(CONFIG_EP_PCIE) += ep_pcie.o
+obj-$(CONFIG_EP_PCIE_HW) += ep_pcie_core.o ep_pcie_phy.o ep_pcie_dbg.o
diff --git a/drivers/platform/msm/ep_pcie/ep_pcie.c b/drivers/platform/msm/ep_pcie/ep_pcie.c
new file mode 100644
index 0000000..ecff4c4
--- /dev/null
+++ b/drivers/platform/msm/ep_pcie/ep_pcie.c
@@ -0,0 +1,230 @@
+/* Copyright (c) 2015, 2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+/*
+ * MSM PCIe endpoint service layer.
+ */
+#include <linux/types.h>
+#include <linux/list.h>
+#include <linux/compiler.h>
+#include <linux/kernel.h>
+#include <linux/bitops.h>
+#include <linux/errno.h>
+#include "ep_pcie_com.h"
+
+LIST_HEAD(head);
+
+int ep_pcie_register_drv(struct ep_pcie_hw *handle)
+{
+ struct ep_pcie_hw *present;
+ bool new = true;
+
+ if (!handle) {
+ pr_err("ep_pcie:%s: the input handle is NULL.",
+ __func__);
+ return -EINVAL;
+ }
+
+ list_for_each_entry(present, &head, node) {
+ if (present->device_id == handle->device_id) {
+ new = false;
+ break;
+ }
+ }
+
+ if (new) {
+ list_add(&handle->node, &head);
+ pr_debug("ep_pcie:%s: register a new driver for device 0x%x.",
+ __func__, handle->device_id);
+ return 0;
+ }
+ pr_debug(
"ep_pcie:%s: driver to register for device 0x%x already exists.",
+ __func__, handle->device_id);
+ return -EEXIST;
+}
+EXPORT_SYMBOL(ep_pcie_register_drv);
+
+int ep_pcie_deregister_drv(struct ep_pcie_hw *handle)
+{
+ struct ep_pcie_hw *present;
+ bool found = false;
+
+ if (!handle) {
+ pr_err("ep_pcie:%s: the input handle is NULL.",
+ __func__);
+ return -EINVAL;
+ }
+
+ list_for_each_entry(present, &head, node) {
+ if (present->device_id == handle->device_id) {
+ found = true;
+ list_del(&handle->node);
+ break;
+ }
+ }
+
+ if (found) {
+ pr_debug("ep_pcie:%s: deregistered driver for device 0x%x.",
+ __func__, handle->device_id);
+ return 0;
+ }
+ pr_err("ep_pcie:%s: driver for device 0x%x does not exist.",
+ __func__, handle->device_id);
+ return -EEXIST;
+}
+EXPORT_SYMBOL(ep_pcie_deregister_drv);
+
+struct ep_pcie_hw *ep_pcie_get_phandle(u32 id)
+{
+ struct ep_pcie_hw *present;
+
+ list_for_each_entry(present, &head, node) {
+ if (present->device_id == id) {
+ pr_debug("ep_pcie:%s: found driver for device 0x%x.",
+ __func__, id);
+ return present;
+ }
+ }
+
+ pr_debug("ep_pcie:%s: driver for device 0x%x does not exist.",
+ __func__, id);
+ return NULL;
+}
+EXPORT_SYMBOL(ep_pcie_get_phandle);
+
+int ep_pcie_register_event(struct ep_pcie_hw *phandle,
+ struct ep_pcie_register_event *reg)
+{
+ if (phandle)
+ return phandle->register_event(reg);
+
+ return ep_pcie_core_register_event(reg);
+}
+EXPORT_SYMBOL(ep_pcie_register_event);
+
+int ep_pcie_deregister_event(struct ep_pcie_hw *phandle)
+{
+ if (phandle)
+ return phandle->deregister_event();
+
+ pr_err("ep_pcie:%s: the input driver handle is NULL.",
+ __func__);
+ return -EINVAL;
+}
+EXPORT_SYMBOL(ep_pcie_deregister_event);
+
+enum ep_pcie_link_status ep_pcie_get_linkstatus(struct ep_pcie_hw *phandle)
+{
+ if (phandle)
+ return phandle->get_linkstatus();
+
+ pr_err("ep_pcie:%s: the input driver handle is NULL.",
+ __func__);
+ return -EINVAL;
+}
+EXPORT_SYMBOL(ep_pcie_get_linkstatus);
+
+int ep_pcie_config_outbound_iatu(struct ep_pcie_hw *phandle,
+ struct ep_pcie_iatu entries[],
+ u32 num_entries)
+{
+ if (phandle)
+ return phandle->config_outbound_iatu(entries, num_entries);
+
+ pr_err("ep_pcie:%s: the input driver handle is NULL.",
+ __func__);
+ return -EINVAL;
+}
+EXPORT_SYMBOL(ep_pcie_config_outbound_iatu);
+
+int ep_pcie_get_msi_config(struct ep_pcie_hw *phandle,
+ struct ep_pcie_msi_config *cfg)
+{
+ if (phandle)
+ return phandle->get_msi_config(cfg);
+
+ pr_err("ep_pcie:%s: the input driver handle is NULL.",
+ __func__);
+ return -EINVAL;
+}
+EXPORT_SYMBOL(ep_pcie_get_msi_config);
+
+int ep_pcie_trigger_msi(struct ep_pcie_hw *phandle, u32 idx)
+{
+ if (phandle)
+ return phandle->trigger_msi(idx);
+
+ pr_err("ep_pcie:%s: the input driver handle is NULL.",
+ __func__);
+ return -EINVAL;
+}
+EXPORT_SYMBOL(ep_pcie_trigger_msi);
+
+int ep_pcie_wakeup_host(struct ep_pcie_hw *phandle)
+{
+ if (phandle)
+ return phandle->wakeup_host();
+
+ pr_err("ep_pcie:%s: the input driver handle is NULL.",
+ __func__);
+ return -EINVAL;
+}
+EXPORT_SYMBOL(ep_pcie_wakeup_host);
+
+int ep_pcie_config_db_routing(struct ep_pcie_hw *phandle,
+ struct ep_pcie_db_config chdb_cfg,
+ struct ep_pcie_db_config erdb_cfg)
+{
+ if (phandle)
+ return phandle->config_db_routing(chdb_cfg, erdb_cfg);
+
+ pr_err("ep_pcie:%s: the input driver handle is NULL.",
+ __func__);
+ return -EINVAL;
+}
+EXPORT_SYMBOL(ep_pcie_config_db_routing);
+
+int ep_pcie_enable_endpoint(struct ep_pcie_hw *phandle,
+ enum ep_pcie_options opt)
+{
+ if (phandle)
+ return phandle->enable_endpoint(opt);
+
+ pr_err("ep_pcie:%s: the input driver handle is NULL.",
+ __func__);
+ return -EINVAL;
+}
+EXPORT_SYMBOL(ep_pcie_enable_endpoint);
+
+int ep_pcie_disable_endpoint(struct ep_pcie_hw *phandle)
+{
+ if (phandle)
+ return phandle->disable_endpoint();
+
+ pr_err("ep_pcie:%s: the input driver handle is NULL.",
+ __func__);
+ return -EINVAL;
+}
+EXPORT_SYMBOL(ep_pcie_disable_endpoint);
+
+int ep_pcie_mask_irq_event(struct ep_pcie_hw *phandle,
+ enum ep_pcie_irq_event event,
+ bool enable)
+{
+ if (phandle)
+ return phandle->mask_irq_event(event, enable);
+
+ pr_err("ep_pcie:%s: the input driver handle is NULL.", __func__);
+ return -EINVAL;
+}
+EXPORT_SYMBOL(ep_pcie_mask_irq_event);
diff --git a/drivers/platform/msm/ep_pcie/ep_pcie_com.h b/drivers/platform/msm/ep_pcie/ep_pcie_com.h
new file mode 100644
index 0000000..c02aabf
--- /dev/null
+++ b/drivers/platform/msm/ep_pcie/ep_pcie_com.h
@@ -0,0 +1,394 @@
+/* Copyright (c) 2015-2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef __EP_PCIE_COM_H
+#define __EP_PCIE_COM_H
+
+#include <linux/io.h>
+#include <linux/clk.h>
+#include <linux/compiler.h>
+#include <linux/ipc_logging.h>
+#include <linux/platform_device.h>
+#include <linux/regulator/consumer.h>
+#include <linux/types.h>
+#include <linux/uaccess.h>
+#include <linux/delay.h>
+#include <linux/msm_ep_pcie.h>
+
+#define PCIE20_PARF_SYS_CTRL 0x00
+#define PCIE20_PARF_DB_CTRL 0x10
+#define PCIE20_PARF_PM_CTRL 0x20
+#define PCIE20_PARF_PM_STTS 0x24
+#define PCIE20_PARF_PHY_CTRL 0x40
+#define PCIE20_PARF_PHY_REFCLK 0x4C
+#define PCIE20_PARF_CONFIG_BITS 0x50
+#define PCIE20_PARF_TEST_BUS 0xE4
+#define PCIE20_PARF_MHI_BASE_ADDR_LOWER 0x178
+#define PCIE20_PARF_MHI_BASE_ADDR_UPPER 0x17c
+#define PCIE20_PARF_MSI_GEN 0x188
+#define PCIE20_PARF_DEBUG_INT_EN 0x190
+#define PCIE20_PARF_MHI_IPA_DBS 0x198
+#define PCIE20_PARF_MHI_IPA_CDB_TARGET_LOWER 0x19C
+#define PCIE20_PARF_MHI_IPA_EDB_TARGET_LOWER 0x1A0
+#define PCIE20_PARF_AXI_MSTR_RD_HALT_NO_WRITES 0x1A4
+#define PCIE20_PARF_AXI_MSTR_WR_ADDR_HALT 0x1A8
+#define PCIE20_PARF_Q2A_FLUSH 0x1AC
+#define PCIE20_PARF_LTSSM 0x1B0
+#define PCIE20_PARF_CFG_BITS 0x210
+#define PCIE20_PARF_LTR_MSI_EXIT_L1SS 0x214
+#define PCIE20_PARF_INT_ALL_STATUS 0x224
+#define PCIE20_PARF_INT_ALL_CLEAR 0x228
+#define PCIE20_PARF_INT_ALL_MASK 0x22C
+#define PCIE20_PARF_SLV_ADDR_MSB_CTRL 0x2C0
+#define PCIE20_PARF_DBI_BASE_ADDR 0x350
+#define PCIE20_PARF_DBI_BASE_ADDR_HI 0x354
+#define PCIE20_PARF_SLV_ADDR_SPACE_SIZE 0x358
+#define PCIE20_PARF_SLV_ADDR_SPACE_SIZE_HI 0x35C
+#define PCIE20_PARF_DEVICE_TYPE 0x1000
+
+#define PCIE20_ELBI_VERSION 0x00
+#define PCIE20_ELBI_SYS_CTRL 0x04
+#define PCIE20_ELBI_SYS_STTS 0x08
+#define PCIE20_ELBI_CS2_ENABLE 0xA4
+
+#define PCIE20_DEVICE_ID_VENDOR_ID 0x00
+#define PCIE20_COMMAND_STATUS 0x04
+#define PCIE20_CLASS_CODE_REVISION_ID 0x08
+#define PCIE20_BIST_HDR_TYPE 0x0C
+#define PCIE20_BAR0 0x10
+#define PCIE20_SUBSYSTEM 0x2c
+#define PCIE20_CAP_ID_NXT_PTR 0x40
+#define PCIE20_CON_STATUS 0x44
+#define PCIE20_MSI_CAP_ID_NEXT_CTRL 0x50
+#define PCIE20_MSI_LOWER 0x54
+#define PCIE20_MSI_UPPER 0x58
+#define PCIE20_MSI_DATA 0x5C
+#define PCIE20_MSI_MASK 0x60
+#define PCIE20_DEVICE_CAPABILITIES 0x74
+#define PCIE20_MASK_EP_L1_ACCPT_LATENCY 0xE00
+#define PCIE20_MASK_EP_L0S_ACCPT_LATENCY 0x1C0
+#define PCIE20_LINK_CAPABILITIES 0x7C
+#define PCIE20_MASK_CLOCK_POWER_MAN 0x40000
+#define PCIE20_MASK_L1_EXIT_LATENCY 0x38000
+#define PCIE20_MASK_L0S_EXIT_LATENCY 0x7000
+#define PCIE20_CAP_LINKCTRLSTATUS 0x80
+#define PCIE20_DEVICE_CONTROL2_STATUS2 0x98
+#define PCIE20_LINK_CONTROL2_LINK_STATUS2 0xA0
+#define PCIE20_L1SUB_CAPABILITY 0x154
+#define PCIE20_L1SUB_CONTROL1 0x158
+#define PCIE20_ACK_F_ASPM_CTRL_REG 0x70C
+#define PCIE20_MASK_ACK_N_FTS 0xff00
+#define PCIE20_MISC_CONTROL_1 0x8BC
+
+#define PCIE20_PLR_IATU_VIEWPORT 0x900
+#define PCIE20_PLR_IATU_CTRL1 0x904
+#define PCIE20_PLR_IATU_CTRL2 0x908
+#define PCIE20_PLR_IATU_LBAR 0x90C
+#define PCIE20_PLR_IATU_UBAR 0x910
+#define PCIE20_PLR_IATU_LAR 0x914
+#define PCIE20_PLR_IATU_LTAR 0x918
+#define PCIE20_PLR_IATU_UTAR 0x91c
+
+#define PCIE20_MHICFG 0x110
+#define PCIE20_BHI_EXECENV 0x228
+
+#define PCIE20_AUX_CLK_FREQ_REG 0xB40
+
+#define PERST_TIMEOUT_US_MIN 1000
+#define PERST_TIMEOUT_US_MAX 1000
+#define PERST_CHECK_MAX_COUNT 30000
+#define LINK_UP_TIMEOUT_US_MIN 1000
+#define LINK_UP_TIMEOUT_US_MAX 1000
+#define LINK_UP_CHECK_MAX_COUNT 30000
+#define BME_TIMEOUT_US_MIN 1000
+#define BME_TIMEOUT_US_MAX 1000
+#define BME_CHECK_MAX_COUNT 30000
+#define PHY_STABILIZATION_DELAY_US_MIN 1000
+#define PHY_STABILIZATION_DELAY_US_MAX 1000
+#define REFCLK_STABILIZATION_DELAY_US_MIN 1000
+#define REFCLK_STABILIZATION_DELAY_US_MAX 1000
+#define PHY_READY_TIMEOUT_COUNT 30000
+#define MSI_EXIT_L1SS_WAIT 10
+#define MSI_EXIT_L1SS_WAIT_MAX_COUNT 100
+#define XMLH_LINK_UP 0x400
+#define PARF_XMLH_LINK_UP 0x40000000
+
+#define MAX_PROP_SIZE 32
+#define MAX_MSG_LEN 80
+#define MAX_NAME_LEN 80
+#define MAX_IATU_ENTRY_NUM 2
+
+#define EP_PCIE_LOG_PAGES 50
+#define EP_PCIE_MAX_VREG 2
+#define EP_PCIE_MAX_CLK 5
+#define EP_PCIE_MAX_PIPE_CLK 1
+#define EP_PCIE_MAX_RESET 2
+
+#define EP_PCIE_ERROR -30655
+#define EP_PCIE_LINK_DOWN 0xFFFFFFFF
+
+#define EP_PCIE_OATU_INDEX_MSI 1
+#define EP_PCIE_OATU_INDEX_CTRL 2
+#define EP_PCIE_OATU_INDEX_DATA 3
+
+#define EP_PCIE_OATU_UPPER 0x100
+
+#define EP_PCIE_GEN_DBG(x...) do { \
+ if (ep_pcie_get_debug_mask()) \
+ pr_alert(x); \
+ else \
+ pr_debug(x); \
+ } while (0)
+
+#define EP_PCIE_DBG(dev, fmt, arg...) do { \
+ if ((dev)->ipc_log_ful) \
+ ipc_log_string((dev)->ipc_log_ful, "%s: " fmt, __func__, arg); \
+ if (ep_pcie_get_debug_mask()) \
+ pr_alert("%s: " fmt, __func__, arg); \
+ } while (0)
+
+#define EP_PCIE_DBG2(dev, fmt, arg...) do { \
+ if ((dev)->ipc_log_sel) \
+ ipc_log_string((dev)->ipc_log_sel, \
+ "DBG1:%s: " fmt, __func__, arg); \
+ if ((dev)->ipc_log_ful) \
+ ipc_log_string((dev)->ipc_log_ful, \
+ "DBG2:%s: " fmt, __func__, arg); \
+ if (ep_pcie_get_debug_mask()) \
+ pr_alert("%s: " fmt, __func__, arg); \
+ } while (0)
+
+#define EP_PCIE_DBG_FS(fmt, arg...) pr_alert("%s: " fmt, __func__, arg)
+
+#define EP_PCIE_DUMP(dev, fmt, arg...) do { \
+ if ((dev)->ipc_log_dump) \
+ ipc_log_string((dev)->ipc_log_dump, \
+ "DUMP:%s: " fmt, __func__, arg); \
+ if (ep_pcie_get_debug_mask()) \
+ pr_alert("%s: " fmt, __func__, arg); \
+ } while (0)
+
+#define EP_PCIE_INFO(dev, fmt, arg...) do { \
+ if ((dev)->ipc_log_sel) \
+ ipc_log_string((dev)->ipc_log_sel, \
+ "INFO:%s: " fmt, __func__, arg); \
+ if ((dev)->ipc_log_ful) \
+ ipc_log_string((dev)->ipc_log_ful, "%s: " fmt, __func__, arg); \
+ pr_info("%s: " fmt, __func__, arg); \
+ } while (0)
+
+#define EP_PCIE_ERR(dev, fmt, arg...) do { \
+ if ((dev)->ipc_log_sel) \
+ ipc_log_string((dev)->ipc_log_sel, \
+ "ERR:%s: " fmt, __func__, arg); \
+ if ((dev)->ipc_log_ful) \
+ ipc_log_string((dev)->ipc_log_ful, "%s: " fmt, __func__, arg); \
+ pr_err("%s: " fmt, __func__, arg); \
+ } while (0)
+
+enum ep_pcie_res {
+ EP_PCIE_RES_PARF,
+ EP_PCIE_RES_PHY,
+ EP_PCIE_RES_MMIO,
+ EP_PCIE_RES_MSI,
+ EP_PCIE_RES_DM_CORE,
+ EP_PCIE_RES_ELBI,
+ EP_PCIE_MAX_RES,
+};
+
+enum ep_pcie_irq {
+ EP_PCIE_INT_PM_TURNOFF,
+ EP_PCIE_INT_DSTATE_CHANGE,
+ EP_PCIE_INT_L1SUB_TIMEOUT,
+ EP_PCIE_INT_LINK_UP,
+ EP_PCIE_INT_LINK_DOWN,
+ EP_PCIE_INT_BRIDGE_FLUSH_N,
+ EP_PCIE_INT_BME,
+ EP_PCIE_INT_GLOBAL,
+ EP_PCIE_MAX_IRQ,
+};
+
+enum ep_pcie_gpio {
+ EP_PCIE_GPIO_PERST,
+ EP_PCIE_GPIO_WAKE,
+ EP_PCIE_GPIO_CLKREQ,
+ EP_PCIE_GPIO_MDM2AP,
+ EP_PCIE_MAX_GPIO,
+};
+
+struct ep_pcie_gpio_info_t {
+ char *name;
+ u32 num;
+ bool out;
+ u32 on;
+ u32 init;
+};
+
+struct ep_pcie_vreg_info_t {
+ struct regulator *hdl;
+ char *name;
+ u32 max_v;
+ u32 min_v;
+ u32 opt_mode;
+ bool required;
+};
+
+struct ep_pcie_clk_info_t {
+ struct clk *hdl;
+ char *name;
+ u32 freq;
+ bool required;
+};
+
+struct ep_pcie_reset_info_t {
+ struct reset_control *hdl;
+ char *name;
+ bool required;
+};
+
+struct ep_pcie_res_info_t {
+ char *name;
+ struct resource *resource;
+ void __iomem *base;
+};
+
+struct ep_pcie_irq_info_t {
+ char *name;
+ u32 num;
+};
+
+/* phy info structure */
+struct ep_pcie_phy_info_t {
+ u32 offset;
+ u32 val;
+ u32 delay;
+ u32 direction;
+};
+
+/* pcie endpoint device structure */
+struct ep_pcie_dev_t {
+ struct platform_device *pdev;
+ struct regulator *gdsc;
+ struct ep_pcie_vreg_info_t vreg[EP_PCIE_MAX_VREG];
+ struct ep_pcie_gpio_info_t gpio[EP_PCIE_MAX_GPIO];
+ struct ep_pcie_clk_info_t clk[EP_PCIE_MAX_CLK];
+ struct ep_pcie_clk_info_t pipeclk[EP_PCIE_MAX_PIPE_CLK];
+ struct ep_pcie_reset_info_t reset[EP_PCIE_MAX_RESET];
+ struct ep_pcie_irq_info_t irq[EP_PCIE_MAX_IRQ];
+ struct ep_pcie_res_info_t res[EP_PCIE_MAX_RES];
+
+ void __iomem *parf;
+ void __iomem *phy;
+ void __iomem *mmio;
+ void __iomem *msi;
+ void __iomem *dm_core;
+ void __iomem *elbi;
+
+ struct msm_bus_scale_pdata *bus_scale_table;
+ u32 bus_client;
+ u32 link_speed;
+ bool active_config;
+ bool aggregated_irq;
+ bool mhi_a7_irq;
+ u32 dbi_base_reg;
+ u32 slv_space_reg;
+ u32 phy_status_reg;
+ u32 phy_init_len;
+ struct ep_pcie_phy_info_t *phy_init;
+ bool perst_enum;
+
+ u32 rev;
+ u32 phy_rev;
+ void *ipc_log_sel;
+ void *ipc_log_ful;
+ void *ipc_log_dump;
+ struct mutex setup_mtx;
+ struct mutex ext_mtx;
+ spinlock_t ext_lock;
+ unsigned long ext_save_flags;
+
+ spinlock_t isr_lock;
+ unsigned long isr_save_flags;
+ ulong linkdown_counter;
+ ulong linkup_counter;
+ ulong bme_counter;
+ ulong pm_to_counter;
+ ulong d0_counter;
+ ulong d3_counter;
+ ulong perst_ast_counter;
+ ulong perst_deast_counter;
+ ulong wake_counter;
+ ulong msi_counter;
+ ulong global_irq_counter;
+
+ bool dump_conf;
+
+ bool enumerated;
+ enum ep_pcie_link_status link_status;
+ bool perst_deast;
+ bool power_on;
+ bool suspending;
+ bool l23_ready;
+ bool l1ss_enabled;
+ struct ep_pcie_msi_config msi_cfg;
+ bool no_notify;
+ bool client_ready;
+
+ struct ep_pcie_register_event *event_reg;
+ struct work_struct handle_perst_work;
+ struct work_struct handle_bme_work;
+ struct work_struct handle_d3cold_work;
+};
+
+extern struct ep_pcie_dev_t ep_pcie_dev;
+extern struct ep_pcie_hw hw_drv;
+
+static inline void ep_pcie_write_mask(void __iomem *addr,
+ u32 clear_mask, u32 set_mask)
+{
+ u32 val;
+
+ val = (readl_relaxed(addr) & ~clear_mask) | set_mask;
+ writel_relaxed(val, addr);
+ /* ensure register write goes through before next register operation */
+ wmb();
+}
+
+static inline void ep_pcie_write_reg(void __iomem *base, u32 offset, u32 value)
+{
+ writel_relaxed(value, base + offset);
+ /* ensure register write goes through before next register operation */
+ wmb();
+}
+
+static inline void ep_pcie_write_reg_field(void __iomem *base, u32 offset,
+ const u32 mask, u32 val)
+{
+ u32 shift = find_first_bit((void *)&mask, 32);
+ u32 tmp = readl_relaxed(base + offset);
+
+ tmp &= ~mask; /* clear written bits */
+ val = tmp | (val << shift);
+ writel_relaxed(val, base + offset);
+ /* ensure register write goes through before next register operation */
+ wmb();
+}
+
+extern int ep_pcie_core_register_event(struct ep_pcie_register_event *reg);
+extern int ep_pcie_get_debug_mask(void);
+extern void ep_pcie_phy_init(struct ep_pcie_dev_t *dev);
+extern bool ep_pcie_phy_is_ready(struct ep_pcie_dev_t *dev);
+extern void ep_pcie_reg_dump(struct ep_pcie_dev_t *dev, u32 sel, bool linkdown);
+extern void ep_pcie_debugfs_init(struct ep_pcie_dev_t *ep_dev);
+extern void ep_pcie_debugfs_exit(void);
+
+#endif
diff --git a/drivers/platform/msm/ep_pcie/ep_pcie_core.c b/drivers/platform/msm/ep_pcie/ep_pcie_core.c
new file mode 100644
index 0000000..e48409b
--- /dev/null
+++ b/drivers/platform/msm/ep_pcie/ep_pcie_core.c
@@ -0,0 +1,2609 @@
+/* Copyright (c) 2015-2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+/*
+ * MSM PCIe endpoint core driver.
+ */
+
+#include <linux/module.h>
+#include <linux/bitops.h>
+#include <linux/clk.h>
+#include <linux/debugfs.h>
+#include <linux/delay.h>
+#include <linux/gpio.h>
+#include <linux/iopoll.h>
+#include <linux/kernel.h>
+#include <linux/platform_device.h>
+#include <linux/regulator/consumer.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+#include <linux/of_gpio.h>
+#include <linux/clk/qcom.h>
+#include <linux/reset.h>
+#include <linux/msm-bus.h>
+#include <linux/msm-bus-board.h>
+#include <linux/interrupt.h>
+#include <linux/irq.h>
+
+#include "ep_pcie_com.h"
+
+/* debug mask sys interface */
+static int ep_pcie_debug_mask;
+static int ep_pcie_debug_keep_resource;
+static u32 ep_pcie_bar0_address;
+module_param_named(debug_mask, ep_pcie_debug_mask,
+ int, 0664);
+module_param_named(debug_keep_resource, ep_pcie_debug_keep_resource,
+ int, 0664);
+module_param_named(bar0_address, ep_pcie_bar0_address,
+ int, 0664);
+
+struct ep_pcie_dev_t ep_pcie_dev = {0};
+
+static struct ep_pcie_vreg_info_t ep_pcie_vreg_info[EP_PCIE_MAX_VREG] = {
+ {NULL, "vreg-1.8", 1800000, 1800000, 14000, true},
+ {NULL, "vreg-0.9", 1000000, 1000000, 40000, true}
+};
+
+static struct ep_pcie_gpio_info_t ep_pcie_gpio_info[EP_PCIE_MAX_GPIO] = {
+ {"perst-gpio", 0, 0, 0, 1},
+ {"wake-gpio", 0, 1, 0, 1},
+ {"clkreq-gpio", 0, 1, 0, 0},
+ {"mdm2apstatus-gpio", 0, 1, 1, 0}
+};
+
+static struct ep_pcie_clk_info_t
+ ep_pcie_clk_info[EP_PCIE_MAX_CLK] = {
+ {NULL, "pcie_0_cfg_ahb_clk", 0, true},
+ {NULL, "pcie_0_mstr_axi_clk", 0, true},
+ {NULL, "pcie_0_slv_axi_clk", 0, true},
+ {NULL, "pcie_0_aux_clk", 1000000, true},
+ {NULL, "pcie_0_ldo", 0, true},
+};
+
+static struct ep_pcie_clk_info_t
+ ep_pcie_pipe_clk_info[EP_PCIE_MAX_PIPE_CLK] = {
+ {NULL, "pcie_0_pipe_clk", 62500000, true}
+};
+
+static struct ep_pcie_reset_info_t
+ ep_pcie_reset_info[EP_PCIE_MAX_RESET] = {
+ {NULL, "pcie_0_core_reset", false},
+ {NULL, "pcie_0_phy_reset", false},
+};
+
+static const struct ep_pcie_res_info_t ep_pcie_res_info[EP_PCIE_MAX_RES] = {
+ {"parf", 0, 0},
+ {"phy", 0, 0},
+ {"mmio", 0, 0},
+ {"msi", 0, 0},
+ {"dm_core", 0, 0},
+ {"elbi", 0, 0}
+};
+
+static const struct ep_pcie_irq_info_t ep_pcie_irq_info[EP_PCIE_MAX_IRQ] = {
+ {"int_pm_turnoff", 0},
+ {"int_dstate_change", 0},
+ {"int_l1sub_timeout", 0},
+ {"int_link_up", 0},
+ {"int_link_down", 0},
+ {"int_bridge_flush_n", 0},
+ {"int_bme", 0},
+ {"int_global", 0}
+};
+
+int ep_pcie_get_debug_mask(void)
+{
+ return ep_pcie_debug_mask;
+}
+
+static bool ep_pcie_confirm_linkup(struct ep_pcie_dev_t *dev,
+ bool check_sw_stts)
+{
+ u32 val;
+
+ if (check_sw_stts && (dev->link_status != EP_PCIE_LINK_ENABLED)) {
+ EP_PCIE_DBG(dev, "PCIe V%d: The link is not enabled.\n",
+ dev->rev);
+ return false;
+ }
+
+ val = readl_relaxed(dev->dm_core);
+ EP_PCIE_DBG(dev, "PCIe V%d: device ID and vendor ID are 0x%x.\n",
+ dev->rev, val);
+ if (val == EP_PCIE_LINK_DOWN) {
+ EP_PCIE_ERR(dev,
+ "PCIe V%d: The link is not really up; device ID and vendor ID are 0x%x.\n",
+ dev->rev, val);
+ return false;
+ }
+
+ return true;
+}
+
+static int ep_pcie_gpio_init(struct ep_pcie_dev_t *dev)
+{
+ int i, rc = 0;
+ struct ep_pcie_gpio_info_t *info;
+
+ EP_PCIE_DBG(dev, "PCIe V%d\n", dev->rev);
+
+ for (i = 0; i < EP_PCIE_MAX_GPIO; i++) {
+ info = &dev->gpio[i];
+
+ if (!info->num) {
+ if (i == EP_PCIE_GPIO_MDM2AP) {
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: gpio %s does not exist.\n",
+ dev->rev, info->name);
+ continue;
+ } else {
+ EP_PCIE_ERR(dev,
+ "PCIe V%d: the number of gpio %s is invalid\n",
+ dev->rev, info->name);
+ rc = -EINVAL;
+ break;
+ }
+ }
+
+ rc = gpio_request(info->num, info->name);
+ if (rc) {
+ EP_PCIE_ERR(dev, "PCIe V%d: can't get gpio %s; %d\n",
+ dev->rev, info->name, rc);
+ break;
+ }
+
+ if (info->out)
+ rc = gpio_direction_output(info->num, info->init);
+ else
+ rc = gpio_direction_input(info->num);
+ if (rc) {
+ EP_PCIE_ERR(dev,
+ "PCIe V%d: can't set direction for GPIO %s:%d\n",
+ dev->rev, info->name, rc);
+ gpio_free(info->num);
+ break;
+ }
+ }
+
+ if (rc)
+ while (i--)
+ gpio_free(dev->gpio[i].num);
+
+ return rc;
+}
+
+static void ep_pcie_gpio_deinit(struct ep_pcie_dev_t *dev)
+{
+ int i;
+
+ EP_PCIE_DBG(dev, "PCIe V%d\n", dev->rev);
+
+ for (i = 0; i < EP_PCIE_MAX_GPIO; i++)
+ gpio_free(dev->gpio[i].num);
+}
+
+static int ep_pcie_vreg_init(struct ep_pcie_dev_t *dev)
+{
+ int i, rc = 0;
+ struct regulator *vreg;
+ struct ep_pcie_vreg_info_t *info;
+
+ EP_PCIE_DBG(dev, "PCIe V%d\n", dev->rev);
+
+ for (i = 0; i < EP_PCIE_MAX_VREG; i++) {
+ info = &dev->vreg[i];
+ vreg = info->hdl;
+
+ if (!vreg) {
+ EP_PCIE_ERR(dev,
+ "PCIe V%d: handle of Vreg %s is NULL\n",
+ dev->rev, info->name);
+ rc = -EINVAL;
+ break;
+ }
+
+ EP_PCIE_DBG(dev, "PCIe V%d: Vreg %s is being enabled\n",
+ dev->rev, info->name);
+ if (info->max_v) {
+ rc = regulator_set_voltage(vreg,
+ info->min_v, info->max_v);
+ if (rc) {
+ EP_PCIE_ERR(dev,
+ "PCIe V%d: can't set voltage for %s: %d\n",
+ dev->rev, info->name, rc);
+ break;
+ }
+ }
+
+ if (info->opt_mode) {
+ rc = regulator_set_load(vreg, info->opt_mode);
+ if (rc < 0) {
+ EP_PCIE_ERR(dev,
+ "PCIe V%d: can't set mode for %s: %d\n",
+ dev->rev, info->name, rc);
+ break;
+ }
+ }
+
+ rc = regulator_enable(vreg);
+ if (rc) {
+ EP_PCIE_ERR(dev,
+ "PCIe V%d: can't enable regulator %s: %d\n",
+ dev->rev, info->name, rc);
+ break;
+ }
+ }
+
+ if (rc)
+ while (i--) {
+ struct regulator *hdl = dev->vreg[i].hdl;
+
+ if (hdl)
+ regulator_disable(hdl);
+ }
+
+ return rc;
+}
+
+static void ep_pcie_vreg_deinit(struct ep_pcie_dev_t *dev)
+{
+ int i;
+
+ EP_PCIE_DBG(dev, "PCIe V%d\n", dev->rev);
+
+ for (i = EP_PCIE_MAX_VREG - 1; i >= 0; i--) {
+ if (dev->vreg[i].hdl) {
+ EP_PCIE_DBG(dev, "Vreg %s is being disabled\n",
+ dev->vreg[i].name);
+ regulator_disable(dev->vreg[i].hdl);
+ }
+ }
+}
+
+static int ep_pcie_clk_init(struct ep_pcie_dev_t *dev)
+{
+ int i, rc = 0;
+ struct ep_pcie_clk_info_t *info;
+ struct ep_pcie_reset_info_t *reset_info;
+
+ EP_PCIE_DBG(dev, "PCIe V%d\n", dev->rev);
+
+ rc = regulator_enable(dev->gdsc);
+
+ if (rc) {
+ EP_PCIE_ERR(dev, "PCIe V%d: failed to enable GDSC for %s\n",
+ dev->rev, dev->pdev->name);
+ return rc;
+ }
+
+ if (dev->bus_client) {
+ rc = msm_bus_scale_client_update_request(dev->bus_client, 1);
+ if (rc) {
+ EP_PCIE_ERR(dev,
+ "PCIe V%d: fail to set bus bandwidth:%d.\n",
+ dev->rev, rc);
+ return rc;
+ }
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: set bus bandwidth.\n",
+ dev->rev);
+ }
+
+ for (i = 0; i < EP_PCIE_MAX_CLK; i++) {
+ info = &dev->clk[i];
+
+ if (!info->hdl) {
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: handle of Clock %s is NULL\n",
+ dev->rev, info->name);
+ continue;
+ }
+
+ if (info->freq) {
+ rc = clk_set_rate(info->hdl, info->freq);
+ if (rc) {
+ EP_PCIE_ERR(dev,
+ "PCIe V%d: can't set rate for clk %s: %d.\n",
+ dev->rev, info->name, rc);
+ break;
+ }
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: set rate for clk %s.\n",
+ dev->rev, info->name);
+ }
+
+ rc = clk_prepare_enable(info->hdl);
+
+ if (rc)
+ EP_PCIE_ERR(dev, "PCIe V%d: failed to enable clk %s\n",
+ dev->rev, info->name);
+ else
+ EP_PCIE_DBG(dev, "PCIe V%d: enable clk %s.\n",
+ dev->rev, info->name);
+ }
+
+ if (rc) {
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: disable clocks for error handling.\n",
+ dev->rev);
+ while (i--) {
+ struct clk *hdl = dev->clk[i].hdl;
+
+ if (hdl)
+ clk_disable_unprepare(hdl);
+ }
+
+ regulator_disable(dev->gdsc);
+ }
+
+ for (i = 0; i < EP_PCIE_MAX_RESET; i++) {
+ reset_info = &dev->reset[i];
+ if (reset_info->hdl) {
+ rc = reset_control_assert(reset_info->hdl);
+ if (rc)
+ EP_PCIE_ERR(dev,
+ "PCIe V%d: failed to assert reset for %s.\n",
+ dev->rev, reset_info->name);
+ else
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: successfully asserted reset for %s.\n",
+ dev->rev, reset_info->name);
+
+ /* add a 1ms delay to ensure the reset is asserted */
+ usleep_range(1000, 1005);
+
+ rc = reset_control_deassert(reset_info->hdl);
+ if (rc)
+ EP_PCIE_ERR(dev,
+ "PCIe V%d: failed to deassert reset for %s.\n",
+ dev->rev, reset_info->name);
+ else
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: successfully deasserted reset for %s.\n",
+ dev->rev, reset_info->name);
+ }
+ }
+
+ return rc;
+}
+
+static void ep_pcie_clk_deinit(struct ep_pcie_dev_t *dev)
+{
+ int i;
+ int rc;
+
+ EP_PCIE_DBG(dev, "PCIe V%d\n", dev->rev);
+
+ for (i = EP_PCIE_MAX_CLK - 1; i >= 0; i--)
+ if (dev->clk[i].hdl)
+ clk_disable_unprepare(dev->clk[i].hdl);
+
+ if (dev->bus_client) {
+ rc = msm_bus_scale_client_update_request(dev->bus_client, 0);
+ if (rc)
+ EP_PCIE_ERR(dev,
+ "PCIe V%d: fail to relinquish bus bandwidth:%d.\n",
+ dev->rev, rc);
+ else
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: relinquish bus bandwidth.\n",
+ dev->rev);
+ }
+
+ regulator_disable(dev->gdsc);
+}
+
+static int ep_pcie_pipe_clk_init(struct ep_pcie_dev_t *dev)
+{
+ int i, rc = 0;
+ struct ep_pcie_clk_info_t *info;
+
+ EP_PCIE_DBG(dev, "PCIe V%d\n", dev->rev);
+
+ for (i = 0; i < EP_PCIE_MAX_PIPE_CLK; i++) {
+ info = &dev->pipeclk[i];
+
+ if (!info->hdl) {
+ EP_PCIE_ERR(dev,
+ "PCIe V%d: handle of Pipe Clock %s is NULL\n",
+ dev->rev, info->name);
+ rc = -EINVAL;
+ break;
+ }
+
+ if (info->freq) {
+ rc = clk_set_rate(info->hdl, info->freq);
+ if (rc) {
+ EP_PCIE_ERR(dev,
+ "PCIe V%d: can't set rate for clk %s: %d.\n",
+ dev->rev, info->name, rc);
+ break;
+ }
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: set rate for clk %s\n",
+ dev->rev, info->name);
+ }
+
+ rc = clk_prepare_enable(info->hdl);
+
+ if (rc)
+ EP_PCIE_ERR(dev, "PCIe V%d: failed to enable clk %s.\n",
+ dev->rev, info->name);
+ else
+ EP_PCIE_DBG(dev, "PCIe V%d: enabled pipe clk %s.\n",
+ dev->rev, info->name);
+ }
+
+ if (rc) {
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: disable pipe clocks for error handling.\n",
+ dev->rev);
+ while (i--)
+ if (dev->pipeclk[i].hdl)
+ clk_disable_unprepare(dev->pipeclk[i].hdl);
+ }
+
+ return rc;
+}
+
+static void ep_pcie_pipe_clk_deinit(struct ep_pcie_dev_t *dev)
+{
+ int i;
+
+ EP_PCIE_DBG(dev, "PCIe V%d\n", dev->rev);
+
+ for (i = 0; i < EP_PCIE_MAX_PIPE_CLK; i++)
+ if (dev->pipeclk[i].hdl)
+ clk_disable_unprepare(dev->pipeclk[i].hdl);
+}
+
+static void ep_pcie_bar_init(struct ep_pcie_dev_t *dev)
+{
+ struct resource *res = dev->res[EP_PCIE_RES_MMIO].resource;
+ u32 mask = res->end - res->start;
+ u32 properties = 0x4;
+
+ EP_PCIE_DBG(dev, "PCIe V%d: BAR mask to program is 0x%x\n",
+ dev->rev, mask);
+
+ /* Configure BAR mask via CS2 */
+ ep_pcie_write_mask(dev->elbi + PCIE20_ELBI_CS2_ENABLE, 0, BIT(0));
+ ep_pcie_write_reg(dev->dm_core, PCIE20_BAR0, mask);
+ ep_pcie_write_reg(dev->dm_core, PCIE20_BAR0 + 0x4, 0);
+ ep_pcie_write_reg(dev->dm_core, PCIE20_BAR0 + 0x8, mask);
+ ep_pcie_write_reg(dev->dm_core, PCIE20_BAR0 + 0xc, 0);
+ ep_pcie_write_reg(dev->dm_core, PCIE20_BAR0 + 0x10, 0);
+ ep_pcie_write_reg(dev->dm_core, PCIE20_BAR0 + 0x14, 0);
+ ep_pcie_write_mask(dev->elbi + PCIE20_ELBI_CS2_ENABLE, BIT(0), 0);
+
+ /* Configure BAR properties via CS */
+ ep_pcie_write_mask(dev->dm_core + PCIE20_MISC_CONTROL_1, 0, BIT(0));
+ ep_pcie_write_reg(dev->dm_core, PCIE20_BAR0, properties);
+ ep_pcie_write_reg(dev->dm_core, PCIE20_BAR0 + 0x8, properties);
+ ep_pcie_write_mask(dev->dm_core + PCIE20_MISC_CONTROL_1, BIT(0), 0);
+}
+
+static void ep_pcie_core_init(struct ep_pcie_dev_t *dev, bool configured)
+{
+ EP_PCIE_DBG(dev, "PCIe V%d\n", dev->rev);
+
+ /* enable debug IRQ */
+ ep_pcie_write_mask(dev->parf + PCIE20_PARF_DEBUG_INT_EN,
+ 0, BIT(3) | BIT(2) | BIT(1));
+
+ if (!configured) {
+ /* Configure PCIe to endpoint mode */
+ ep_pcie_write_reg(dev->parf, PCIE20_PARF_DEVICE_TYPE, 0x0);
+
+ /* adjust DBI base address */
+ if (dev->dbi_base_reg)
+ writel_relaxed(0x3FFFE000,
+ dev->parf + dev->dbi_base_reg);
+ else
+ writel_relaxed(0x3FFFE000,
+ dev->parf + PCIE20_PARF_DBI_BASE_ADDR);
+
+ /* Configure PCIe core to support 1GB aperture */
+ if (dev->slv_space_reg)
+ ep_pcie_write_reg(dev->parf, dev->slv_space_reg,
+ 0x40000000);
+ else
+ ep_pcie_write_reg(dev->parf,
+ PCIE20_PARF_SLV_ADDR_SPACE_SIZE, 0x40000000);
+
+ /* Configure link speed */
+ ep_pcie_write_mask(dev->dm_core +
+ PCIE20_LINK_CONTROL2_LINK_STATUS2,
+ 0xf, dev->link_speed);
+ }
+
+ /* Read halts write */
+ ep_pcie_write_mask(dev->parf + PCIE20_PARF_AXI_MSTR_RD_HALT_NO_WRITES,
+ 0, BIT(0));
+
+ /* Write after write halt */
+ ep_pcie_write_mask(dev->parf + PCIE20_PARF_AXI_MSTR_WR_ADDR_HALT,
+ 0, BIT(31));
+
+ /* Q2A flush disable */
+ writel_relaxed(0, dev->parf + PCIE20_PARF_Q2A_FLUSH);
+
+ /* Disable the DBI Wakeup */
+ ep_pcie_write_mask(dev->parf + PCIE20_PARF_SYS_CTRL, BIT(11), 0);
+
+ /* Disable the debouncers */
+ ep_pcie_write_reg(dev->parf, PCIE20_PARF_DB_CTRL, 0x73);
+
+ /* Disable core clock CGC */
+ ep_pcie_write_mask(dev->parf + PCIE20_PARF_SYS_CTRL, 0, BIT(6));
+
+ /* Set AUX power to be on */
+ ep_pcie_write_mask(dev->parf + PCIE20_PARF_SYS_CTRL, 0, BIT(4));
+
+ /* Request to exit from L1SS for MSI and LTR MSG */
+ ep_pcie_write_mask(dev->parf + PCIE20_PARF_CFG_BITS, 0, BIT(1));
+
+ EP_PCIE_DBG(dev,
+ "Initial: CLASS_CODE_REVISION_ID:0x%x; HDR_TYPE:0x%x\n",
+ readl_relaxed(dev->dm_core + PCIE20_CLASS_CODE_REVISION_ID),
+ readl_relaxed(dev->dm_core + PCIE20_BIST_HDR_TYPE));
+
+ if (!configured) {
+ /* Enable CS for RO(CS) register writes */
+ ep_pcie_write_mask(dev->dm_core + PCIE20_MISC_CONTROL_1, 0,
+ BIT(0));
+
+ /* Set class code and revision ID */
+ ep_pcie_write_reg(dev->dm_core, PCIE20_CLASS_CODE_REVISION_ID,
+ 0xff000000);
+
+ /* Set header type */
+ ep_pcie_write_reg(dev->dm_core, PCIE20_BIST_HDR_TYPE, 0x10);
+
+ /* Set Subsystem ID and Subsystem Vendor ID */
+ ep_pcie_write_reg(dev->dm_core, PCIE20_SUBSYSTEM, 0xa01f17cb);
+
+ /* Set the PMC Register - to support PME in D0/D3hot/D3cold */
+ ep_pcie_write_mask(dev->dm_core + PCIE20_CAP_ID_NXT_PTR, 0,
+ BIT(31)|BIT(30)|BIT(27));
+
+ /* Set the Endpoint L0s Acceptable Latency to 1us (max) */
+ ep_pcie_write_reg_field(dev->dm_core,
+ PCIE20_DEVICE_CAPABILITIES,
+ PCIE20_MASK_EP_L0S_ACCPT_LATENCY, 0x7);
+
+ /* Set the Endpoint L1 Acceptable Latency to 2 us (max) */
+ ep_pcie_write_reg_field(dev->dm_core,
+ PCIE20_DEVICE_CAPABILITIES,
+ PCIE20_MASK_EP_L1_ACCPT_LATENCY, 0x7);
+
+ /* Set the L1 Exit Latency to 32us-64us = 0x6 */
+ ep_pcie_write_reg_field(dev->dm_core, PCIE20_LINK_CAPABILITIES,
+ PCIE20_MASK_L1_EXIT_LATENCY, 0x6);
+
+ /* Set the L0s Exit Latency to 2us-4us = 0x6 */
+ ep_pcie_write_reg_field(dev->dm_core, PCIE20_LINK_CAPABILITIES,
+ PCIE20_MASK_L0S_EXIT_LATENCY, 0x6);
+
+ /* L1ss is supported */
+ ep_pcie_write_mask(dev->dm_core + PCIE20_L1SUB_CAPABILITY, 0,
+ 0x1f);
+
+ /* Enable Clock Power Management */
+ ep_pcie_write_reg_field(dev->dm_core, PCIE20_LINK_CAPABILITIES,
+ PCIE20_MASK_CLOCK_POWER_MAN, 0x1);
+
+ /* Disable CS for RO(CS) register writes */
+ ep_pcie_write_mask(dev->dm_core + PCIE20_MISC_CONTROL_1, BIT(0),
+ 0);
+
+ /* Set FTS value to match the PHY setting */
+ ep_pcie_write_reg_field(dev->dm_core,
+ PCIE20_ACK_F_ASPM_CTRL_REG,
+ PCIE20_MASK_ACK_N_FTS, 0x80);
+
+ EP_PCIE_DBG(dev,
+ "After program: CLASS_CODE_REVISION_ID:0x%x; HDR_TYPE:0x%x; L1SUB_CAPABILITY:0x%x; PARF_SYS_CTRL:0x%x\n",
+ readl_relaxed(dev->dm_core +
+ PCIE20_CLASS_CODE_REVISION_ID),
+ readl_relaxed(dev->dm_core + PCIE20_BIST_HDR_TYPE),
+ readl_relaxed(dev->dm_core + PCIE20_L1SUB_CAPABILITY),
+ readl_relaxed(dev->parf + PCIE20_PARF_SYS_CTRL));
+
+ /* Configure BARs */
+ ep_pcie_bar_init(dev);
+
+ ep_pcie_write_reg(dev->mmio, PCIE20_MHICFG, 0x02800880);
+ ep_pcie_write_reg(dev->mmio, PCIE20_BHI_EXECENV, 0x2);
+ }
+
+ /* Configure IRQ events */
+ if (dev->aggregated_irq) {
+ ep_pcie_write_reg(dev->parf, PCIE20_PARF_INT_ALL_MASK, 0);
+ ep_pcie_write_mask(dev->parf + PCIE20_PARF_INT_ALL_MASK, 0,
+ BIT(EP_PCIE_INT_EVT_LINK_DOWN) |
+ BIT(EP_PCIE_INT_EVT_BME) |
+ BIT(EP_PCIE_INT_EVT_PM_TURNOFF) |
+ BIT(EP_PCIE_INT_EVT_DSTATE_CHANGE) |
+ BIT(EP_PCIE_INT_EVT_LINK_UP));
+ if (!dev->mhi_a7_irq)
+ ep_pcie_write_mask(dev->parf +
+ PCIE20_PARF_INT_ALL_MASK, 0,
+ BIT(EP_PCIE_INT_EVT_MHI_A7));
+
+ EP_PCIE_DBG(dev, "PCIe V%d: PCIE20_PARF_INT_ALL_MASK:0x%x\n",
+ dev->rev,
+ readl_relaxed(dev->parf + PCIE20_PARF_INT_ALL_MASK));
+ }
+
+ if (dev->active_config) {
+ ep_pcie_write_reg(dev->dm_core, PCIE20_AUX_CLK_FREQ_REG, 0x14);
+
+ EP_PCIE_DBG2(dev, "PCIe V%d: Enable L1.\n", dev->rev);
+ ep_pcie_write_mask(dev->parf + PCIE20_PARF_PM_CTRL, BIT(5), 0);
+ }
+}
+
+static void ep_pcie_config_inbound_iatu(struct ep_pcie_dev_t *dev)
+{
+ struct resource *mmio = dev->res[EP_PCIE_RES_MMIO].resource;
+ u32 lower, limit, bar;
+
+ lower = mmio->start;
+ limit = mmio->end;
+ bar = readl_relaxed(dev->dm_core + PCIE20_BAR0);
+
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: BAR0 is 0x%x; MMIO[0x%x-0x%x]\n",
+ dev->rev, bar, lower, limit);
+
+ ep_pcie_write_reg(dev->parf, PCIE20_PARF_MHI_BASE_ADDR_LOWER, lower);
+ ep_pcie_write_reg(dev->parf, PCIE20_PARF_MHI_BASE_ADDR_UPPER, 0x0);
+
+ /* program inbound address translation using region 0 */
+ ep_pcie_write_reg(dev->dm_core, PCIE20_PLR_IATU_VIEWPORT, 0x80000000);
+ /* set region to mem type */
+ ep_pcie_write_reg(dev->dm_core, PCIE20_PLR_IATU_CTRL1, 0x0);
+ /* setup target address registers */
+ ep_pcie_write_reg(dev->dm_core, PCIE20_PLR_IATU_LTAR, lower);
+ ep_pcie_write_reg(dev->dm_core, PCIE20_PLR_IATU_UTAR, 0x0);
+ /* use BAR match mode for BAR0 and enable region 0 */
+ ep_pcie_write_reg(dev->dm_core, PCIE20_PLR_IATU_CTRL2, 0xc0000000);
+
+ EP_PCIE_DBG(dev, "PCIE20_PLR_IATU_VIEWPORT:0x%x\n",
+ readl_relaxed(dev->dm_core + PCIE20_PLR_IATU_VIEWPORT));
+ EP_PCIE_DBG(dev, "PCIE20_PLR_IATU_CTRL1:0x%x\n",
+ readl_relaxed(dev->dm_core + PCIE20_PLR_IATU_CTRL1));
+ EP_PCIE_DBG(dev, "PCIE20_PLR_IATU_LTAR:0x%x\n",
+ readl_relaxed(dev->dm_core + PCIE20_PLR_IATU_LTAR));
+ EP_PCIE_DBG(dev, "PCIE20_PLR_IATU_UTAR:0x%x\n",
+ readl_relaxed(dev->dm_core + PCIE20_PLR_IATU_UTAR));
+ EP_PCIE_DBG(dev, "PCIE20_PLR_IATU_CTRL2:0x%x\n",
+ readl_relaxed(dev->dm_core + PCIE20_PLR_IATU_CTRL2));
+}
+
+static void ep_pcie_config_outbound_iatu_entry(struct ep_pcie_dev_t *dev,
+ u32 region, u32 lower, u32 upper,
+ u32 limit, u32 tgt_lower, u32 tgt_upper)
+{
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: region:%d; lower:0x%x; limit:0x%x; target_lower:0x%x; target_upper:0x%x\n",
+ dev->rev, region, lower, limit, tgt_lower, tgt_upper);
+
+ /* program outbound address translation using an input region */
+ ep_pcie_write_reg(dev->dm_core, PCIE20_PLR_IATU_VIEWPORT, region);
+ /* set region to mem type */
+ ep_pcie_write_reg(dev->dm_core, PCIE20_PLR_IATU_CTRL1, 0x0);
+ /* setup source address registers */
+ ep_pcie_write_reg(dev->dm_core, PCIE20_PLR_IATU_LBAR, lower);
+ ep_pcie_write_reg(dev->dm_core, PCIE20_PLR_IATU_UBAR, upper);
+ ep_pcie_write_reg(dev->dm_core, PCIE20_PLR_IATU_LAR, limit);
+ /* setup target address registers */
+ ep_pcie_write_reg(dev->dm_core, PCIE20_PLR_IATU_LTAR, tgt_lower);
+ ep_pcie_write_reg(dev->dm_core, PCIE20_PLR_IATU_UTAR, tgt_upper);
+ /* use DMA bypass mode and enable the region */
+ ep_pcie_write_mask(dev->dm_core + PCIE20_PLR_IATU_CTRL2, 0,
+ BIT(31) | BIT(27));
+
+ EP_PCIE_DBG(dev, "PCIE20_PLR_IATU_VIEWPORT:0x%x\n",
+ readl_relaxed(dev->dm_core + PCIE20_PLR_IATU_VIEWPORT));
+ EP_PCIE_DBG(dev, "PCIE20_PLR_IATU_CTRL1:0x%x\n",
+ readl_relaxed(dev->dm_core + PCIE20_PLR_IATU_CTRL1));
+ EP_PCIE_DBG(dev, "PCIE20_PLR_IATU_LBAR:0x%x\n",
+ readl_relaxed(dev->dm_core + PCIE20_PLR_IATU_LBAR));
+ EP_PCIE_DBG(dev, "PCIE20_PLR_IATU_UBAR:0x%x\n",
+ readl_relaxed(dev->dm_core + PCIE20_PLR_IATU_UBAR));
+ EP_PCIE_DBG(dev, "PCIE20_PLR_IATU_LAR:0x%x\n",
+ readl_relaxed(dev->dm_core + PCIE20_PLR_IATU_LAR));
+ EP_PCIE_DBG(dev, "PCIE20_PLR_IATU_LTAR:0x%x\n",
+ readl_relaxed(dev->dm_core + PCIE20_PLR_IATU_LTAR));
+ EP_PCIE_DBG(dev, "PCIE20_PLR_IATU_UTAR:0x%x\n",
+ readl_relaxed(dev->dm_core + PCIE20_PLR_IATU_UTAR));
+ EP_PCIE_DBG(dev, "PCIE20_PLR_IATU_CTRL2:0x%x\n",
+ readl_relaxed(dev->dm_core + PCIE20_PLR_IATU_CTRL2));
+}
+
+static void ep_pcie_notify_event(struct ep_pcie_dev_t *dev,
+ enum ep_pcie_event event)
+{
+ if (dev->event_reg && dev->event_reg->callback &&
+ (dev->event_reg->events & event)) {
+ struct ep_pcie_notify *notify = &dev->event_reg->notify;
+
+ notify->event = event;
+ notify->user = dev->event_reg->user;
+ EP_PCIE_DBG(&ep_pcie_dev,
+ "PCIe V%d: Callback client for event %d.\n",
+ dev->rev, event);
+ dev->event_reg->callback(notify);
+ } else {
+ EP_PCIE_DBG(&ep_pcie_dev,
+ "PCIe V%d: Client does not register for event %d.\n",
+ dev->rev, event);
+ }
+}
+
+static int ep_pcie_get_resources(struct ep_pcie_dev_t *dev,
+ struct platform_device *pdev)
+{
+ int i, len, cnt, ret = 0, size = 0;
+ struct ep_pcie_vreg_info_t *vreg_info;
+ struct ep_pcie_gpio_info_t *gpio_info;
+ struct ep_pcie_clk_info_t *clk_info;
+ struct ep_pcie_reset_info_t *reset_info;
+ struct resource *res;
+ struct ep_pcie_res_info_t *res_info;
+ struct ep_pcie_irq_info_t *irq_info;
+ char prop_name[MAX_PROP_SIZE];
+ const __be32 *prop;
+ u32 *clkfreq = NULL;
+
+ EP_PCIE_DBG(dev, "PCIe V%d\n", dev->rev);
+
+ of_get_property(pdev->dev.of_node, "qcom,phy-init", &size);
+ if (size) {
+ dev->phy_init = (struct ep_pcie_phy_info_t *)
+ devm_kzalloc(&pdev->dev, size, GFP_KERNEL);
+
+ if (dev->phy_init) {
+ dev->phy_init_len =
+ size / sizeof(*dev->phy_init);
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: phy init length is 0x%x.\n",
+ dev->rev, dev->phy_init_len);
+
+ of_property_read_u32_array(pdev->dev.of_node,
+ "qcom,phy-init",
+ (unsigned int *)dev->phy_init,
+ size / sizeof(dev->phy_init->offset));
+ } else {
+ EP_PCIE_ERR(dev,
+ "PCIe V%d: Could not allocate memory for phy init sequence.\n",
+ dev->rev);
+ return -ENOMEM;
+ }
+ } else {
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: PHY V%d: phy init sequence is not present in DT.\n",
+ dev->rev, dev->phy_rev);
+ }
+
+ cnt = of_property_count_strings((&pdev->dev)->of_node,
+ "clock-names");
+ if (cnt > 0) {
+ size_t size = cnt * sizeof(*clkfreq);
+
+ clkfreq = kzalloc(size, GFP_KERNEL);
+ if (!clkfreq) {
+ EP_PCIE_ERR(dev, "PCIe V%d: memory alloc failed\n",
+ dev->rev);
+ return -ENOMEM;
+ }
+ ret = of_property_read_u32_array(
+ (&pdev->dev)->of_node,
+ "max-clock-frequency-hz", clkfreq, cnt);
+ if (ret) {
+ EP_PCIE_ERR(dev,
+ "PCIe V%d: invalid max-clock-frequency-hz property:%d\n",
+ dev->rev, ret);
+ goto out;
+ }
+ }
+
+ for (i = 0; i < EP_PCIE_MAX_VREG; i++) {
+ vreg_info = &dev->vreg[i];
+ vreg_info->hdl =
+ devm_regulator_get(&pdev->dev, vreg_info->name);
+
+ if (PTR_ERR(vreg_info->hdl) == -EPROBE_DEFER) {
+ EP_PCIE_DBG(dev, "EPROBE_DEFER for VReg:%s\n",
+ vreg_info->name);
+ ret = PTR_ERR(vreg_info->hdl);
+ goto out;
+ }
+
+ if (IS_ERR(vreg_info->hdl)) {
+ if (vreg_info->required) {
+ EP_PCIE_ERR(dev, "Vreg %s doesn't exist\n",
+ vreg_info->name);
+ ret = PTR_ERR(vreg_info->hdl);
+ goto out;
+ } else {
+ EP_PCIE_DBG(dev,
+ "Optional Vreg %s doesn't exist\n",
+ vreg_info->name);
+ vreg_info->hdl = NULL;
+ }
+ } else {
+ snprintf(prop_name, MAX_PROP_SIZE,
+ "qcom,%s-voltage-level", vreg_info->name);
+ prop = of_get_property((&pdev->dev)->of_node,
+ prop_name, &len);
+ if (!prop || (len != (3 * sizeof(__be32)))) {
+ EP_PCIE_DBG(dev, "%s %s property\n",
+ prop ? "invalid format" :
+ "no", prop_name);
+ } else {
+ vreg_info->max_v = be32_to_cpup(&prop[0]);
+ vreg_info->min_v = be32_to_cpup(&prop[1]);
+ vreg_info->opt_mode =
+ be32_to_cpup(&prop[2]);
+ }
+ }
+ }
+
+ dev->gdsc = devm_regulator_get(&pdev->dev, "gdsc-vdd");
+
+ if (IS_ERR(dev->gdsc)) {
+ EP_PCIE_ERR(dev, "PCIe V%d: Failed to get %s GDSC:%ld\n",
+ dev->rev, dev->pdev->name, PTR_ERR(dev->gdsc));
+ if (PTR_ERR(dev->gdsc) == -EPROBE_DEFER)
+ EP_PCIE_DBG(dev, "PCIe V%d: EPROBE_DEFER for %s GDSC\n",
+ dev->rev, dev->pdev->name);
+ ret = PTR_ERR(dev->gdsc);
+ goto out;
+ }
+
+ for (i = 0; i < EP_PCIE_MAX_GPIO; i++) {
+ gpio_info = &dev->gpio[i];
+ ret = of_get_named_gpio((&pdev->dev)->of_node,
+ gpio_info->name, 0);
+ if (ret >= 0) {
+ gpio_info->num = ret;
+ ret = 0;
+ EP_PCIE_DBG(dev, "GPIO num for %s is %d\n",
+ gpio_info->name, gpio_info->num);
+ } else {
+ EP_PCIE_DBG(dev,
+ "GPIO %s is not supported in this configuration.\n",
+ gpio_info->name);
+ ret = 0;
+ }
+ }
+
+ for (i = 0; i < EP_PCIE_MAX_CLK; i++) {
+ clk_info = &dev->clk[i];
+
+ clk_info->hdl = devm_clk_get(&pdev->dev, clk_info->name);
+
+ if (IS_ERR(clk_info->hdl)) {
+ if (clk_info->required) {
+ EP_PCIE_ERR(dev,
+ "Clock %s isn't available:%ld\n",
+ clk_info->name, PTR_ERR(clk_info->hdl));
+ ret = PTR_ERR(clk_info->hdl);
+ goto out;
+ } else {
+ EP_PCIE_DBG(dev, "Ignoring Clock %s\n",
+ clk_info->name);
+ clk_info->hdl = NULL;
+ }
+ } else {
+ if (clkfreq != NULL) {
+ clk_info->freq = clkfreq[i +
+ EP_PCIE_MAX_PIPE_CLK];
+ EP_PCIE_DBG(dev, "Freq of Clock %s is:%d\n",
+ clk_info->name, clk_info->freq);
+ }
+ }
+ }
+
+ for (i = 0; i < EP_PCIE_MAX_PIPE_CLK; i++) {
+ clk_info = &dev->pipeclk[i];
+
+ clk_info->hdl = devm_clk_get(&pdev->dev, clk_info->name);
+
+ if (IS_ERR(clk_info->hdl)) {
+ if (clk_info->required) {
+ EP_PCIE_ERR(dev,
+ "Clock %s isn't available:%ld\n",
+ clk_info->name, PTR_ERR(clk_info->hdl));
+ ret = PTR_ERR(clk_info->hdl);
+ goto out;
+ } else {
+ EP_PCIE_DBG(dev, "Ignoring Clock %s\n",
+ clk_info->name);
+ clk_info->hdl = NULL;
+ }
+ } else {
+ if (clkfreq != NULL) {
+ clk_info->freq = clkfreq[i];
+ EP_PCIE_DBG(dev, "Freq of Clock %s is:%d\n",
+ clk_info->name, clk_info->freq);
+ }
+ }
+ }
+
+ for (i = 0; i < EP_PCIE_MAX_RESET; i++) {
+ reset_info = &dev->reset[i];
+
+ reset_info->hdl = devm_reset_control_get(&pdev->dev,
+ reset_info->name);
+
+ if (IS_ERR(reset_info->hdl)) {
+ if (reset_info->required) {
+ EP_PCIE_ERR(dev,
+ "Reset %s isn't available:%ld\n",
+ reset_info->name,
+ PTR_ERR(reset_info->hdl));
+
+ ret = PTR_ERR(reset_info->hdl);
+ reset_info->hdl = NULL;
+ goto out;
+ } else {
+ EP_PCIE_DBG(dev, "Ignoring Reset %s\n",
+ reset_info->name);
+ reset_info->hdl = NULL;
+ }
+ }
+ }
+
+ dev->bus_scale_table = msm_bus_cl_get_pdata(pdev);
+ if (!dev->bus_scale_table) {
+ EP_PCIE_DBG(dev, "PCIe V%d: No bus scale table for %s\n",
+ dev->rev, dev->pdev->name);
+ dev->bus_client = 0;
+ } else {
+ dev->bus_client =
+ msm_bus_scale_register_client(dev->bus_scale_table);
+ if (!dev->bus_client) {
+ EP_PCIE_ERR(dev,
+ "PCIe V%d: Failed to register bus client for %s\n",
+ dev->rev, dev->pdev->name);
+ msm_bus_cl_clear_pdata(dev->bus_scale_table);
+ ret = -ENODEV;
+ goto out;
+ }
+ }
+
+ for (i = 0; i < EP_PCIE_MAX_RES; i++) {
+ res_info = &dev->res[i];
+
+ res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
+ res_info->name);
+
+ if (!res) {
+ EP_PCIE_ERR(dev,
+ "PCIe V%d: can't get resource for %s.\n",
+ dev->rev, res_info->name);
+ ret = -ENOMEM;
+ goto out;
+ } else {
+ EP_PCIE_DBG(dev, "start addr for %s is %pa.\n",
+ res_info->name, &res->start);
+ }
+
+ res_info->base = devm_ioremap(&pdev->dev,
+ res->start, resource_size(res));
+ if (!res_info->base) {
+ EP_PCIE_ERR(dev, "PCIe V%d: can't remap %s.\n",
+ dev->rev, res_info->name);
+ ret = -ENOMEM;
+ goto out;
+ }
+ res_info->resource = res;
+ }
+
+ for (i = 0; i < EP_PCIE_MAX_IRQ; i++) {
+ irq_info = &dev->irq[i];
+
+ res = platform_get_resource_byname(pdev, IORESOURCE_IRQ,
+ irq_info->name);
+
+ if (!res) {
+ EP_PCIE_DBG2(dev, "PCIe V%d: can't find IRQ # for %s\n",
+ dev->rev, irq_info->name);
+ } else {
+ irq_info->num = res->start;
+ EP_PCIE_DBG2(dev, "IRQ # for %s is %d.\n",
+ irq_info->name, irq_info->num);
+ }
+ }
+
+ dev->parf = dev->res[EP_PCIE_RES_PARF].base;
+ dev->phy = dev->res[EP_PCIE_RES_PHY].base;
+ dev->mmio = dev->res[EP_PCIE_RES_MMIO].base;
+ dev->msi = dev->res[EP_PCIE_RES_MSI].base;
+ dev->dm_core = dev->res[EP_PCIE_RES_DM_CORE].base;
+ dev->elbi = dev->res[EP_PCIE_RES_ELBI].base;
+
+out:
+ kfree(clkfreq);
+ return ret;
+}
+
+static void ep_pcie_release_resources(struct ep_pcie_dev_t *dev)
+{
+ dev->parf = NULL;
+ dev->elbi = NULL;
+ dev->dm_core = NULL;
+ dev->phy = NULL;
+ dev->mmio = NULL;
+ dev->msi = NULL;
+
+ if (dev->bus_client) {
+ msm_bus_scale_unregister_client(dev->bus_client);
+ dev->bus_client = 0;
+ }
+}
+
+static void ep_pcie_enumeration_complete(struct ep_pcie_dev_t *dev)
+{
+ unsigned long irqsave_flags;
+
+ spin_lock_irqsave(&dev->isr_lock, irqsave_flags);
+
+ dev->enumerated = true;
+ dev->link_status = EP_PCIE_LINK_ENABLED;
+
+ if (dev->gpio[EP_PCIE_GPIO_MDM2AP].num) {
+ /* assert MDM2AP Status GPIO */
+ EP_PCIE_DBG2(dev, "PCIe V%d: assert MDM2AP Status.\n",
+ dev->rev);
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: MDM2APStatus GPIO initial:%d.\n",
+ dev->rev,
+ gpio_get_value(
+ dev->gpio[EP_PCIE_GPIO_MDM2AP].num));
+ gpio_set_value(dev->gpio[EP_PCIE_GPIO_MDM2AP].num,
+ dev->gpio[EP_PCIE_GPIO_MDM2AP].on);
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: MDM2APStatus GPIO after assertion:%d.\n",
+ dev->rev,
+ gpio_get_value(
+ dev->gpio[EP_PCIE_GPIO_MDM2AP].num));
+ }
+
+ hw_drv.device_id = readl_relaxed(dev->dm_core);
+ EP_PCIE_DBG(&ep_pcie_dev,
+ "PCIe V%d: register driver for device 0x%x.\n",
+ ep_pcie_dev.rev, hw_drv.device_id);
+ ep_pcie_register_drv(&hw_drv);
+ if (!dev->no_notify)
+ ep_pcie_notify_event(dev, EP_PCIE_EVENT_LINKUP);
+ else
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: do not notify client about linkup.\n",
+ dev->rev);
+
+ spin_unlock_irqrestore(&dev->isr_lock, irqsave_flags);
+}
+
+int ep_pcie_core_enable_endpoint(enum ep_pcie_options opt)
+{
+ int ret = 0;
+ u32 val = 0;
+ u32 retries = 0;
+ u32 bme = 0;
+ bool ltssm_en = false;
+ struct ep_pcie_dev_t *dev = &ep_pcie_dev;
+
+ EP_PCIE_DBG(dev, "PCIe V%d: options input are 0x%x.\n", dev->rev, opt);
+
+ mutex_lock(&dev->setup_mtx);
+
+ if (dev->link_status == EP_PCIE_LINK_ENABLED) {
+ EP_PCIE_ERR(dev,
+ "PCIe V%d: link is already enabled.\n",
+ dev->rev);
+ goto out;
+ }
+
+ if (dev->link_status == EP_PCIE_LINK_UP)
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: link is already up, let's proceed with the voting for the resources.\n",
+ dev->rev);
+
+ if (dev->power_on && (opt & EP_PCIE_OPT_POWER_ON)) {
+ EP_PCIE_ERR(dev,
+ "PCIe V%d: request to turn on the power when link is already powered on.\n",
+ dev->rev);
+ goto out;
+ }
+
+ if (opt & EP_PCIE_OPT_POWER_ON) {
+ /* enable power */
+ ret = ep_pcie_vreg_init(dev);
+ if (ret) {
+ EP_PCIE_ERR(dev, "PCIe V%d: failed to enable Vreg\n",
+ dev->rev);
+ goto out;
+ }
+
+ /* enable clocks */
+ ret = ep_pcie_clk_init(dev);
+ if (ret) {
+ EP_PCIE_ERR(dev, "PCIe V%d: failed to enable clocks\n",
+ dev->rev);
+ goto clk_fail;
+ }
+
+ /* enable pipe clock */
+ ret = ep_pcie_pipe_clk_init(dev);
+ if (ret) {
+ EP_PCIE_ERR(dev,
+ "PCIe V%d: failed to enable pipe clock\n",
+ dev->rev);
+ goto pipe_clk_fail;
+ }
+
+ dev->power_on = true;
+ }
+
+ if (!(opt & EP_PCIE_OPT_ENUM))
+ goto out;
+
+ /* check link status during initial bootup */
+ if (!dev->enumerated) {
+ val = readl_relaxed(dev->parf + PCIE20_PARF_PM_STTS);
+ val = val & PARF_XMLH_LINK_UP;
+ EP_PCIE_DBG(dev, "PCIe V%d: Link status is 0x%x.\n", dev->rev,
+ val);
+ if (val) {
+ EP_PCIE_INFO(dev,
+ "PCIe V%d: link initialized by bootloader for LE PCIe endpoint; skip link training in HLOS.\n",
+ dev->rev);
+ ep_pcie_core_init(dev, true);
+ dev->link_status = EP_PCIE_LINK_UP;
+ dev->l23_ready = false;
+ goto checkbme;
+ } else {
+ ltssm_en = readl_relaxed(dev->parf
+ + PCIE20_PARF_LTSSM) & BIT(8);
+
+ if (ltssm_en) {
+ EP_PCIE_ERR(dev,
+ "PCIe V%d: link is not up when LTSSM has already enabled by bootloader.\n",
+ dev->rev);
+ ret = EP_PCIE_ERROR;
+ goto link_fail;
+ } else {
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: Proceed with regular link training.\n",
+ dev->rev);
+ }
+ }
+ }
+
+ if (opt & EP_PCIE_OPT_AST_WAKE) {
+ /* assert PCIe WAKE# */
+ EP_PCIE_INFO(dev, "PCIe V%d: assert PCIe WAKE#.\n",
+ dev->rev);
+ EP_PCIE_DBG(dev, "PCIe V%d: WAKE GPIO initial:%d.\n",
+ dev->rev,
+ gpio_get_value(dev->gpio[EP_PCIE_GPIO_WAKE].num));
+ gpio_set_value(dev->gpio[EP_PCIE_GPIO_WAKE].num,
+ 1 - dev->gpio[EP_PCIE_GPIO_WAKE].on);
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: WAKE GPIO after deassertion:%d.\n",
+ dev->rev,
+ gpio_get_value(dev->gpio[EP_PCIE_GPIO_WAKE].num));
+ gpio_set_value(dev->gpio[EP_PCIE_GPIO_WAKE].num,
+ dev->gpio[EP_PCIE_GPIO_WAKE].on);
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: WAKE GPIO after assertion:%d.\n",
+ dev->rev,
+ gpio_get_value(dev->gpio[EP_PCIE_GPIO_WAKE].num));
+ }
+
+ /* wait for host side to deassert PERST */
+ retries = 0;
+ do {
+ if (gpio_get_value(dev->gpio[EP_PCIE_GPIO_PERST].num) == 1)
+ break;
+ retries++;
+ usleep_range(PERST_TIMEOUT_US_MIN, PERST_TIMEOUT_US_MAX);
+ } while (retries < PERST_CHECK_MAX_COUNT);
+
+ EP_PCIE_DBG(dev, "PCIe V%d: number of PERST retries:%d.\n",
+ dev->rev, retries);
+
+ if (retries == PERST_CHECK_MAX_COUNT) {
+ EP_PCIE_ERR(dev,
+ "PCIe V%d: PERST is not de-asserted by host\n",
+ dev->rev);
+ ret = EP_PCIE_ERROR;
+ goto link_fail;
+ } else {
+ dev->perst_deast = true;
+ if (opt & EP_PCIE_OPT_AST_WAKE) {
+ /* deassert PCIe WAKE# */
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: deassert PCIe WAKE# after PERST# is deasserted.\n",
+ dev->rev);
+ gpio_set_value(dev->gpio[EP_PCIE_GPIO_WAKE].num,
+ 1 - dev->gpio[EP_PCIE_GPIO_WAKE].on);
+ }
+ }
+
+ /* init PCIe PHY */
+ ep_pcie_phy_init(dev);
+
+ EP_PCIE_DBG(dev, "PCIe V%d: waiting for phy ready...\n", dev->rev);
+ retries = 0;
+ do {
+ if (ep_pcie_phy_is_ready(dev))
+ break;
+ retries++;
+ if (retries % 100 == 0)
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: current number of PHY retries:%d.\n",
+ dev->rev, retries);
+ usleep_range(REFCLK_STABILIZATION_DELAY_US_MIN,
+ REFCLK_STABILIZATION_DELAY_US_MAX);
+ } while (retries < PHY_READY_TIMEOUT_COUNT);
+
+ EP_PCIE_DBG(dev, "PCIe V%d: number of PHY retries:%d.\n",
+ dev->rev, retries);
+
+ if (retries == PHY_READY_TIMEOUT_COUNT) {
+ EP_PCIE_ERR(dev, "PCIe V%d: PCIe PHY failed to come up!\n",
+ dev->rev);
+ ret = EP_PCIE_ERROR;
+ ep_pcie_reg_dump(dev, BIT(EP_PCIE_RES_PHY), false);
+ goto link_fail;
+ } else {
+ EP_PCIE_INFO(dev, "PCIe V%d: PCIe PHY is ready!\n", dev->rev);
+ }
+
+ ep_pcie_core_init(dev, false);
+ ep_pcie_config_inbound_iatu(dev);
+
+ /* enable link training */
+ if (dev->phy_rev >= 3)
+ ep_pcie_write_mask(dev->parf + PCIE20_PARF_LTSSM, 0, BIT(8));
+ else
+ ep_pcie_write_mask(dev->elbi + PCIE20_ELBI_SYS_CTRL, 0, BIT(0));
+
+ EP_PCIE_DBG(dev, "PCIe V%d: check if link is up\n", dev->rev);
+
+ /* Wait for up to 100ms for the link to come up */
+ retries = 0;
+ do {
+ usleep_range(LINK_UP_TIMEOUT_US_MIN, LINK_UP_TIMEOUT_US_MAX);
+ val = readl_relaxed(dev->elbi + PCIE20_ELBI_SYS_STTS);
+ retries++;
+ if (retries % 100 == 0)
+ EP_PCIE_DBG(dev, "PCIe V%d: LTSSM_STATE:0x%x.\n",
+ dev->rev, (val >> 0xC) & 0x3f);
+ } while ((!(val & XMLH_LINK_UP) ||
+ !ep_pcie_confirm_linkup(dev, false))
+ && (retries < LINK_UP_CHECK_MAX_COUNT));
+
+ if (retries == LINK_UP_CHECK_MAX_COUNT) {
+ EP_PCIE_ERR(dev, "PCIe V%d: link initialization failed\n",
+ dev->rev);
+ ret = EP_PCIE_ERROR;
+ goto link_fail;
+ } else {
+ dev->link_status = EP_PCIE_LINK_UP;
+ dev->l23_ready = false;
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: link is up after %d checkings (%d ms)\n",
+ dev->rev, retries,
+ LINK_UP_TIMEOUT_US_MIN * retries / 1000);
+ EP_PCIE_INFO(dev,
+ "PCIe V%d: link initialized for LE PCIe endpoint\n",
+ dev->rev);
+ }
+
+checkbme:
+ if (dev->active_config) {
+ ep_pcie_write_mask(dev->parf + PCIE20_PARF_SLV_ADDR_MSB_CTRL,
+ 0, BIT(0));
+ ep_pcie_write_reg(dev->parf, PCIE20_PARF_SLV_ADDR_SPACE_SIZE_HI,
+ 0x200);
+ ep_pcie_write_reg(dev->parf, PCIE20_PARF_SLV_ADDR_SPACE_SIZE,
+ 0x0);
+ ep_pcie_write_reg(dev->parf, PCIE20_PARF_DBI_BASE_ADDR_HI,
+ 0x100);
+ ep_pcie_write_reg(dev->parf, PCIE20_PARF_DBI_BASE_ADDR,
+ 0x7FFFE000);
+ }
+
+ if (!(opt & EP_PCIE_OPT_ENUM_ASYNC)) {
+ /* Wait for up to 1000ms for BME to be set */
+ retries = 0;
+
+ bme = readl_relaxed(dev->dm_core +
+ PCIE20_COMMAND_STATUS) & BIT(2);
+ while (!bme && (retries < BME_CHECK_MAX_COUNT)) {
+ retries++;
+ usleep_range(BME_TIMEOUT_US_MIN, BME_TIMEOUT_US_MAX);
+ bme = readl_relaxed(dev->dm_core +
+ PCIE20_COMMAND_STATUS) & BIT(2);
+ }
+ } else {
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: EP_PCIE_OPT_ENUM_ASYNC is true.\n",
+ dev->rev);
+ bme = readl_relaxed(dev->dm_core +
+ PCIE20_COMMAND_STATUS) & BIT(2);
+ }
+
+ if (bme) {
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: PCIe link is up and BME is enabled after %d checkings (%d ms).\n",
+ dev->rev, retries,
+ BME_TIMEOUT_US_MIN * retries / 1000);
+ ep_pcie_enumeration_complete(dev);
+ /* expose BAR to user space to identify modem */
+ ep_pcie_bar0_address =
+ readl_relaxed(dev->dm_core + PCIE20_BAR0);
+ } else {
+ if (!(opt & EP_PCIE_OPT_ENUM_ASYNC))
+ EP_PCIE_ERR(dev,
+ "PCIe V%d: PCIe link is up but BME is still disabled after max waiting time.\n",
+ dev->rev);
+ if (!ep_pcie_debug_keep_resource &&
+ !(opt&EP_PCIE_OPT_ENUM_ASYNC)) {
+ ret = EP_PCIE_ERROR;
+ dev->link_status = EP_PCIE_LINK_DISABLED;
+ goto link_fail;
+ }
+ }
+
+ dev->suspending = false;
+ goto out;
+
+link_fail:
+ dev->power_on = false;
+ if (!ep_pcie_debug_keep_resource)
+ ep_pcie_pipe_clk_deinit(dev);
+pipe_clk_fail:
+ if (!ep_pcie_debug_keep_resource)
+ ep_pcie_clk_deinit(dev);
+clk_fail:
+ if (!ep_pcie_debug_keep_resource)
+ ep_pcie_vreg_deinit(dev);
+ else
+ ret = 0;
+out:
+ mutex_unlock(&dev->setup_mtx);
+
+ return ret;
+}
+
+int ep_pcie_core_disable_endpoint(void)
+{
+ int rc = 0;
+ struct ep_pcie_dev_t *dev = &ep_pcie_dev;
+
+ EP_PCIE_DBG(dev, "PCIe V%d\n", dev->rev);
+
+ mutex_lock(&dev->setup_mtx);
+
+ if (!dev->power_on) {
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: the link is already power down.\n",
+ dev->rev);
+ goto out;
+ }
+
+ dev->link_status = EP_PCIE_LINK_DISABLED;
+ dev->power_on = false;
+
+ EP_PCIE_DBG(dev, "PCIe V%d: shut down the link.\n",
+ dev->rev);
+
+ ep_pcie_pipe_clk_deinit(dev);
+ ep_pcie_clk_deinit(dev);
+ ep_pcie_vreg_deinit(dev);
+out:
+ mutex_unlock(&dev->setup_mtx);
+ return rc;
+}
+
+int ep_pcie_core_mask_irq_event(enum ep_pcie_irq_event event,
+ bool enable)
+{
+ int rc = 0;
+ struct ep_pcie_dev_t *dev = &ep_pcie_dev;
+ unsigned long irqsave_flags;
+ u32 mask = 0;
+
+ EP_PCIE_DUMP(dev,
+ "PCIe V%d: Client askes to %s IRQ event 0x%x.\n",
+ dev->rev,
+ enable ? "enable" : "disable",
+ event);
+
+ spin_lock_irqsave(&dev->ext_lock, irqsave_flags);
+
+ if (dev->aggregated_irq) {
+ mask = readl_relaxed(dev->parf + PCIE20_PARF_INT_ALL_MASK);
+ EP_PCIE_DUMP(dev,
+ "PCIe V%d: current PCIE20_PARF_INT_ALL_MASK:0x%x\n",
+ dev->rev, mask);
+ if (enable)
+ ep_pcie_write_mask(dev->parf + PCIE20_PARF_INT_ALL_MASK,
+ 0, BIT(event));
+ else
+ ep_pcie_write_mask(dev->parf + PCIE20_PARF_INT_ALL_MASK,
+ BIT(event), 0);
+ EP_PCIE_DUMP(dev,
+ "PCIe V%d: new PCIE20_PARF_INT_ALL_MASK:0x%x\n",
+ dev->rev,
+ readl_relaxed(dev->parf + PCIE20_PARF_INT_ALL_MASK));
+ } else {
+ EP_PCIE_ERR(dev,
+ "PCIe V%d: Client askes to %s IRQ event 0x%x when aggregated IRQ is not supported.\n",
+ dev->rev,
+ enable ? "enable" : "disable",
+ event);
+ rc = EP_PCIE_ERROR;
+ }
+
+ spin_unlock_irqrestore(&dev->ext_lock, irqsave_flags);
+ return rc;
+}
+
+static irqreturn_t ep_pcie_handle_bme_irq(int irq, void *data)
+{
+ struct ep_pcie_dev_t *dev = data;
+ unsigned long irqsave_flags;
+
+ spin_lock_irqsave(&dev->isr_lock, irqsave_flags);
+
+ dev->bme_counter++;
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: No. %ld BME IRQ.\n", dev->rev, dev->bme_counter);
+
+ if (readl_relaxed(dev->dm_core + PCIE20_COMMAND_STATUS) & BIT(2)) {
+ /* BME has been enabled */
+ if (!dev->enumerated) {
+ EP_PCIE_DBG(dev,
+ "PCIe V%d:BME is set. Enumeration is complete\n",
+ dev->rev);
+ schedule_work(&dev->handle_bme_work);
+ } else {
+ EP_PCIE_DBG(dev,
+ "PCIe V%d:BME is set again after the enumeration has completed; callback client for link ready.\n",
+ dev->rev);
+ ep_pcie_notify_event(dev, EP_PCIE_EVENT_LINKUP);
+ }
+ } else {
+ EP_PCIE_DBG(dev,
+ "PCIe V%d:BME is still disabled\n", dev->rev);
+ }
+
+ spin_unlock_irqrestore(&dev->isr_lock, irqsave_flags);
+ return IRQ_HANDLED;
+}
+
+static irqreturn_t ep_pcie_handle_linkdown_irq(int irq, void *data)
+{
+ struct ep_pcie_dev_t *dev = data;
+ unsigned long irqsave_flags;
+
+ spin_lock_irqsave(&dev->isr_lock, irqsave_flags);
+
+ dev->linkdown_counter++;
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: No. %ld linkdown IRQ.\n",
+ dev->rev, dev->linkdown_counter);
+
+ if (!dev->enumerated || dev->link_status == EP_PCIE_LINK_DISABLED) {
+ EP_PCIE_DBG(dev,
+ "PCIe V%d:Linkdown IRQ happened when the link is disabled.\n",
+ dev->rev);
+ } else if (dev->suspending) {
+ EP_PCIE_DBG(dev,
+ "PCIe V%d:Linkdown IRQ happened when the link is suspending.\n",
+ dev->rev);
+ } else {
+ dev->link_status = EP_PCIE_LINK_DISABLED;
+ EP_PCIE_ERR(dev, "PCIe V%d:PCIe link is down for %ld times\n",
+ dev->rev, dev->linkdown_counter);
+ ep_pcie_reg_dump(dev, BIT(EP_PCIE_RES_PHY) |
+ BIT(EP_PCIE_RES_PARF), true);
+ ep_pcie_notify_event(dev, EP_PCIE_EVENT_LINKDOWN);
+ }
+
+ spin_unlock_irqrestore(&dev->isr_lock, irqsave_flags);
+
+ return IRQ_HANDLED;
+}
+
+static irqreturn_t ep_pcie_handle_linkup_irq(int irq, void *data)
+{
+ struct ep_pcie_dev_t *dev = data;
+ unsigned long irqsave_flags;
+
+ spin_lock_irqsave(&dev->isr_lock, irqsave_flags);
+
+ dev->linkup_counter++;
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: No. %ld linkup IRQ.\n",
+ dev->rev, dev->linkup_counter);
+
+ dev->link_status = EP_PCIE_LINK_UP;
+
+ spin_unlock_irqrestore(&dev->isr_lock, irqsave_flags);
+
+ return IRQ_HANDLED;
+}
+
+static irqreturn_t ep_pcie_handle_pm_turnoff_irq(int irq, void *data)
+{
+ struct ep_pcie_dev_t *dev = data;
+ unsigned long irqsave_flags;
+
+ spin_lock_irqsave(&dev->isr_lock, irqsave_flags);
+
+ dev->pm_to_counter++;
+ EP_PCIE_DBG2(dev,
+ "PCIe V%d: No. %ld PM_TURNOFF is received.\n",
+ dev->rev, dev->pm_to_counter);
+ EP_PCIE_DBG2(dev, "PCIe V%d: Put the link into L23.\n", dev->rev);
+ ep_pcie_write_mask(dev->parf + PCIE20_PARF_PM_CTRL, 0, BIT(2));
+
+ spin_unlock_irqrestore(&dev->isr_lock, irqsave_flags);
+
+ return IRQ_HANDLED;
+}
+
+static irqreturn_t ep_pcie_handle_dstate_change_irq(int irq, void *data)
+{
+ struct ep_pcie_dev_t *dev = data;
+ unsigned long irqsave_flags;
+ u32 dstate;
+
+ spin_lock_irqsave(&dev->isr_lock, irqsave_flags);
+
+ dstate = readl_relaxed(dev->dm_core +
+ PCIE20_CON_STATUS) & 0x3;
+
+ if (dev->dump_conf)
+ ep_pcie_reg_dump(dev, BIT(EP_PCIE_RES_DM_CORE), false);
+
+ if (dstate == 3) {
+ dev->l23_ready = true;
+ dev->d3_counter++;
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: No. %ld change to D3 state.\n",
+ dev->rev, dev->d3_counter);
+ ep_pcie_write_mask(dev->parf + PCIE20_PARF_PM_CTRL, 0, BIT(1));
+
+ if (dev->enumerated)
+ ep_pcie_notify_event(dev, EP_PCIE_EVENT_PM_D3_HOT);
+ else
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: do not notify client about this D3 hot event since enumeration by HLOS is not done yet.\n",
+ dev->rev);
+ } else if (dstate == 0) {
+ dev->l23_ready = false;
+ dev->d0_counter++;
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: No. %ld change to D0 state.\n",
+ dev->rev, dev->d0_counter);
+ ep_pcie_notify_event(dev, EP_PCIE_EVENT_PM_D0);
+ } else {
+ EP_PCIE_ERR(dev,
+ "PCIe V%d:invalid D state change to 0x%x.\n",
+ dev->rev, dstate);
+ }
+
+ spin_unlock_irqrestore(&dev->isr_lock, irqsave_flags);
+
+ return IRQ_HANDLED;
+}
+
+static int ep_pcie_enumeration(struct ep_pcie_dev_t *dev)
+{
+ int ret = 0;
+
+ if (!dev) {
+ EP_PCIE_ERR(&ep_pcie_dev,
+ "PCIe V%d: the input handler is NULL.\n",
+ ep_pcie_dev.rev);
+ return EP_PCIE_ERROR;
+ }
+
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: start PCIe link enumeration per host side.\n",
+ dev->rev);
+
+ ret = ep_pcie_core_enable_endpoint(EP_PCIE_OPT_ALL);
+
+ if (ret) {
+ EP_PCIE_ERR(&ep_pcie_dev,
+ "PCIe V%d: PCIe link enumeration failed.\n",
+ ep_pcie_dev.rev);
+ } else {
+ if (dev->link_status == EP_PCIE_LINK_ENABLED) {
+ EP_PCIE_INFO(&ep_pcie_dev,
+ "PCIe V%d: PCIe link enumeration is successful with host side.\n",
+ ep_pcie_dev.rev);
+ } else if (dev->link_status == EP_PCIE_LINK_UP) {
+ EP_PCIE_INFO(&ep_pcie_dev,
+ "PCIe V%d: PCIe link training is successful with host side. Waiting for enumeration to complete.\n",
+ ep_pcie_dev.rev);
+ } else {
+ EP_PCIE_ERR(&ep_pcie_dev,
+ "PCIe V%d: PCIe link is in the unexpected status: %d\n",
+ ep_pcie_dev.rev, dev->link_status);
+ }
+ }
+
+ return ret;
+}
+
+static void handle_perst_func(struct work_struct *work)
+{
+ struct ep_pcie_dev_t *dev = container_of(work, struct ep_pcie_dev_t,
+ handle_perst_work);
+
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: Start enumeration due to PERST deassertion.\n",
+ dev->rev);
+
+ ep_pcie_enumeration(dev);
+}
+
+static void handle_d3cold_func(struct work_struct *work)
+{
+ struct ep_pcie_dev_t *dev = container_of(work, struct ep_pcie_dev_t,
+ handle_d3cold_work);
+
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: shutdown PCIe link due to PERST assertion before BME is set.\n",
+ dev->rev);
+ ep_pcie_core_disable_endpoint();
+ dev->no_notify = false;
+}
+
+static void handle_bme_func(struct work_struct *work)
+{
+ struct ep_pcie_dev_t *dev = container_of(work,
+ struct ep_pcie_dev_t, handle_bme_work);
+
+ ep_pcie_enumeration_complete(dev);
+}
+
+static irqreturn_t ep_pcie_handle_perst_irq(int irq, void *data)
+{
+ struct ep_pcie_dev_t *dev = data;
+ unsigned long irqsave_flags;
+ u32 perst;
+
+ spin_lock_irqsave(&dev->isr_lock, irqsave_flags);
+
+ perst = gpio_get_value(dev->gpio[EP_PCIE_GPIO_PERST].num);
+
+ if (!dev->enumerated) {
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: PCIe is not enumerated yet; PERST is %sasserted.\n",
+ dev->rev, perst ? "de" : "");
+ if (perst) {
+ /* start work for link enumeration with the host side */
+ schedule_work(&dev->handle_perst_work);
+ } else {
+ dev->no_notify = true;
+ /* shutdown the link if the link is already on */
+ schedule_work(&dev->handle_d3cold_work);
+ }
+
+ goto out;
+ }
+
+ if (perst) {
+ dev->perst_deast = true;
+ dev->perst_deast_counter++;
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: No. %ld PERST deassertion.\n",
+ dev->rev, dev->perst_deast_counter);
+ ep_pcie_notify_event(dev, EP_PCIE_EVENT_PM_RST_DEAST);
+ } else {
+ dev->perst_deast = false;
+ dev->perst_ast_counter++;
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: No. %ld PERST assertion.\n",
+ dev->rev, dev->perst_ast_counter);
+
+ if (dev->client_ready) {
+ ep_pcie_notify_event(dev, EP_PCIE_EVENT_PM_D3_COLD);
+ } else {
+ dev->no_notify = true;
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: Client driver is not ready when this PERST assertion happens; shutdown link now.\n",
+ dev->rev);
+ schedule_work(&dev->handle_d3cold_work);
+ }
+ }
+
+out:
+ spin_unlock_irqrestore(&dev->isr_lock, irqsave_flags);
+
+ return IRQ_HANDLED;
+}
+
+static irqreturn_t ep_pcie_handle_global_irq(int irq, void *data)
+{
+ struct ep_pcie_dev_t *dev = data;
+ int i;
+ u32 status = readl_relaxed(dev->parf + PCIE20_PARF_INT_ALL_STATUS);
+ u32 mask = readl_relaxed(dev->parf + PCIE20_PARF_INT_ALL_MASK);
+
+ ep_pcie_write_mask(dev->parf + PCIE20_PARF_INT_ALL_CLEAR, 0, status);
+
+ dev->global_irq_counter++;
+ EP_PCIE_DUMP(dev,
+ "PCIe V%d: No. %ld Global IRQ %d received; status:0x%x; mask:0x%x.\n",
+ dev->rev, dev->global_irq_counter, irq, status, mask);
+ status &= mask;
+
+ for (i = 1; i <= EP_PCIE_INT_EVT_MAX; i++) {
+ if (status & BIT(i)) {
+ switch (i) {
+ case EP_PCIE_INT_EVT_LINK_DOWN:
+ EP_PCIE_DUMP(dev,
+ "PCIe V%d: handle linkdown event.\n",
+ dev->rev);
+ ep_pcie_handle_linkdown_irq(irq, data);
+ break;
+ case EP_PCIE_INT_EVT_BME:
+ EP_PCIE_DUMP(dev,
+ "PCIe V%d: handle BME event.\n",
+ dev->rev);
+ ep_pcie_handle_bme_irq(irq, data);
+ break;
+ case EP_PCIE_INT_EVT_PM_TURNOFF:
+ EP_PCIE_DUMP(dev,
+ "PCIe V%d: handle PM Turn-off event.\n",
+ dev->rev);
+ ep_pcie_handle_pm_turnoff_irq(irq, data);
+ break;
+ case EP_PCIE_INT_EVT_MHI_A7:
+ EP_PCIE_DUMP(dev,
+ "PCIe V%d: handle MHI A7 event.\n",
+ dev->rev);
+ ep_pcie_notify_event(dev, EP_PCIE_EVENT_MHI_A7);
+ break;
+ case EP_PCIE_INT_EVT_DSTATE_CHANGE:
+ EP_PCIE_DUMP(dev,
+ "PCIe V%d: handle D state chagge event.\n",
+ dev->rev);
+ ep_pcie_handle_dstate_change_irq(irq, data);
+ break;
+ case EP_PCIE_INT_EVT_LINK_UP:
+ EP_PCIE_DUMP(dev,
+ "PCIe V%d: handle linkup event.\n",
+ dev->rev);
+ ep_pcie_handle_linkup_irq(irq, data);
+ break;
+ default:
+ EP_PCIE_ERR(dev,
+ "PCIe V%d: Unexpected event %d is caught!\n",
+ dev->rev, i);
+ }
+ }
+ }
+
+ return IRQ_HANDLED;
+}
+
+int32_t ep_pcie_irq_init(struct ep_pcie_dev_t *dev)
+{
+ int ret;
+ struct device *pdev = &dev->pdev->dev;
+ u32 perst_irq;
+
+ EP_PCIE_DBG(dev, "PCIe V%d\n", dev->rev);
+
+ /* Initialize all work items before registering for IRQs */
+ INIT_WORK(&dev->handle_perst_work, handle_perst_func);
+ INIT_WORK(&dev->handle_bme_work, handle_bme_func);
+ INIT_WORK(&dev->handle_d3cold_work, handle_d3cold_func);
+
+ if (dev->aggregated_irq) {
+ ret = devm_request_irq(pdev,
+ dev->irq[EP_PCIE_INT_GLOBAL].num,
+ ep_pcie_handle_global_irq,
+ IRQF_TRIGGER_HIGH, dev->irq[EP_PCIE_INT_GLOBAL].name,
+ dev);
+ if (ret) {
+ EP_PCIE_ERR(dev,
+ "PCIe V%d: Unable to request global interrupt %d\n",
+ dev->rev, dev->irq[EP_PCIE_INT_GLOBAL].num);
+ return ret;
+ }
+
+ ret = enable_irq_wake(dev->irq[EP_PCIE_INT_GLOBAL].num);
+ if (ret) {
+ EP_PCIE_ERR(dev,
+ "PCIe V%d: Unable to enable wake for Global interrupt\n",
+ dev->rev);
+ return ret;
+ }
+
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: request global interrupt %d\n",
+ dev->rev, dev->irq[EP_PCIE_INT_GLOBAL].num);
+ goto perst_irq;
+ }
+
+ /* register handler for BME interrupt */
+ ret = devm_request_irq(pdev,
+ dev->irq[EP_PCIE_INT_BME].num,
+ ep_pcie_handle_bme_irq,
+ IRQF_TRIGGER_RISING, dev->irq[EP_PCIE_INT_BME].name,
+ dev);
+ if (ret) {
+ EP_PCIE_ERR(dev,
+ "PCIe V%d: Unable to request BME interrupt %d\n",
+ dev->rev, dev->irq[EP_PCIE_INT_BME].num);
+ return ret;
+ }
+
+ ret = enable_irq_wake(dev->irq[EP_PCIE_INT_BME].num);
+ if (ret) {
+ EP_PCIE_ERR(dev,
+ "PCIe V%d: Unable to enable wake for BME interrupt\n",
+ dev->rev);
+ return ret;
+ }
+
+ /* register handler for linkdown interrupt */
+ ret = devm_request_irq(pdev,
+ dev->irq[EP_PCIE_INT_LINK_DOWN].num,
+ ep_pcie_handle_linkdown_irq,
+ IRQF_TRIGGER_RISING, dev->irq[EP_PCIE_INT_LINK_DOWN].name,
+ dev);
+ if (ret) {
+ EP_PCIE_ERR(dev,
+ "PCIe V%d: Unable to request linkdown interrupt %d\n",
+ dev->rev, dev->irq[EP_PCIE_INT_LINK_DOWN].num);
+ return ret;
+ }
+
+ /* register handler for linkup interrupt */
+ ret = devm_request_irq(pdev,
+ dev->irq[EP_PCIE_INT_LINK_UP].num, ep_pcie_handle_linkup_irq,
+ IRQF_TRIGGER_RISING, dev->irq[EP_PCIE_INT_LINK_UP].name,
+ dev);
+ if (ret) {
+ EP_PCIE_ERR(dev,
+ "PCIe V%d: Unable to request linkup interrupt %d\n",
+ dev->rev, dev->irq[EP_PCIE_INT_LINK_UP].num);
+ return ret;
+ }
+
+ /* register handler for PM_TURNOFF interrupt */
+ ret = devm_request_irq(pdev,
+ dev->irq[EP_PCIE_INT_PM_TURNOFF].num,
+ ep_pcie_handle_pm_turnoff_irq,
+ IRQF_TRIGGER_RISING, dev->irq[EP_PCIE_INT_PM_TURNOFF].name,
+ dev);
+ if (ret) {
+ EP_PCIE_ERR(dev,
+ "PCIe V%d: Unable to request PM_TURNOFF interrupt %d\n",
+ dev->rev, dev->irq[EP_PCIE_INT_PM_TURNOFF].num);
+ return ret;
+ }
+
+ /* register handler for D state change interrupt */
+ ret = devm_request_irq(pdev,
+ dev->irq[EP_PCIE_INT_DSTATE_CHANGE].num,
+ ep_pcie_handle_dstate_change_irq,
+ IRQF_TRIGGER_RISING, dev->irq[EP_PCIE_INT_DSTATE_CHANGE].name,
+ dev);
+ if (ret) {
+ EP_PCIE_ERR(dev,
+ "PCIe V%d: Unable to request D state change interrupt %d\n",
+ dev->rev, dev->irq[EP_PCIE_INT_DSTATE_CHANGE].num);
+ return ret;
+ }
+
+perst_irq:
+ /* register handler for PERST interrupt */
+ perst_irq = gpio_to_irq(dev->gpio[EP_PCIE_GPIO_PERST].num);
+ ret = devm_request_irq(pdev, perst_irq,
+ ep_pcie_handle_perst_irq,
+ IRQF_TRIGGER_FALLING | IRQF_TRIGGER_RISING,
+ "ep_pcie_perst", dev);
+ if (ret) {
+ EP_PCIE_ERR(dev,
+ "PCIe V%d: Unable to request PERST interrupt %d\n",
+ dev->rev, perst_irq);
+ return ret;
+ }
+
+ ret = enable_irq_wake(perst_irq);
+ if (ret) {
+ EP_PCIE_ERR(dev,
+ "PCIe V%d: Unable to enable PERST interrupt %d\n",
+ dev->rev, perst_irq);
+ return ret;
+ }
+
+ return 0;
+}
+
+void ep_pcie_irq_deinit(struct ep_pcie_dev_t *dev)
+{
+ EP_PCIE_DBG(dev, "PCIe V%d\n", dev->rev);
+
+ disable_irq(gpio_to_irq(dev->gpio[EP_PCIE_GPIO_PERST].num));
+}
+
+int ep_pcie_core_register_event(struct ep_pcie_register_event *reg)
+{
+ if (!reg) {
+ EP_PCIE_ERR(&ep_pcie_dev,
+ "PCIe V%d: Event registration is NULL\n",
+ ep_pcie_dev.rev);
+ return -ENODEV;
+ }
+
+ if (!reg->user) {
+ EP_PCIE_ERR(&ep_pcie_dev,
+ "PCIe V%d: User of event registration is NULL\n",
+ ep_pcie_dev.rev);
+ return -ENODEV;
+ }
+
+ ep_pcie_dev.event_reg = reg;
+ EP_PCIE_DBG(&ep_pcie_dev,
+ "PCIe V%d: Event 0x%x is registered\n",
+ ep_pcie_dev.rev, reg->events);
+
+ ep_pcie_dev.client_ready = true;
+
+ return 0;
+}
+
+int ep_pcie_core_deregister_event(void)
+{
+ if (ep_pcie_dev.event_reg) {
+ EP_PCIE_DBG(&ep_pcie_dev,
+ "PCIe V%d: current registered events:0x%x; events are deregistered.\n",
+ ep_pcie_dev.rev, ep_pcie_dev.event_reg->events);
+ ep_pcie_dev.event_reg = NULL;
+ } else {
+ EP_PCIE_ERR(&ep_pcie_dev,
+ "PCIe V%d: Event registration is NULL\n",
+ ep_pcie_dev.rev);
+ }
+
+ return 0;
+}
+
+enum ep_pcie_link_status ep_pcie_core_get_linkstatus(void)
+{
+ struct ep_pcie_dev_t *dev = &ep_pcie_dev;
+ u32 bme;
+
+ if (!dev->power_on || (dev->link_status == EP_PCIE_LINK_DISABLED)) {
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: PCIe endpoint is not powered on.\n",
+ dev->rev);
+ return EP_PCIE_LINK_DISABLED;
+ }
+
+ bme = readl_relaxed(dev->dm_core +
+ PCIE20_COMMAND_STATUS) & BIT(2);
+ if (bme) {
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: PCIe link is up and BME is enabled; current SW link status:%d.\n",
+ dev->rev, dev->link_status);
+ dev->link_status = EP_PCIE_LINK_ENABLED;
+ if (dev->no_notify) {
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: BME is set now, but do not tell client about BME enable.\n",
+ dev->rev);
+ return EP_PCIE_LINK_UP;
+ }
+ } else {
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: PCIe link is up but BME is disabled; current SW link status:%d.\n",
+ dev->rev, dev->link_status);
+ dev->link_status = EP_PCIE_LINK_UP;
+ }
+ return dev->link_status;
+}
+
+int ep_pcie_core_config_outbound_iatu(struct ep_pcie_iatu entries[],
+ u32 num_entries)
+{
+ u32 data_start = 0;
+ u32 data_end = 0;
+ u32 data_tgt_lower = 0;
+ u32 data_tgt_upper = 0;
+ u32 ctrl_start = 0;
+ u32 ctrl_end = 0;
+ u32 ctrl_tgt_lower = 0;
+ u32 ctrl_tgt_upper = 0;
+ u32 upper = 0;
+ static bool once = true; /* log the active-config note only once */
+
+ if (ep_pcie_dev.active_config) {
+ upper = EP_PCIE_OATU_UPPER;
+ if (once) {
+ once = false;
+ EP_PCIE_DBG2(&ep_pcie_dev,
+ "PCIe V%d: No outbound iATU config is needed since active config is enabled.\n",
+ ep_pcie_dev.rev);
+ }
+ }
+
+ if ((num_entries > MAX_IATU_ENTRY_NUM) || !num_entries) {
+ EP_PCIE_ERR(&ep_pcie_dev,
+ "PCIe V%d: Wrong iATU entry number %d.\n",
+ ep_pcie_dev.rev, num_entries);
+ return EP_PCIE_ERROR;
+ }
+
+ data_start = entries[0].start;
+ data_end = entries[0].end;
+ data_tgt_lower = entries[0].tgt_lower;
+ data_tgt_upper = entries[0].tgt_upper;
+
+ if (num_entries > 1) {
+ ctrl_start = entries[1].start;
+ ctrl_end = entries[1].end;
+ ctrl_tgt_lower = entries[1].tgt_lower;
+ ctrl_tgt_upper = entries[1].tgt_upper;
+ }
+
+ EP_PCIE_DBG(&ep_pcie_dev,
+ "PCIe V%d: data_start:0x%x; data_end:0x%x; data_tgt_lower:0x%x; data_tgt_upper:0x%x; ctrl_start:0x%x; ctrl_end:0x%x; ctrl_tgt_lower:0x%x; ctrl_tgt_upper:0x%x.\n",
+ ep_pcie_dev.rev, data_start, data_end, data_tgt_lower,
+ data_tgt_upper, ctrl_start, ctrl_end, ctrl_tgt_lower,
+ ctrl_tgt_upper);
+
+
+ if ((ctrl_end < data_start) || (data_end < ctrl_start)) {
+ EP_PCIE_DBG(&ep_pcie_dev,
+ "PCIe V%d: iATU configuration case No. 1: detached.\n",
+ ep_pcie_dev.rev);
+ ep_pcie_config_outbound_iatu_entry(&ep_pcie_dev,
+ EP_PCIE_OATU_INDEX_DATA,
+ data_start, upper, data_end,
+ data_tgt_lower, data_tgt_upper);
+ ep_pcie_config_outbound_iatu_entry(&ep_pcie_dev,
+ EP_PCIE_OATU_INDEX_CTRL,
+ ctrl_start, upper, ctrl_end,
+ ctrl_tgt_lower, ctrl_tgt_upper);
+ } else if ((data_start <= ctrl_start) && (ctrl_end <= data_end)) {
+ EP_PCIE_DBG(&ep_pcie_dev,
+ "PCIe V%d: iATU configuration case No. 2: included.\n",
+ ep_pcie_dev.rev);
+ ep_pcie_config_outbound_iatu_entry(&ep_pcie_dev,
+ EP_PCIE_OATU_INDEX_DATA,
+ data_start, upper, data_end,
+ data_tgt_lower, data_tgt_upper);
+ } else {
+ EP_PCIE_DBG(&ep_pcie_dev,
+ "PCIe V%d: iATU configuration case No. 3: overlap.\n",
+ ep_pcie_dev.rev);
+ ep_pcie_config_outbound_iatu_entry(&ep_pcie_dev,
+ EP_PCIE_OATU_INDEX_CTRL,
+ ctrl_start, upper, ctrl_end,
+ ctrl_tgt_lower, ctrl_tgt_upper);
+ ep_pcie_config_outbound_iatu_entry(&ep_pcie_dev,
+ EP_PCIE_OATU_INDEX_DATA,
+ data_start, upper, data_end,
+ data_tgt_lower, data_tgt_upper);
+ }
+
+ return 0;
+}
+
+int ep_pcie_core_get_msi_config(struct ep_pcie_msi_config *cfg)
+{
+ u32 cap, lower, upper, data, ctrl_reg;
+ static u32 changes;
+
+ if (ep_pcie_dev.link_status == EP_PCIE_LINK_DISABLED) {
+ EP_PCIE_ERR(&ep_pcie_dev,
+ "PCIe V%d: PCIe link is currently disabled.\n",
+ ep_pcie_dev.rev);
+ return EP_PCIE_ERROR;
+ }
+
+ cap = readl_relaxed(ep_pcie_dev.dm_core + PCIE20_MSI_CAP_ID_NEXT_CTRL);
+ EP_PCIE_DBG(&ep_pcie_dev, "PCIe V%d: MSI CAP:0x%x\n",
+ ep_pcie_dev.rev, cap);
+
+ if (!(cap & BIT(16))) {
+ EP_PCIE_ERR(&ep_pcie_dev,
+ "PCIe V%d: MSI is not enabled yet.\n",
+ ep_pcie_dev.rev);
+ return EP_PCIE_ERROR;
+ }
+
+ lower = readl_relaxed(ep_pcie_dev.dm_core + PCIE20_MSI_LOWER);
+ upper = readl_relaxed(ep_pcie_dev.dm_core + PCIE20_MSI_UPPER);
+ data = readl_relaxed(ep_pcie_dev.dm_core + PCIE20_MSI_DATA);
+ ctrl_reg = readl_relaxed(ep_pcie_dev.dm_core +
+ PCIE20_MSI_CAP_ID_NEXT_CTRL);
+
+ EP_PCIE_DBG(&ep_pcie_dev,
+ "PCIe V%d: MSI info: lower:0x%x; upper:0x%x; data:0x%x.\n",
+ ep_pcie_dev.rev, lower, upper, data);
+
+ if (ctrl_reg & BIT(16)) {
+ struct resource *msi =
+ ep_pcie_dev.res[EP_PCIE_RES_MSI].resource;
+ if (ep_pcie_dev.active_config)
+ ep_pcie_config_outbound_iatu_entry(&ep_pcie_dev,
+ EP_PCIE_OATU_INDEX_MSI,
+ msi->start, EP_PCIE_OATU_UPPER,
+ msi->end, lower, upper);
+ else
+ ep_pcie_config_outbound_iatu_entry(&ep_pcie_dev,
+ EP_PCIE_OATU_INDEX_MSI,
+ msi->start, 0, msi->end,
+ lower, upper);
+
+ if (ep_pcie_dev.active_config) {
+ cfg->lower = lower;
+ cfg->upper = upper;
+ } else {
+ cfg->lower = msi->start + (lower & 0xfff);
+ cfg->upper = 0;
+ }
+ cfg->data = data;
+ cfg->msg_num = (cap >> 20) & 0x7;
+ if ((lower != ep_pcie_dev.msi_cfg.lower)
+ || (upper != ep_pcie_dev.msi_cfg.upper)
+ || (data != ep_pcie_dev.msi_cfg.data)
+ || (cfg->msg_num != ep_pcie_dev.msi_cfg.msg_num)) {
+ changes++;
+ EP_PCIE_DBG(&ep_pcie_dev,
+ "PCIe V%d: MSI config has been changed by host side for %d time(s).\n",
+ ep_pcie_dev.rev, changes);
+ EP_PCIE_DBG(&ep_pcie_dev,
+ "PCIe V%d: old MSI cfg: lower:0x%x; upper:0x%x; data:0x%x; msg_num:0x%x.\n",
+ ep_pcie_dev.rev, ep_pcie_dev.msi_cfg.lower,
+ ep_pcie_dev.msi_cfg.upper,
+ ep_pcie_dev.msi_cfg.data,
+ ep_pcie_dev.msi_cfg.msg_num);
+ ep_pcie_dev.msi_cfg.lower = lower;
+ ep_pcie_dev.msi_cfg.upper = upper;
+ ep_pcie_dev.msi_cfg.data = data;
+ ep_pcie_dev.msi_cfg.msg_num = cfg->msg_num;
+ }
+ return 0;
+ }
+
+ EP_PCIE_ERR(&ep_pcie_dev,
+ "PCIe V%d: Wrong MSI info found when MSI is enabled: lower:0x%x; data:0x%x.\n",
+ ep_pcie_dev.rev, lower, data);
+ return EP_PCIE_ERROR;
+}
+
+int ep_pcie_core_trigger_msi(u32 idx)
+{
+ u32 addr, data, ctrl_reg;
+ int max_poll = MSI_EXIT_L1SS_WAIT_MAX_COUNT;
+
+ if (ep_pcie_dev.link_status == EP_PCIE_LINK_DISABLED) {
+ EP_PCIE_ERR(&ep_pcie_dev,
+ "PCIe V%d: PCIe link is currently disabled.\n",
+ ep_pcie_dev.rev);
+ return EP_PCIE_ERROR;
+ }
+
+ addr = readl_relaxed(ep_pcie_dev.dm_core + PCIE20_MSI_LOWER);
+ data = readl_relaxed(ep_pcie_dev.dm_core + PCIE20_MSI_DATA);
+ ctrl_reg = readl_relaxed(ep_pcie_dev.dm_core +
+ PCIE20_MSI_CAP_ID_NEXT_CTRL);
+
+ if (ctrl_reg & BIT(16)) {
+ ep_pcie_dev.msi_counter++;
+ EP_PCIE_DUMP(&ep_pcie_dev,
+ "PCIe V%d: No. %ld MSI fired for IRQ %d; index from client:%d; active-config is %s enabled.\n",
+ ep_pcie_dev.rev, ep_pcie_dev.msi_counter,
+ data + idx, idx,
+ ep_pcie_dev.active_config ? "" : "not");
+
+ if (ep_pcie_dev.active_config) {
+ u32 status;
+
+ if (ep_pcie_dev.msi_counter % 2) {
+ EP_PCIE_DBG2(&ep_pcie_dev,
+ "PCIe V%d: try to trigger MSI by PARF_MSI_GEN.\n",
+ ep_pcie_dev.rev);
+ ep_pcie_write_reg(ep_pcie_dev.parf,
+ PCIE20_PARF_MSI_GEN, idx);
+ status = readl_relaxed(ep_pcie_dev.parf +
+ PCIE20_PARF_LTR_MSI_EXIT_L1SS);
+ while ((status & BIT(1)) && (max_poll-- > 0)) {
+ udelay(MSI_EXIT_L1SS_WAIT);
+ status = readl_relaxed(ep_pcie_dev.parf +
+ PCIE20_PARF_LTR_MSI_EXIT_L1SS);
+ }
+ /* max_poll drops below zero only when the poll loop times out */
+ if (max_poll < 0)
+ EP_PCIE_DBG2(&ep_pcie_dev,
+ "PCIe V%d: MSI_EXIT_L1SS is not cleared yet.\n",
+ ep_pcie_dev.rev);
+ else
+ EP_PCIE_DBG2(&ep_pcie_dev,
+ "PCIe V%d: MSI_EXIT_L1SS has been cleared.\n",
+ ep_pcie_dev.rev);
+ } else {
+ EP_PCIE_DBG2(&ep_pcie_dev,
+ "PCIe V%d: try to trigger MSI by direct address write as well.\n",
+ ep_pcie_dev.rev);
+ ep_pcie_write_reg(ep_pcie_dev.msi, addr & 0xfff,
+ data + idx);
+ }
+ } else {
+ ep_pcie_write_reg(ep_pcie_dev.msi, addr & 0xfff,
+ data + idx);
+ }
+ return 0;
+ }
+
+ EP_PCIE_ERR(&ep_pcie_dev,
+ "PCIe V%d: MSI is not enabled yet. MSI addr:0x%x; data:0x%x; index from client:%d.\n",
+ ep_pcie_dev.rev, addr, data, idx);
+ return EP_PCIE_ERROR;
+}
+
+int ep_pcie_core_wakeup_host(void)
+{
+ struct ep_pcie_dev_t *dev = &ep_pcie_dev;
+
+ if (dev->perst_deast && !dev->l23_ready) {
+ EP_PCIE_ERR(dev,
+ "PCIe V%d: request to assert WAKE# when PERST is de-asserted and D3hot is not received.\n",
+ dev->rev);
+ return EP_PCIE_ERROR;
+ }
+
+ dev->wake_counter++;
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: No. %ld to assert PCIe WAKE#; perst is %s de-asserted; D3hot is %s received.\n",
+ dev->rev, dev->wake_counter,
+ dev->perst_deast ? "" : "not",
+ dev->l23_ready ? "" : "not");
+ gpio_set_value(dev->gpio[EP_PCIE_GPIO_WAKE].num,
+ 1 - dev->gpio[EP_PCIE_GPIO_WAKE].on);
+ gpio_set_value(dev->gpio[EP_PCIE_GPIO_WAKE].num,
+ dev->gpio[EP_PCIE_GPIO_WAKE].on);
+ return 0;
+}
+
+int ep_pcie_core_config_db_routing(struct ep_pcie_db_config chdb_cfg,
+ struct ep_pcie_db_config erdb_cfg)
+{
+ u32 dbs = (erdb_cfg.end << 24) | (erdb_cfg.base << 16) |
+ (chdb_cfg.end << 8) | chdb_cfg.base;
+
+ ep_pcie_write_reg(ep_pcie_dev.parf, PCIE20_PARF_MHI_IPA_DBS, dbs);
+ ep_pcie_write_reg(ep_pcie_dev.parf,
+ PCIE20_PARF_MHI_IPA_CDB_TARGET_LOWER,
+ chdb_cfg.tgt_addr);
+ ep_pcie_write_reg(ep_pcie_dev.parf,
+ PCIE20_PARF_MHI_IPA_EDB_TARGET_LOWER,
+ erdb_cfg.tgt_addr);
+
+ EP_PCIE_DBG(&ep_pcie_dev,
+ "PCIe V%d: DB routing info: chdb_cfg.base:0x%x; chdb_cfg.end:0x%x; erdb_cfg.base:0x%x; erdb_cfg.end:0x%x; chdb_cfg.tgt_addr:0x%x; erdb_cfg.tgt_addr:0x%x.\n",
+ ep_pcie_dev.rev, chdb_cfg.base, chdb_cfg.end, erdb_cfg.base,
+ erdb_cfg.end, chdb_cfg.tgt_addr, erdb_cfg.tgt_addr);
+
+ return 0;
+}
+
+struct ep_pcie_hw hw_drv = {
+ .register_event = ep_pcie_core_register_event,
+ .deregister_event = ep_pcie_core_deregister_event,
+ .get_linkstatus = ep_pcie_core_get_linkstatus,
+ .config_outbound_iatu = ep_pcie_core_config_outbound_iatu,
+ .get_msi_config = ep_pcie_core_get_msi_config,
+ .trigger_msi = ep_pcie_core_trigger_msi,
+ .wakeup_host = ep_pcie_core_wakeup_host,
+ .config_db_routing = ep_pcie_core_config_db_routing,
+ .enable_endpoint = ep_pcie_core_enable_endpoint,
+ .disable_endpoint = ep_pcie_core_disable_endpoint,
+ .mask_irq_event = ep_pcie_core_mask_irq_event,
+};
+
+static int ep_pcie_probe(struct platform_device *pdev)
+{
+ int ret;
+
+ pr_debug("%s\n", __func__);
+
+ ep_pcie_dev.link_speed = 1;
+ ret = of_property_read_u32((&pdev->dev)->of_node,
+ "qcom,pcie-link-speed",
+ &ep_pcie_dev.link_speed);
+ if (ret)
+ EP_PCIE_DBG(&ep_pcie_dev,
+ "PCIe V%d: pcie-link-speed does not exist.\n",
+ ep_pcie_dev.rev);
+ else
+ EP_PCIE_DBG(&ep_pcie_dev, "PCIe V%d: pcie-link-speed:%d.\n",
+ ep_pcie_dev.rev, ep_pcie_dev.link_speed);
+
+ ret = of_property_read_u32((&pdev->dev)->of_node,
+ "qcom,dbi-base-reg",
+ &ep_pcie_dev.dbi_base_reg);
+ if (ret)
+ EP_PCIE_DBG(&ep_pcie_dev,
+ "PCIe V%d: dbi-base-reg does not exist.\n",
+ ep_pcie_dev.rev);
+ else
+ EP_PCIE_DBG(&ep_pcie_dev, "PCIe V%d: dbi-base-reg:0x%x.\n",
+ ep_pcie_dev.rev, ep_pcie_dev.dbi_base_reg);
+
+ ret = of_property_read_u32((&pdev->dev)->of_node,
+ "qcom,slv-space-reg",
+ &ep_pcie_dev.slv_space_reg);
+ if (ret)
+ EP_PCIE_DBG(&ep_pcie_dev,
+ "PCIe V%d: slv-space-reg does not exist.\n",
+ ep_pcie_dev.rev);
+ else
+ EP_PCIE_DBG(&ep_pcie_dev, "PCIe V%d: slv-space-reg:0x%x.\n",
+ ep_pcie_dev.rev, ep_pcie_dev.slv_space_reg);
+
+ ret = of_property_read_u32((&pdev->dev)->of_node,
+ "qcom,phy-status-reg",
+ &ep_pcie_dev.phy_status_reg);
+ if (ret)
+ EP_PCIE_DBG(&ep_pcie_dev,
+ "PCIe V%d: phy-status-reg does not exist.\n",
+ ep_pcie_dev.rev);
+ else
+ EP_PCIE_DBG(&ep_pcie_dev, "PCIe V%d: phy-status-reg:0x%x.\n",
+ ep_pcie_dev.rev, ep_pcie_dev.phy_status_reg);
+
+ ep_pcie_dev.phy_rev = 1;
+ ret = of_property_read_u32((&pdev->dev)->of_node,
+ "qcom,pcie-phy-ver",
+ &ep_pcie_dev.phy_rev);
+ if (ret)
+ EP_PCIE_DBG(&ep_pcie_dev,
+ "PCIe V%d: pcie-phy-ver does not exist.\n",
+ ep_pcie_dev.rev);
+ else
+ EP_PCIE_DBG(&ep_pcie_dev, "PCIe V%d: pcie-phy-ver:%d.\n",
+ ep_pcie_dev.rev, ep_pcie_dev.phy_rev);
+
+ ep_pcie_dev.active_config = of_property_read_bool((&pdev->dev)->of_node,
+ "qcom,pcie-active-config");
+ EP_PCIE_DBG(&ep_pcie_dev,
+ "PCIe V%d: active config is %s enabled.\n",
+ ep_pcie_dev.rev, ep_pcie_dev.active_config ? "" : "not");
+
+ ep_pcie_dev.aggregated_irq =
+ of_property_read_bool((&pdev->dev)->of_node,
+ "qcom,pcie-aggregated-irq");
+ EP_PCIE_DBG(&ep_pcie_dev,
+ "PCIe V%d: aggregated IRQ is %s enabled.\n",
+ ep_pcie_dev.rev, ep_pcie_dev.aggregated_irq ? "" : "not");
+
+ ep_pcie_dev.mhi_a7_irq =
+ of_property_read_bool((&pdev->dev)->of_node,
+ "qcom,pcie-mhi-a7-irq");
+ EP_PCIE_DBG(&ep_pcie_dev,
+ "PCIe V%d: Mhi a7 IRQ is %s enabled.\n",
+ ep_pcie_dev.rev, ep_pcie_dev.mhi_a7_irq ? "" : "not");
+
+ ep_pcie_dev.perst_enum = of_property_read_bool((&pdev->dev)->of_node,
+ "qcom,pcie-perst-enum");
+ EP_PCIE_DBG(&ep_pcie_dev,
+ "PCIe V%d: enum by PERST is %s enabled.\n",
+ ep_pcie_dev.rev, ep_pcie_dev.perst_enum ? "" : "not");
+
+ ep_pcie_dev.rev = 1711211;
+ ep_pcie_dev.pdev = pdev;
+ memcpy(ep_pcie_dev.vreg, ep_pcie_vreg_info,
+ sizeof(ep_pcie_vreg_info));
+ memcpy(ep_pcie_dev.gpio, ep_pcie_gpio_info,
+ sizeof(ep_pcie_gpio_info));
+ memcpy(ep_pcie_dev.clk, ep_pcie_clk_info,
+ sizeof(ep_pcie_clk_info));
+ memcpy(ep_pcie_dev.pipeclk, ep_pcie_pipe_clk_info,
+ sizeof(ep_pcie_pipe_clk_info));
+ memcpy(ep_pcie_dev.reset, ep_pcie_reset_info,
+ sizeof(ep_pcie_reset_info));
+ memcpy(ep_pcie_dev.res, ep_pcie_res_info,
+ sizeof(ep_pcie_res_info));
+ memcpy(ep_pcie_dev.irq, ep_pcie_irq_info,
+ sizeof(ep_pcie_irq_info));
+
+ ret = ep_pcie_get_resources(&ep_pcie_dev,
+ ep_pcie_dev.pdev);
+ if (ret) {
+ EP_PCIE_ERR(&ep_pcie_dev,
+ "PCIe V%d: failed to get resources.\n",
+ ep_pcie_dev.rev);
+ goto res_failure;
+ }
+
+ ret = ep_pcie_gpio_init(&ep_pcie_dev);
+ if (ret) {
+ EP_PCIE_ERR(&ep_pcie_dev,
+ "PCIe V%d: failed to init GPIO.\n",
+ ep_pcie_dev.rev);
+ ep_pcie_release_resources(&ep_pcie_dev);
+ goto gpio_failure;
+ }
+
+ ret = ep_pcie_irq_init(&ep_pcie_dev);
+ if (ret) {
+ EP_PCIE_ERR(&ep_pcie_dev,
+ "PCIe V%d: failed to init IRQ.\n",
+ ep_pcie_dev.rev);
+ ep_pcie_release_resources(&ep_pcie_dev);
+ ep_pcie_gpio_deinit(&ep_pcie_dev);
+ goto irq_failure;
+ }
+
+ if (ep_pcie_dev.perst_enum &&
+ !gpio_get_value(ep_pcie_dev.gpio[EP_PCIE_GPIO_PERST].num)) {
+ EP_PCIE_DBG2(&ep_pcie_dev,
+ "PCIe V%d: %s probe is done; link will be trained when PERST is deasserted.\n",
+ ep_pcie_dev.rev, dev_name(&(pdev->dev)));
+ return 0;
+ }
+
+ EP_PCIE_DBG(&ep_pcie_dev,
+ "PCIe V%d: %s got resources successfully; start turning on the link.\n",
+ ep_pcie_dev.rev, dev_name(&(pdev->dev)));
+
+ ret = ep_pcie_enumeration(&ep_pcie_dev);
+
+ if (!ret || ep_pcie_debug_keep_resource)
+ return 0;
+
+ ep_pcie_irq_deinit(&ep_pcie_dev);
+irq_failure:
+ ep_pcie_gpio_deinit(&ep_pcie_dev);
+gpio_failure:
+ ep_pcie_release_resources(&ep_pcie_dev);
+res_failure:
+ EP_PCIE_ERR(&ep_pcie_dev, "PCIe V%d: Driver probe failed:%d\n",
+ ep_pcie_dev.rev, ret);
+
+ return ret;
+}
+
+static int __exit ep_pcie_remove(struct platform_device *pdev)
+{
+ pr_debug("%s\n", __func__);
+
+ ep_pcie_irq_deinit(&ep_pcie_dev);
+ ep_pcie_vreg_deinit(&ep_pcie_dev);
+ ep_pcie_pipe_clk_deinit(&ep_pcie_dev);
+ ep_pcie_clk_deinit(&ep_pcie_dev);
+ ep_pcie_gpio_deinit(&ep_pcie_dev);
+ ep_pcie_release_resources(&ep_pcie_dev);
+ ep_pcie_deregister_drv(&hw_drv);
+
+ return 0;
+}
+
+static const struct of_device_id ep_pcie_match[] = {
+ { .compatible = "qcom,pcie-ep",
+ },
+ {}
+};
+
+static struct platform_driver ep_pcie_driver = {
+ .probe = ep_pcie_probe,
+ .remove = ep_pcie_remove,
+ .driver = {
+ .name = "pcie-ep",
+ .owner = THIS_MODULE,
+ .of_match_table = ep_pcie_match,
+ },
+};
+
+static int __init ep_pcie_init(void)
+{
+ int ret;
+ char logname[MAX_NAME_LEN];
+
+ pr_debug("%s\n", __func__);
+
+ snprintf(logname, MAX_NAME_LEN, "ep-pcie-long");
+ ep_pcie_dev.ipc_log_sel =
+ ipc_log_context_create(EP_PCIE_LOG_PAGES, logname, 0);
+ if (ep_pcie_dev.ipc_log_sel == NULL)
+ pr_err("%s: unable to create IPC selected log for %s\n",
+ __func__, logname);
+ else
+ EP_PCIE_DBG(&ep_pcie_dev,
+ "PCIe V%d: IPC selected logging is enable for %s\n",
+ ep_pcie_dev.rev, logname);
+
+ snprintf(logname, MAX_NAME_LEN, "ep-pcie-short");
+ ep_pcie_dev.ipc_log_ful =
+ ipc_log_context_create(EP_PCIE_LOG_PAGES * 2, logname, 0);
+ if (ep_pcie_dev.ipc_log_ful == NULL)
+ pr_err("%s: unable to create IPC detailed log for %s\n",
+ __func__, logname);
+ else
+ EP_PCIE_DBG(&ep_pcie_dev,
+ "PCIe V%d: IPC detailed logging is enable for %s\n",
+ ep_pcie_dev.rev, logname);
+
+ snprintf(logname, MAX_NAME_LEN, "ep-pcie-dump");
+ ep_pcie_dev.ipc_log_dump =
+ ipc_log_context_create(EP_PCIE_LOG_PAGES, logname, 0);
+ if (ep_pcie_dev.ipc_log_dump == NULL)
+ pr_err("%s: unable to create IPC dump log for %s\n",
+ __func__, logname);
+ else
+ EP_PCIE_DBG(&ep_pcie_dev,
+ "PCIe V%d: IPC dump logging is enable for %s\n",
+ ep_pcie_dev.rev, logname);
+
+ mutex_init(&ep_pcie_dev.setup_mtx);
+ mutex_init(&ep_pcie_dev.ext_mtx);
+ spin_lock_init(&ep_pcie_dev.ext_lock);
+ spin_lock_init(&ep_pcie_dev.isr_lock);
+
+ ep_pcie_debugfs_init(&ep_pcie_dev);
+
+ ret = platform_driver_register(&ep_pcie_driver);
+
+ if (ret)
+ EP_PCIE_ERR(&ep_pcie_dev,
+ "PCIe V%d: failed to register platform driver: %d\n",
+ ep_pcie_dev.rev, ret);
+ else
+ EP_PCIE_DBG(&ep_pcie_dev,
+ "PCIe V%d: platform driver is registered.\n",
+ ep_pcie_dev.rev);
+
+ return ret;
+}
+
+static void __exit ep_pcie_exit(void)
+{
+ pr_debug("%s\n", __func__);
+
+ /* tear down in the reverse order of ep_pcie_init() */
+ platform_driver_unregister(&ep_pcie_driver);
+
+ ep_pcie_debugfs_exit();
+
+ ipc_log_context_destroy(ep_pcie_dev.ipc_log_sel);
+ ipc_log_context_destroy(ep_pcie_dev.ipc_log_ful);
+ ipc_log_context_destroy(ep_pcie_dev.ipc_log_dump);
+}
+
+module_init(ep_pcie_init);
+module_exit(ep_pcie_exit);
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("MSM PCIe Endpoint Driver");
diff --git a/drivers/platform/msm/ep_pcie/ep_pcie_dbg.c b/drivers/platform/msm/ep_pcie/ep_pcie_dbg.c
new file mode 100644
index 0000000..1f09a88
--- /dev/null
+++ b/drivers/platform/msm/ep_pcie/ep_pcie_dbg.c
@@ -0,0 +1,459 @@
+/* Copyright (c) 2015-2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+/*
+ * Debugging enhancement in MSM PCIe endpoint driver.
+ */
+
+#include <linux/bitops.h>
+#include <linux/kernel.h>
+#include <linux/gpio.h>
+#include <linux/delay.h>
+#include <linux/debugfs.h>
+#include "ep_pcie_com.h"
+#include "ep_pcie_phy.h"
+
+static struct dentry *dent_ep_pcie;
+static struct dentry *dfile_case;
+static struct ep_pcie_dev_t *dev;
+
+static void ep_ep_pcie_phy_dump_pcs_debug_bus(struct ep_pcie_dev_t *dev,
+ u32 cntrl4, u32 cntrl5,
+ u32 cntrl6, u32 cntrl7)
+{
+ ep_pcie_write_reg(dev->phy, PCIE_PHY_TEST_CONTROL4, cntrl4);
+ ep_pcie_write_reg(dev->phy, PCIE_PHY_TEST_CONTROL5, cntrl5);
+ ep_pcie_write_reg(dev->phy, PCIE_PHY_TEST_CONTROL6, cntrl6);
+ ep_pcie_write_reg(dev->phy, PCIE_PHY_TEST_CONTROL7, cntrl7);
+
+ if (!cntrl4 && !cntrl5 && !cntrl6 && !cntrl7) {
+ EP_PCIE_DUMP(dev,
+ "PCIe V%d: zero out test control registers.\n\n",
+ dev->rev);
+ return;
+ }
+
+ EP_PCIE_DUMP(dev,
+ "PCIe V%d: PCIE_PHY_TEST_CONTROL4: 0x%x\n", dev->rev,
+ readl_relaxed(dev->phy + PCIE_PHY_TEST_CONTROL4));
+ EP_PCIE_DUMP(dev,
+ "PCIe V%d: PCIE_PHY_TEST_CONTROL5: 0x%x\n", dev->rev,
+ readl_relaxed(dev->phy + PCIE_PHY_TEST_CONTROL5));
+ EP_PCIE_DUMP(dev,
+ "PCIe V%d: PCIE_PHY_TEST_CONTROL6: 0x%x\n", dev->rev,
+ readl_relaxed(dev->phy + PCIE_PHY_TEST_CONTROL6));
+ EP_PCIE_DUMP(dev,
+ "PCIe V%d: PCIE_PHY_TEST_CONTROL7: 0x%x\n", dev->rev,
+ readl_relaxed(dev->phy + PCIE_PHY_TEST_CONTROL7));
+
+ EP_PCIE_DUMP(dev,
+ "PCIe V%d: PCIE_PHY_DEBUG_BUS_0_STATUS: 0x%x\n", dev->rev,
+ readl_relaxed(dev->phy + PCIE_PHY_DEBUG_BUS_0_STATUS));
+ EP_PCIE_DUMP(dev,
+ "PCIe V%d: PCIE_PHY_DEBUG_BUS_1_STATUS: 0x%x\n", dev->rev,
+ readl_relaxed(dev->phy + PCIE_PHY_DEBUG_BUS_1_STATUS));
+ EP_PCIE_DUMP(dev,
+ "PCIe V%d: PCIE_PHY_DEBUG_BUS_2_STATUS: 0x%x\n", dev->rev,
+ readl_relaxed(dev->phy + PCIE_PHY_DEBUG_BUS_2_STATUS));
+ EP_PCIE_DUMP(dev,
+ "PCIe V%d: PCIE_PHY_DEBUG_BUS_3_STATUS: 0x%x\n\n", dev->rev,
+ readl_relaxed(dev->phy + PCIE_PHY_DEBUG_BUS_3_STATUS));
+}
+
+static void ep_ep_pcie_phy_dump_pcs_misc_debug_bus(struct ep_pcie_dev_t *dev,
+ u32 b0, u32 b1, u32 b2, u32 b3)
+{
+ ep_pcie_write_reg(dev->phy, PCIE_PHY_MISC_DEBUG_BUS_BYTE0_INDEX, b0);
+ ep_pcie_write_reg(dev->phy, PCIE_PHY_MISC_DEBUG_BUS_BYTE1_INDEX, b1);
+ ep_pcie_write_reg(dev->phy, PCIE_PHY_MISC_DEBUG_BUS_BYTE2_INDEX, b2);
+ ep_pcie_write_reg(dev->phy, PCIE_PHY_MISC_DEBUG_BUS_BYTE3_INDEX, b3);
+
+ if (!b0 && !b1 && !b2 && !b3) {
+ EP_PCIE_DUMP(dev,
+ "PCIe V%d: zero out misc debug bus byte index registers.\n\n",
+ dev->rev);
+ return;
+ }
+
+ EP_PCIE_DUMP(dev,
+ "PCIe V%d: PCIE_PHY_MISC_DEBUG_BUS_BYTE0_INDEX: 0x%x\n",
+ dev->rev,
+ readl_relaxed(dev->phy + PCIE_PHY_MISC_DEBUG_BUS_BYTE0_INDEX));
+ EP_PCIE_DUMP(dev,
+ "PCIe V%d: PCIE_PHY_MISC_DEBUG_BUS_BYTE1_INDEX: 0x%x\n",
+ dev->rev,
+ readl_relaxed(dev->phy + PCIE_PHY_MISC_DEBUG_BUS_BYTE1_INDEX));
+ EP_PCIE_DUMP(dev,
+ "PCIe V%d: PCIE_PHY_MISC_DEBUG_BUS_BYTE2_INDEX: 0x%x\n",
+ dev->rev,
+ readl_relaxed(dev->phy + PCIE_PHY_MISC_DEBUG_BUS_BYTE2_INDEX));
+ EP_PCIE_DUMP(dev,
+ "PCIe V%d: PCIE_PHY_MISC_DEBUG_BUS_BYTE3_INDEX: 0x%x\n",
+ dev->rev,
+ readl_relaxed(dev->phy + PCIE_PHY_MISC_DEBUG_BUS_BYTE3_INDEX));
+
+ EP_PCIE_DUMP(dev,
+ "PCIe V%d: PCIE_PHY_MISC_DEBUG_BUS_0_STATUS: 0x%x\n", dev->rev,
+ readl_relaxed(dev->phy + PCIE_PHY_MISC_DEBUG_BUS_0_STATUS));
+ EP_PCIE_DUMP(dev,
+ "PCIe V%d: PCIE_PHY_MISC_DEBUG_BUS_1_STATUS: 0x%x\n", dev->rev,
+ readl_relaxed(dev->phy + PCIE_PHY_MISC_DEBUG_BUS_1_STATUS));
+ EP_PCIE_DUMP(dev,
+ "PCIe V%d: PCIE_PHY_MISC_DEBUG_BUS_2_STATUS: 0x%x\n", dev->rev,
+ readl_relaxed(dev->phy + PCIE_PHY_MISC_DEBUG_BUS_2_STATUS));
+ EP_PCIE_DUMP(dev,
+ "PCIe V%d: PCIE_PHY_MISC_DEBUG_BUS_3_STATUS: 0x%x\n\n",
+ dev->rev,
+ readl_relaxed(dev->phy + PCIE_PHY_MISC_DEBUG_BUS_3_STATUS));
+}
+
+static void ep_pcie_phy_dump(struct ep_pcie_dev_t *dev)
+{
+ int i;
+ u32 write_val;
+
+ EP_PCIE_DUMP(dev, "PCIe V%d: Beginning of PHY debug dump.\n\n",
+ dev->rev);
+
+ EP_PCIE_DUMP(dev, "PCIe V%d: PCS Debug Signals.\n\n", dev->rev);
+
+ ep_ep_pcie_phy_dump_pcs_debug_bus(dev, 0x01, 0x02, 0x03, 0x0A);
+ ep_ep_pcie_phy_dump_pcs_debug_bus(dev, 0x0E, 0x0F, 0x12, 0x13);
+ ep_ep_pcie_phy_dump_pcs_debug_bus(dev, 0x18, 0x19, 0x1A, 0x1B);
+ ep_ep_pcie_phy_dump_pcs_debug_bus(dev, 0x1C, 0x1D, 0x1E, 0x1F);
+ ep_ep_pcie_phy_dump_pcs_debug_bus(dev, 0x20, 0x21, 0x22, 0x23);
+ ep_ep_pcie_phy_dump_pcs_debug_bus(dev, 0, 0, 0, 0);
+
+ EP_PCIE_DUMP(dev, "PCIe V%d: PCS Misc Debug Signals.\n\n", dev->rev);
+
+ ep_ep_pcie_phy_dump_pcs_misc_debug_bus(dev, 0x1, 0x2, 0x3, 0x4);
+ ep_ep_pcie_phy_dump_pcs_misc_debug_bus(dev, 0x5, 0x6, 0x7, 0x8);
+ ep_ep_pcie_phy_dump_pcs_misc_debug_bus(dev, 0, 0, 0, 0);
+
+ EP_PCIE_DUMP(dev, "PCIe V%d: QSERDES COM Debug Signals.\n\n", dev->rev);
+
+ for (i = 0; i < 2; i++) {
+ write_val = 0x2 + i;
+
+ ep_pcie_write_reg(dev->phy, QSERDES_COM_DEBUG_BUS_SEL,
+ write_val);
+
+ EP_PCIE_DUMP(dev,
+ "PCIe V%d: to QSERDES_COM_DEBUG_BUS_SEL: 0x%x\n",
+ dev->rev,
+ readl_relaxed(dev->phy + QSERDES_COM_DEBUG_BUS_SEL));
+ EP_PCIE_DUMP(dev,
+ "PCIe V%d: QSERDES_COM_DEBUG_BUS0: 0x%x\n",
+ dev->rev,
+ readl_relaxed(dev->phy + QSERDES_COM_DEBUG_BUS0));
+ EP_PCIE_DUMP(dev,
+ "PCIe V%d: QSERDES_COM_DEBUG_BUS1: 0x%x\n",
+ dev->rev,
+ readl_relaxed(dev->phy + QSERDES_COM_DEBUG_BUS1));
+ EP_PCIE_DUMP(dev,
+ "PCIe V%d: QSERDES_COM_DEBUG_BUS2: 0x%x\n",
+ dev->rev,
+ readl_relaxed(dev->phy + QSERDES_COM_DEBUG_BUS2));
+ EP_PCIE_DUMP(dev,
+ "PCIe V%d: QSERDES_COM_DEBUG_BUS3: 0x%x\n\n",
+ dev->rev,
+ readl_relaxed(dev->phy + QSERDES_COM_DEBUG_BUS3));
+ }
+
+ ep_pcie_write_reg(dev->phy, QSERDES_COM_DEBUG_BUS_SEL, 0);
+
+ EP_PCIE_DUMP(dev, "PCIe V%d: QSERDES LANE Debug Signals.\n\n",
+ dev->rev);
+
+ for (i = 0; i < 3; i++) {
+ write_val = 0x1 + i;
+ ep_pcie_write_reg(dev->phy,
+ QSERDES_TX_DEBUG_BUS_SEL, write_val);
+ EP_PCIE_DUMP(dev,
+ "PCIe V%d: QSERDES_TX_DEBUG_BUS_SEL: 0x%x\n",
+ dev->rev,
+ readl_relaxed(dev->phy + QSERDES_TX_DEBUG_BUS_SEL));
+
+ ep_ep_pcie_phy_dump_pcs_debug_bus(dev, 0x30, 0x31, 0x32, 0x33);
+ }
+
+ ep_ep_pcie_phy_dump_pcs_debug_bus(dev, 0, 0, 0, 0);
+
+ EP_PCIE_DUMP(dev, "PCIe V%d: End of PHY debug dump.\n\n", dev->rev);
+}
+
+void ep_pcie_reg_dump(struct ep_pcie_dev_t *dev, u32 sel, bool linkdown)
+{
+ int r, i;
+ u32 original;
+ u32 size;
+
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: Dump PCIe reg for 0x%x %s linkdown.\n",
+ dev->rev, sel, linkdown ? "with" : "without");
+
+ if (!dev->power_on) {
+ EP_PCIE_ERR(dev,
+ "PCIe V%d: the power is already down; can't dump registers.\n",
+ dev->rev);
+ return;
+ }
+
+ if (linkdown) {
+ EP_PCIE_DUMP(dev,
+ "PCIe V%d: dump PARF registers for linkdown case.\n",
+ dev->rev);
+
+ original = readl_relaxed(dev->parf + PCIE20_PARF_SYS_CTRL);
+ for (i = 1; i <= 0x1A; i++) {
+ ep_pcie_write_mask(dev->parf + PCIE20_PARF_SYS_CTRL,
+ 0xFF0000, i << 16);
+ EP_PCIE_DUMP(dev,
+ "PCIe V%d: PARF_SYS_CTRL:0x%x PARF_TEST_BUS:0x%x\n",
+ dev->rev,
+ readl_relaxed(dev->parf + PCIE20_PARF_SYS_CTRL),
+ readl_relaxed(dev->parf +
+ PCIE20_PARF_TEST_BUS));
+ }
+ ep_pcie_write_reg(dev->parf, PCIE20_PARF_SYS_CTRL, original);
+ }
+
+ for (r = 0; r < EP_PCIE_MAX_RES; r++) {
+ if (!(sel & BIT(r)))
+ continue;
+
+ if ((r == EP_PCIE_RES_PHY) && (dev->phy_rev > 3))
+ ep_pcie_phy_dump(dev);
+
+ size = resource_size(dev->res[r].resource);
+ EP_PCIE_DUMP(dev,
+ "\nPCIe V%d: dump registers of %s.\n\n",
+ dev->rev, dev->res[r].name);
+
+ for (i = 0; i < size; i += 32) {
+ EP_PCIE_DUMP(dev,
+ "0x%04x %08x %08x %08x %08x %08x %08x %08x %08x\n",
+ i, readl_relaxed(dev->res[r].base + i),
+ readl_relaxed(dev->res[r].base + (i + 4)),
+ readl_relaxed(dev->res[r].base + (i + 8)),
+ readl_relaxed(dev->res[r].base + (i + 12)),
+ readl_relaxed(dev->res[r].base + (i + 16)),
+ readl_relaxed(dev->res[r].base + (i + 20)),
+ readl_relaxed(dev->res[r].base + (i + 24)),
+ readl_relaxed(dev->res[r].base + (i + 28)));
+ }
+ }
+}
+
+static void ep_pcie_show_status(struct ep_pcie_dev_t *dev)
+{
+ EP_PCIE_DBG_FS("PCIe: is %senumerated\n",
+ dev->enumerated ? "" : "not ");
+ EP_PCIE_DBG_FS("PCIe: link is %s\n",
+ (dev->link_status == EP_PCIE_LINK_ENABLED)
+ ? "enabled" : "disabled");
+ EP_PCIE_DBG_FS("the link is %ssuspending\n",
+ dev->suspending ? "" : "not ");
+ EP_PCIE_DBG_FS("the power is %son\n",
+ dev->power_on ? "" : "not ");
+ EP_PCIE_DBG_FS("bus_client: %d\n",
+ dev->bus_client);
+ EP_PCIE_DBG_FS("linkdown_counter: %lu\n",
+ dev->linkdown_counter);
+ EP_PCIE_DBG_FS("linkup_counter: %lu\n",
+ dev->linkup_counter);
+ EP_PCIE_DBG_FS("wake_counter: %lu\n",
+ dev->wake_counter);
+ EP_PCIE_DBG_FS("d0_counter: %lu\n",
+ dev->d0_counter);
+ EP_PCIE_DBG_FS("d3_counter: %lu\n",
+ dev->d3_counter);
+ EP_PCIE_DBG_FS("perst_ast_counter: %lu\n",
+ dev->perst_ast_counter);
+ EP_PCIE_DBG_FS("perst_deast_counter: %lu\n",
+ dev->perst_deast_counter);
+}
+
+static ssize_t ep_pcie_cmd_debug(struct file *file,
+ const char __user *buf,
+ size_t count, loff_t *ppos)
+{
+ unsigned long ret;
+ char str[MAX_MSG_LEN];
+ unsigned int testcase = 0;
+ struct ep_pcie_msi_config msi_cfg;
+ int i;
+ struct ep_pcie_hw *phandle = NULL;
+ struct ep_pcie_iatu entries[2] = {
+ {0x80000000, 0xbe7fffff, 0, 0},
+ {0xb1440000, 0xb144ae1e, 0x31440000, 0}
+ };
+ struct ep_pcie_db_config chdb_cfg = {0x64, 0x6b, 0xfd4fa000};
+ struct ep_pcie_db_config erdb_cfg = {0x64, 0x6b, 0xfd4fa080};
+
+ phandle = ep_pcie_get_phandle(hw_drv.device_id);
+
+ memset(str, 0, sizeof(str));
+ ret = copy_from_user(str, buf, sizeof(str));
+ if (ret)
+ return -EFAULT;
+
+ for (i = 0; i < sizeof(str) && (str[i] >= '0') && (str[i] <= '9'); ++i)
+ testcase = (testcase * 10) + (str[i] - '0');
+
+ EP_PCIE_DBG_FS("PCIe: TEST: %d\n", testcase);
+
+ switch (testcase) {
+ case 0: /* output status */
+ ep_pcie_show_status(dev);
+ break;
+ case 1: /* output PHY and PARF registers */
+ ep_pcie_reg_dump(dev, BIT(EP_PCIE_RES_PHY) |
+ BIT(EP_PCIE_RES_PARF), true);
+ break;
+ case 2: /* output core registers */
+ ep_pcie_reg_dump(dev, BIT(EP_PCIE_RES_DM_CORE), false);
+ break;
+ case 3: /* output MMIO registers */
+ ep_pcie_reg_dump(dev, BIT(EP_PCIE_RES_MMIO), false);
+ break;
+ case 4: /* output ELBI registers */
+ ep_pcie_reg_dump(dev, BIT(EP_PCIE_RES_ELBI), false);
+ break;
+ case 5: /* output MSI registers */
+ ep_pcie_reg_dump(dev, BIT(EP_PCIE_RES_MSI), false);
+ break;
+ case 6: /* turn on link */
+ ep_pcie_enable_endpoint(phandle, EP_PCIE_OPT_ALL);
+ break;
+ case 7: /* enumeration */
+ ep_pcie_enable_endpoint(phandle, EP_PCIE_OPT_ENUM);
+ break;
+ case 8: /* turn off link */
+ ep_pcie_disable_endpoint(phandle);
+ break;
+ case 9: /* check MSI */
+ ep_pcie_get_msi_config(phandle, &msi_cfg);
+ break;
+ case 10: /* trigger MSI */
+ ep_pcie_trigger_msi(phandle, 0);
+ break;
+ case 11: /* indicate the status of PCIe link */
+ EP_PCIE_DBG_FS("\nPCIe: link status is %d.\n\n",
+ ep_pcie_get_linkstatus(phandle));
+ break;
+ case 12: /* configure outbound iATU */
+ ep_pcie_config_outbound_iatu(phandle, entries, 2);
+ break;
+ case 13: /* wake up the host */
+ ep_pcie_wakeup_host(phandle);
+ break;
+ case 14: /* Configure routing of doorbells */
+ ep_pcie_config_db_routing(phandle, chdb_cfg, erdb_cfg);
+ break;
+ case 21: /* write D3 */
+ EP_PCIE_DBG_FS("\nPCIe Testcase %d: write D3 to EP\n\n",
+ testcase);
+ EP_PCIE_DBG_FS("\nPCIe: 0x44 of EP is 0x%x before change\n\n",
+ readl_relaxed(dev->dm_core + 0x44));
+ ep_pcie_write_mask(dev->dm_core + 0x44, 0, 0x3);
+ EP_PCIE_DBG_FS("\nPCIe: 0x44 of EP is 0x%x now\n\n",
+ readl_relaxed(dev->dm_core + 0x44));
+ break;
+ case 22: /* write D0 */
+ EP_PCIE_DBG_FS("\nPCIe Testcase %d: write D0 to EP\n\n",
+ testcase);
+ EP_PCIE_DBG_FS("\nPCIe: 0x44 of EP is 0x%x before change\n\n",
+ readl_relaxed(dev->dm_core + 0x44));
+ ep_pcie_write_mask(dev->dm_core + 0x44, 0x3, 0);
+ EP_PCIE_DBG_FS("\nPCIe: 0x44 of EP is 0x%x now\n\n",
+ readl_relaxed(dev->dm_core + 0x44));
+ break;
+ case 23: /* assert wake */
+ EP_PCIE_DBG_FS("\nPCIe Testcase %d: assert wake\n\n",
+ testcase);
+ gpio_set_value(dev->gpio[EP_PCIE_GPIO_WAKE].num,
+ dev->gpio[EP_PCIE_GPIO_WAKE].on);
+ break;
+ case 24: /* deassert wake */
+ EP_PCIE_DBG_FS("\nPCIe Testcase %d: deassert wake\n\n",
+ testcase);
+ gpio_set_value(dev->gpio[EP_PCIE_GPIO_WAKE].num,
+ 1 - dev->gpio[EP_PCIE_GPIO_WAKE].on);
+ break;
+ case 25: /* output PERST# status */
+ EP_PCIE_DBG_FS("\nPCIe: PERST# is %d.\n\n",
+ gpio_get_value(dev->gpio[EP_PCIE_GPIO_PERST].num));
+ break;
+ case 26: /* output WAKE# status */
+ EP_PCIE_DBG_FS("\nPCIe: WAKE# is %d.\n\n",
+ gpio_get_value(dev->gpio[EP_PCIE_GPIO_WAKE].num));
+ break;
+ case 31: /* output core registers when D3 hot is set by host */
+ dev->dump_conf = true;
+ break;
+ case 32: /* do not output core registers when D3 hot is set by host */
+ dev->dump_conf = false;
+ break;
+ default:
+ EP_PCIE_DBG_FS("PCIe: Invalid testcase: %d.\n", testcase);
+ break;
+ }
+
+ return count;
+}
+
+const struct file_operations ep_pcie_cmd_debug_ops = {
+ .write = ep_pcie_cmd_debug,
+};
+
+void ep_pcie_debugfs_init(struct ep_pcie_dev_t *ep_dev)
+{
+ dev = ep_dev;
+ dent_ep_pcie = debugfs_create_dir("pcie-ep", NULL);
+ if (IS_ERR(dent_ep_pcie)) {
+ EP_PCIE_ERR(dev,
+ "PCIe V%d: failed to create the debugfs directory.\n",
+ dev->rev);
+ return;
+ }
+
+ dfile_case = debugfs_create_file("case", 0664,
+ dent_ep_pcie, NULL,
+ &ep_pcie_cmd_debug_ops);
+ if (!dfile_case || IS_ERR(dfile_case)) {
+ EP_PCIE_ERR(dev,
+ "PCIe V%d: failed to create the debugfs file for case.\n",
+ dev->rev);
+ goto case_error;
+ }
+
+ EP_PCIE_DBG2(dev,
+ "PCIe V%d: debugfs is enabled.\n",
+ dev->rev);
+
+ return;
+
+case_error:
+ debugfs_remove(dent_ep_pcie);
+}
+
+void ep_pcie_debugfs_exit(void)
+{
+ debugfs_remove(dfile_case);
+ debugfs_remove(dent_ep_pcie);
+}
diff --git a/drivers/platform/msm/ep_pcie/ep_pcie_phy.c b/drivers/platform/msm/ep_pcie/ep_pcie_phy.c
new file mode 100644
index 0000000..776ef08
--- /dev/null
+++ b/drivers/platform/msm/ep_pcie/ep_pcie_phy.c
@@ -0,0 +1,160 @@
+/* Copyright (c) 2015-2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+/*
+ * MSM PCIe PHY endpoint mode
+ */
+
+#include "ep_pcie_com.h"
+#include "ep_pcie_phy.h"
+
+void ep_pcie_phy_init(struct ep_pcie_dev_t *dev)
+{
+ switch (dev->phy_rev) {
+ case 3:
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: PHY V%d: Initializing 20nm QMP phy - 100MHz\n",
+ dev->rev, dev->phy_rev);
+ break;
+ case 4:
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: PHY V%d: Initializing 14nm QMP phy - 100MHz\n",
+ dev->rev, dev->phy_rev);
+ break;
+ case 5:
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: PHY V%d: Initializing 10nm QMP phy - 100MHz\n",
+ dev->rev, dev->phy_rev);
+ break;
+ default:
+ EP_PCIE_ERR(dev,
+ "PCIe V%d: unexpected PHY version %d\n",
+ dev->rev, dev->phy_rev);
+ }
+
+ if (dev->phy_init_len && dev->phy_init) {
+ int i;
+ struct ep_pcie_phy_info_t *phy_init;
+
+ EP_PCIE_DBG(dev,
+ "PCIe V%d: PHY V%d: process the sequence specified by DT.\n",
+ dev->rev, dev->phy_rev);
+
+ i = dev->phy_init_len;
+ phy_init = dev->phy_init;
+ while (i--) {
+ ep_pcie_write_reg(dev->phy,
+ phy_init->offset,
+ phy_init->val);
+ if (phy_init->delay)
+ usleep_range(phy_init->delay,
+ phy_init->delay + 1);
+ phy_init++;
+ }
+ return;
+ }
+
+ ep_pcie_write_reg(dev->phy, PCIE_PHY_SW_RESET, 0x01);
+ ep_pcie_write_reg(dev->phy, PCIE_PHY_POWER_DOWN_CONTROL, 0x01);
+
+ /* Common block settings */
+ ep_pcie_write_reg(dev->phy, QSERDES_COM_BIAS_EN_CLKBUFLR_EN, 0x18);
+ ep_pcie_write_reg(dev->phy, QSERDES_COM_CLK_ENABLE1, 0x00);
+ ep_pcie_write_reg(dev->phy, QSERDES_COM_BG_TRIM, 0x0F);
+ ep_pcie_write_reg(dev->phy, QSERDES_COM_LOCK_CMP_EN, 0x01);
+ ep_pcie_write_reg(dev->phy, QSERDES_COM_VCO_TUNE_MAP, 0x00);
+ ep_pcie_write_reg(dev->phy, QSERDES_COM_VCO_TUNE_TIMER1, 0xFF);
+ ep_pcie_write_reg(dev->phy, QSERDES_COM_VCO_TUNE_TIMER2, 0x1F);
+ ep_pcie_write_reg(dev->phy, QSERDES_COM_CMN_CONFIG, 0x06);
+ ep_pcie_write_reg(dev->phy, QSERDES_COM_PLL_IVCO, 0x0F);
+ ep_pcie_write_reg(dev->phy, QSERDES_COM_HSCLK_SEL, 0x00);
+ ep_pcie_write_reg(dev->phy, QSERDES_COM_SVS_MODE_CLK_SEL, 0x01);
+ ep_pcie_write_reg(dev->phy, QSERDES_COM_CORE_CLK_EN, 0x20);
+ ep_pcie_write_reg(dev->phy, QSERDES_COM_CORECLK_DIV, 0x0A);
+ ep_pcie_write_reg(dev->phy, QSERDES_COM_RESETSM_CNTRL, 0x20);
+ ep_pcie_write_reg(dev->phy, QSERDES_COM_BG_TIMER, 0x01);
+
+ /* PLL Config Settings */
+ ep_pcie_write_reg(dev->phy, QSERDES_COM_SYSCLK_EN_SEL, 0x00);
+ ep_pcie_write_reg(dev->phy, QSERDES_COM_DEC_START_MODE0, 0x19);
+ ep_pcie_write_reg(dev->phy, QSERDES_COM_DIV_FRAC_START3_MODE0, 0x00);
+ ep_pcie_write_reg(dev->phy, QSERDES_COM_DIV_FRAC_START2_MODE0, 0x00);
+ ep_pcie_write_reg(dev->phy, QSERDES_COM_DIV_FRAC_START1_MODE0, 0x00);
+ ep_pcie_write_reg(dev->phy, QSERDES_COM_LOCK_CMP3_MODE0, 0x00);
+ ep_pcie_write_reg(dev->phy, QSERDES_COM_LOCK_CMP2_MODE0, 0x02);
+ ep_pcie_write_reg(dev->phy, QSERDES_COM_LOCK_CMP1_MODE0, 0x7F);
+ ep_pcie_write_reg(dev->phy, QSERDES_COM_CLK_SELECT, 0x30);
+ ep_pcie_write_reg(dev->phy, QSERDES_COM_SYS_CLK_CTRL, 0x06);
+ ep_pcie_write_reg(dev->phy, QSERDES_COM_SYSCLK_BUF_ENABLE, 0x1E);
+ ep_pcie_write_reg(dev->phy, QSERDES_COM_CP_CTRL_MODE0, 0x3F);
+ ep_pcie_write_reg(dev->phy, QSERDES_COM_PLL_RCTRL_MODE0, 0x1A);
+ ep_pcie_write_reg(dev->phy, QSERDES_COM_PLL_CCTRL_MODE0, 0x00);
+ ep_pcie_write_reg(dev->phy, QSERDES_COM_INTEGLOOP_GAIN1_MODE0, 0x03);
+ ep_pcie_write_reg(dev->phy, QSERDES_COM_INTEGLOOP_GAIN0_MODE0, 0xFF);
+
+ /* TX settings */
+ ep_pcie_write_reg(dev->phy, QSERDES_TX_HIGHZ_TRANSCEIVEREN_BIAS_DRVR_EN,
+ 0x45);
+ ep_pcie_write_reg(dev->phy, QSERDES_TX_LANE_MODE, 0x06);
+ ep_pcie_write_reg(dev->phy, QSERDES_TX_RES_CODE_LANE_OFFSET, 0x02);
+ ep_pcie_write_reg(dev->phy, QSERDES_TX_RCV_DETECT_LVL_2, 0x12);
+
+ /* RX settings */
+ ep_pcie_write_reg(dev->phy, QSERDES_RX_SIGDET_ENABLES, 0x1C);
+ ep_pcie_write_reg(dev->phy, QSERDES_RX_SIGDET_DEGLITCH_CNTRL, 0x14);
+ ep_pcie_write_reg(dev->phy, QSERDES_RX_RX_EQU_ADAPTOR_CNTRL2, 0x01);
+ ep_pcie_write_reg(dev->phy, QSERDES_RX_RX_EQU_ADAPTOR_CNTRL3, 0x00);
+ ep_pcie_write_reg(dev->phy, QSERDES_RX_RX_EQU_ADAPTOR_CNTRL4, 0xDB);
+ ep_pcie_write_reg(dev->phy, QSERDES_RX_UCDR_SO_SATURATION_AND_ENABLE,
+ 0x4B);
+ ep_pcie_write_reg(dev->phy, QSERDES_RX_UCDR_SO_GAIN, 0x04);
+ ep_pcie_write_reg(dev->phy, QSERDES_RX_UCDR_SO_GAIN_HALF, 0x04);
+
+ /* EP_REF_CLK settings */
+ ep_pcie_write_reg(dev->phy, QSERDES_COM_CLK_EP_DIV, 0x19);
+ ep_pcie_write_reg(dev->phy, PCIE_PHY_ENDPOINT_REFCLK_DRIVE, 0x00);
+
+ /* PCIE L1SS settings */
+ ep_pcie_write_reg(dev->phy, PCIE_PHY_PWRUP_RESET_DLY_TIME_AUXCLK, 0x40);
+ ep_pcie_write_reg(dev->phy, PCIE_PHY_L1SS_WAKEUP_DLY_TIME_AUXCLK_MSB,
+ 0x00);
+ ep_pcie_write_reg(dev->phy, PCIE_PHY_L1SS_WAKEUP_DLY_TIME_AUXCLK_LSB,
+ 0x40);
+ ep_pcie_write_reg(dev->phy, PCIE_PHY_LP_WAKEUP_DLY_TIME_AUXCLK_MSB,
+ 0x00);
+ ep_pcie_write_reg(dev->phy, PCIE_PHY_LP_WAKEUP_DLY_TIME_AUXCLK, 0x40);
+ ep_pcie_write_reg(dev->phy, PCIE_PHY_PLL_LOCK_CHK_DLY_TIME, 0x73);
+
+ /* PCS settings */
+ ep_pcie_write_reg(dev->phy, PCIE_PHY_SIGDET_CNTRL, 0x07);
+ ep_pcie_write_reg(dev->phy, PCIE_PHY_RX_SIGDET_LVL, 0x99);
+ ep_pcie_write_reg(dev->phy, PCIE_PHY_TXDEEMPH_M6DB_V0, 0x15);
+ ep_pcie_write_reg(dev->phy, PCIE_PHY_TXDEEMPH_M3P5DB_V0, 0x0E);
+
+ ep_pcie_write_reg(dev->phy, PCIE_PHY_SW_RESET, 0x00);
+ ep_pcie_write_reg(dev->phy, PCIE_PHY_START_CONTROL, 0x03);
+}
+
+bool ep_pcie_phy_is_ready(struct ep_pcie_dev_t *dev)
+{
+ u32 offset;
+
+ if (dev->phy_status_reg)
+ offset = dev->phy_status_reg;
+ else
+ offset = PCIE_PHY_PCS_STATUS;
+
+ return !(readl_relaxed(dev->phy + offset) & BIT(6));
+}
diff --git a/drivers/platform/msm/ep_pcie/ep_pcie_phy.h b/drivers/platform/msm/ep_pcie/ep_pcie_phy.h
new file mode 100644
index 0000000..c8f01de
--- /dev/null
+++ b/drivers/platform/msm/ep_pcie/ep_pcie_phy.h
@@ -0,0 +1,463 @@
+/* Copyright (c) 2015, 2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef __EP_PCIE_PHY_H
+#define __EP_PCIE_PHY_H
+
+#define QSERDES_COM_ATB_SEL1 0x000
+#define QSERDES_COM_ATB_SEL2 0x004
+#define QSERDES_COM_FREQ_UPDATE 0x008
+#define QSERDES_COM_BG_TIMER 0x00C
+#define QSERDES_COM_SSC_EN_CENTER 0x010
+#define QSERDES_COM_SSC_ADJ_PER1 0x014
+#define QSERDES_COM_SSC_ADJ_PER2 0x018
+#define QSERDES_COM_SSC_PER1 0x01C
+#define QSERDES_COM_SSC_PER2 0x020
+#define QSERDES_COM_SSC_STEP_SIZE1 0x024
+#define QSERDES_COM_SSC_STEP_SIZE2 0x028
+#define QSERDES_COM_POST_DIV 0x02C
+#define QSERDES_COM_POST_DIV_MUX 0x030
+#define QSERDES_COM_BIAS_EN_CLKBUFLR_EN 0x034
+#define QSERDES_COM_CLK_ENABLE1 0x038
+#define QSERDES_COM_SYS_CLK_CTRL 0x03C
+#define QSERDES_COM_SYSCLK_BUF_ENABLE 0x040
+#define QSERDES_COM_PLL_EN 0x044
+#define QSERDES_COM_PLL_IVCO 0x048
+#define QSERDES_COM_LOCK_CMP1_MODE0 0x04C
+#define QSERDES_COM_LOCK_CMP2_MODE0 0x050
+#define QSERDES_COM_LOCK_CMP3_MODE0 0x054
+#define QSERDES_COM_LOCK_CMP1_MODE1 0x058
+#define QSERDES_COM_LOCK_CMP2_MODE1 0x05C
+#define QSERDES_COM_LOCK_CMP3_MODE1 0x060
+#define QSERDES_COM_CMN_RSVD0 0x064
+#define QSERDES_COM_EP_CLOCK_DETECT_CTRL 0x068
+#define QSERDES_COM_SYSCLK_DET_COMP_STATUS 0x06C
+#define QSERDES_COM_BG_TRIM 0x070
+#define QSERDES_COM_CLK_EP_DIV 0x074
+#define QSERDES_COM_CP_CTRL_MODE0 0x078
+#define QSERDES_COM_CP_CTRL_MODE1 0x07C
+#define QSERDES_COM_CMN_RSVD1 0x080
+#define QSERDES_COM_PLL_RCTRL_MODE0 0x084
+#define QSERDES_COM_PLL_RCTRL_MODE1 0x088
+#define QSERDES_COM_CMN_RSVD2 0x08C
+#define QSERDES_COM_PLL_CCTRL_MODE0 0x090
+#define QSERDES_COM_PLL_CCTRL_MODE1 0x094
+#define QSERDES_COM_CMN_RSVD3 0x098
+#define QSERDES_COM_PLL_CNTRL 0x09C
+#define QSERDES_COM_PHASE_SEL_CTRL 0x0A0
+#define QSERDES_COM_PHASE_SEL_DC 0x0A4
+#define QSERDES_COM_BIAS_EN_CTRL_BY_PSM 0x0A8
+#define QSERDES_COM_SYSCLK_EN_SEL 0x0AC
+#define QSERDES_COM_CML_SYSCLK_SEL 0x0B0
+#define QSERDES_COM_RESETSM_CNTRL 0x0B4
+#define QSERDES_COM_RESETSM_CNTRL2 0x0B8
+#define QSERDES_COM_RESTRIM_CTRL 0x0BC
+#define QSERDES_COM_RESTRIM_CTRL2 0x0C0
+#define QSERDES_COM_RESCODE_DIV_NUM 0x0C4
+#define QSERDES_COM_LOCK_CMP_EN 0x0C8
+#define QSERDES_COM_LOCK_CMP_CFG 0x0CC
+#define QSERDES_COM_DEC_START_MODE0 0x0D0
+#define QSERDES_COM_DEC_START_MODE1 0x0D4
+#define QSERDES_COM_VCOCAL_DEADMAN_CTRL 0x0D8
+#define QSERDES_COM_DIV_FRAC_START1_MODE0 0x0DC
+#define QSERDES_COM_DIV_FRAC_START2_MODE0 0x0E0
+#define QSERDES_COM_DIV_FRAC_START3_MODE0 0x0E4
+#define QSERDES_COM_DIV_FRAC_START1_MODE1 0x0E8
+#define QSERDES_COM_DIV_FRAC_START2_MODE1 0x0EC
+#define QSERDES_COM_DIV_FRAC_START3_MODE1 0x0F0
+#define QSERDES_COM_VCO_TUNE_MINVAL1 0x0F4
+#define QSERDES_COM_VCO_TUNE_MINVAL2 0x0F8
+#define QSERDES_COM_CMN_RSVD4 0x0FC
+#define QSERDES_COM_INTEGLOOP_INITVAL 0x100
+#define QSERDES_COM_INTEGLOOP_EN 0x104
+#define QSERDES_COM_INTEGLOOP_GAIN0_MODE0 0x108
+#define QSERDES_COM_INTEGLOOP_GAIN1_MODE0 0x10C
+#define QSERDES_COM_INTEGLOOP_GAIN0_MODE1 0x110
+#define QSERDES_COM_INTEGLOOP_GAIN1_MODE1 0x114
+#define QSERDES_COM_VCO_TUNE_MAXVAL1 0x118
+#define QSERDES_COM_VCO_TUNE_MAXVAL2 0x11C
+#define QSERDES_COM_RES_TRIM_CONTROL2 0x120
+#define QSERDES_COM_VCO_TUNE_CTRL 0x124
+#define QSERDES_COM_VCO_TUNE_MAP 0x128
+#define QSERDES_COM_VCO_TUNE1_MODE0 0x12C
+#define QSERDES_COM_VCO_TUNE2_MODE0 0x130
+#define QSERDES_COM_VCO_TUNE1_MODE1 0x134
+#define QSERDES_COM_VCO_TUNE2_MODE1 0x138
+#define QSERDES_COM_VCO_TUNE_INITVAL1 0x13C
+#define QSERDES_COM_VCO_TUNE_INITVAL2 0x140
+#define QSERDES_COM_VCO_TUNE_TIMER1 0x144
+#define QSERDES_COM_VCO_TUNE_TIMER2 0x148
+#define QSERDES_COM_SAR 0x14C
+#define QSERDES_COM_SAR_CLK 0x150
+#define QSERDES_COM_SAR_CODE_OUT_STATUS 0x154
+#define QSERDES_COM_SAR_CODE_READY_STATUS 0x158
+#define QSERDES_COM_CMN_STATUS 0x15C
+#define QSERDES_COM_RESET_SM_STATUS 0x160
+#define QSERDES_COM_RESTRIM_CODE_STATUS 0x164
+#define QSERDES_COM_PLLCAL_CODE1_STATUS 0x168
+#define QSERDES_COM_PLLCAL_CODE2_STATUS 0x16C
+#define QSERDES_COM_BG_CTRL 0x170
+#define QSERDES_COM_CLK_SELECT 0x174
+#define QSERDES_COM_HSCLK_SEL 0x178
+#define QSERDES_COM_PLL_ANALOG 0x180
+#define QSERDES_COM_CORECLK_DIV 0x184
+#define QSERDES_COM_SW_RESET 0x188
+#define QSERDES_COM_CORE_CLK_EN 0x18C
+#define QSERDES_COM_C_READY_STATUS 0x190
+#define QSERDES_COM_CMN_CONFIG 0x194
+#define QSERDES_COM_CMN_RATE_OVERRIDE 0x198
+#define QSERDES_COM_SVS_MODE_CLK_SEL 0x19C
+#define QSERDES_COM_DEBUG_BUS0 0x1A0
+#define QSERDES_COM_DEBUG_BUS1 0x1A4
+#define QSERDES_COM_DEBUG_BUS2 0x1A8
+#define QSERDES_COM_DEBUG_BUS3 0x1AC
+#define QSERDES_COM_DEBUG_BUS_SEL 0x1B0
+#define QSERDES_COM_CMN_MISC1 0x1B4
+#define QSERDES_COM_CMN_MISC2 0x1B8
+#define QSERDES_COM_CORECLK_DIV_MODE1 0x1BC
+#define QSERDES_COM_CMN_RSVD5 0x1C0
+#define QSERDES_TX_BIST_MODE_LANENO 0x200
+#define QSERDES_TX_BIST_INVERT 0x204
+#define QSERDES_TX_CLKBUF_ENABLE 0x208
+#define QSERDES_TX_CMN_CONTROL_ONE 0x20C
+#define QSERDES_TX_CMN_CONTROL_TWO 0x210
+#define QSERDES_TX_CMN_CONTROL_THREE 0x214
+#define QSERDES_TX_TX_EMP_POST1_LVL 0x218
+#define QSERDES_TX_TX_POST2_EMPH 0x21C
+#define QSERDES_TX_TX_BOOST_LVL_UP_DN 0x220
+#define QSERDES_TX_HP_PD_ENABLES 0x224
+#define QSERDES_TX_TX_IDLE_LVL_LARGE_AMP 0x228
+#define QSERDES_TX_TX_DRV_LVL 0x22C
+#define QSERDES_TX_TX_DRV_LVL_OFFSET 0x230
+#define QSERDES_TX_RESET_TSYNC_EN 0x234
+#define QSERDES_TX_PRE_STALL_LDO_BOOST_EN 0x238
+#define QSERDES_TX_TX_BAND 0x23C
+#define QSERDES_TX_SLEW_CNTL 0x240
+#define QSERDES_TX_INTERFACE_SELECT 0x244
+#define QSERDES_TX_LPB_EN 0x248
+#define QSERDES_TX_RES_CODE_LANE_TX 0x24C
+#define QSERDES_TX_RES_CODE_LANE_RX 0x250
+#define QSERDES_TX_RES_CODE_LANE_OFFSET 0x254
+#define QSERDES_TX_PERL_LENGTH1 0x258
+#define QSERDES_TX_PERL_LENGTH2 0x25C
+#define QSERDES_TX_SERDES_BYP_EN_OUT 0x260
+#define QSERDES_TX_DEBUG_BUS_SEL 0x264
+#define QSERDES_TX_HIGHZ_TRANSCEIVEREN_BIAS_DRVR_EN 0x268
+#define QSERDES_TX_TX_POL_INV 0x26C
+#define QSERDES_TX_PARRATE_REC_DETECT_IDLE_EN 0x270
+#define QSERDES_TX_BIST_PATTERN1 0x274
+#define QSERDES_TX_BIST_PATTERN2 0x278
+#define QSERDES_TX_BIST_PATTERN3 0x27C
+#define QSERDES_TX_BIST_PATTERN4 0x280
+#define QSERDES_TX_BIST_PATTERN5 0x284
+#define QSERDES_TX_BIST_PATTERN6 0x288
+#define QSERDES_TX_BIST_PATTERN7 0x28C
+#define QSERDES_TX_BIST_PATTERN8 0x290
+#define QSERDES_TX_LANE_MODE 0x294
+#define QSERDES_TX_IDAC_CAL_LANE_MODE 0x298
+#define QSERDES_TX_IDAC_CAL_LANE_MODE_CONFIGURATION 0x29C
+#define QSERDES_TX_ATB_SEL1 0x2A0
+#define QSERDES_TX_ATB_SEL2 0x2A4
+#define QSERDES_TX_RCV_DETECT_LVL 0x2A8
+#define QSERDES_TX_RCV_DETECT_LVL_2 0x2AC
+#define QSERDES_TX_PRBS_SEED1 0x2B0
+#define QSERDES_TX_PRBS_SEED2 0x2B4
+#define QSERDES_TX_PRBS_SEED3 0x2B8
+#define QSERDES_TX_PRBS_SEED4 0x2BC
+#define QSERDES_TX_RESET_GEN 0x2C0
+#define QSERDES_TX_RESET_GEN_MUXES 0x2C4
+#define QSERDES_TX_TRAN_DRVR_EMP_EN 0x2C8
+#define QSERDES_TX_TX_INTERFACE_MODE 0x2CC
+#define QSERDES_TX_PWM_CTRL 0x2D0
+#define QSERDES_TX_PWM_ENCODED_OR_DATA 0x2D4
+#define QSERDES_TX_PWM_GEAR_1_DIVIDER_BAND2 0x2D8
+#define QSERDES_TX_PWM_GEAR_2_DIVIDER_BAND2 0x2DC
+#define QSERDES_TX_PWM_GEAR_3_DIVIDER_BAND2 0x2E0
+#define QSERDES_TX_PWM_GEAR_4_DIVIDER_BAND2 0x2E4
+#define QSERDES_TX_PWM_GEAR_1_DIVIDER_BAND0_1 0x2E8
+#define QSERDES_TX_PWM_GEAR_2_DIVIDER_BAND0_1 0x2EC
+#define QSERDES_TX_PWM_GEAR_3_DIVIDER_BAND0_1 0x2F0
+#define QSERDES_TX_PWM_GEAR_4_DIVIDER_BAND0_1 0x2F4
+#define QSERDES_TX_VMODE_CTRL1 0x2F8
+#define QSERDES_TX_VMODE_CTRL2 0x2FC
+#define QSERDES_TX_TX_ALOG_INTF_OBSV_CNTL 0x300
+#define QSERDES_TX_BIST_STATUS 0x304
+#define QSERDES_TX_BIST_ERROR_COUNT1 0x308
+#define QSERDES_TX_BIST_ERROR_COUNT2 0x30C
+#define QSERDES_TX_TX_ALOG_INTF_OBSV 0x310
+#define QSERDES_RX_UCDR_FO_GAIN_HALF 0x400
+#define QSERDES_RX_UCDR_FO_GAIN_QUARTER 0x404
+#define QSERDES_RX_UCDR_FO_GAIN_EIGHTH 0x408
+#define QSERDES_RX_UCDR_FO_GAIN 0x40C
+#define QSERDES_RX_UCDR_SO_GAIN_HALF 0x410
+#define QSERDES_RX_UCDR_SO_GAIN_QUARTER 0x414
+#define QSERDES_RX_UCDR_SO_GAIN_EIGHTH 0x418
+#define QSERDES_RX_UCDR_SO_GAIN 0x41C
+#define QSERDES_RX_UCDR_SVS_FO_GAIN_HALF 0x420
+#define QSERDES_RX_UCDR_SVS_FO_GAIN_QUARTER 0x424
+#define QSERDES_RX_UCDR_SVS_FO_GAIN_EIGHTH 0x428
+#define QSERDES_RX_UCDR_SVS_FO_GAIN 0x42C
+#define QSERDES_RX_UCDR_SVS_SO_GAIN_HALF 0x430
+#define QSERDES_RX_UCDR_SVS_SO_GAIN_QUARTER 0x434
+#define QSERDES_RX_UCDR_SVS_SO_GAIN_EIGHTH 0x438
+#define QSERDES_RX_UCDR_SVS_SO_GAIN 0x43C
+#define QSERDES_RX_UCDR_FASTLOCK_FO_GAIN 0x440
+#define QSERDES_RX_UCDR_FD_GAIN 0x444
+#define QSERDES_RX_UCDR_SO_SATURATION_AND_ENABLE 0x448
+#define QSERDES_RX_UCDR_FO_TO_SO_DELAY 0x44C
+#define QSERDES_RX_UCDR_FASTLOCK_COUNT_LOW 0x450
+#define QSERDES_RX_UCDR_FASTLOCK_COUNT_HIGH 0x454
+#define QSERDES_RX_UCDR_MODULATE 0x458
+#define QSERDES_RX_UCDR_PI_CONTROLS 0x45C
+#define QSERDES_RX_RBIST_CONTROL 0x460
+#define QSERDES_RX_AUX_CONTROL 0x464
+#define QSERDES_RX_AUX_DATA_TCOARSE 0x468
+#define QSERDES_RX_AUX_DATA_TFINE_LSB 0x46C
+#define QSERDES_RX_AUX_DATA_TFINE_MSB 0x470
+#define QSERDES_RX_RCLK_AUXDATA_SEL 0x474
+#define QSERDES_RX_AC_JTAG_ENABLE 0x478
+#define QSERDES_RX_AC_JTAG_INITP 0x47C
+#define QSERDES_RX_AC_JTAG_INITN 0x480
+#define QSERDES_RX_AC_JTAG_LVL 0x484
+#define QSERDES_RX_AC_JTAG_MODE 0x488
+#define QSERDES_RX_AC_JTAG_RESET 0x48C
+#define QSERDES_RX_RX_TERM_BW 0x490
+#define QSERDES_RX_RX_RCVR_IQ_EN 0x494
+#define QSERDES_RX_RX_IDAC_I_DC_OFFSETS 0x498
+#define QSERDES_RX_RX_IDAC_IBAR_DC_OFFSETS 0x49C
+#define QSERDES_RX_RX_IDAC_Q_DC_OFFSETS 0x4A0
+#define QSERDES_RX_RX_IDAC_QBAR_DC_OFFSETS 0x4A4
+#define QSERDES_RX_RX_IDAC_A_DC_OFFSETS 0x4A8
+#define QSERDES_RX_RX_IDAC_ABAR_DC_OFFSETS 0x4AC
+#define QSERDES_RX_RX_IDAC_EN 0x4B0
+#define QSERDES_RX_RX_IDAC_ENABLES 0x4B4
+#define QSERDES_RX_RX_IDAC_SIGN 0x4B8
+#define QSERDES_RX_RX_HIGHZ_HIGHRATE 0x4BC
+#define QSERDES_RX_RX_TERM_AC_BYPASS_DC_COUPLE_OFFSET 0x4C0
+#define QSERDES_RX_RX_EQ_GAIN1_LSB 0x4C4
+#define QSERDES_RX_RX_EQ_GAIN1_MSB 0x4C8
+#define QSERDES_RX_RX_EQ_GAIN2_LSB 0x4CC
+#define QSERDES_RX_RX_EQ_GAIN2_MSB 0x4D0
+#define QSERDES_RX_RX_EQU_ADAPTOR_CNTRL1 0x4D4
+#define QSERDES_RX_RX_EQU_ADAPTOR_CNTRL2 0x4D8
+#define QSERDES_RX_RX_EQU_ADAPTOR_CNTRL3 0x4DC
+#define QSERDES_RX_RX_EQU_ADAPTOR_CNTRL4 0x4E0
+#define QSERDES_RX_RX_IDAC_CAL_CONFIGURATION 0x4E4
+#define QSERDES_RX_RX_IDAC_TSETTLE_LOW 0x4E8
+#define QSERDES_RX_RX_IDAC_TSETTLE_HIGH 0x4EC
+#define QSERDES_RX_RX_IDAC_ENDSAMP_LOW 0x4F0
+#define QSERDES_RX_RX_IDAC_ENDSAMP_HIGH 0x4F4
+#define QSERDES_RX_RX_IDAC_MIDPOINT_LOW 0x4F8
+#define QSERDES_RX_RX_IDAC_MIDPOINT_HIGH 0x4FC
+#define QSERDES_RX_RX_EQ_OFFSET_LSB 0x500
+#define QSERDES_RX_RX_EQ_OFFSET_MSB 0x504
+#define QSERDES_RX_RX_EQ_OFFSET_ADAPTOR_CNTRL1 0x508
+#define QSERDES_RX_RX_OFFSET_ADAPTOR_CNTRL2 0x50C
+#define QSERDES_RX_SIGDET_ENABLES 0x510
+#define QSERDES_RX_SIGDET_CNTRL 0x514
+#define QSERDES_RX_SIGDET_LVL 0x518
+#define QSERDES_RX_SIGDET_DEGLITCH_CNTRL 0x51C
+#define QSERDES_RX_RX_BAND 0x520
+#define QSERDES_RX_CDR_FREEZE_UP_DN 0x524
+#define QSERDES_RX_CDR_RESET_OVERRIDE 0x528
+#define QSERDES_RX_RX_INTERFACE_MODE 0x52C
+#define QSERDES_RX_JITTER_GEN_MODE 0x530
+#define QSERDES_RX_BUJ_AMP 0x534
+#define QSERDES_RX_SJ_AMP1 0x538
+#define QSERDES_RX_SJ_AMP2 0x53C
+#define QSERDES_RX_SJ_PER1 0x540
+#define QSERDES_RX_SJ_PER2 0x544
+#define QSERDES_RX_BUJ_STEP_FREQ1 0x548
+#define QSERDES_RX_BUJ_STEP_FREQ2 0x54C
+#define QSERDES_RX_PPM_OFFSET1 0x550
+#define QSERDES_RX_PPM_OFFSET2 0x554
+#define QSERDES_RX_SIGN_PPM_PERIOD1 0x558
+#define QSERDES_RX_SIGN_PPM_PERIOD2 0x55C
+#define QSERDES_RX_SSC_CTRL 0x560
+#define QSERDES_RX_SSC_COUNT1 0x564
+#define QSERDES_RX_SSC_COUNT2 0x568
+#define QSERDES_RX_RX_ALOG_INTF_OBSV_CNTL 0x56C
+#define QSERDES_RX_RX_PWM_ENABLE_AND_DATA 0x570
+#define QSERDES_RX_RX_PWM_GEAR1_TIMEOUT_COUNT 0x574
+#define QSERDES_RX_RX_PWM_GEAR2_TIMEOUT_COUNT 0x578
+#define QSERDES_RX_RX_PWM_GEAR3_TIMEOUT_COUNT 0x57C
+#define QSERDES_RX_RX_PWM_GEAR4_TIMEOUT_COUNT 0x580
+#define QSERDES_RX_PI_CTRL1 0x584
+#define QSERDES_RX_PI_CTRL2 0x588
+#define QSERDES_RX_PI_QUAD 0x58C
+#define QSERDES_RX_IDATA1 0x590
+#define QSERDES_RX_IDATA2 0x594
+#define QSERDES_RX_AUX_DATA1 0x598
+#define QSERDES_RX_AUX_DATA2 0x59C
+#define QSERDES_RX_AC_JTAG_OUTP 0x5A0
+#define QSERDES_RX_AC_JTAG_OUTN 0x5A4
+#define QSERDES_RX_RX_SIGDET 0x5A8
+#define QSERDES_RX_RX_VDCOFF 0x5AC
+#define QSERDES_RX_IDAC_CAL_ON 0x5B0
+#define QSERDES_RX_IDAC_STATUS_I 0x5B4
+#define QSERDES_RX_IDAC_STATUS_IBAR 0x5B8
+#define QSERDES_RX_IDAC_STATUS_Q 0x5BC
+#define QSERDES_RX_IDAC_STATUS_QBAR 0x5C0
+#define QSERDES_RX_IDAC_STATUS_A 0x5C4
+#define QSERDES_RX_IDAC_STATUS_ABAR 0x5C8
+#define QSERDES_RX_CALST_STATUS_I 0x5CC
+#define QSERDES_RX_CALST_STATUS_Q 0x5D0
+#define QSERDES_RX_CALST_STATUS_A 0x5D4
+#define QSERDES_RX_RX_ALOG_INTF_OBSV 0x5D8
+#define QSERDES_RX_READ_EQCODE 0x5DC
+#define QSERDES_RX_READ_OFFSETCODE 0x5E0
+#define QSERDES_RX_IA_ERROR_COUNTER_LOW 0x5E4
+#define QSERDES_RX_IA_ERROR_COUNTER_HIGH 0x5E8
+#define PCIE_PHY_MISC_DEBUG_BUS_BYTE0_INDEX 0x600
+#define PCIE_PHY_MISC_DEBUG_BUS_BYTE1_INDEX 0x604
+#define PCIE_PHY_MISC_DEBUG_BUS_BYTE2_INDEX 0x608
+#define PCIE_PHY_MISC_DEBUG_BUS_BYTE3_INDEX 0x60C
+#define PCIE_PHY_MISC_PLACEHOLDER_STATUS 0x610
+#define PCIE_PHY_MISC_DEBUG_BUS_0_STATUS 0x614
+#define PCIE_PHY_MISC_DEBUG_BUS_1_STATUS 0x618
+#define PCIE_PHY_MISC_DEBUG_BUS_2_STATUS 0x61C
+#define PCIE_PHY_MISC_DEBUG_BUS_3_STATUS 0x620
+#define PCIE_PHY_MISC_OSC_DTCT_STATUS 0x624
+#define PCIE_PHY_MISC_OSC_DTCT_CONFIG1 0x628
+#define PCIE_PHY_MISC_OSC_DTCT_CONFIG2 0x62C
+#define PCIE_PHY_MISC_OSC_DTCT_CONFIG3 0x630
+#define PCIE_PHY_MISC_OSC_DTCT_CONFIG4 0x634
+#define PCIE_PHY_MISC_OSC_DTCT_CONFIG5 0x638
+#define PCIE_PHY_MISC_OSC_DTCT_CONFIG6 0x63C
+#define PCIE_PHY_MISC_OSC_DTCT_CONFIG7 0x640
+#define PCIE_PHY_SW_RESET 0x800
+#define PCIE_PHY_POWER_DOWN_CONTROL 0x804
+#define PCIE_PHY_START_CONTROL 0x808
+#define PCIE_PHY_TXMGN_V0 0x80C
+#define PCIE_PHY_TXMGN_V1 0x810
+#define PCIE_PHY_TXMGN_V2 0x814
+#define PCIE_PHY_TXMGN_V3 0x818
+#define PCIE_PHY_TXMGN_V4 0x81C
+#define PCIE_PHY_TXMGN_LS 0x820
+#define PCIE_PHY_TXDEEMPH_M6DB_V0 0x824
+#define PCIE_PHY_TXDEEMPH_M3P5DB_V0 0x828
+#define PCIE_PHY_TXDEEMPH_M6DB_V1 0x82C
+#define PCIE_PHY_TXDEEMPH_M3P5DB_V1 0x830
+#define PCIE_PHY_TXDEEMPH_M6DB_V2 0x834
+#define PCIE_PHY_TXDEEMPH_M3P5DB_V2 0x838
+#define PCIE_PHY_TXDEEMPH_M6DB_V3 0x83C
+#define PCIE_PHY_TXDEEMPH_M3P5DB_V3 0x840
+#define PCIE_PHY_TXDEEMPH_M6DB_V4 0x844
+#define PCIE_PHY_TXDEEMPH_M3P5DB_V4 0x848
+#define PCIE_PHY_TXDEEMPH_M6DB_LS 0x84C
+#define PCIE_PHY_TXDEEMPH_M3P5DB_LS 0x850
+#define PCIE_PHY_ENDPOINT_REFCLK_DRIVE 0x854
+#define PCIE_PHY_RX_IDLE_DTCT_CNTRL 0x858
+#define PCIE_PHY_RATE_SLEW_CNTRL 0x85C
+#define PCIE_PHY_POWER_STATE_CONFIG1 0x860
+#define PCIE_PHY_POWER_STATE_CONFIG2 0x864
+#define PCIE_PHY_POWER_STATE_CONFIG3 0x868
+#define PCIE_PHY_POWER_STATE_CONFIG4 0x86C
+#define PCIE_PHY_RCVR_DTCT_DLY_P1U2_L 0x870
+#define PCIE_PHY_RCVR_DTCT_DLY_P1U2_H 0x874
+#define PCIE_PHY_RCVR_DTCT_DLY_U3_L 0x878
+#define PCIE_PHY_RCVR_DTCT_DLY_U3_H 0x87C
+#define PCIE_PHY_LOCK_DETECT_CONFIG1 0x880
+#define PCIE_PHY_LOCK_DETECT_CONFIG2 0x884
+#define PCIE_PHY_LOCK_DETECT_CONFIG3 0x888
+#define PCIE_PHY_TSYNC_RSYNC_TIME 0x88C
+#define PCIE_PHY_SIGDET_LOW_2_IDLE_TIME 0x890
+#define PCIE_PHY_BEACON_2_IDLE_TIME_L 0x894
+#define PCIE_PHY_BEACON_2_IDLE_TIME_H 0x898
+#define PCIE_PHY_PWRUP_RESET_DLY_TIME_SYSCLK 0x89C
+#define PCIE_PHY_PWRUP_RESET_DLY_TIME_AUXCLK 0x8A0
+#define PCIE_PHY_LP_WAKEUP_DLY_TIME_AUXCLK 0x8A4
+#define PCIE_PHY_PLL_LOCK_CHK_DLY_TIME 0x8A8
+#define PCIE_PHY_LFPS_DET_HIGH_COUNT_VAL 0x8AC
+#define PCIE_PHY_LFPS_TX_ECSTART_EQTLOCK 0x8B0
+#define PCIE_PHY_LFPS_TX_END_CNT_P2U3_START 0x8B4
+#define PCIE_PHY_RXEQTRAINING_WAIT_TIME 0x8B8
+#define PCIE_PHY_RXEQTRAINING_RUN_TIME 0x8BC
+#define PCIE_PHY_TXONESZEROS_RUN_LENGTH 0x8C0
+#define PCIE_PHY_FLL_CNTRL1 0x8C4
+#define PCIE_PHY_FLL_CNTRL2 0x8C8
+#define PCIE_PHY_FLL_CNT_VAL_L 0x8CC
+#define PCIE_PHY_FLL_CNT_VAL_H_TOL 0x8D0
+#define PCIE_PHY_FLL_MAN_CODE 0x8D4
+#define PCIE_PHY_AUTONOMOUS_MODE_CTRL 0x8D8
+#define PCIE_PHY_LFPS_RXTERM_IRQ_CLEAR 0x8DC
+#define PCIE_PHY_ARCVR_DTCT_EN_PERIOD 0x8E0
+#define PCIE_PHY_ARCVR_DTCT_CM_DLY 0x8E4
+#define PCIE_PHY_ALFPS_DEGLITCH_VAL 0x8E8
+#define PCIE_PHY_INSIG_SW_CTRL1 0x8EC
+#define PCIE_PHY_INSIG_SW_CTRL2 0x8F0
+#define PCIE_PHY_INSIG_SW_CTRL3 0x8F4
+#define PCIE_PHY_INSIG_MX_CTRL1 0x8F8
+#define PCIE_PHY_INSIG_MX_CTRL2 0x8FC
+#define PCIE_PHY_INSIG_MX_CTRL3 0x900
+#define PCIE_PHY_OUTSIG_SW_CTRL1 0x904
+#define PCIE_PHY_OUTSIG_MX_CTRL1 0x908
+#define PCIE_PHY_CLK_DEBUG_BYPASS_CTRL 0x90C
+#define PCIE_PHY_TEST_CONTROL 0x910
+#define PCIE_PHY_TEST_CONTROL2 0x914
+#define PCIE_PHY_TEST_CONTROL3 0x918
+#define PCIE_PHY_TEST_CONTROL4 0x91C
+#define PCIE_PHY_TEST_CONTROL5 0x920
+#define PCIE_PHY_TEST_CONTROL6 0x924
+#define PCIE_PHY_TEST_CONTROL7 0x928
+#define PCIE_PHY_COM_RESET_CONTROL 0x92C
+#define PCIE_PHY_BIST_CTRL 0x930
+#define PCIE_PHY_PRBS_POLY0 0x934
+#define PCIE_PHY_PRBS_POLY1 0x938
+#define PCIE_PHY_PRBS_SEED0 0x93C
+#define PCIE_PHY_PRBS_SEED1 0x940
+#define PCIE_PHY_FIXED_PAT_CTRL 0x944
+#define PCIE_PHY_FIXED_PAT0 0x948
+#define PCIE_PHY_FIXED_PAT1 0x94C
+#define PCIE_PHY_FIXED_PAT2 0x950
+#define PCIE_PHY_FIXED_PAT3 0x954
+#define PCIE_PHY_COM_CLK_SWITCH_CTRL 0x958
+#define PCIE_PHY_ELECIDLE_DLY_SEL 0x95C
+#define PCIE_PHY_SPARE1 0x960
+#define PCIE_PHY_BIST_CHK_ERR_CNT_L_STATUS 0x964
+#define PCIE_PHY_BIST_CHK_ERR_CNT_H_STATUS 0x968
+#define PCIE_PHY_BIST_CHK_STATUS 0x96C
+#define PCIE_PHY_LFPS_RXTERM_IRQ_SOURCE_STATUS 0x970
+#define PCIE_PHY_PCS_STATUS 0x974
+#define PCIE_PHY_PCS_STATUS2 0x978
+#define PCIE_PHY_PCS_STATUS3 0x97C
+#define PCIE_PHY_COM_RESET_STATUS 0x980
+#define PCIE_PHY_OSC_DTCT_STATUS 0x984
+#define PCIE_PHY_REVISION_ID0 0x988
+#define PCIE_PHY_REVISION_ID1 0x98C
+#define PCIE_PHY_REVISION_ID2 0x990
+#define PCIE_PHY_REVISION_ID3 0x994
+#define PCIE_PHY_DEBUG_BUS_0_STATUS 0x998
+#define PCIE_PHY_DEBUG_BUS_1_STATUS 0x99C
+#define PCIE_PHY_DEBUG_BUS_2_STATUS 0x9A0
+#define PCIE_PHY_DEBUG_BUS_3_STATUS 0x9A4
+#define PCIE_PHY_LP_WAKEUP_DLY_TIME_AUXCLK_MSB 0x9A8
+#define PCIE_PHY_OSC_DTCT_ACTIONS 0x9AC
+#define PCIE_PHY_SIGDET_CNTRL 0x9B0
+#define PCIE_PHY_IDAC_CAL_CNTRL 0x9B4
+#define PCIE_PHY_CMN_ACK_OUT_SEL 0x9B8
+#define PCIE_PHY_PLL_LOCK_CHK_DLY_TIME_SYSCLK 0x9BC
+#define PCIE_PHY_AUTONOMOUS_MODE_STATUS 0x9C0
+#define PCIE_PHY_ENDPOINT_REFCLK_CNTRL 0x9C4
+#define PCIE_PHY_EPCLK_PRE_PLL_LOCK_DLY_SYSCLK 0x9C8
+#define PCIE_PHY_EPCLK_PRE_PLL_LOCK_DLY_AUXCLK 0x9CC
+#define PCIE_PHY_EPCLK_DLY_COUNT_VAL_L 0x9D0
+#define PCIE_PHY_EPCLK_DLY_COUNT_VAL_H 0x9D4
+#define PCIE_PHY_RX_SIGDET_LVL 0x9D8
+#define PCIE_PHY_L1SS_WAKEUP_DLY_TIME_AUXCLK_LSB 0x9DC
+#define PCIE_PHY_L1SS_WAKEUP_DLY_TIME_AUXCLK_MSB 0x9E0
+#define PCIE_PHY_AUTONOMOUS_MODE_CTRL2 0x9E4
+#define PCIE_PHY_RXTERMINATION_DLY_SEL 0x9E8
+#define PCIE_PHY_LFPS_PER_TIMER_VAL 0x9EC
+#define PCIE_PHY_SIGDET_STARTUP_TIMER_VAL 0x9F0
+#define PCIE_PHY_LOCK_DETECT_CONFIG4 0x9F4
+#endif
diff --git a/drivers/platform/msm/ipa/ipa_api.c b/drivers/platform/msm/ipa/ipa_api.c
index 96b9bd6..7df312e 100644
--- a/drivers/platform/msm/ipa/ipa_api.c
+++ b/drivers/platform/msm/ipa/ipa_api.c
@@ -3135,6 +3135,17 @@ void ipa_ntn_uc_dereg_rdyCB(void)
}
EXPORT_SYMBOL(ipa_ntn_uc_dereg_rdyCB);
+int ipa_get_smmu_params(struct ipa_smmu_in_params *in,
+ struct ipa_smmu_out_params *out)
+{
+ int ret;
+
+ IPA_API_DISPATCH_RETURN(ipa_get_smmu_params, in, out);
+
+ return ret;
+}
+EXPORT_SYMBOL(ipa_get_smmu_params);
+
/**
* ipa_conn_wdi3_pipes() - connect wdi3 pipes
*/
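The new `ipa_get_smmu_params()` export above follows the driver's dispatch pattern: `IPA_API_DISPATCH_RETURN` forwards the call through the `ipa_api_controller` function-pointer table that the HW-specific backend (v2 or v3) fills in at probe time. A minimal userspace sketch of that pattern — the struct fields and function names here are illustrative stand-ins, not the driver's actual types:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for the driver's in/out parameter structs. */
struct smmu_in { int client; };
struct smmu_out { int smmu_enable; };

/* One function pointer per API, like struct ipa_api_controller. */
struct api_controller {
	int (*get_smmu_params)(struct smmu_in *in, struct smmu_out *out);
};

static struct api_controller *ipa_api_ctrl;

/* Backend implementation that a HW-specific init would register. */
static int v3_get_smmu_params(struct smmu_in *in, struct smmu_out *out)
{
	out->smmu_enable = (in->client >= 0);
	return 0;
}

static struct api_controller v3_ctrl = {
	.get_smmu_params = v3_get_smmu_params,
};

/* Facade mirroring ipa_get_smmu_params(): dispatch if a backend
 * registered the op, otherwise fail (the driver returns -EPERM). */
int get_smmu_params(struct smmu_in *in, struct smmu_out *out)
{
	if (!ipa_api_ctrl || !ipa_api_ctrl->get_smmu_params)
		return -1;
	return ipa_api_ctrl->get_smmu_params(in, out);
}
```

The exported wrapper stays backend-agnostic; only the table registration decides which implementation runs.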
diff --git a/drivers/platform/msm/ipa/ipa_api.h b/drivers/platform/msm/ipa/ipa_api.h
index b526711..0779f34 100644
--- a/drivers/platform/msm/ipa/ipa_api.h
+++ b/drivers/platform/msm/ipa/ipa_api.h
@@ -417,6 +417,9 @@ struct ipa_api_controller {
int (*ipa_tz_unlock_reg)(struct ipa_tz_unlock_reg_info *reg_info,
u16 num_regs);
+
+ int (*ipa_get_smmu_params)(struct ipa_smmu_in_params *in,
+ struct ipa_smmu_out_params *out);
};
#ifdef CONFIG_IPA
diff --git a/drivers/platform/msm/ipa/ipa_clients/ipa_usb.c b/drivers/platform/msm/ipa/ipa_clients/ipa_usb.c
index 745e429..90920d9 100644
--- a/drivers/platform/msm/ipa/ipa_clients/ipa_usb.c
+++ b/drivers/platform/msm/ipa/ipa_clients/ipa_usb.c
@@ -2225,6 +2225,7 @@ static int ipa_usb_xdci_dismiss_channels(u32 ul_clnt_hdl, u32 dl_clnt_hdl,
}
if (!IPA3_USB_IS_TTYPE_DPL(ttype)) {
+ ipa3_xdci_ep_delay_rm(ul_clnt_hdl); /* Remove ep_delay if set */
/* Reset UL channel */
result = ipa3_reset_gsi_channel(ul_clnt_hdl);
if (result) {
diff --git a/drivers/platform/msm/ipa/ipa_common_i.h b/drivers/platform/msm/ipa/ipa_common_i.h
index 0a406d2..98a1cf9 100644
--- a/drivers/platform/msm/ipa/ipa_common_i.h
+++ b/drivers/platform/msm/ipa/ipa_common_i.h
@@ -19,6 +19,10 @@
#include <linux/ipa.h>
#include <linux/ipa_uc_offload.h>
#include <linux/ipa_wdi3.h>
+#include <linux/ratelimit.h>
+
+#define WARNON_RATELIMIT_BURST 1
+#define IPA_RATELIMIT_BURST 1
#define __FILENAME__ \
(strrchr(__FILE__, '/') ? strrchr(__FILE__, '/') + 1 : __FILE__)
@@ -104,6 +108,39 @@
ipa_dec_client_disable_clks(&log_info); \
} while (0)
+/*
+ * Print at most one warning message every 5 seconds when multiple
+ * warnings arrive back to back.
+ */
+
+#define WARN_ON_RATELIMIT_IPA(condition) \
+({ \
+ static DEFINE_RATELIMIT_STATE(_rs, \
+ DEFAULT_RATELIMIT_INTERVAL, \
+ WARNON_RATELIMIT_BURST); \
+ int rtn = !!(condition); \
+ \
+ if (unlikely(rtn && __ratelimit(&_rs))) \
+ WARN_ON(rtn); \
+})
+
+/*
+ * Print at most one error message every 5 seconds when multiple
+ * errors arrive back to back.
+ */
+
+#define pr_err_ratelimited_ipa(fmt, ...) \
+ printk_ratelimited_ipa(KERN_ERR pr_fmt(fmt), ##__VA_ARGS__)
+#define printk_ratelimited_ipa(fmt, ...) \
+({ \
+ static DEFINE_RATELIMIT_STATE(_rs, \
+ DEFAULT_RATELIMIT_INTERVAL, \
+ IPA_RATELIMIT_BURST); \
+ \
+ if (__ratelimit(&_rs)) \
+ printk(fmt, ##__VA_ARGS__); \
+})
+
#define ipa_assert_on(condition)\
do {\
if (unlikely(condition))\
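The macros added above lean on the kernel's `DEFINE_RATELIMIT_STATE`/`__ratelimit`, which allow `burst` events per `interval` and suppress the rest. A self-contained sketch of that windowed token logic — simplified (no locking, no suppressed-count reporting), with time passed in explicitly so it stays deterministic:

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified model of struct ratelimit_state: allow `burst` events
 * per `interval` time units; suppress anything beyond that. */
struct rl_state {
	long interval;   /* window length */
	int burst;       /* events allowed per window */
	long begin;      /* start of current window */
	int printed;     /* events emitted in current window */
};

/* Return true when the event may be emitted, like __ratelimit(). */
static bool rl_allow(struct rl_state *rs, long now)
{
	if (now - rs->begin >= rs->interval) {
		rs->begin = now;      /* start a new window */
		rs->printed = 0;
	}
	if (rs->printed < rs->burst) {
		rs->printed++;
		return true;
	}
	return false;                 /* suppressed */
}
```

With `interval = 5` and `burst = 1` (matching `WARNON_RATELIMIT_BURST`), back-to-back warnings within the window collapse to one, which is exactly the behavior the comment above describes.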
diff --git a/drivers/platform/msm/ipa/ipa_rm_inactivity_timer.c b/drivers/platform/msm/ipa/ipa_rm_inactivity_timer.c
index 8e33d71..613bed3 100644
--- a/drivers/platform/msm/ipa/ipa_rm_inactivity_timer.c
+++ b/drivers/platform/msm/ipa/ipa_rm_inactivity_timer.c
@@ -1,4 +1,4 @@
-/* Copyright (c) 2013-2016, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2013-2017, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -20,6 +20,8 @@
#include <linux/ipa.h>
#include "ipa_rm_i.h"
+#define MAX_WS_NAME 20
+
/**
* struct ipa_rm_it_private - IPA RM Inactivity Timer private
* data
@@ -45,6 +47,8 @@ struct ipa_rm_it_private {
bool reschedule_work;
bool work_in_progress;
unsigned long jiffies;
+ struct wakeup_source w_lock;
+ char w_lock_name[MAX_WS_NAME];
};
static struct ipa_rm_it_private ipa_rm_it_handles[IPA_RM_RESOURCE_MAX];
@@ -87,6 +91,7 @@ static void ipa_rm_inactivity_timer_func(struct work_struct *work)
} else {
IPA_RM_DBG_LOW("%s: calling release_resource on resource %d!\n",
__func__, me->resource_name);
+ __pm_relax(&ipa_rm_it_handles[me->resource_name].w_lock);
ipa_rm_release_resource(me->resource_name);
ipa_rm_it_handles[me->resource_name].work_in_progress = false;
}
@@ -110,6 +115,9 @@ static void ipa_rm_inactivity_timer_func(struct work_struct *work)
int ipa_rm_inactivity_timer_init(enum ipa_rm_resource_name resource_name,
unsigned long msecs)
{
+ struct wakeup_source *pwlock;
+ char *name;
+
IPA_RM_DBG_LOW("%s: resource %d\n", __func__, resource_name);
if (resource_name < 0 ||
@@ -130,7 +138,10 @@ int ipa_rm_inactivity_timer_init(enum ipa_rm_resource_name resource_name,
ipa_rm_it_handles[resource_name].resource_requested = false;
ipa_rm_it_handles[resource_name].reschedule_work = false;
ipa_rm_it_handles[resource_name].work_in_progress = false;
-
+ pwlock = &(ipa_rm_it_handles[resource_name].w_lock);
+ name = ipa_rm_it_handles[resource_name].w_lock_name;
+	snprintf(name, MAX_WS_NAME, "IPA_RM%d", resource_name);
+ wakeup_source_init(pwlock, name);
INIT_DELAYED_WORK(&ipa_rm_it_handles[resource_name].work,
ipa_rm_inactivity_timer_func);
ipa_rm_it_handles[resource_name].initied = 1;
@@ -151,6 +162,8 @@ EXPORT_SYMBOL(ipa_rm_inactivity_timer_init);
*/
int ipa_rm_inactivity_timer_destroy(enum ipa_rm_resource_name resource_name)
{
+ struct wakeup_source *pwlock;
+
IPA_RM_DBG_LOW("%s: resource %d\n", __func__, resource_name);
if (resource_name < 0 ||
@@ -166,6 +179,8 @@ int ipa_rm_inactivity_timer_destroy(enum ipa_rm_resource_name resource_name)
}
cancel_delayed_work_sync(&ipa_rm_it_handles[resource_name].work);
+ pwlock = &(ipa_rm_it_handles[resource_name].w_lock);
+ wakeup_source_trash(pwlock);
memset(&ipa_rm_it_handles[resource_name], 0,
sizeof(struct ipa_rm_it_private));
@@ -261,6 +276,7 @@ int ipa_rm_inactivity_timer_release_resource(
}
ipa_rm_it_handles[resource_name].work_in_progress = true;
ipa_rm_it_handles[resource_name].reschedule_work = false;
+ __pm_stay_awake(&ipa_rm_it_handles[resource_name].w_lock);
IPA_RM_DBG_LOW("%s: setting delayed work\n", __func__);
queue_delayed_work(system_unbound_wq,
&ipa_rm_it_handles[resource_name].work,
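The inactivity-timer change above adds a wakeup source that is held from the moment delayed release is scheduled (`__pm_stay_awake`) until the work function actually drops the resource (`__pm_relax`), so the system can't suspend while a release vote is in flight. A toy state model of that window — field and function names are illustrative, not the driver's:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the release window: the wakeup source is held
 * exactly while the delayed release work is pending. */
struct it_model {
	bool wlock_held;        /* __pm_stay_awake .. __pm_relax */
	bool work_in_progress;  /* delayed work queued */
	bool resource_granted;
};

/* Mirrors ipa_rm_inactivity_timer_release_resource(). */
static void schedule_release(struct it_model *m)
{
	if (m->work_in_progress)
		return;              /* release already pending */
	m->work_in_progress = true;
	m->wlock_held = true;        /* __pm_stay_awake() */
}

/* Mirrors ipa_rm_inactivity_timer_func() when no new request
 * arrived during the inactivity window. */
static void release_work_fn(struct it_model *m)
{
	m->wlock_held = false;       /* __pm_relax() before release */
	m->resource_granted = false; /* ipa_rm_release_resource() */
	m->work_in_progress = false;
}
```

The invariant the patch establishes: `wlock_held` is true if and only if `work_in_progress` is true, so a pending release vote always blocks suspend.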
diff --git a/drivers/platform/msm/ipa/ipa_v2/ipa.c b/drivers/platform/msm/ipa/ipa_v2/ipa.c
index c760f75..07dc7b0 100644
--- a/drivers/platform/msm/ipa/ipa_v2/ipa.c
+++ b/drivers/platform/msm/ipa/ipa_v2/ipa.c
@@ -450,7 +450,7 @@ static int ipa_open(struct inode *inode, struct file *filp)
{
struct ipa_context *ctx = NULL;
- IPADBG("ENTER\n");
+ IPADBG_LOW("ENTER\n");
ctx = container_of(inode->i_cdev, struct ipa_context, cdev);
filp->private_data = ctx;
@@ -3051,11 +3051,11 @@ static int ipa_get_clks(struct device *dev)
void _ipa_enable_clks_v2_0(void)
{
- IPADBG("enabling gcc_ipa_clk\n");
+ IPADBG_LOW("enabling gcc_ipa_clk\n");
if (ipa_clk) {
clk_prepare(ipa_clk);
clk_enable(ipa_clk);
- IPADBG("curr_ipa_clk_rate=%d", ipa_ctx->curr_ipa_clk_rate);
+ IPADBG_LOW("curr_ipa_clk_rate=%d", ipa_ctx->curr_ipa_clk_rate);
clk_set_rate(ipa_clk, ipa_ctx->curr_ipa_clk_rate);
ipa_uc_notify_clk_state(true);
} else {
@@ -3187,7 +3187,7 @@ void _ipa_disable_clks_v1_1(void)
void _ipa_disable_clks_v2_0(void)
{
- IPADBG("disabling gcc_ipa_clk\n");
+ IPADBG_LOW("disabling gcc_ipa_clk\n");
ipa_suspend_apps_pipes(true);
ipa_sps_irq_control_all(false);
ipa_uc_notify_clk_state(false);
@@ -3208,7 +3208,7 @@ void _ipa_disable_clks_v2_0(void)
*/
void ipa_disable_clks(void)
{
- IPADBG("disabling IPA clocks and bus voting\n");
+ IPADBG_LOW("disabling IPA clocks and bus voting\n");
ipa_ctx->ctrl->ipa_disable_clks();
@@ -3352,7 +3352,7 @@ void ipa2_inc_client_enable_clks(struct ipa_active_client_logging_info *id)
ipa_ctx->ipa_active_clients.cnt++;
if (ipa_ctx->ipa_active_clients.cnt == 1)
ipa_enable_clks();
- IPADBG("active clients = %d\n", ipa_ctx->ipa_active_clients.cnt);
+ IPADBG_LOW("active clients = %d\n", ipa_ctx->ipa_active_clients.cnt);
ipa_active_clients_unlock();
}
@@ -3384,7 +3384,7 @@ int ipa2_inc_client_enable_clks_no_block(struct ipa_active_client_logging_info
ipa2_active_clients_log_inc(id, true);
ipa_ctx->ipa_active_clients.cnt++;
- IPADBG("active clients = %d\n", ipa_ctx->ipa_active_clients.cnt);
+ IPADBG_LOW("active clients = %d\n", ipa_ctx->ipa_active_clients.cnt);
bail:
ipa_active_clients_trylock_unlock(&flags);
@@ -3412,7 +3412,7 @@ void ipa2_dec_client_disable_clks(struct ipa_active_client_logging_info *id)
ipa_active_clients_lock();
ipa2_active_clients_log_dec(id, false);
ipa_ctx->ipa_active_clients.cnt--;
- IPADBG("active clients = %d\n", ipa_ctx->ipa_active_clients.cnt);
+ IPADBG_LOW("active clients = %d\n", ipa_ctx->ipa_active_clients.cnt);
if (ipa_ctx->ipa_active_clients.cnt == 0) {
if (ipa_ctx->tag_process_before_gating) {
IPA_ACTIVE_CLIENTS_PREP_SPECIAL(log_info,
@@ -3452,7 +3452,7 @@ void ipa_inc_acquire_wakelock(enum ipa_wakelock_ref_client ref_client)
ipa_ctx->wakelock_ref_cnt.cnt |= (1 << ref_client);
if (ipa_ctx->wakelock_ref_cnt.cnt)
__pm_stay_awake(&ipa_ctx->w_lock);
- IPADBG("active wakelock ref cnt = %d client enum %d\n",
+ IPADBG_LOW("active wakelock ref cnt = %d client enum %d\n",
ipa_ctx->wakelock_ref_cnt.cnt, ref_client);
spin_unlock_irqrestore(&ipa_ctx->wakelock_ref_cnt.spinlock, flags);
}
@@ -3473,7 +3473,7 @@ void ipa_dec_release_wakelock(enum ipa_wakelock_ref_client ref_client)
return;
spin_lock_irqsave(&ipa_ctx->wakelock_ref_cnt.spinlock, flags);
ipa_ctx->wakelock_ref_cnt.cnt &= ~(1 << ref_client);
- IPADBG("active wakelock ref cnt = %d client enum %d\n",
+ IPADBG_LOW("active wakelock ref cnt = %d client enum %d\n",
ipa_ctx->wakelock_ref_cnt.cnt, ref_client);
if (ipa_ctx->wakelock_ref_cnt.cnt == 0)
__pm_relax(&ipa_ctx->w_lock);
@@ -3517,7 +3517,7 @@ int ipa2_set_required_perf_profile(enum ipa_voltage_level floor_voltage,
enum ipa_voltage_level needed_voltage;
u32 clk_rate;
- IPADBG("floor_voltage=%d, bandwidth_mbps=%u",
+ IPADBG_LOW("floor_voltage=%d, bandwidth_mbps=%u",
floor_voltage, bandwidth_mbps);
if (floor_voltage < IPA_VOLTAGE_UNSPECIFIED ||
@@ -3527,7 +3527,7 @@ int ipa2_set_required_perf_profile(enum ipa_voltage_level floor_voltage,
}
if (ipa_ctx->enable_clock_scaling) {
- IPADBG("Clock scaling is enabled\n");
+ IPADBG_LOW("Clock scaling is enabled\n");
if (bandwidth_mbps >=
ipa_ctx->ctrl->clock_scaling_bw_threshold_turbo)
needed_voltage = IPA_VOLTAGE_TURBO;
@@ -3537,7 +3537,7 @@ int ipa2_set_required_perf_profile(enum ipa_voltage_level floor_voltage,
else
needed_voltage = IPA_VOLTAGE_SVS;
} else {
- IPADBG("Clock scaling is disabled\n");
+ IPADBG_LOW("Clock scaling is disabled\n");
needed_voltage = IPA_VOLTAGE_NOMINAL;
}
@@ -3559,13 +3559,13 @@ int ipa2_set_required_perf_profile(enum ipa_voltage_level floor_voltage,
}
if (clk_rate == ipa_ctx->curr_ipa_clk_rate) {
- IPADBG("Same voltage\n");
+ IPADBG_LOW("Same voltage\n");
return 0;
}
ipa_active_clients_lock();
ipa_ctx->curr_ipa_clk_rate = clk_rate;
- IPADBG("setting clock rate to %u\n", ipa_ctx->curr_ipa_clk_rate);
+ IPADBG_LOW("setting clock rate to %u\n", ipa_ctx->curr_ipa_clk_rate);
if (ipa_ctx->ipa_active_clients.cnt > 0) {
struct ipa_active_client_logging_info log_info;
@@ -3588,11 +3588,10 @@ int ipa2_set_required_perf_profile(enum ipa_voltage_level floor_voltage,
/* remove the vote added here */
IPA_ACTIVE_CLIENTS_DEC_SIMPLE();
} else {
- IPADBG("clocks are gated, not setting rate\n");
- ipa_active_clients_unlock();
+ IPADBG_LOW("clocks are gated, not setting rate\n");
}
- IPADBG("Done\n");
-
+ ipa_active_clients_unlock();
+ IPADBG_LOW("Done\n");
return 0;
}
@@ -3727,6 +3726,11 @@ void ipa_suspend_handler(enum ipa_irq_type interrupt,
atomic_set(
&ipa_ctx->sps_pm.dec_clients,
1);
+			/*
+			 * hold a wakeup source for as long as
+			 * the suspend vote is held
+			 */
+ ipa_inc_acquire_wakelock();
ipa_sps_process_irq_schedule_rel();
}
mutex_unlock(&ipa_ctx->sps_pm.sps_pm_lock);
@@ -3799,6 +3803,7 @@ static void ipa_sps_release_resource(struct work_struct *work)
ipa_sps_process_irq_schedule_rel();
} else {
atomic_set(&ipa_ctx->sps_pm.dec_clients, 0);
+ ipa_dec_release_wakelock();
IPA_ACTIVE_CLIENTS_DEC_SPECIAL("SPS_RESOURCE");
}
}
@@ -3888,6 +3893,13 @@ static int ipa_init(const struct ipa_plat_drv_res *resource_p,
goto fail_mem_ctx;
}
+ ipa_ctx->logbuf = ipc_log_context_create(IPA_IPC_LOG_PAGES, "ipa", 0);
+ if (ipa_ctx->logbuf == NULL) {
+ IPAERR("failed to get logbuf\n");
+ result = -ENOMEM;
+ goto fail_logbuf;
+ }
+
ipa_ctx->pdev = ipa_dev;
ipa_ctx->uc_pdev = ipa_dev;
ipa_ctx->smmu_present = smmu_info.present;
@@ -3899,6 +3911,8 @@ static int ipa_init(const struct ipa_plat_drv_res *resource_p,
ipa_ctx->ipa_wrapper_size = resource_p->ipa_mem_size;
ipa_ctx->ipa_hw_type = resource_p->ipa_hw_type;
ipa_ctx->ipa_hw_mode = resource_p->ipa_hw_mode;
+ ipa_ctx->ipa_uc_monitor_holb =
+ resource_p->ipa_uc_monitor_holb;
ipa_ctx->use_ipa_teth_bridge = resource_p->use_ipa_teth_bridge;
ipa_ctx->ipa_bam_remote_mode = resource_p->ipa_bam_remote_mode;
ipa_ctx->modem_cfg_emb_pipe_flt = resource_p->modem_cfg_emb_pipe_flt;
@@ -4423,6 +4437,8 @@ static int ipa_init(const struct ipa_plat_drv_res *resource_p,
fail_bind:
kfree(ipa_ctx->ctrl);
fail_mem_ctrl:
+ ipc_log_context_destroy(ipa_ctx->logbuf);
+fail_logbuf:
kfree(ipa_ctx);
ipa_ctx = NULL;
fail_mem_ctx:
@@ -4440,6 +4456,7 @@ static int get_ipa_dts_configuration(struct platform_device *pdev,
ipa_drv_res->ipa_pipe_mem_size = IPA_PIPE_MEM_SIZE;
ipa_drv_res->ipa_hw_type = 0;
ipa_drv_res->ipa_hw_mode = 0;
+ ipa_drv_res->ipa_uc_monitor_holb = false;
ipa_drv_res->ipa_bam_remote_mode = false;
ipa_drv_res->modem_cfg_emb_pipe_flt = false;
ipa_drv_res->ipa_wdi2 = false;
@@ -4464,6 +4481,14 @@ static int get_ipa_dts_configuration(struct platform_device *pdev,
IPADBG(": found ipa_drv_res->ipa_hw_mode = %d",
ipa_drv_res->ipa_hw_mode);
+ /* Check ipa_uc_monitor_holb enabled or disabled */
+ ipa_drv_res->ipa_uc_monitor_holb =
+ of_property_read_bool(pdev->dev.of_node,
+ "qcom,ipa-uc-monitor-holb");
+ IPADBG(": ipa uc monitor holb = %s\n",
+ ipa_drv_res->ipa_uc_monitor_holb
+ ? "Enabled" : "Disabled");
+
/* Get IPA WAN / LAN RX pool sizes */
result = of_property_read_u32(pdev->dev.of_node,
"qcom,wan-rx-ring-size",
diff --git a/drivers/platform/msm/ipa/ipa_v2/ipa_debugfs.c b/drivers/platform/msm/ipa/ipa_v2/ipa_debugfs.c
index a249567..c018fc9 100644
--- a/drivers/platform/msm/ipa/ipa_v2/ipa_debugfs.c
+++ b/drivers/platform/msm/ipa/ipa_v2/ipa_debugfs.c
@@ -1817,6 +1817,44 @@ static ssize_t ipa_write_polling_iteration(struct file *file,
return count;
}
+static ssize_t ipa_enable_ipc_low(struct file *file,
+ const char __user *ubuf, size_t count, loff_t *ppos)
+{
+ unsigned long missing;
+ s8 option = 0;
+
+ if (sizeof(dbg_buff) < count + 1)
+ return -EFAULT;
+
+ missing = copy_from_user(dbg_buff, ubuf, count);
+ if (missing)
+ return -EFAULT;
+
+ dbg_buff[count] = '\0';
+ if (kstrtos8(dbg_buff, 0, &option))
+ return -EFAULT;
+
+ if (option) {
+ if (!ipa_ctx->logbuf_low) {
+ ipa_ctx->logbuf_low =
+ ipc_log_context_create(IPA_IPC_LOG_PAGES,
+ "ipa_low", 0);
+ }
+
+ if (ipa_ctx->logbuf_low == NULL) {
+ IPAERR("failed to get logbuf_low\n");
+ return -EFAULT;
+ }
+
+ } else {
+ if (ipa_ctx->logbuf_low)
+ ipc_log_context_destroy(ipa_ctx->logbuf_low);
+ ipa_ctx->logbuf_low = NULL;
+ }
+
+ return count;
+}
+
const struct file_operations ipa_gen_reg_ops = {
.read = ipa_read_gen_reg,
};
@@ -1895,6 +1933,10 @@ const struct file_operations ipa2_active_clients = {
.write = ipa2_clear_active_clients_log,
};
+const struct file_operations ipa_ipc_low_ops = {
+ .write = ipa_enable_ipc_low,
+};
+
const struct file_operations ipa_rx_poll_time_ops = {
.read = ipa_read_rx_polling_timeout,
.write = ipa_write_rx_polling_timeout,
@@ -2110,6 +2152,13 @@ void ipa_debugfs_init(void)
goto fail;
}
+ file = debugfs_create_file("enable_low_prio_print", write_only_mode,
+ dent, 0, &ipa_ipc_low_ops);
+ if (!file) {
+ IPAERR("could not create enable_low_prio_print file\n");
+ goto fail;
+ }
+
return;
fail:
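The `enable_low_prio_print` debugfs write above parses a signed byte from userspace and either lazily creates the low-priority IPC log context (idempotent when already created) or destroys it and resets the pointer to NULL. The core toggle semantics, sketched standalone with `malloc`/`free` standing in for `ipc_log_context_create`/`ipc_log_context_destroy`:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Toy stand-in for the ipa_ctx->logbuf_low handle. */
static void *logbuf_low;

/* Mirrors the toggle in ipa_enable_ipc_low(): nonzero creates the
 * context once (idempotent), zero destroys it and resets to NULL. */
static int set_ipc_low(int option)
{
	if (option) {
		if (!logbuf_low)
			logbuf_low = malloc(1); /* ipc_log_context_create() */
		if (!logbuf_low)
			return -1;              /* creation failed */
	} else {
		if (logbuf_low)
			free(logbuf_low);       /* ipc_log_context_destroy() */
		logbuf_low = NULL;
	}
	return 0;
}
```

Writing `1` twice must not leak or recreate the context, and writing `0` must leave the pointer NULL so the low-priority `IPADBG_LOW`-style macros quietly skip logging.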
diff --git a/drivers/platform/msm/ipa/ipa_v2/ipa_dma.c b/drivers/platform/msm/ipa/ipa_v2/ipa_dma.c
index bee6331..6a3d870 100644
--- a/drivers/platform/msm/ipa/ipa_v2/ipa_dma.c
+++ b/drivers/platform/msm/ipa/ipa_v2/ipa_dma.c
@@ -32,16 +32,39 @@
#define IPADMA_DRV_NAME "ipa_dma"
#define IPADMA_DBG(fmt, args...) \
- pr_debug(IPADMA_DRV_NAME " %s:%d " fmt, \
- __func__, __LINE__, ## args)
+ do { \
+ pr_debug(IPADMA_DRV_NAME " %s:%d " fmt, \
+ __func__, __LINE__, ## args); \
+ IPA_IPC_LOGGING(ipa_get_ipc_logbuf(), \
+ IPADMA_DRV_NAME " %s:%d " fmt, ## args); \
+ IPA_IPC_LOGGING(ipa_get_ipc_logbuf_low(), \
+ IPADMA_DRV_NAME " %s:%d " fmt, ## args); \
+ } while (0)
+
+#define IPADMA_DBG_LOW(fmt, args...) \
+ do { \
+ pr_debug(IPADMA_DRV_NAME " %s:%d " fmt, \
+ __func__, __LINE__, ## args); \
+ IPA_IPC_LOGGING(ipa_get_ipc_logbuf_low(), \
+ IPADMA_DRV_NAME " %s:%d " fmt, ## args); \
+ } while (0)
+
#define IPADMA_ERR(fmt, args...) \
- pr_err(IPADMA_DRV_NAME " %s:%d " fmt, __func__, __LINE__, ## args)
+ do { \
+ pr_err(IPADMA_DRV_NAME " %s:%d " fmt, \
+ __func__, __LINE__, ## args); \
+ IPA_IPC_LOGGING(ipa_get_ipc_logbuf(), \
+ IPADMA_DRV_NAME " %s:%d " fmt, ## args); \
+ IPA_IPC_LOGGING(ipa_get_ipc_logbuf_low(), \
+ IPADMA_DRV_NAME " %s:%d " fmt, ## args); \
+ } while (0)
#define IPADMA_FUNC_ENTRY() \
- IPADMA_DBG("ENTRY\n")
+ IPADMA_DBG_LOW("ENTRY\n")
#define IPADMA_FUNC_EXIT() \
- IPADMA_DBG("EXIT\n")
+ IPADMA_DBG_LOW("EXIT\n")
+
#ifdef CONFIG_DEBUG_FS
#define IPADMA_MAX_MSG_LEN 1024
@@ -270,7 +293,7 @@ int ipa2_dma_enable(void)
}
mutex_lock(&ipa_dma_ctx->enable_lock);
if (ipa_dma_ctx->is_enabled) {
- IPADMA_DBG("Already enabled.\n");
+ IPADMA_ERR("Already enabled.\n");
mutex_unlock(&ipa_dma_ctx->enable_lock);
return -EPERM;
}
@@ -296,7 +319,7 @@ static bool ipa_dma_work_pending(void)
IPADMA_DBG("pending uc\n");
return true;
}
- IPADMA_DBG("no pending work\n");
+ IPADMA_DBG_LOW("no pending work\n");
return false;
}
@@ -324,7 +347,7 @@ int ipa2_dma_disable(void)
mutex_lock(&ipa_dma_ctx->enable_lock);
spin_lock_irqsave(&ipa_dma_ctx->pending_lock, flags);
if (!ipa_dma_ctx->is_enabled) {
- IPADMA_DBG("Already disabled.\n");
+ IPADMA_ERR("Already disabled.\n");
spin_unlock_irqrestore(&ipa_dma_ctx->pending_lock, flags);
mutex_unlock(&ipa_dma_ctx->enable_lock);
return -EPERM;
@@ -371,6 +394,8 @@ int ipa2_dma_sync_memcpy(u64 dest, u64 src, int len)
IPADMA_FUNC_ENTRY();
+ IPADMA_DBG_LOW("dest = 0x%llx, src = 0x%llx, len = %d\n",
+ dest, src, len);
if (ipa_dma_ctx == NULL) {
IPADMA_ERR("IPADMA isn't initialized, can't memcpy\n");
return -EPERM;
@@ -398,7 +423,7 @@ int ipa2_dma_sync_memcpy(u64 dest, u64 src, int len)
if (atomic_read(&ipa_dma_ctx->sync_memcpy_pending_cnt) >=
IPA_DMA_MAX_PENDING_SYNC) {
atomic_dec(&ipa_dma_ctx->sync_memcpy_pending_cnt);
- IPADMA_DBG("Reached pending requests limit\n");
+ IPADMA_ERR("Reached pending requests limit\n");
return -EFAULT;
}
@@ -531,6 +556,8 @@ int ipa2_dma_async_memcpy(u64 dest, u64 src, int len,
unsigned long flags;
IPADMA_FUNC_ENTRY();
+ IPADMA_DBG_LOW("dest = 0x%llx, src = 0x%llx, len = %d\n",
+ dest, src, len);
if (ipa_dma_ctx == NULL) {
IPADMA_ERR("IPADMA isn't initialized, can't memcpy\n");
return -EPERM;
@@ -562,7 +589,7 @@ int ipa2_dma_async_memcpy(u64 dest, u64 src, int len,
if (atomic_read(&ipa_dma_ctx->async_memcpy_pending_cnt) >=
IPA_DMA_MAX_PENDING_ASYNC) {
atomic_dec(&ipa_dma_ctx->async_memcpy_pending_cnt);
- IPADMA_DBG("Reached pending requests limit\n");
+ IPADMA_ERR("Reached pending requests limit\n");
return -EFAULT;
}
@@ -692,7 +719,7 @@ void ipa2_dma_destroy(void)
IPADMA_FUNC_ENTRY();
if (!ipa_dma_ctx) {
- IPADMA_DBG("IPADMA isn't initialized\n");
+ IPADMA_ERR("IPADMA isn't initialized\n");
return;
}
@@ -836,7 +863,7 @@ static ssize_t ipa_dma_debugfs_reset_statistics(struct file *file,
switch (in_num) {
case 0:
if (ipa_dma_work_pending())
- IPADMA_DBG("Note, there are pending memcpy\n");
+ IPADMA_ERR("Note, there are pending memcpy\n");
atomic_set(&ipa_dma_ctx->total_async_memcpy, 0);
atomic_set(&ipa_dma_ctx->total_sync_memcpy, 0);
diff --git a/drivers/platform/msm/ipa/ipa_v2/ipa_dp.c b/drivers/platform/msm/ipa/ipa_v2/ipa_dp.c
index 80b97e7..3cb86d0 100644
--- a/drivers/platform/msm/ipa/ipa_v2/ipa_dp.c
+++ b/drivers/platform/msm/ipa/ipa_v2/ipa_dp.c
@@ -346,7 +346,7 @@ int ipa_send_one(struct ipa_sys_context *sys, struct ipa_desc *desc,
if (desc->type == IPA_IMM_CMD_DESC) {
sps_flags |= SPS_IOVEC_FLAG_IMME;
len = desc->opcode;
- IPADBG("sending cmd=%d pyld_len=%d sps_flags=%x\n",
+ IPADBG_LOW("sending cmd=%d pyld_len=%d sps_flags=%x\n",
desc->opcode, desc->len, sps_flags);
IPA_DUMP_BUFF(desc->pyld, dma_address, desc->len);
} else {
@@ -627,7 +627,7 @@ static void ipa_sps_irq_cmd_ack(void *user1, int user2)
WARN_ON(1);
return;
}
- IPADBG("got ack for cmd=%d\n", desc->opcode);
+ IPADBG_LOW("got ack for cmd=%d\n", desc->opcode);
complete(&desc->xfer_done);
}
@@ -644,11 +644,12 @@ static void ipa_sps_irq_cmd_ack(void *user1, int user2)
int ipa_send_cmd(u16 num_desc, struct ipa_desc *descr)
{
struct ipa_desc *desc;
- int result = 0;
+ int i, result = 0;
struct ipa_sys_context *sys;
int ep_idx;
- IPADBG("sending command\n");
+ for (i = 0; i < num_desc; i++)
+ IPADBG_LOW("sending imm cmd %d\n", descr[i].opcode);
ep_idx = ipa2_get_ep_mapping(IPA_CLIENT_APPS_CMD_PROD);
if (-1 == ep_idx) {
@@ -709,7 +710,7 @@ static void ipa_sps_irq_tx_notify(struct sps_event_notify *notify)
struct ipa_sys_context *sys = (struct ipa_sys_context *)notify->user;
int ret;
- IPADBG("event %d notified\n", notify->event_id);
+ IPADBG_LOW("event %d notified\n", notify->event_id);
switch (notify->event_id) {
case SPS_EVENT_EOT:
@@ -752,7 +753,7 @@ static void ipa_sps_irq_tx_no_aggr_notify(struct sps_event_notify *notify)
{
struct ipa_tx_pkt_wrapper *tx_pkt;
- IPADBG("event %d notified\n", notify->event_id);
+ IPADBG_LOW("event %d notified\n", notify->event_id);
switch (notify->event_id) {
case SPS_EVENT_EOT:
@@ -1599,7 +1600,7 @@ static void ipa_tx_comp_usr_notify_release(void *user1, int user2)
struct sk_buff *skb = (struct sk_buff *)user1;
int ep_idx = user2;
- IPADBG("skb=%p ep=%d\n", skb, ep_idx);
+ IPADBG_LOW("skb=%p ep=%d\n", skb, ep_idx);
IPA_STATS_INC_CNT(ipa_ctx->stats.tx_pkts_compl);
@@ -1920,7 +1921,7 @@ static void ipa_replenish_wlan_rx_cache(struct ipa_sys_context *sys)
int ret;
u32 rx_len_cached = 0;
- IPADBG("\n");
+ IPADBG_LOW("\n");
spin_lock_bh(&ipa_ctx->wc_memb.wlan_spinlock);
rx_len_cached = sys->len;
@@ -2350,7 +2351,7 @@ static int ipa_lan_rx_pyld_hdlr(struct sk_buff *skb,
}
if (sys->len_partial) {
- IPADBG("len_partial %d\n", sys->len_partial);
+ IPADBG_LOW("len_partial %d\n", sys->len_partial);
buf = skb_push(skb, sys->len_partial);
memcpy(buf, sys->prev_skb->data, sys->len_partial);
sys->len_partial = 0;
@@ -2363,7 +2364,7 @@ static int ipa_lan_rx_pyld_hdlr(struct sk_buff *skb,
* (status+data)
*/
if (sys->len_rem) {
- IPADBG("rem %d skb %d pad %d\n", sys->len_rem, skb->len,
+ IPADBG_LOW("rem %d skb %d pad %d\n", sys->len_rem, skb->len,
sys->len_pad);
if (sys->len_rem <= skb->len) {
if (sys->prev_skb) {
@@ -2414,7 +2415,7 @@ static int ipa_lan_rx_pyld_hdlr(struct sk_buff *skb,
begin:
while (skb->len) {
sys->drop_packet = false;
- IPADBG("LEN_REM %d\n", skb->len);
+ IPADBG_LOW("LEN_REM %d\n", skb->len);
if (skb->len < IPA_PKT_STATUS_SIZE) {
WARN_ON(sys->prev_skb != NULL);
@@ -2425,7 +2426,7 @@ static int ipa_lan_rx_pyld_hdlr(struct sk_buff *skb,
}
status = (struct ipa_hw_pkt_status *)skb->data;
- IPADBG("STATUS opcode=%d src=%d dst=%d len=%d\n",
+ IPADBG_LOW("STATUS opcode=%d src=%d dst=%d len=%d\n",
status->status_opcode, status->endp_src_idx,
status->endp_dest_idx, status->pkt_len);
if (sys->status_stat) {
@@ -2463,7 +2464,7 @@ static int ipa_lan_rx_pyld_hdlr(struct sk_buff *skb,
if (status->status_mask & IPA_HW_PKT_STATUS_MASK_TAG_VALID) {
struct ipa_tag_completion *comp;
- IPADBG("TAG packet arrived\n");
+ IPADBG_LOW("TAG packet arrived\n");
if (status->tag_f_2 == IPA_COOKIE) {
skb_pull(skb, IPA_PKT_STATUS_SIZE);
if (skb->len < sizeof(comp)) {
@@ -2503,7 +2504,7 @@ static int ipa_lan_rx_pyld_hdlr(struct sk_buff *skb,
if (skb->len == IPA_PKT_STATUS_SIZE &&
!status->exception) {
WARN_ON(sys->prev_skb != NULL);
- IPADBG("Ins header in next buffer\n");
+ IPADBG_LOW("Ins header in next buffer\n");
sys->prev_skb = skb_copy(skb, GFP_KERNEL);
sys->len_partial = skb->len;
return rc;
@@ -2514,12 +2515,13 @@ static int ipa_lan_rx_pyld_hdlr(struct sk_buff *skb,
len = status->pkt_len + pad_len_byte +
IPA_SIZE_DL_CSUM_META_TRAILER;
- IPADBG("pad %d pkt_len %d len %d\n", pad_len_byte,
+ IPADBG_LOW("pad %d pkt_len %d len %d\n", pad_len_byte,
status->pkt_len, len);
if (status->exception ==
IPA_HW_PKT_STATUS_EXCEPTION_DEAGGR) {
- IPADBG("Dropping packet on DeAggr Exception\n");
+			IPADBG_LOW("Dropping packet on DeAggr Exception\n");
sys->drop_packet = true;
}
@@ -2528,7 +2530,7 @@ static int ipa_lan_rx_pyld_hdlr(struct sk_buff *skb,
skb2 = ipa_skb_copy_for_client(skb, skb2_len);
if (likely(skb2)) {
if (skb->len < len + IPA_PKT_STATUS_SIZE) {
- IPADBG("SPL skb len %d len %d\n",
+ IPADBG_LOW("SPL skb len %d len %d\n",
skb->len, len);
sys->prev_skb = skb2;
sys->len_rem = len - skb->len +
@@ -2538,7 +2540,7 @@ static int ipa_lan_rx_pyld_hdlr(struct sk_buff *skb,
} else {
skb_trim(skb2, status->pkt_len +
IPA_PKT_STATUS_SIZE);
- IPADBG("rx avail for %d\n",
+ IPADBG_LOW("rx avail for %d\n",
status->endp_dest_idx);
if (sys->drop_packet) {
dev_kfree_skb_any(skb2);
@@ -2582,11 +2584,12 @@ static int ipa_lan_rx_pyld_hdlr(struct sk_buff *skb,
}
/* TX comp */
ipa_wq_write_done_status(src_pipe);
- IPADBG("tx comp imp for %d\n", src_pipe);
+ IPADBG_LOW("tx comp imp for %d\n", src_pipe);
} else {
/* TX comp */
ipa_wq_write_done_status(status->endp_src_idx);
- IPADBG("tx comp exp for %d\n", status->endp_src_idx);
+			IPADBG_LOW("tx comp exp for %d\n", status->endp_src_idx);
skb_pull(skb, IPA_PKT_STATUS_SIZE);
IPA_STATS_INC_CNT(ipa_ctx->stats.stat_compl);
IPA_STATS_DEC_CNT(
@@ -2622,13 +2625,13 @@ static void wan_rx_handle_splt_pyld(struct sk_buff *skb,
{
struct sk_buff *skb2;
- IPADBG("rem %d skb %d\n", sys->len_rem, skb->len);
+ IPADBG_LOW("rem %d skb %d\n", sys->len_rem, skb->len);
if (sys->len_rem <= skb->len) {
if (sys->prev_skb) {
skb2 = join_prev_skb(sys->prev_skb, skb,
sys->len_rem);
if (likely(skb2)) {
- IPADBG(
+ IPADBG_LOW(
"removing Status element from skb and sending to WAN client");
skb_pull(skb2, IPA_PKT_STATUS_SIZE);
skb2->truesize = skb2->len +
@@ -2691,14 +2694,14 @@ static int ipa_wan_rx_pyld_hdlr(struct sk_buff *skb,
while (skb->len) {
- IPADBG("LEN_REM %d\n", skb->len);
+ IPADBG_LOW("LEN_REM %d\n", skb->len);
if (skb->len < IPA_PKT_STATUS_SIZE) {
IPAERR("status straddles buffer\n");
WARN_ON(1);
goto bail;
}
status = (struct ipa_hw_pkt_status *)skb->data;
- IPADBG("STATUS opcode=%d src=%d dst=%d len=%d\n",
+ IPADBG_LOW("STATUS opcode=%d src=%d dst=%d len=%d\n",
status->status_opcode, status->endp_src_idx,
status->endp_dest_idx, status->pkt_len);
@@ -2729,7 +2732,7 @@ static int ipa_wan_rx_pyld_hdlr(struct sk_buff *skb,
goto bail;
}
if (status->pkt_len == 0) {
- IPADBG("Skip aggr close status\n");
+ IPADBG_LOW("Skip aggr close status\n");
skb_pull(skb, IPA_PKT_STATUS_SIZE);
IPA_STATS_DEC_CNT(ipa_ctx->stats.rx_pkts);
IPA_STATS_INC_CNT(ipa_ctx->stats.wan_aggr_close);
@@ -2756,11 +2759,11 @@ static int ipa_wan_rx_pyld_hdlr(struct sk_buff *skb,
/*QMAP is BE: convert the pkt_len field from BE to LE*/
pkt_len_with_pad = ntohs((qmap_hdr>>16) & 0xffff);
- IPADBG("pkt_len with pad %d\n", pkt_len_with_pad);
+ IPADBG_LOW("pkt_len with pad %d\n", pkt_len_with_pad);
/*get the CHECKSUM_PROCESS bit*/
checksum_trailer_exists = status->status_mask &
IPA_HW_PKT_STATUS_MASK_CKSUM_PROCESS;
- IPADBG("checksum_trailer_exists %d\n",
+ IPADBG_LOW("checksum_trailer_exists %d\n",
checksum_trailer_exists);
frame_len = IPA_PKT_STATUS_SIZE +
@@ -2768,7 +2771,7 @@ static int ipa_wan_rx_pyld_hdlr(struct sk_buff *skb,
pkt_len_with_pad;
if (checksum_trailer_exists)
frame_len += IPA_DL_CHECKSUM_LENGTH;
- IPADBG("frame_len %d\n", frame_len);
+ IPADBG_LOW("frame_len %d\n", frame_len);
skb2 = skb_clone(skb, GFP_KERNEL);
if (likely(skb2)) {
@@ -2777,16 +2780,16 @@ static int ipa_wan_rx_pyld_hdlr(struct sk_buff *skb,
* payload split across 2 buff
*/
if (skb->len < frame_len) {
- IPADBG("SPL skb len %d len %d\n",
+ IPADBG_LOW("SPL skb len %d len %d\n",
skb->len, frame_len);
sys->prev_skb = skb2;
sys->len_rem = frame_len - skb->len;
skb_pull(skb, skb->len);
} else {
skb_trim(skb2, frame_len);
- IPADBG("rx avail for %d\n",
+ IPADBG_LOW("rx avail for %d\n",
status->endp_dest_idx);
- IPADBG(
+ IPADBG_LOW(
"removing Status element from skb and sending to WAN client");
skb_pull(skb2, IPA_PKT_STATUS_SIZE);
skb2->truesize = skb2->len +
@@ -2927,7 +2930,7 @@ void ipa_lan_rx_cb(void *priv, enum ipa_dp_evt_type evt, unsigned long data)
* ------------------------------------------
*/
*(u16 *)rx_skb->cb = ((metadata >> 16) & 0xFFFF);
- IPADBG("meta_data: 0x%x cb: 0x%x\n",
+ IPADBG_LOW("meta_data: 0x%x cb: 0x%x\n",
metadata, *(u32 *)rx_skb->cb);
ep->client_notify(ep->priv, IPA_RECEIVE, (unsigned long)(rx_skb));
@@ -3030,7 +3033,7 @@ static void ipa_wlan_wq_rx_common(struct ipa_sys_context *sys, u32 size)
static void ipa_dma_memcpy_notify(struct ipa_sys_context *sys,
struct sps_iovec *iovec)
{
- IPADBG("ENTER.\n");
+ IPADBG_LOW("ENTER.\n");
if (unlikely(list_empty(&sys->head_desc_list))) {
IPAERR("descriptor list is empty!\n");
WARN_ON(1);
@@ -3077,7 +3080,8 @@ void ipa_sps_irq_rx_no_aggr_notify(struct sps_event_notify *notify)
if (IPA_CLIENT_IS_APPS_CONS(rx_pkt->sys->ep->client))
atomic_set(&ipa_ctx->sps_pm.eot_activity, 1);
rx_pkt->len = notify->data.transfer.iovec.size;
- IPADBG("event %d notified sys=%p len=%u\n", notify->event_id,
+		IPADBG_LOW("event %d notified sys=%p len=%u\n", notify->event_id,
notify->user, rx_pkt->len);
queue_work(rx_pkt->sys->wq, &rx_pkt->work);
break;
@@ -3383,15 +3387,15 @@ static void ipa_tx_client_rx_notify_release(void *user1, int user2)
struct ipa_tx_data_desc *dd = (struct ipa_tx_data_desc *)user1;
int ep_idx = user2;
- IPADBG("Received data desc anchor:%p\n", dd);
+ IPADBG_LOW("Received data desc anchor:%p\n", dd);
atomic_inc(&ipa_ctx->ep[ep_idx].avail_fifo_desc);
ipa_ctx->ep[ep_idx].wstats.rx_pkts_status_rcvd++;
/* wlan host driver waits till tx complete before unload */
- IPADBG("ep=%d fifo_desc_free_count=%d\n",
+ IPADBG_LOW("ep=%d fifo_desc_free_count=%d\n",
ep_idx, atomic_read(&ipa_ctx->ep[ep_idx].avail_fifo_desc));
- IPADBG("calling client notify callback with priv:%p\n",
+ IPADBG_LOW("calling client notify callback with priv:%p\n",
ipa_ctx->ep[ep_idx].priv);
if (ipa_ctx->ep[ep_idx].client_notify) {
@@ -3455,7 +3459,7 @@ int ipa2_tx_dp_mul(enum ipa_client_type src,
return -EINVAL;
}
- IPADBG("Received data desc anchor:%p\n", data_desc);
+ IPADBG_LOW("Received data desc anchor:%p\n", data_desc);
spin_lock_bh(&ipa_ctx->wc_memb.ipa_tx_mul_spinlock);
@@ -3464,7 +3468,7 @@ int ipa2_tx_dp_mul(enum ipa_client_type src,
IPAERR("dest EP does not exist.\n");
goto fail_send;
}
- IPADBG("ep idx:%d\n", ep_idx);
+ IPADBG_LOW("ep idx:%d\n", ep_idx);
sys = ipa_ctx->ep[ep_idx].sys;
if (unlikely(ipa_ctx->ep[ep_idx].valid == 0)) {
@@ -3478,7 +3482,7 @@ int ipa2_tx_dp_mul(enum ipa_client_type src,
list_for_each_entry(entry, &data_desc->link, link) {
num_desc++;
}
- IPADBG("Number of Data Descriptors:%d", num_desc);
+ IPADBG_LOW("Number of Data Descriptors:%d", num_desc);
if (atomic_read(&sys->ep->avail_fifo_desc) < num_desc) {
IPAERR("Insufficient data descriptors available\n");
@@ -3488,7 +3492,7 @@ int ipa2_tx_dp_mul(enum ipa_client_type src,
/* Assign callback only for last data descriptor */
cnt = 0;
list_for_each_entry(entry, &data_desc->link, link) {
- IPADBG("Parsing data desc :%d\n", cnt);
+ IPADBG_LOW("Parsing data desc :%d\n", cnt);
cnt++;
((u8 *)entry->pyld_buffer)[IPA_WLAN_HDR_QMAP_ID_OFFSET] =
(u8)sys->ep->cfg.meta.qmap_id;
@@ -3497,18 +3501,18 @@ int ipa2_tx_dp_mul(enum ipa_client_type src,
desc.type = IPA_DATA_DESC_SKB;
desc.user1 = data_desc;
desc.user2 = ep_idx;
- IPADBG("priv:%p pyld_buf:0x%p pyld_len:%d\n",
+ IPADBG_LOW("priv:%p pyld_buf:0x%p pyld_len:%d\n",
entry->priv, desc.pyld, desc.len);
/* In case of last descriptor populate callback */
if (cnt == num_desc) {
- IPADBG("data desc:%p\n", data_desc);
+ IPADBG_LOW("data desc:%p\n", data_desc);
desc.callback = ipa_tx_client_rx_notify_release;
} else {
desc.callback = ipa_tx_client_rx_pkt_status;
}
- IPADBG("calling ipa_send_one()\n");
+ IPADBG_LOW("calling ipa_send_one()\n");
if (ipa_send_one(sys, &desc, true)) {
IPAERR("fail to send skb\n");
sys->ep->wstats.rx_pkt_leak += (cnt-1);
@@ -3520,7 +3524,7 @@ int ipa2_tx_dp_mul(enum ipa_client_type src,
atomic_dec(&sys->ep->avail_fifo_desc);
sys->ep->wstats.rx_pkts_rcvd++;
- IPADBG("ep=%d fifo desc=%d\n",
+ IPADBG_LOW("ep=%d fifo desc=%d\n",
ep_idx, atomic_read(&sys->ep->avail_fifo_desc));
}
diff --git a/drivers/platform/msm/ipa/ipa_v2/ipa_flt.c b/drivers/platform/msm/ipa/ipa_v2/ipa_flt.c
index 0a079f4..bc54023 100644
--- a/drivers/platform/msm/ipa/ipa_v2/ipa_flt.c
+++ b/drivers/platform/msm/ipa/ipa_v2/ipa_flt.c
@@ -209,7 +209,7 @@ static int ipa_generate_flt_hw_rule(enum ipa_ip_type ip,
}
}
- IPADBG("en_rule 0x%x, action=%d, rt_idx=%d, uc=%d, retain_hdr=%d\n",
+ IPADBG_LOW("en_rule 0x%x, action=%d, rt_idx=%d, uc=%d, retain_hdr=%d\n",
en_rule,
hdr->u.hdr.action,
hdr->u.hdr.rt_tbl_idx,
@@ -601,7 +601,7 @@ static void __ipa_reap_sys_flt_tbls(enum ipa_ip_type ip)
tbl = &ipa_ctx->glob_flt_tbl[ip];
if (tbl->prev_mem.phys_base) {
- IPADBG("reaping glob flt tbl (prev) ip=%d\n", ip);
+ IPADBG_LOW("reaping glob flt tbl (prev) ip=%d\n", ip);
dma_free_coherent(ipa_ctx->pdev, tbl->prev_mem.size,
tbl->prev_mem.base, tbl->prev_mem.phys_base);
memset(&tbl->prev_mem, 0, sizeof(tbl->prev_mem));
@@ -609,7 +609,7 @@ static void __ipa_reap_sys_flt_tbls(enum ipa_ip_type ip)
if (list_empty(&tbl->head_flt_rule_list)) {
if (tbl->curr_mem.phys_base) {
- IPADBG("reaping glob flt tbl (curr) ip=%d\n", ip);
+ IPADBG_LOW("reaping glob flt tbl (curr) ip=%d\n", ip);
dma_free_coherent(ipa_ctx->pdev, tbl->curr_mem.size,
tbl->curr_mem.base,
tbl->curr_mem.phys_base);
@@ -620,7 +620,8 @@ static void __ipa_reap_sys_flt_tbls(enum ipa_ip_type ip)
for (i = 0; i < ipa_ctx->ipa_num_pipes; i++) {
tbl = &ipa_ctx->flt_tbl[i][ip];
if (tbl->prev_mem.phys_base) {
- IPADBG("reaping flt tbl (prev) pipe=%d ip=%d\n", i, ip);
+			IPADBG_LOW("reaping flt tbl (prev) pipe=%d ip=%d\n", i, ip);
dma_free_coherent(ipa_ctx->pdev, tbl->prev_mem.size,
tbl->prev_mem.base,
tbl->prev_mem.phys_base);
@@ -629,7 +630,8 @@ static void __ipa_reap_sys_flt_tbls(enum ipa_ip_type ip)
if (list_empty(&tbl->head_flt_rule_list)) {
if (tbl->curr_mem.phys_base) {
- IPADBG("reaping flt tbl (curr) pipe=%d ip=%d\n",
+				IPADBG_LOW("reaping flt tbl (curr) pipe=%d ip=%d\n",
i, ip);
dma_free_coherent(ipa_ctx->pdev,
tbl->curr_mem.size,
@@ -899,7 +901,7 @@ int __ipa_commit_flt_v2(enum ipa_ip_type ip)
for (i = 0; i < 6; i++) {
if (ipa_ctx->skip_ep_cfg_shadow[i]) {
- IPADBG("skip %d\n", i);
+ IPADBG_LOW("skip %d\n", i);
continue;
}
@@ -908,7 +910,7 @@ int __ipa_commit_flt_v2(enum ipa_ip_type ip)
ipa2_get_ep_mapping(IPA_CLIENT_APPS_CMD_PROD) == i ||
(ipa2_get_ep_mapping(IPA_CLIENT_APPS_LAN_WAN_PROD) == i
&& ipa_ctx->modem_cfg_emb_pipe_flt)) {
- IPADBG("skip %d\n", i);
+ IPADBG_LOW("skip %d\n", i);
continue;
}
@@ -934,12 +936,12 @@ int __ipa_commit_flt_v2(enum ipa_ip_type ip)
for (i = 11; i < ipa_ctx->ipa_num_pipes; i++) {
if (ipa_ctx->skip_ep_cfg_shadow[i]) {
- IPADBG("skip %d\n", i);
+ IPADBG_LOW("skip %d\n", i);
continue;
}
if (ipa2_get_ep_mapping(IPA_CLIENT_APPS_LAN_WAN_PROD) == i &&
ipa_ctx->modem_cfg_emb_pipe_flt) {
- IPADBG("skip %d\n", i);
+ IPADBG_LOW("skip %d\n", i);
continue;
}
if (ip == IPA_IP_v4) {
@@ -1074,7 +1076,7 @@ static int __ipa_add_flt_rule(struct ipa_flt_tbl *tbl, enum ipa_ip_type ip,
}
*rule_hdl = id;
entry->id = id;
- IPADBG("add flt rule rule_cnt=%d\n", tbl->rule_cnt);
+ IPADBG_LOW("add flt rule rule_cnt=%d\n", tbl->rule_cnt);
return 0;
ipa_insert_failed:
@@ -1108,7 +1110,7 @@ static int __ipa_del_flt_rule(u32 rule_hdl)
entry->tbl->rule_cnt--;
if (entry->rt_tbl)
entry->rt_tbl->ref_cnt--;
- IPADBG("del flt rule rule_cnt=%d\n", entry->tbl->rule_cnt);
+ IPADBG_LOW("del flt rule rule_cnt=%d\n", entry->tbl->rule_cnt);
entry->cookie = 0;
kmem_cache_free(ipa_ctx->flt_rule_cache, entry);
@@ -1194,7 +1196,7 @@ static int __ipa_add_global_flt_rule(enum ipa_ip_type ip,
}
tbl = &ipa_ctx->glob_flt_tbl[ip];
- IPADBG("add global flt rule ip=%d\n", ip);
+ IPADBG_LOW("add global flt rule ip=%d\n", ip);
return __ipa_add_flt_rule(tbl, ip, rule, add_rear, rule_hdl);
}
@@ -1221,7 +1223,7 @@ static int __ipa_add_ep_flt_rule(enum ipa_ip_type ip, enum ipa_client_type ep,
IPADBG("ep not connected ep_idx=%d\n", ipa_ep_idx);
tbl = &ipa_ctx->flt_tbl[ipa_ep_idx][ip];
- IPADBG("add ep flt rule ip=%d ep=%d\n", ip, ep);
+ IPADBG_LOW("add ep flt rule ip=%d ep=%d\n", ip, ep);
return __ipa_add_flt_rule(tbl, ip, rule, add_rear, rule_hdl);
}
diff --git a/drivers/platform/msm/ipa/ipa_v2/ipa_hdr.c b/drivers/platform/msm/ipa/ipa_v2/ipa_hdr.c
index 2f72d88..5569979 100644
--- a/drivers/platform/msm/ipa/ipa_v2/ipa_hdr.c
+++ b/drivers/platform/msm/ipa/ipa_v2/ipa_hdr.c
@@ -43,7 +43,7 @@ static int ipa_generate_hdr_hw_tbl(struct ipa_mem_buffer *mem)
IPAERR("hdr tbl empty\n");
return -EPERM;
}
- IPADBG("tbl_sz=%d\n", ipa_ctx->hdr_tbl.end);
+ IPADBG_LOW("tbl_sz=%d\n", ipa_ctx->hdr_tbl.end);
mem->base = dma_alloc_coherent(ipa_ctx->pdev, mem->size,
&mem->phys_base, GFP_KERNEL);
@@ -57,7 +57,7 @@ static int ipa_generate_hdr_hw_tbl(struct ipa_mem_buffer *mem)
link) {
if (entry->is_hdr_proc_ctx)
continue;
- IPADBG("hdr of len %d ofst=%d\n", entry->hdr_len,
+ IPADBG_LOW("hdr of len %d ofst=%d\n", entry->hdr_len,
entry->offset_entry->offset);
memcpy(mem->base + entry->offset_entry->offset, entry->hdr,
entry->hdr_len);
@@ -74,7 +74,7 @@ static void ipa_hdr_proc_ctx_to_hw_format(struct ipa_mem_buffer *mem,
list_for_each_entry(entry,
&ipa_ctx->hdr_proc_ctx_tbl.head_proc_ctx_entry_list,
link) {
- IPADBG("processing type %d ofst=%d\n",
+ IPADBG_LOW("processing type %d ofst=%d\n",
entry->type, entry->offset_entry->offset);
if (entry->type == IPA_HDR_PROC_NONE) {
struct ipa_hdr_proc_ctx_add_hdr_seq *ctx;
@@ -88,7 +88,7 @@ static void ipa_hdr_proc_ctx_to_hw_format(struct ipa_mem_buffer *mem,
entry->hdr->phys_base :
hdr_base_addr +
entry->hdr->offset_entry->offset;
- IPADBG("header address 0x%x\n",
+ IPADBG_LOW("header address 0x%x\n",
ctx->hdr_add.hdr_addr);
ctx->end.type = IPA_PROC_CTX_TLV_TYPE_END;
ctx->end.length = 0;
@@ -105,7 +105,7 @@ static void ipa_hdr_proc_ctx_to_hw_format(struct ipa_mem_buffer *mem,
entry->hdr->phys_base :
hdr_base_addr +
entry->hdr->offset_entry->offset;
- IPADBG("header address 0x%x\n",
+ IPADBG_LOW("header address 0x%x\n",
ctx->hdr_add.hdr_addr);
ctx->cmd.type = IPA_PROC_CTX_TLV_TYPE_PROC_CMD;
ctx->cmd.length = 0;
@@ -117,7 +117,7 @@ static void ipa_hdr_proc_ctx_to_hw_format(struct ipa_mem_buffer *mem,
ctx->cmd.value = IPA_HDR_UCP_802_3_TO_ETHII;
else if (entry->type == IPA_HDR_PROC_802_3_TO_802_3)
ctx->cmd.value = IPA_HDR_UCP_802_3_TO_802_3;
- IPADBG("command id %d\n", ctx->cmd.value);
+ IPADBG_LOW("command id %d\n", ctx->cmd.value);
ctx->end.type = IPA_PROC_CTX_TLV_TYPE_END;
ctx->end.length = 0;
ctx->end.value = 0;
@@ -144,7 +144,7 @@ static int ipa_generate_hdr_proc_ctx_hw_tbl(u32 hdr_sys_addr,
/* make sure table is aligned */
mem->size += IPA_HDR_PROC_CTX_TABLE_ALIGNMENT_BYTE;
- IPADBG("tbl_sz=%d\n", ipa_ctx->hdr_proc_ctx_tbl.end);
+ IPADBG_LOW("tbl_sz=%d\n", ipa_ctx->hdr_proc_ctx_tbl.end);
mem->base = dma_alloc_coherent(ipa_ctx->pdev, mem->size,
&mem->phys_base, GFP_KERNEL);
@@ -554,7 +554,7 @@ static int __ipa_add_hdr_proc_ctx(struct ipa_hdr_proc_ctx_add *proc_ctx,
int needed_len;
int mem_size;
- IPADBG("processing type %d hdr_hdl %d\n",
+ IPADBG_LOW("processing type %d hdr_hdl %d\n",
proc_ctx->type, proc_ctx->hdr_hdl);
if (!HDR_PROC_TYPE_IS_VALID(proc_ctx->type)) {
@@ -633,7 +633,7 @@ static int __ipa_add_hdr_proc_ctx(struct ipa_hdr_proc_ctx_add *proc_ctx,
entry->offset_entry = offset;
list_add(&entry->link, &htbl->head_proc_ctx_entry_list);
htbl->proc_ctx_cnt++;
- IPADBG("add proc ctx of sz=%d cnt=%d ofst=%d\n", needed_len,
+ IPADBG_LOW("add proc ctx of sz=%d cnt=%d ofst=%d\n", needed_len,
htbl->proc_ctx_cnt, offset->offset);
id = ipa_id_alloc(entry);
@@ -774,12 +774,12 @@ static int __ipa_add_hdr(struct ipa_hdr_add *hdr)
list_add(&entry->link, &htbl->head_hdr_entry_list);
htbl->hdr_cnt++;
if (entry->is_hdr_proc_ctx)
- IPADBG("add hdr of sz=%d hdr_cnt=%d phys_base=%pa\n",
+ IPADBG_LOW("add hdr of sz=%d hdr_cnt=%d phys_base=%pa\n",
hdr->hdr_len,
htbl->hdr_cnt,
&entry->phys_base);
else
- IPADBG("add hdr of sz=%d hdr_cnt=%d ofst=%d\n",
+ IPADBG_LOW("add hdr of sz=%d hdr_cnt=%d ofst=%d\n",
hdr->hdr_len,
htbl->hdr_cnt,
entry->offset_entry->offset);
diff --git a/drivers/platform/msm/ipa/ipa_v2/ipa_i.h b/drivers/platform/msm/ipa/ipa_v2/ipa_i.h
index 67b0be6..0ed32f8 100644
--- a/drivers/platform/msm/ipa/ipa_v2/ipa_i.h
+++ b/drivers/platform/msm/ipa/ipa_v2/ipa_i.h
@@ -55,7 +55,7 @@
#define IPA_QMAP_HEADER_LENGTH (4)
#define IPA_DL_CHECKSUM_LENGTH (8)
#define IPA_NUM_DESC_PER_SW_TX (2)
-#define IPA_GENERIC_RX_POOL_SZ 1000
+#define IPA_GENERIC_RX_POOL_SZ 192
#define IPA_UC_FINISH_MAX 6
#define IPA_UC_WAIT_MIN_SLEEP 1000
#define IPA_UC_WAII_MAX_SLEEP 1200
@@ -65,11 +65,37 @@
#define IPA_MAX_NUM_REQ_CACHE 10
+#define IPA_IPC_LOG_PAGES 50
#define IPADBG(fmt, args...) \
- pr_debug(DRV_NAME " %s:%d " fmt, __func__, __LINE__, ## args)
+ do { \
+ pr_debug(DRV_NAME " %s:%d " fmt, __func__, __LINE__, ## args);\
+ if (ipa_ctx) { \
+ IPA_IPC_LOGGING(ipa_ctx->logbuf, \
+ DRV_NAME " %s:%d " fmt, ## args); \
+ IPA_IPC_LOGGING(ipa_ctx->logbuf_low, \
+ DRV_NAME " %s:%d " fmt, ## args); \
+ } \
+ } while (0)
+
+#define IPADBG_LOW(fmt, args...) \
+ do { \
+ pr_debug(DRV_NAME " %s:%d " fmt, __func__, __LINE__, ## args);\
+ if (ipa_ctx) \
+ IPA_IPC_LOGGING(ipa_ctx->logbuf_low, \
+ DRV_NAME " %s:%d " fmt, ## args); \
+ } while (0)
+
#define IPAERR(fmt, args...) \
- pr_err(DRV_NAME " %s:%d " fmt, __func__, __LINE__, ## args)
+ do { \
+ pr_err(DRV_NAME " %s:%d " fmt, __func__, __LINE__, ## args);\
+ if (ipa_ctx) { \
+ IPA_IPC_LOGGING(ipa_ctx->logbuf, \
+ DRV_NAME " %s:%d " fmt, ## args); \
+ IPA_IPC_LOGGING(ipa_ctx->logbuf_low, \
+ DRV_NAME " %s:%d " fmt, ## args); \
+ } \
+ } while (0)
#define IPAERR_RL(fmt, args...) \
do { \
@@ -1040,6 +1066,8 @@ struct ipa_cne_evt {
* @use_ipa_teth_bridge: use tethering bridge driver
* @ipa_bam_remote_mode: ipa bam is in remote mode
* @modem_cfg_emb_pipe_flt: modem configure embedded pipe filtering rules
+ * @logbuf: ipc log buffer for high priority messages
+ * @logbuf_low: ipc log buffer for low priority messages
* @ipa_wdi2: using wdi-2.0
* @ipa_bus_hdl: msm driver handle for the data path bus
* @ctrl: holds the core specific operations based on
@@ -1132,6 +1160,8 @@ struct ipa_context {
/* featurize if memory footprint becomes a concern */
struct ipa_stats stats;
void *smem_pipe_mem;
+ void *logbuf;
+ void *logbuf_low;
u32 ipa_bus_hdl;
struct ipa_controller *ctrl;
struct idr ipa_idr;
@@ -1175,6 +1205,7 @@ struct ipa_context {
struct ipa_cne_evt ipa_cne_evt_req_cache[IPA_MAX_NUM_REQ_CACHE];
int num_ipa_cne_evt_req;
struct mutex ipa_cne_evt_lock;
+ bool ipa_uc_monitor_holb;
};
/**
@@ -1230,6 +1261,7 @@ struct ipa_plat_drv_res {
bool tethered_flow_control;
u32 ipa_rx_polling_sleep_msec;
u32 ipa_polling_iteration;
+ bool ipa_uc_monitor_holb;
};
struct ipa_mem_partition {
diff --git a/drivers/platform/msm/ipa/ipa_v2/ipa_interrupts.c b/drivers/platform/msm/ipa/ipa_v2/ipa_interrupts.c
index 17f577a..c17dee9 100644
--- a/drivers/platform/msm/ipa/ipa_v2/ipa_interrupts.c
+++ b/drivers/platform/msm/ipa/ipa_v2/ipa_interrupts.c
@@ -103,11 +103,12 @@ static int handle_interrupt(int irq_num, bool isr_context)
switch (interrupt_info.interrupt) {
case IPA_TX_SUSPEND_IRQ:
+ IPADBG_LOW("processing TX_SUSPEND interrupt work-around\n");
suspend_data = ipa_read_reg(ipa_ctx->mmio,
IPA_IRQ_SUSPEND_INFO_EE_n_ADDR(ipa_ee));
if (!is_valid_ep(suspend_data))
return 0;
-
+ IPADBG_LOW("get interrupt %d\n", suspend_data);
suspend_interrupt_data =
kzalloc(sizeof(*suspend_interrupt_data), GFP_ATOMIC);
if (!suspend_interrupt_data) {
@@ -167,9 +168,11 @@ static void ipa_process_interrupts(bool isr_context)
u32 i = 0;
u32 en;
bool uc_irq;
-
en = ipa_read_reg(ipa_ctx->mmio, IPA_IRQ_EN_EE_n_ADDR(ipa_ee));
reg = ipa_read_reg(ipa_ctx->mmio, IPA_IRQ_STTS_EE_n_ADDR(ipa_ee));
+	IPADBG_LOW("ISR enter isr_ctx = %d EN reg = 0x%x STTS reg = 0x%x\n",
+		isr_context, en, reg);
while (en & reg) {
bmsk = 1;
for (i = 0; i < IPA_IRQ_NUM_MAX; i++) {
@@ -206,21 +209,22 @@ static void ipa_process_interrupts(bool isr_context)
reg = ipa_read_reg(ipa_ctx->mmio,
IPA_IRQ_STTS_EE_n_ADDR(ipa_ee));
}
+ IPADBG_LOW("Exit\n");
}
static void ipa_interrupt_defer(struct work_struct *work)
{
- IPADBG("processing interrupts in wq\n");
+ IPADBG_LOW("processing interrupts in wq\n");
IPA_ACTIVE_CLIENTS_INC_SIMPLE();
ipa_process_interrupts(false);
IPA_ACTIVE_CLIENTS_DEC_SIMPLE();
- IPADBG("Done\n");
+ IPADBG_LOW("Done\n");
}
static irqreturn_t ipa_isr(int irq, void *ctxt)
{
unsigned long flags;
-
+ IPADBG_LOW("Enter\n");
/* defer interrupt handling in case IPA is not clocked on */
if (ipa_active_clients_trylock(&flags) == 0) {
IPADBG("defer interrupt processing\n");
@@ -235,7 +239,7 @@ static irqreturn_t ipa_isr(int irq, void *ctxt)
}
ipa_process_interrupts(true);
-
+ IPADBG_LOW("Exit\n");
bail:
ipa_active_clients_trylock_unlock(&flags);
return IRQ_HANDLED;
@@ -260,7 +264,7 @@ int ipa2_add_interrupt_handler(enum ipa_irq_type interrupt,
u32 bmsk;
int irq_num;
- IPADBG("in ipa2_add_interrupt_handler\n");
+ IPADBG_LOW("in ipa2_add_interrupt_handler\n");
if (interrupt < IPA_BAD_SNOC_ACCESS_IRQ ||
interrupt >= IPA_IRQ_MAX) {
IPAERR("invalid interrupt number %d\n", interrupt);
@@ -284,7 +288,7 @@ int ipa2_add_interrupt_handler(enum ipa_irq_type interrupt,
bmsk = 1 << irq_num;
val |= bmsk;
ipa_write_reg(ipa_ctx->mmio, IPA_IRQ_EN_EE_n_ADDR(ipa_ee), val);
- IPADBG("wrote IPA_IRQ_EN_EE_n_ADDR register. reg = %d\n", val);
+ IPADBG_LOW("wrote IPA_IRQ_EN_EE_n_ADDR register. reg = %d\n", val);
return 0;
}
diff --git a/drivers/platform/msm/ipa/ipa_v2/ipa_intf.c b/drivers/platform/msm/ipa/ipa_v2/ipa_intf.c
index e6048d1..9e68843 100644
--- a/drivers/platform/msm/ipa/ipa_v2/ipa_intf.c
+++ b/drivers/platform/msm/ipa/ipa_v2/ipa_intf.c
@@ -558,9 +558,8 @@ ssize_t ipa_read(struct file *filp, char __user *buf, size_t count,
list_del(&msg->link);
}
- IPADBG("msg=%p\n", msg);
-
if (msg) {
+ IPADBG("msg=%pK\n", msg);
locked = 0;
mutex_unlock(&ipa_ctx->msg_lock);
if (copy_to_user(buf, &msg->meta,
@@ -588,6 +587,7 @@ ssize_t ipa_read(struct file *filp, char __user *buf, size_t count,
IPA_STATS_INC_CNT(
ipa_ctx->stats.msg_r[msg->meta.msg_type]);
kfree(msg);
+ msg = NULL;
}
ret = -EAGAIN;
diff --git a/drivers/platform/msm/ipa/ipa_v2/ipa_mhi.c b/drivers/platform/msm/ipa/ipa_v2/ipa_mhi.c
index e8f25c9..0ab4a68 100644
--- a/drivers/platform/msm/ipa/ipa_v2/ipa_mhi.c
+++ b/drivers/platform/msm/ipa/ipa_v2/ipa_mhi.c
@@ -20,16 +20,40 @@
#include "ipa_i.h"
#include "ipa_qmi_service.h"
-#define IPA_MHI_DRV_NAME
+#define IPA_MHI_DRV_NAME "ipa_mhi"
#define IPA_MHI_DBG(fmt, args...) \
- pr_debug(IPA_MHI_DRV_NAME " %s:%d " fmt, \
- __func__, __LINE__, ## args)
+ do { \
+ pr_debug(IPA_MHI_DRV_NAME " %s:%d " fmt, \
+ __func__, __LINE__, ## args); \
+ IPA_IPC_LOGGING(ipa_get_ipc_logbuf(), \
+ IPA_MHI_DRV_NAME " %s:%d " fmt, ## args); \
+ IPA_IPC_LOGGING(ipa_get_ipc_logbuf_low(), \
+ IPA_MHI_DRV_NAME " %s:%d " fmt, ## args); \
+ } while (0)
+
+#define IPA_MHI_DBG_LOW(fmt, args...) \
+ do { \
+ pr_debug(IPA_MHI_DRV_NAME " %s:%d " fmt, \
+ __func__, __LINE__, ## args); \
+ IPA_IPC_LOGGING(ipa_get_ipc_logbuf_low(), \
+ IPA_MHI_DRV_NAME " %s:%d " fmt, ## args); \
+ } while (0)
+
#define IPA_MHI_ERR(fmt, args...) \
- pr_err(IPA_MHI_DRV_NAME " %s:%d " fmt, __func__, __LINE__, ## args)
+ do { \
+ pr_err(IPA_MHI_DRV_NAME " %s:%d " fmt, \
+ __func__, __LINE__, ## args); \
+ IPA_IPC_LOGGING(ipa_get_ipc_logbuf(), \
+ IPA_MHI_DRV_NAME " %s:%d " fmt, ## args); \
+ IPA_IPC_LOGGING(ipa_get_ipc_logbuf_low(), \
+ IPA_MHI_DRV_NAME " %s:%d " fmt, ## args); \
+ } while (0)
+
#define IPA_MHI_FUNC_ENTRY() \
- IPA_MHI_DBG("ENTRY\n")
+ IPA_MHI_DBG_LOW("ENTRY\n")
#define IPA_MHI_FUNC_EXIT() \
- IPA_MHI_DBG("EXIT\n")
+ IPA_MHI_DBG_LOW("EXIT\n")
+
bool ipa2_mhi_sps_channel_empty(enum ipa_client_type client)
{
diff --git a/drivers/platform/msm/ipa/ipa_v2/ipa_qmi_service.c b/drivers/platform/msm/ipa/ipa_v2/ipa_qmi_service.c
index 825c538..f8a0ded 100644
--- a/drivers/platform/msm/ipa/ipa_v2/ipa_qmi_service.c
+++ b/drivers/platform/msm/ipa/ipa_v2/ipa_qmi_service.c
@@ -312,7 +312,7 @@ static void ipa_a5_svc_recv_msg(struct work_struct *work)
int rc;
do {
- IPAWANDBG("Notified about a Receive Event");
+ IPAWANDBG_LOW("Notified about a Receive Event");
rc = qmi_recv_msg(ipa_svc_handle);
} while (rc == 0);
if (rc != -ENOMSG)
@@ -386,7 +386,7 @@ static int ipa_check_qmi_response(int rc,
req_id, result, error);
return result;
}
- IPAWANDBG("Received %s successfully\n", resp_type);
+ IPAWANDBG_LOW("Received %s successfully\n", resp_type);
return 0;
}
@@ -766,7 +766,7 @@ static void ipa_q6_clnt_recv_msg(struct work_struct *work)
int rc;
do {
- IPAWANDBG("Notified about a Receive Event");
+ IPAWANDBG_LOW("Notified about a Receive Event");
rc = qmi_recv_msg(ipa_q6_clnt);
} while (rc == 0);
if (rc != -ENOMSG)
@@ -778,7 +778,7 @@ static void ipa_q6_clnt_notify(struct qmi_handle *handle,
{
switch (event) {
case QMI_RECV_MSG:
- IPAWANDBG("client qmi recv message called");
+ IPAWANDBG_LOW("client qmi recv message called");
if (!atomic_read(&workqueues_stopped))
queue_delayed_work(ipa_clnt_resp_workqueue,
&work_recv_msg_client, 0);
@@ -1149,7 +1149,7 @@ int ipa_qmi_get_data_stats(struct ipa_get_data_stats_req_msg_v01 *req,
resp_desc.msg_id = QMI_IPA_GET_DATA_STATS_RESP_V01;
resp_desc.ei_array = ipa_get_data_stats_resp_msg_data_v01_ei;
- IPAWANDBG("Sending QMI_IPA_GET_DATA_STATS_REQ_V01\n");
+ IPAWANDBG_LOW("Sending QMI_IPA_GET_DATA_STATS_REQ_V01\n");
if (unlikely(!ipa_q6_clnt))
return -ETIMEDOUT;
rc = qmi_send_req_wait(ipa_q6_clnt, &req_desc, req,
@@ -1158,7 +1158,7 @@ int ipa_qmi_get_data_stats(struct ipa_get_data_stats_req_msg_v01 *req,
sizeof(struct ipa_get_data_stats_resp_msg_v01),
QMI_SEND_STATS_REQ_TIMEOUT_MS);
- IPAWANDBG("QMI_IPA_GET_DATA_STATS_RESP_V01 received\n");
+ IPAWANDBG_LOW("QMI_IPA_GET_DATA_STATS_RESP_V01 received\n");
return ipa_check_qmi_response(rc,
QMI_IPA_GET_DATA_STATS_REQ_V01, resp->resp.result,
@@ -1179,7 +1179,7 @@ int ipa_qmi_get_network_stats(struct ipa_get_apn_data_stats_req_msg_v01 *req,
resp_desc.msg_id = QMI_IPA_GET_APN_DATA_STATS_RESP_V01;
resp_desc.ei_array = ipa_get_apn_data_stats_resp_msg_data_v01_ei;
- IPAWANDBG("Sending QMI_IPA_GET_APN_DATA_STATS_REQ_V01\n");
+ IPAWANDBG_LOW("Sending QMI_IPA_GET_APN_DATA_STATS_REQ_V01\n");
if (unlikely(!ipa_q6_clnt))
return -ETIMEDOUT;
rc = qmi_send_req_wait(ipa_q6_clnt, &req_desc, req,
@@ -1188,7 +1188,7 @@ int ipa_qmi_get_network_stats(struct ipa_get_apn_data_stats_req_msg_v01 *req,
sizeof(struct ipa_get_apn_data_stats_resp_msg_v01),
QMI_SEND_STATS_REQ_TIMEOUT_MS);
- IPAWANDBG("QMI_IPA_GET_APN_DATA_STATS_RESP_V01 received\n");
+ IPAWANDBG_LOW("QMI_IPA_GET_APN_DATA_STATS_RESP_V01 received\n");
return ipa_check_qmi_response(rc,
QMI_IPA_GET_APN_DATA_STATS_REQ_V01, resp->resp.result,
@@ -1212,7 +1212,7 @@ int ipa_qmi_set_data_quota(struct ipa_set_data_usage_quota_req_msg_v01 *req)
resp_desc.msg_id = QMI_IPA_SET_DATA_USAGE_QUOTA_RESP_V01;
resp_desc.ei_array = ipa_set_data_usage_quota_resp_msg_data_v01_ei;
- IPAWANDBG("Sending QMI_IPA_SET_DATA_USAGE_QUOTA_REQ_V01\n");
+ IPAWANDBG_LOW("Sending QMI_IPA_SET_DATA_USAGE_QUOTA_REQ_V01\n");
if (unlikely(!ipa_q6_clnt))
return -ETIMEDOUT;
rc = qmi_send_req_wait(ipa_q6_clnt, &req_desc, req,
@@ -1220,7 +1220,7 @@ int ipa_qmi_set_data_quota(struct ipa_set_data_usage_quota_req_msg_v01 *req)
&resp_desc, &resp, sizeof(resp),
QMI_SEND_STATS_REQ_TIMEOUT_MS);
- IPAWANDBG("QMI_IPA_SET_DATA_USAGE_QUOTA_RESP_V01 received\n");
+ IPAWANDBG_LOW("QMI_IPA_SET_DATA_USAGE_QUOTA_RESP_V01 received\n");
return ipa_check_qmi_response(rc,
QMI_IPA_SET_DATA_USAGE_QUOTA_REQ_V01, resp.resp.result,
@@ -1247,14 +1247,14 @@ int ipa_qmi_stop_data_qouta(void)
resp_desc.msg_id = QMI_IPA_STOP_DATA_USAGE_QUOTA_RESP_V01;
resp_desc.ei_array = ipa_stop_data_usage_quota_resp_msg_data_v01_ei;
- IPAWANDBG("Sending QMI_IPA_STOP_DATA_USAGE_QUOTA_REQ_V01\n");
+ IPAWANDBG_LOW("Sending QMI_IPA_STOP_DATA_USAGE_QUOTA_REQ_V01\n");
if (unlikely(!ipa_q6_clnt))
return -ETIMEDOUT;
rc = qmi_send_req_wait(ipa_q6_clnt, &req_desc, &req, sizeof(req),
&resp_desc, &resp, sizeof(resp),
QMI_SEND_STATS_REQ_TIMEOUT_MS);
- IPAWANDBG("QMI_IPA_STOP_DATA_USAGE_QUOTA_RESP_V01 received\n");
+ IPAWANDBG_LOW("QMI_IPA_STOP_DATA_USAGE_QUOTA_RESP_V01 received\n");
return ipa_check_qmi_response(rc,
QMI_IPA_STOP_DATA_USAGE_QUOTA_REQ_V01, resp.resp.result,
diff --git a/drivers/platform/msm/ipa/ipa_v2/ipa_qmi_service.h b/drivers/platform/msm/ipa/ipa_v2/ipa_qmi_service.h
index 4c504f1..1f5d619 100644
--- a/drivers/platform/msm/ipa/ipa_v2/ipa_qmi_service.h
+++ b/drivers/platform/msm/ipa/ipa_v2/ipa_qmi_service.h
@@ -31,9 +31,39 @@
#define SUBSYS_MODEM "modem"
#define IPAWANDBG(fmt, args...) \
- pr_debug(DEV_NAME " %s:%d " fmt, __func__, __LINE__, ## args)
+ do { \
+ pr_debug(DEV_NAME " %s:%d " fmt, __func__, __LINE__, ## args); \
+ IPA_IPC_LOGGING(ipa_get_ipc_logbuf(), \
+ DEV_NAME " %s:%d " fmt, ## args); \
+ IPA_IPC_LOGGING(ipa_get_ipc_logbuf_low(), \
+ DEV_NAME " %s:%d " fmt, ## args); \
+ } while (0)
+
+#define IPAWANDBG_LOW(fmt, args...) \
+ do { \
+ pr_debug(DEV_NAME " %s:%d " fmt, __func__, __LINE__, ## args); \
+ IPA_IPC_LOGGING(ipa_get_ipc_logbuf_low(), \
+ DEV_NAME " %s:%d " fmt, ## args); \
+ } while (0)
+
#define IPAWANERR(fmt, args...) \
- pr_err(DEV_NAME " %s:%d " fmt, __func__, __LINE__, ## args)
+ do { \
+ pr_err(DEV_NAME " %s:%d " fmt, __func__, __LINE__, ## args); \
+ IPA_IPC_LOGGING(ipa_get_ipc_logbuf(), \
+ DEV_NAME " %s:%d " fmt, ## args); \
+ IPA_IPC_LOGGING(ipa_get_ipc_logbuf_low(), \
+ DEV_NAME " %s:%d " fmt, ## args); \
+ } while (0)
+
+#define IPAWANINFO(fmt, args...) \
+ do { \
+ pr_info(DEV_NAME " %s:%d " fmt, __func__, __LINE__, ## args); \
+ IPA_IPC_LOGGING(ipa_get_ipc_logbuf(), \
+ DEV_NAME " %s:%d " fmt, ## args); \
+ IPA_IPC_LOGGING(ipa_get_ipc_logbuf_low(), \
+ DEV_NAME " %s:%d " fmt, ## args); \
+ } while (0)
+
extern struct ipa_qmi_context *ipa_qmi_ctx;
extern struct mutex ipa_qmi_lock;
diff --git a/drivers/platform/msm/ipa/ipa_v2/ipa_rt.c b/drivers/platform/msm/ipa/ipa_v2/ipa_rt.c
index 321cc89..c41ddf4 100644
--- a/drivers/platform/msm/ipa/ipa_v2/ipa_rt.c
+++ b/drivers/platform/msm/ipa/ipa_v2/ipa_rt.c
@@ -94,7 +94,7 @@ int __ipa_generate_rt_hw_rule_v2(enum ipa_ip_type ip,
return -EPERM;
}
- IPADBG("en_rule 0x%x\n", en_rule);
+ IPADBG_LOW("en_rule 0x%x\n", en_rule);
rule_hdr->u.hdr.en_rule = en_rule;
ipa_write_32(rule_hdr->u.word, (u8 *)rule_hdr);
@@ -497,7 +497,9 @@ static void __ipa_reap_sys_rt_tbls(enum ipa_ip_type ip)
set = &ipa_ctx->rt_tbl_set[ip];
list_for_each_entry(tbl, &set->head_rt_tbl_list, link) {
if (tbl->prev_mem.phys_base) {
- IPADBG("reaping rt tbl name=%s ip=%d\n", tbl->name, ip);
+ IPADBG_LOW("reaping rt tbl name=%s ip=%d\n",
+ tbl->name,
+ ip);
dma_free_coherent(ipa_ctx->pdev, tbl->prev_mem.size,
tbl->prev_mem.base,
tbl->prev_mem.phys_base);
@@ -510,8 +512,9 @@ static void __ipa_reap_sys_rt_tbls(enum ipa_ip_type ip)
list_del(&tbl->link);
WARN_ON(tbl->prev_mem.phys_base != 0);
if (tbl->curr_mem.phys_base) {
- IPADBG("reaping sys rt tbl name=%s ip=%d\n", tbl->name,
- ip);
+ IPADBG_LOW("reaping sys rt tbl name=%s ip=%d\n",
+ tbl->name,
+ ip);
dma_free_coherent(ipa_ctx->pdev, tbl->curr_mem.size,
tbl->curr_mem.base,
tbl->curr_mem.phys_base);
@@ -973,7 +976,7 @@ static int __ipa_del_rt_tbl(struct ipa_rt_tbl *entry)
list_del(&entry->link);
clear_bit(entry->idx, &ipa_ctx->rt_idx_bitmap[ip]);
entry->set->tbl_cnt--;
- IPADBG("del rt tbl_idx=%d tbl_cnt=%d\n", entry->idx,
+ IPADBG_LOW("del rt tbl_idx=%d tbl_cnt=%d\n", entry->idx,
entry->set->tbl_cnt);
kmem_cache_free(ipa_ctx->rt_tbl_cache, entry);
} else {
@@ -981,7 +984,7 @@ static int __ipa_del_rt_tbl(struct ipa_rt_tbl *entry)
&ipa_ctx->reap_rt_tbl_set[ip].head_rt_tbl_list);
clear_bit(entry->idx, &ipa_ctx->rt_idx_bitmap[ip]);
entry->set->tbl_cnt--;
- IPADBG("del sys rt tbl_idx=%d tbl_cnt=%d\n", entry->idx,
+ IPADBG_LOW("del sys rt tbl_idx=%d tbl_cnt=%d\n", entry->idx,
entry->set->tbl_cnt);
}
@@ -1062,7 +1065,8 @@ static int __ipa_add_rt_rule(enum ipa_ip_type ip, const char *name,
WARN_ON(1);
goto ipa_insert_failed;
}
- IPADBG("add rt rule tbl_idx=%d rule_cnt=%d\n", tbl->idx, tbl->rule_cnt);
+ IPADBG_LOW("add rt rule tbl_idx=%d rule_cnt=%d\n",
+ tbl->idx, tbl->rule_cnt);
*rule_hdl = id;
entry->id = id;
@@ -1147,7 +1151,7 @@ int __ipa_del_rt_rule(u32 rule_hdl)
__ipa_release_hdr_proc_ctx(entry->proc_ctx->id);
list_del(&entry->link);
entry->tbl->rule_cnt--;
- IPADBG("del rt rule tbl_idx=%d rule_cnt=%d\n", entry->tbl->idx,
+ IPADBG_LOW("del rt rule tbl_idx=%d rule_cnt=%d\n", entry->tbl->idx,
entry->tbl->rule_cnt);
if (entry->tbl->rule_cnt == 0 && entry->tbl->ref_cnt == 0) {
if (__ipa_del_rt_tbl(entry->tbl))
diff --git a/drivers/platform/msm/ipa/ipa_v2/ipa_uc.c b/drivers/platform/msm/ipa/ipa_v2/ipa_uc.c
index a7ecf1c..d9bcc9b 100644
--- a/drivers/platform/msm/ipa/ipa_v2/ipa_uc.c
+++ b/drivers/platform/msm/ipa/ipa_v2/ipa_uc.c
@@ -790,8 +790,12 @@ int ipa_uc_monitor_holb(enum ipa_client_type ipa_client, bool enable)
int ep_idx;
int ret;
- /* HOLB monitoring is applicable only to 2.6L. */
- if (ipa_ctx->ipa_hw_type != IPA_HW_v2_6L) {
+ /*
+ * HOLB monitoring is applicable only to 2.6L and must
+ * also be enabled via the dtsi node.
+ */
+ if (ipa_ctx->ipa_hw_type != IPA_HW_v2_6L ||
+ !ipa_ctx->ipa_uc_monitor_holb) {
IPADBG("Not applicable on this target\n");
return 0;
}
diff --git a/drivers/platform/msm/ipa/ipa_v2/ipa_utils.c b/drivers/platform/msm/ipa/ipa_v2/ipa_utils.c
index e611abd..980b1f3 100644
--- a/drivers/platform/msm/ipa/ipa_v2/ipa_utils.c
+++ b/drivers/platform/msm/ipa/ipa_v2/ipa_utils.c
@@ -1643,6 +1643,7 @@ int ipa_generate_hw_rule(enum ipa_ip_type ip,
* OFFSET_MEQ32_0 with mask of 0 and val of 0 and offset 0
*/
if (attrib->attrib_mask == 0) {
+ IPADBG_LOW("building default rule\n");
if (ipa_ofst_meq32[ofst_meq32] == -1) {
IPAERR("ran out of meq32 eq\n");
return -EPERM;
@@ -4886,13 +4887,17 @@ static int ipa2_stop_gsi_channel(u32 clnt_hdl)
static void *ipa2_get_ipc_logbuf(void)
{
- /* no support for IPC logging in IPAv2 */
+ if (ipa_ctx)
+ return ipa_ctx->logbuf;
+
return NULL;
}
static void *ipa2_get_ipc_logbuf_low(void)
{
- /* no support for IPC logging in IPAv2 */
+ if (ipa_ctx)
+ return ipa_ctx->logbuf_low;
+
return NULL;
}
diff --git a/drivers/platform/msm/ipa/ipa_v2/rmnet_ipa.c b/drivers/platform/msm/ipa/ipa_v2/rmnet_ipa.c
index 8ea1d99..92177f1 100644
--- a/drivers/platform/msm/ipa/ipa_v2/rmnet_ipa.c
+++ b/drivers/platform/msm/ipa/ipa_v2/rmnet_ipa.c
@@ -1079,7 +1079,7 @@ static int ipa_wwan_xmit(struct sk_buff *skb, struct net_device *dev)
struct ipa_tx_meta meta;
if (skb->protocol != htons(ETH_P_MAP)) {
- IPAWANDBG
+ IPAWANDBG_LOW
("SW filtering out none QMAP packet received from %s",
current->comm);
dev_kfree_skb_any(skb);
@@ -1104,7 +1104,8 @@ static int ipa_wwan_xmit(struct sk_buff *skb, struct net_device *dev)
if (atomic_read(&wwan_ptr->outstanding_pkts) >=
wwan_ptr->outstanding_high) {
if (!qmap_check) {
- IPAWANDBG("pending(%d)/(%d)- stop(%d), qmap_chk(%d)\n",
+ IPAWANDBG_LOW
+ ("pending(%d)/(%d)- stop(%d), qmap_chk(%d)\n",
atomic_read(&wwan_ptr->outstanding_pkts),
wwan_ptr->outstanding_high,
netif_queue_stopped(dev),
@@ -1198,7 +1199,8 @@ static void apps_ipa_tx_complete_notify(void *priv,
netif_queue_stopped(wwan_ptr->net) &&
atomic_read(&wwan_ptr->outstanding_pkts) <
(wwan_ptr->outstanding_low)) {
- IPAWANDBG("Outstanding low (%d) - wake up queue\n",
+ IPAWANDBG_LOW
+ ("Outstanding low (%d) - wake up queue\n",
wwan_ptr->outstanding_low);
netif_wake_queue(wwan_ptr->net);
}
@@ -1228,7 +1230,7 @@ static void apps_ipa_packet_receive_notify(void *priv,
int result;
unsigned int packet_len = skb->len;
- IPAWANDBG("Rx packet was received");
+ IPAWANDBG_LOW("Rx packet was received");
skb->dev = ipa_netdevs[0];
skb->protocol = htons(ETH_P_MAP);
@@ -1803,10 +1805,10 @@ static void q6_rm_notify_cb(void *user_data,
{
switch (event) {
case IPA_RM_RESOURCE_GRANTED:
- IPAWANDBG("%s: Q6_PROD GRANTED CB\n", __func__);
+ IPAWANDBG_LOW("%s: Q6_PROD GRANTED CB\n", __func__);
break;
case IPA_RM_RESOURCE_RELEASED:
- IPAWANDBG("%s: Q6_PROD RELEASED CB\n", __func__);
+ IPAWANDBG_LOW("%s: Q6_PROD RELEASED CB\n", __func__);
break;
default:
return;
@@ -1915,7 +1917,7 @@ static void wake_tx_queue(struct work_struct *work)
*/
static void ipa_rm_resource_granted(void *dev)
{
- IPAWANDBG("Resource Granted - starting queue\n");
+ IPAWANDBG_LOW("Resource Granted - starting queue\n");
schedule_work(&ipa_tx_wakequeue_work);
}
@@ -2291,7 +2293,7 @@ static int rmnet_ipa_ap_suspend(struct device *dev)
struct net_device *netdev = ipa_netdevs[0];
struct wwan_private *wwan_ptr = netdev_priv(netdev);
- IPAWANDBG("Enter...\n");
+ IPAWANDBG_LOW("Enter...\n");
/* Do not allow A7 to suspend in case there are oustanding packets */
if (atomic_read(&wwan_ptr->outstanding_pkts) != 0) {
IPAWANDBG("Outstanding packets, postponing AP suspend.\n");
@@ -2302,7 +2304,7 @@ static int rmnet_ipa_ap_suspend(struct device *dev)
netif_tx_lock_bh(netdev);
ipa_rm_release_resource(IPA_RM_RESOURCE_WWAN_0_PROD);
netif_tx_unlock_bh(netdev);
- IPAWANDBG("Exit\n");
+ IPAWANDBG_LOW("Exit\n");
return 0;
}
@@ -2321,9 +2323,9 @@ static int rmnet_ipa_ap_resume(struct device *dev)
{
struct net_device *netdev = ipa_netdevs[0];
- IPAWANDBG("Enter...\n");
+ IPAWANDBG_LOW("Enter...\n");
netif_wake_queue(netdev);
- IPAWANDBG("Exit\n");
+ IPAWANDBG_LOW("Exit\n");
return 0;
}
@@ -2425,6 +2427,7 @@ static int ssr_notifier_cb(struct notifier_block *this,
return NOTIFY_DONE;
}
}
+ IPAWANDBG_LOW("Exit\n");
return NOTIFY_DONE;
}
@@ -2868,7 +2871,7 @@ int rmnet_ipa_query_tethering_stats_modem(
IPAWANERR("reset the pipe stats\n");
} else {
/* print tethered-client enum */
- IPAWANDBG("Tethered-client enum(%d)\n", data->ipa_client);
+ IPAWANDBG_LOW("Tethered-client enum(%d)\n", data->ipa_client);
}
rc = ipa_qmi_get_data_stats(req, resp);
@@ -2886,10 +2889,11 @@ int rmnet_ipa_query_tethering_stats_modem(
if (resp->dl_dst_pipe_stats_list_valid) {
for (pipe_len = 0; pipe_len < resp->dl_dst_pipe_stats_list_len;
pipe_len++) {
- IPAWANDBG("Check entry(%d) dl_dst_pipe(%d)\n",
+ IPAWANDBG_LOW("Check entry(%d) dl_dst_pipe(%d)\n",
pipe_len, resp->dl_dst_pipe_stats_list
[pipe_len].pipe_index);
- IPAWANDBG("dl_p_v4(%lu)v6(%lu) dl_b_v4(%lu)v6(%lu)\n",
+ IPAWANDBG_LOW
+ ("dl_p_v4(%lu)v6(%lu) dl_b_v4(%lu)v6(%lu)\n",
(unsigned long int) resp->
dl_dst_pipe_stats_list[pipe_len].
num_ipv4_packets,
@@ -2925,7 +2929,7 @@ int rmnet_ipa_query_tethering_stats_modem(
}
}
}
- IPAWANDBG("v4_rx_p(%lu) v6_rx_p(%lu) v4_rx_b(%lu) v6_rx_b(%lu)\n",
+ IPAWANDBG_LOW("v4_rx_p(%lu) v6_rx_p(%lu) v4_rx_b(%lu) v6_rx_b(%lu)\n",
(unsigned long int) data->ipv4_rx_packets,
(unsigned long int) data->ipv6_rx_packets,
(unsigned long int) data->ipv4_rx_bytes,
@@ -2934,11 +2938,12 @@ int rmnet_ipa_query_tethering_stats_modem(
if (resp->ul_src_pipe_stats_list_valid) {
for (pipe_len = 0; pipe_len < resp->ul_src_pipe_stats_list_len;
pipe_len++) {
- IPAWANDBG("Check entry(%d) ul_dst_pipe(%d)\n",
+ IPAWANDBG_LOW("Check entry(%d) ul_dst_pipe(%d)\n",
pipe_len,
resp->ul_src_pipe_stats_list[pipe_len].
pipe_index);
- IPAWANDBG("ul_p_v4(%lu)v6(%lu)ul_b_v4(%lu)v6(%lu)\n",
+ IPAWANDBG_LOW
+ ("ul_p_v4(%lu)v6(%lu)ul_b_v4(%lu)v6(%lu)\n",
(unsigned long int) resp->
ul_src_pipe_stats_list[pipe_len].
num_ipv4_packets,
@@ -2974,7 +2979,7 @@ int rmnet_ipa_query_tethering_stats_modem(
}
}
}
- IPAWANDBG("tx_p_v4(%lu)v6(%lu)tx_b_v4(%lu) v6(%lu)\n",
+ IPAWANDBG_LOW("tx_p_v4(%lu)v6(%lu)tx_b_v4(%lu) v6(%lu)\n",
(unsigned long int) data->ipv4_tx_packets,
(unsigned long int) data->ipv6_tx_packets,
(unsigned long int) data->ipv4_tx_bytes,
diff --git a/drivers/platform/msm/ipa/ipa_v2/rmnet_ipa_fd_ioctl.c b/drivers/platform/msm/ipa/ipa_v2/rmnet_ipa_fd_ioctl.c
index 793529d..5ef3063 100644
--- a/drivers/platform/msm/ipa/ipa_v2/rmnet_ipa_fd_ioctl.c
+++ b/drivers/platform/msm/ipa/ipa_v2/rmnet_ipa_fd_ioctl.c
@@ -149,8 +149,7 @@ static long wan_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
break;
case WAN_IOC_POLL_TETHERING_STATS:
- IPAWANDBG("device %s got WAN_IOCTL_POLL_TETHERING_STATS :>>>\n",
- DRIVER_NAME);
+ IPAWANDBG_LOW("got WAN_IOCTL_POLL_TETHERING_STATS :>>>\n");
pyld_sz = sizeof(struct wan_ioctl_poll_tethering_stats);
param = kzalloc(pyld_sz, GFP_KERNEL);
if (!param) {
@@ -174,8 +173,7 @@ static long wan_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
break;
case WAN_IOC_SET_DATA_QUOTA:
- IPAWANDBG("device %s got WAN_IOCTL_SET_DATA_QUOTA :>>>\n",
- DRIVER_NAME);
+ IPAWANDBG_LOW("got WAN_IOCTL_SET_DATA_QUOTA :>>>\n");
pyld_sz = sizeof(struct wan_ioctl_set_data_quota);
param = kzalloc(pyld_sz, GFP_KERNEL);
if (!param) {
@@ -199,8 +197,7 @@ static long wan_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
break;
case WAN_IOC_SET_TETHER_CLIENT_PIPE:
- IPAWANDBG("device %s got WAN_IOC_SET_TETHER_CLIENT_PIPE :>>>\n",
- DRIVER_NAME);
+ IPAWANDBG_LOW("got WAN_IOC_SET_TETHER_CLIENT_PIPE :>>>\n");
pyld_sz = sizeof(struct wan_ioctl_set_tether_client_pipe);
param = kzalloc(pyld_sz, GFP_KERNEL);
if (!param) {
@@ -220,8 +217,7 @@ static long wan_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
break;
case WAN_IOC_QUERY_TETHER_STATS:
- IPAWANDBG("device %s got WAN_IOC_QUERY_TETHER_STATS :>>>\n",
- DRIVER_NAME);
+ IPAWANDBG_LOW("got WAN_IOC_QUERY_TETHER_STATS :>>>\n");
pyld_sz = sizeof(struct wan_ioctl_query_tether_stats);
param = kzalloc(pyld_sz, GFP_KERNEL);
if (!param) {
@@ -273,8 +269,7 @@ static long wan_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
break;
case WAN_IOC_RESET_TETHER_STATS:
- IPAWANDBG("device %s got WAN_IOC_RESET_TETHER_STATS :>>>\n",
- DRIVER_NAME);
+ IPAWANDBG_LOW("got WAN_IOC_RESET_TETHER_STATS :>>>\n");
pyld_sz = sizeof(struct wan_ioctl_reset_tether_stats);
param = kzalloc(pyld_sz, GFP_KERNEL);
if (!param) {
diff --git a/drivers/platform/msm/ipa/ipa_v3/ipa.c b/drivers/platform/msm/ipa/ipa_v3/ipa.c
index d3c2ca3..af4d448 100644
--- a/drivers/platform/msm/ipa/ipa_v3/ipa.c
+++ b/drivers/platform/msm/ipa/ipa_v3/ipa.c
@@ -2116,6 +2116,12 @@ static void ipa3_q6_avoid_holb(void)
if (ep_idx == -1)
continue;
+ /* from IPA 4.0 pipe suspend is not supported */
+ if (ipa3_ctx->ipa_hw_type < IPA_HW_v4_0)
+ ipahal_write_reg_n_fields(
+ IPA_ENDP_INIT_CTRL_n,
+ ep_idx, &ep_suspend);
+
/*
* ipa3_cfg_ep_holb is not used here because we are
* setting HOLB on Q6 pipes, and from APPS perspective
@@ -2128,12 +2134,6 @@ static void ipa3_q6_avoid_holb(void)
ipahal_write_reg_n_fields(
IPA_ENDP_INIT_HOL_BLOCK_EN_n,
ep_idx, &ep_holb);
-
- /* from IPA 4.0 pipe suspend is not supported */
- if (ipa3_ctx->ipa_hw_type < IPA_HW_v4_0)
- ipahal_write_reg_n_fields(
- IPA_ENDP_INIT_CTRL_n,
- ep_idx, &ep_suspend);
}
}
}
@@ -4030,6 +4030,11 @@ void ipa3_suspend_handler(enum ipa_irq_type interrupt,
atomic_set(
&ipa3_ctx->transport_pm.dec_clients,
1);
+ /*
+ * acquire wake lock as long as suspend
+ * vote is held
+ */
+ ipa3_inc_acquire_wakelock();
ipa3_process_irq_schedule_rel();
}
mutex_unlock(&ipa3_ctx->transport_pm.
@@ -4110,6 +4115,7 @@ static void ipa3_transport_release_resource(struct work_struct *work)
ipa3_process_irq_schedule_rel();
} else {
atomic_set(&ipa3_ctx->transport_pm.dec_clients, 0);
+ ipa3_dec_release_wakelock();
IPA_ACTIVE_CLIENTS_DEC_SPECIAL("TRANSPORT_RESOURCE");
}
}
@@ -4518,6 +4524,7 @@ static int ipa3_post_init(const struct ipa3_plat_drv_res *resource_p,
ipa3_register_panic_hdlr();
ipa3_ctx->q6_proxy_clk_vote_valid = true;
+ ipa3_ctx->q6_proxy_clk_vote_cnt++;
mutex_lock(&ipa3_ctx->lock);
ipa3_ctx->ipa_initialization_complete = true;
@@ -5138,6 +5145,7 @@ static int ipa3_pre_init(const struct ipa3_plat_drv_res *resource_p,
mutex_init(&ipa3_ctx->lock);
mutex_init(&ipa3_ctx->q6_proxy_clk_vote_mutex);
mutex_init(&ipa3_ctx->ipa_cne_evt_lock);
+ ipa3_ctx->q6_proxy_clk_vote_cnt = 0;
idr_init(&ipa3_ctx->ipa_idr);
spin_lock_init(&ipa3_ctx->idr_lock);
@@ -6403,5 +6411,39 @@ int ipa3_iommu_map(struct iommu_domain *domain,
return iommu_map(domain, iova, paddr, size, prot);
}
+/**
+ * ipa3_get_smmu_params() - Return the IPA3 SMMU-related params.
+ */
+int ipa3_get_smmu_params(struct ipa_smmu_in_params *in,
+ struct ipa_smmu_out_params *out)
+{
+ bool is_smmu_enable = false;
+
+ if (out == NULL || in == NULL) {
+ IPAERR("bad params for client SMMU out params\n");
+ return -EINVAL;
+ }
+
+ if (!ipa3_ctx) {
+ IPAERR("IPA not yet initialized\n");
+ return -EINVAL;
+ }
+
+ switch (in->smmu_client) {
+ case IPA_SMMU_WLAN_CLIENT:
+ is_smmu_enable = !(ipa3_ctx->s1_bypass_arr[IPA_SMMU_CB_UC] |
+ ipa3_ctx->s1_bypass_arr[IPA_SMMU_CB_WLAN]);
+ break;
+ default:
+ is_smmu_enable = false;
+ IPAERR("Trying to get illegal client's SMMU status\n");
+ return -EINVAL;
+ }
+
+ out->smmu_enable = is_smmu_enable;
+
+ return 0;
+}
+
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("IPA HW device driver");
diff --git a/drivers/platform/msm/ipa/ipa_v3/ipa_client.c b/drivers/platform/msm/ipa/ipa_v3/ipa_client.c
index 59fe07f..35b6dff 100644
--- a/drivers/platform/msm/ipa/ipa_v3/ipa_client.c
+++ b/drivers/platform/msm/ipa/ipa_v3/ipa_client.c
@@ -601,6 +601,22 @@ int ipa3_request_gsi_channel(struct ipa_request_gsi_channel_params *params,
ep->priv = params->priv;
ep->keep_ipa_awake = params->keep_ipa_awake;
+
+ /* Config QMB for USB_CONS ep */
+ if (!IPA_CLIENT_IS_PROD(ep->client)) {
+ IPADBG("Configuring QMB on USB CONS pipe\n");
+ if (ipa_ep_idx >= ipa3_ctx->ipa_num_pipes ||
+ ipa3_ctx->ep[ipa_ep_idx].valid == 0) {
+ IPAERR("bad parm.\n");
+ return -EINVAL;
+ }
+ result = ipa3_cfg_ep_cfg(ipa_ep_idx, ¶ms->ipa_ep_cfg.cfg);
+ if (result) {
+ IPAERR("fail to configure QMB.\n");
+ return result;
+ }
+ }
+
if (!ep->skip_ep_cfg) {
if (ipa3_cfg_ep(ipa_ep_idx, ¶ms->ipa_ep_cfg)) {
IPAERR("fail to configure EP.\n");
@@ -1163,6 +1179,48 @@ static int ipa3_stop_ul_chan_with_data_drain(u32 qmi_req_id,
return result;
}
+void ipa3_xdci_ep_delay_rm(u32 clnt_hdl)
+{
+ struct ipa3_ep_context *ep;
+ struct ipa_ep_cfg_ctrl ep_cfg_ctrl;
+ int result;
+
+ if (clnt_hdl >= ipa3_ctx->ipa_num_pipes ||
+ ipa3_ctx->ep[clnt_hdl].valid == 0) {
+ IPAERR("bad parm.\n");
+ return;
+ }
+
+ ep = &ipa3_ctx->ep[clnt_hdl];
+
+ if (ep->ep_delay_set) {
+
+ memset(&ep_cfg_ctrl, 0, sizeof(struct ipa_ep_cfg_ctrl));
+ ep_cfg_ctrl.ipa_ep_delay = false;
+
+ if (!ep->keep_ipa_awake)
+ IPA_ACTIVE_CLIENTS_INC_EP
+ (ipa3_get_client_mapping(clnt_hdl));
+
+ result = ipa3_cfg_ep_ctrl(clnt_hdl,
+ &ep_cfg_ctrl);
+
+ if (!ep->keep_ipa_awake)
+ IPA_ACTIVE_CLIENTS_DEC_EP
+ (ipa3_get_client_mapping(clnt_hdl));
+
+ if (result) {
+ IPAERR
+ ("client (ep: %d) failed to remove delay result=%d\n",
+ clnt_hdl, result);
+ } else {
+ IPADBG("client (ep: %d) delay removed\n",
+ clnt_hdl);
+ ep->ep_delay_set = false;
+ }
+ }
+}
+
int ipa3_xdci_disconnect(u32 clnt_hdl, bool should_force_clear, u32 qmi_req_id)
{
struct ipa3_ep_context *ep;
diff --git a/drivers/platform/msm/ipa/ipa_v3/ipa_flt.c b/drivers/platform/msm/ipa/ipa_v3/ipa_flt.c
index 6a89f49..0f3940f 100644
--- a/drivers/platform/msm/ipa/ipa_v3/ipa_flt.c
+++ b/drivers/platform/msm/ipa/ipa_v3/ipa_flt.c
@@ -62,7 +62,7 @@ static int ipa3_generate_flt_hw_rule(enum ipa_ip_type ip,
res = ipahal_flt_generate_hw_rule(&gen_params, &entry->hw_len, buf);
if (res)
- IPAERR("failed to generate flt h/w rule\n");
+ IPAERR_RL("failed to generate flt h/w rule\n");
return 0;
}
@@ -311,7 +311,7 @@ static int ipa_generate_flt_hw_tbl_img(enum ipa_ip_type ip,
}
if (ipahal_fltrt_allocate_hw_tbl_imgs(alloc_params)) {
- IPAERR("fail to allocate FLT HW TBL images. IP %d\n", ip);
+ IPAERR_RL("fail to allocate FLT HW TBL images. IP %d\n", ip);
rc = -ENOMEM;
goto allocate_failed;
}
@@ -319,14 +319,14 @@ static int ipa_generate_flt_hw_tbl_img(enum ipa_ip_type ip,
if (ipa_translate_flt_tbl_to_hw_fmt(ip, IPA_RULE_HASHABLE,
alloc_params->hash_bdy.base, alloc_params->hash_hdr.base,
hash_bdy_start_ofst)) {
- IPAERR("fail to translate hashable flt tbls to hw format\n");
+ IPAERR_RL("fail to translate hashable flt tbls to hw format\n");
rc = -EPERM;
goto translate_fail;
}
if (ipa_translate_flt_tbl_to_hw_fmt(ip, IPA_RULE_NON_HASHABLE,
alloc_params->nhash_bdy.base, alloc_params->nhash_hdr.base,
nhash_bdy_start_ofst)) {
- IPAERR("fail to translate non-hash flt tbls to hw format\n");
+ IPAERR_RL("fail to translate non-hash flt tbls to hw format\n");
rc = -EPERM;
goto translate_fail;
}
@@ -530,7 +530,7 @@ int __ipa_commit_flt_v3(enum ipa_ip_type ip)
}
if (ipa_generate_flt_hw_tbl_img(ip, &alloc_params)) {
- IPAERR("fail to generate FLT HW TBL image. IP %d\n", ip);
+ IPAERR_RL("fail to generate FLT HW TBL image. IP %d\n", ip);
rc = -EFAULT;
goto prep_failed;
}
@@ -745,25 +745,25 @@ static int __ipa_validate_flt_rule(const struct ipa_flt_rule *rule,
if (rule->action != IPA_PASS_TO_EXCEPTION) {
if (!rule->eq_attrib_type) {
if (!rule->rt_tbl_hdl) {
- IPAERR("invalid RT tbl\n");
+ IPAERR_RL("invalid RT tbl\n");
goto error;
}
*rt_tbl = ipa3_id_find(rule->rt_tbl_hdl);
if (*rt_tbl == NULL) {
- IPAERR("RT tbl not found\n");
+ IPAERR_RL("RT tbl not found\n");
goto error;
}
if ((*rt_tbl)->cookie != IPA_RT_TBL_COOKIE) {
- IPAERR("RT table cookie is invalid\n");
+ IPAERR_RL("RT table cookie is invalid\n");
goto error;
}
} else {
if (rule->rt_tbl_idx > ((ip == IPA_IP_v4) ?
IPA_MEM_PART(v4_modem_rt_index_hi) :
IPA_MEM_PART(v6_modem_rt_index_hi))) {
- IPAERR("invalid RT tbl\n");
+ IPAERR_RL("invalid RT tbl\n");
goto error;
}
}
@@ -778,12 +778,12 @@ static int __ipa_validate_flt_rule(const struct ipa_flt_rule *rule,
if (rule->pdn_idx) {
if (rule->action == IPA_PASS_TO_EXCEPTION ||
rule->action == IPA_PASS_TO_ROUTING) {
- IPAERR(
+ IPAERR_RL(
"PDN index should be 0 when action is not pass to NAT\n");
goto error;
} else {
if (rule->pdn_idx >= IPA_MAX_PDN_NUM) {
- IPAERR("PDN index %d is too large\n",
+ IPAERR_RL("PDN index %d is too large\n",
rule->pdn_idx);
goto error;
}
@@ -794,7 +794,7 @@ static int __ipa_validate_flt_rule(const struct ipa_flt_rule *rule,
if (rule->rule_id) {
if ((rule->rule_id < ipahal_get_rule_id_hi_bit()) ||
(rule->rule_id >= ((ipahal_get_rule_id_hi_bit()<<1)-1))) {
- IPAERR("invalid rule_id provided 0x%x\n"
+ IPAERR_RL("invalid rule_id provided 0x%x\n"
"rule_id with bit 0x%x are auto generated\n",
rule->rule_id, ipahal_get_rule_id_hi_bit());
goto error;
@@ -828,8 +828,8 @@ static int __ipa_create_flt_entry(struct ipa3_flt_entry **entry,
} else {
id = ipa3_alloc_rule_id(tbl->rule_ids);
if (id < 0) {
- IPAERR("failed to allocate rule id\n");
- WARN_ON(1);
+ IPAERR_RL("failed to allocate rule id\n");
+ WARN_ON_RATELIMIT_IPA(1);
goto rule_id_fail;
}
}
@@ -853,8 +853,8 @@ static int __ipa_finish_flt_rule_add(struct ipa3_flt_tbl *tbl,
entry->rt_tbl->ref_cnt++;
id = ipa3_id_alloc(entry);
if (id < 0) {
- IPAERR("failed to add to tree\n");
- WARN_ON(1);
+ IPAERR_RL("failed to add to tree\n");
+ WARN_ON_RATELIMIT_IPA(1);
goto ipa_insert_failed;
}
*rule_hdl = id;
@@ -1399,7 +1399,7 @@ int ipa3_reset_flt(enum ipa_ip_type ip)
list_for_each_entry_safe(entry, next, &tbl->head_flt_rule_list,
link) {
if (ipa3_id_find(entry->id) == NULL) {
- WARN_ON(1);
+ WARN_ON_RATELIMIT_IPA(1);
mutex_unlock(&ipa3_ctx->lock);
return -EFAULT;
}
diff --git a/drivers/platform/msm/ipa/ipa_v3/ipa_hdr.c b/drivers/platform/msm/ipa/ipa_v3/ipa_hdr.c
index a89bd78..a37df7e 100644
--- a/drivers/platform/msm/ipa/ipa_v3/ipa_hdr.c
+++ b/drivers/platform/msm/ipa/ipa_v3/ipa_hdr.c
@@ -343,7 +343,7 @@ static int __ipa_add_hdr_proc_ctx(struct ipa_hdr_proc_ctx_add *proc_ctx,
}
if (hdr_entry->cookie != IPA_HDR_COOKIE) {
IPAERR_RL("Invalid header cookie %u\n", hdr_entry->cookie);
- WARN_ON(1);
+ WARN_ON_RATELIMIT_IPA(1);
return -EINVAL;
}
IPADBG("Associated header is name=%s is_hdr_proc_ctx=%d\n",
@@ -373,7 +373,7 @@ static int __ipa_add_hdr_proc_ctx(struct ipa_hdr_proc_ctx_add *proc_ctx,
bin = IPA_HDR_PROC_CTX_BIN1;
} else {
IPAERR_RL("unexpected needed len %d\n", needed_len);
- WARN_ON(1);
+ WARN_ON_RATELIMIT_IPA(1);
goto bad_len;
}
@@ -418,8 +418,8 @@ static int __ipa_add_hdr_proc_ctx(struct ipa_hdr_proc_ctx_add *proc_ctx,
id = ipa3_id_alloc(entry);
if (id < 0) {
- IPAERR("failed to alloc id\n");
- WARN_ON(1);
+ IPAERR_RL("failed to alloc id\n");
+ WARN_ON_RATELIMIT_IPA(1);
goto ipa_insert_failed;
}
entry->id = id;
@@ -555,8 +555,8 @@ static int __ipa_add_hdr(struct ipa_hdr_add *hdr)
id = ipa3_id_alloc(entry);
if (id < 0) {
- IPAERR("failed to alloc id\n");
- WARN_ON(1);
+ IPAERR_RL("failed to alloc id\n");
+ WARN_ON_RATELIMIT_IPA(1);
goto ipa_insert_failed;
}
entry->id = id;
@@ -984,7 +984,7 @@ int ipa3_reset_hdr(void)
if (entry->is_hdr_proc_ctx) {
IPAERR("default header is proc ctx\n");
mutex_unlock(&ipa3_ctx->lock);
- WARN_ON(1);
+ WARN_ON_RATELIMIT_IPA(1);
return -EFAULT;
}
continue;
@@ -992,7 +992,7 @@ int ipa3_reset_hdr(void)
if (ipa3_id_find(entry->id) == NULL) {
mutex_unlock(&ipa3_ctx->lock);
- WARN_ON(1);
+ WARN_ON_RATELIMIT_IPA(1);
return -EFAULT;
}
if (entry->is_hdr_proc_ctx) {
@@ -1046,7 +1046,7 @@ int ipa3_reset_hdr(void)
if (ipa3_id_find(ctx_entry->id) == NULL) {
mutex_unlock(&ipa3_ctx->lock);
- WARN_ON(1);
+ WARN_ON_RATELIMIT_IPA(1);
return -EFAULT;
}
list_del(&ctx_entry->link);
diff --git a/drivers/platform/msm/ipa/ipa_v3/ipa_i.h b/drivers/platform/msm/ipa/ipa_v3/ipa_i.h
index ad925c5..adbd7b8 100644
--- a/drivers/platform/msm/ipa/ipa_v3/ipa_i.h
+++ b/drivers/platform/msm/ipa/ipa_v3/ipa_i.h
@@ -106,7 +106,7 @@
#define IPAERR_RL(fmt, args...) \
do { \
- pr_err_ratelimited(DRV_NAME " %s:%d " fmt, __func__,\
+ pr_err_ratelimited_ipa(DRV_NAME " %s:%d " fmt, __func__,\
__LINE__, ## args);\
if (ipa3_ctx) { \
IPA_IPC_LOGGING(ipa3_ctx->logbuf, \
@@ -1331,6 +1331,7 @@ struct ipa3_context {
u32 curr_ipa_clk_rate;
bool q6_proxy_clk_vote_valid;
struct mutex q6_proxy_clk_vote_mutex;
+ u32 q6_proxy_clk_vote_cnt;
u32 ipa_num_pipes;
dma_addr_t pkt_init_imm[IPA3_MAX_NUM_PIPES];
u32 pkt_init_imm_opcode;
@@ -1681,6 +1682,8 @@ int ipa3_xdci_connect(u32 clnt_hdl);
int ipa3_xdci_disconnect(u32 clnt_hdl, bool should_force_clear, u32 qmi_req_id);
+void ipa3_xdci_ep_delay_rm(u32 clnt_hdl);
+
int ipa3_xdci_suspend(u32 ul_clnt_hdl, u32 dl_clnt_hdl,
bool should_force_clear, u32 qmi_req_id, bool is_dpl);
@@ -2020,6 +2023,9 @@ bool ipa3_get_modem_cfg_emb_pipe_flt(void);
u8 ipa3_get_qmb_master_sel(enum ipa_client_type client);
+int ipa3_get_smmu_params(struct ipa_smmu_in_params *in,
+ struct ipa_smmu_out_params *out);
+
/* internal functions */
int ipa3_bind_api_controller(enum ipa_hw_type ipa_hw_type,
diff --git a/drivers/platform/msm/ipa/ipa_v3/ipa_intf.c b/drivers/platform/msm/ipa/ipa_v3/ipa_intf.c
index 4ada018..40ef59a 100644
--- a/drivers/platform/msm/ipa/ipa_v3/ipa_intf.c
+++ b/drivers/platform/msm/ipa/ipa_v3/ipa_intf.c
@@ -221,7 +221,7 @@ int ipa3_query_intf(struct ipa_ioc_query_intf *lookup)
int result = -EINVAL;
if (lookup == NULL) {
- IPAERR("invalid param lookup=%p\n", lookup);
+ IPAERR_RL("invalid param lookup=%p\n", lookup);
return result;
}
diff --git a/drivers/platform/msm/ipa/ipa_v3/ipa_nat.c b/drivers/platform/msm/ipa/ipa_v3/ipa_nat.c
index 9f27c4f..c2daa05 100644
--- a/drivers/platform/msm/ipa/ipa_v3/ipa_nat.c
+++ b/drivers/platform/msm/ipa/ipa_v3/ipa_nat.c
@@ -30,7 +30,6 @@
#define IPA_NAT_MAX_NUM_OF_INIT_CMD_DESC 3
#define IPA_IPV6CT_MAX_NUM_OF_INIT_CMD_DESC 2
#define IPA_MAX_NUM_OF_TABLE_DMA_CMD_DESC 4
-#define IPA_MAX_NUM_OF_DEL_TABLE_CMD_DESC 2
enum ipa_nat_ipv6ct_table_type {
IPA_NAT_BASE_TBL = 0,
@@ -657,6 +656,153 @@ static void ipa3_nat_create_init_cmd(
IPADBG("return\n");
}
+static void ipa3_nat_create_modify_pdn_cmd(
+ struct ipahal_imm_cmd_dma_shared_mem *mem_cmd, bool zero_mem)
+{
+ size_t pdn_entry_size, mem_size;
+
+ IPADBG("\n");
+
+ ipahal_nat_entry_size(IPAHAL_NAT_IPV4_PDN, &pdn_entry_size);
+ mem_size = pdn_entry_size * IPA_MAX_PDN_NUM;
+
+ if (zero_mem)
+ memset(ipa3_ctx->nat_mem.pdn_mem.base, 0, mem_size);
+
+ /* Copy the PDN config table to SRAM */
+ mem_cmd->is_read = false;
+ mem_cmd->skip_pipeline_clear = false;
+ mem_cmd->pipeline_clear_options = IPAHAL_HPS_CLEAR;
+ mem_cmd->size = mem_size;
+ mem_cmd->system_addr = ipa3_ctx->nat_mem.pdn_mem.phys_base;
+ mem_cmd->local_addr = ipa3_ctx->smem_restricted_bytes +
+ IPA_MEM_PART(pdn_config_ofst);
+
+ IPADBG("return\n");
+}
+
+static int ipa3_nat_send_init_cmd(struct ipahal_imm_cmd_ip_v4_nat_init *cmd,
+ bool zero_pdn_table)
+{
+ struct ipa3_desc desc[IPA_NAT_MAX_NUM_OF_INIT_CMD_DESC];
+ struct ipahal_imm_cmd_pyld *cmd_pyld[IPA_NAT_MAX_NUM_OF_INIT_CMD_DESC];
+ int i, num_cmd = 0, result;
+
+ IPADBG("\n");
+
+ /* NO-OP IC for ensuring that IPA pipeline is empty */
+ cmd_pyld[num_cmd] =
+ ipahal_construct_nop_imm_cmd(false, IPAHAL_HPS_CLEAR, false);
+ if (!cmd_pyld[num_cmd]) {
+ IPAERR("failed to construct NOP imm cmd\n");
+ return -ENOMEM;
+ }
+
+ ipa3_init_imm_cmd_desc(&desc[num_cmd], cmd_pyld[num_cmd]);
+ ++num_cmd;
+
+ cmd_pyld[num_cmd] = ipahal_construct_imm_cmd(
+ IPA_IMM_CMD_IP_V4_NAT_INIT, cmd, false);
+ if (!cmd_pyld[num_cmd]) {
+ IPAERR_RL("fail to construct NAT init imm cmd\n");
+ result = -EPERM;
+ goto destroy_imm_cmd;
+ }
+
+ ipa3_init_imm_cmd_desc(&desc[num_cmd], cmd_pyld[num_cmd]);
+ ++num_cmd;
+
+ if (ipa3_ctx->ipa_hw_type >= IPA_HW_v4_0) {
+ struct ipahal_imm_cmd_dma_shared_mem mem_cmd = { 0 };
+
+ if (num_cmd >= IPA_NAT_MAX_NUM_OF_INIT_CMD_DESC) {
+ IPAERR("number of commands is out of range\n");
+ result = -ENOBUFS;
+ goto destroy_imm_cmd;
+ }
+
+ /* Copy the PDN config table to SRAM */
+ ipa3_nat_create_modify_pdn_cmd(&mem_cmd, zero_pdn_table);
+ cmd_pyld[num_cmd] = ipahal_construct_imm_cmd(
+ IPA_IMM_CMD_DMA_SHARED_MEM, &mem_cmd, false);
+ if (!cmd_pyld[num_cmd]) {
+ IPAERR(
+ "fail to construct dma_shared_mem cmd for pdn table\n");
+ result = -ENOMEM;
+ goto destroy_imm_cmd;
+ }
+ ipa3_init_imm_cmd_desc(&desc[num_cmd], cmd_pyld[num_cmd]);
+ ++num_cmd;
+ IPADBG("added PDN table copy cmd\n");
+ }
+
+ result = ipa3_send_cmd(num_cmd, desc);
+ if (result) {
+ IPAERR("fail to send NAT init immediate command\n");
+ goto destroy_imm_cmd;
+ }
+
+ IPADBG("return\n");
+
+destroy_imm_cmd:
+ for (i = 0; i < num_cmd; ++i)
+ ipahal_destroy_imm_cmd(cmd_pyld[i]);
+
+ return result;
+}
+
+static int ipa3_ipv6ct_send_init_cmd(struct ipahal_imm_cmd_ip_v6_ct_init *cmd)
+{
+ struct ipa3_desc desc[IPA_IPV6CT_MAX_NUM_OF_INIT_CMD_DESC];
+ struct ipahal_imm_cmd_pyld
+ *cmd_pyld[IPA_IPV6CT_MAX_NUM_OF_INIT_CMD_DESC];
+ int i, num_cmd = 0, result;
+
+ IPADBG("\n");
+
+ /* NO-OP IC for ensuring that IPA pipeline is empty */
+ cmd_pyld[num_cmd] =
+ ipahal_construct_nop_imm_cmd(false, IPAHAL_HPS_CLEAR, false);
+ if (!cmd_pyld[num_cmd]) {
+ IPAERR("failed to construct NOP imm cmd\n");
+ return -ENOMEM;
+ }
+
+ ipa3_init_imm_cmd_desc(&desc[num_cmd], cmd_pyld[num_cmd]);
+ ++num_cmd;
+
+ if (num_cmd >= IPA_IPV6CT_MAX_NUM_OF_INIT_CMD_DESC) {
+ IPAERR("number of commands is out of range\n");
+ result = -ENOBUFS;
+ goto destroy_imm_cmd;
+ }
+
+ cmd_pyld[num_cmd] = ipahal_construct_imm_cmd(
+ IPA_IMM_CMD_IP_V6_CT_INIT, cmd, false);
+ if (!cmd_pyld[num_cmd]) {
+ IPAERR_RL("fail to construct IPv6CT init imm cmd\n");
+ result = -EPERM;
+ goto destroy_imm_cmd;
+ }
+
+ ipa3_init_imm_cmd_desc(&desc[num_cmd], cmd_pyld[num_cmd]);
+ ++num_cmd;
+
+ result = ipa3_send_cmd(num_cmd, desc);
+ if (result) {
+ IPAERR("Fail to send IPv6CT init immediate command\n");
+ goto destroy_imm_cmd;
+ }
+
+ IPADBG("return\n");
+
+destroy_imm_cmd:
+ for (i = 0; i < num_cmd; ++i)
+ ipahal_destroy_imm_cmd(cmd_pyld[i]);
+
+ return result;
+}
+
/* IOCTL function handlers */
/**
* ipa3_nat_init_cmd() - Post IP_V4_NAT_INIT command to IPA HW
@@ -668,11 +814,7 @@ static void ipa3_nat_create_init_cmd(
*/
int ipa3_nat_init_cmd(struct ipa_ioc_v4_nat_init *init)
{
- struct ipa3_desc desc[IPA_NAT_MAX_NUM_OF_INIT_CMD_DESC];
struct ipahal_imm_cmd_ip_v4_nat_init cmd;
- int i, num_cmd = 0;
- struct ipahal_imm_cmd_pyld *cmd_pyld[IPA_NAT_MAX_NUM_OF_INIT_CMD_DESC];
- struct ipahal_imm_cmd_dma_shared_mem mem_cmd = { 0 };
int result;
IPADBG("\n");
@@ -733,18 +875,6 @@ int ipa3_nat_init_cmd(struct ipa_ioc_v4_nat_init *init)
return result;
}
- /* NO-OP IC for ensuring that IPA pipeline is empty */
- cmd_pyld[num_cmd] =
- ipahal_construct_nop_imm_cmd(false, IPAHAL_HPS_CLEAR, false);
- if (!cmd_pyld[num_cmd]) {
- IPAERR("failed to construct NOP imm cmd\n");
- result = -ENOMEM;
- goto bail;
- }
-
- ipa3_init_imm_cmd_desc(&desc[num_cmd], cmd_pyld[num_cmd]);
- ++num_cmd;
-
if (ipa3_ctx->nat_mem.dev.is_sys_mem) {
IPADBG("using system memory for nat table\n");
/*
@@ -757,26 +887,10 @@ int ipa3_nat_init_cmd(struct ipa_ioc_v4_nat_init *init)
IPADBG("using shared(local) memory for nat table\n");
ipa3_nat_create_init_cmd(init, true, IPA_RAM_NAT_OFST, &cmd);
}
- cmd_pyld[num_cmd] = ipahal_construct_imm_cmd(
- IPA_IMM_CMD_IP_V4_NAT_INIT, &cmd, false);
- if (!cmd_pyld[num_cmd]) {
- IPAERR_RL("Fail to construct ip_v4_nat_init imm cmd\n");
- result = -EPERM;
- goto destroy_imm_cmd;
- }
-
- ipa3_init_imm_cmd_desc(&desc[num_cmd], cmd_pyld[num_cmd]);
- ++num_cmd;
if (ipa3_ctx->ipa_hw_type >= IPA_HW_v4_0) {
struct ipa_pdn_entry *pdn_entries;
- if (num_cmd >= IPA_NAT_MAX_NUM_OF_INIT_CMD_DESC) {
- IPAERR("number of commands is out of range\n");
- result = -ENOBUFS;
- goto destroy_imm_cmd;
- }
-
/* store ip in pdn entries cache array */
pdn_entries = ipa3_ctx->nat_mem.pdn_mem.base;
pdn_entries[0].public_ip = init->ip_addr;
@@ -785,33 +899,13 @@ int ipa3_nat_init_cmd(struct ipa_ioc_v4_nat_init *init)
pdn_entries[0].resrvd = 0;
IPADBG("Public ip address:0x%x\n", init->ip_addr);
-
- /* Copy the PDN config table to SRAM */
- mem_cmd.is_read = false;
- mem_cmd.skip_pipeline_clear = false;
- mem_cmd.pipeline_clear_options = IPAHAL_HPS_CLEAR;
- mem_cmd.size = sizeof(struct ipa_pdn_entry) * IPA_MAX_PDN_NUM;
- mem_cmd.system_addr = ipa3_ctx->nat_mem.pdn_mem.phys_base;
- mem_cmd.local_addr = ipa3_ctx->smem_restricted_bytes +
- IPA_MEM_PART(pdn_config_ofst);
- cmd_pyld[num_cmd] = ipahal_construct_imm_cmd(
- IPA_IMM_CMD_DMA_SHARED_MEM, &mem_cmd, false);
- if (!cmd_pyld[num_cmd]) {
- IPAERR(
- "fail construct dma_shared_mem cmd: for pdn table");
- result = -ENOMEM;
- goto destroy_imm_cmd;
- }
- ipa3_init_imm_cmd_desc(&desc[num_cmd], cmd_pyld[num_cmd]);
- ++num_cmd;
- IPADBG("added PDN table copy cmd\n");
}
- IPADBG("posting v4 init command\n");
- if (ipa3_send_cmd(num_cmd, desc)) {
- IPAERR("Fail to send immediate command\n");
- result = -EPERM;
- goto destroy_imm_cmd;
+ IPADBG("posting NAT init command\n");
+ result = ipa3_nat_send_init_cmd(&cmd, false);
+ if (result) {
+ IPAERR("Fail to send NAT init immediate command\n");
+ return result;
}
ipa3_nat_ipv6ct_init_device_structure(
@@ -837,11 +931,7 @@ int ipa3_nat_init_cmd(struct ipa_ioc_v4_nat_init *init)
ipa3_ctx->nat_mem.dev.is_hw_init = true;
IPADBG("return\n");
-destroy_imm_cmd:
- for (i = 0; i < num_cmd; ++i)
- ipahal_destroy_imm_cmd(cmd_pyld[i]);
-bail:
- return result;
+ return 0;
}
/**
@@ -854,11 +944,7 @@ int ipa3_nat_init_cmd(struct ipa_ioc_v4_nat_init *init)
*/
int ipa3_ipv6ct_init_cmd(struct ipa_ioc_ipv6ct_init *init)
{
- struct ipa3_desc desc[IPA_IPV6CT_MAX_NUM_OF_INIT_CMD_DESC];
struct ipahal_imm_cmd_ip_v6_ct_init cmd;
- int i, num_cmd = 0;
- struct ipahal_imm_cmd_pyld
- *cmd_pyld[IPA_IPV6CT_MAX_NUM_OF_INIT_CMD_DESC];
int result;
IPADBG("\n");
@@ -904,18 +990,6 @@ int ipa3_ipv6ct_init_cmd(struct ipa_ioc_ipv6ct_init *init)
return result;
}
- /* NO-OP IC for ensuring that IPA pipeline is empty */
- cmd_pyld[num_cmd] =
- ipahal_construct_nop_imm_cmd(false, IPAHAL_HPS_CLEAR, false);
- if (!cmd_pyld[num_cmd]) {
- IPAERR("failed to construct NOP imm cmd\n");
- result = -ENOMEM;
- goto bail;
- }
-
- ipa3_init_imm_cmd_desc(&desc[num_cmd], cmd_pyld[num_cmd]);
- ++num_cmd;
-
if (ipa3_ctx->ipv6ct_mem.dev.is_sys_mem) {
IPADBG("using system memory for nat table\n");
/*
@@ -946,28 +1020,11 @@ int ipa3_ipv6ct_init_cmd(struct ipa_ioc_ipv6ct_init *init)
ipa3_ctx->ipv6ct_mem.dev.name);
}
- if (num_cmd >= IPA_IPV6CT_MAX_NUM_OF_INIT_CMD_DESC) {
- IPAERR("number of commands is out of range\n");
- result = -ENOBUFS;
- goto destroy_imm_cmd;
- }
-
- cmd_pyld[num_cmd] = ipahal_construct_imm_cmd(
- IPA_IMM_CMD_IP_V6_CT_INIT, &cmd, false);
- if (!cmd_pyld[num_cmd]) {
- IPAERR_RL("Fail to construct ip_v6_ct_init imm cmd\n");
- result = -EPERM;
- goto destroy_imm_cmd;
- }
-
- ipa3_init_imm_cmd_desc(&desc[num_cmd], cmd_pyld[num_cmd]);
- ++num_cmd;
-
IPADBG("posting ip_v6_ct_init imm command\n");
- if (ipa3_send_cmd(num_cmd, desc)) {
- IPAERR("Fail to send immediate command\n");
- result = -EPERM;
- goto destroy_imm_cmd;
+ result = ipa3_ipv6ct_send_init_cmd(&cmd);
+ if (result) {
+ IPAERR("Fail to send IPv6CT init immediate command\n");
+ return result;
}
ipa3_nat_ipv6ct_init_device_structure(
@@ -979,11 +1036,7 @@ int ipa3_ipv6ct_init_cmd(struct ipa_ioc_ipv6ct_init *init)
ipa3_ctx->ipv6ct_mem.dev.is_hw_init = true;
IPADBG("return\n");
-destroy_imm_cmd:
- for (i = 0; i < num_cmd; ++i)
- ipahal_destroy_imm_cmd(cmd_pyld[i]);
-bail:
- return result;
+ return 0;
}
/**
@@ -1036,13 +1089,7 @@ int ipa3_nat_mdfy_pdn(struct ipa_ioc_nat_pdn_entry *mdfy_pdn)
mdfy_pdn->dst_metadata, mdfy_pdn->src_metadata);
/* Copy the PDN config table to SRAM */
- mem_cmd.is_read = false;
- mem_cmd.skip_pipeline_clear = false;
- mem_cmd.pipeline_clear_options = IPAHAL_HPS_CLEAR;
- mem_cmd.size = sizeof(struct ipa_pdn_entry) * IPA_MAX_PDN_NUM;
- mem_cmd.system_addr = nat_ctx->pdn_mem.phys_base;
- mem_cmd.local_addr = ipa3_ctx->smem_restricted_bytes +
- IPA_MEM_PART(pdn_config_ofst);
+ ipa3_nat_create_modify_pdn_cmd(&mem_cmd, false);
cmd_pyld = ipahal_construct_imm_cmd(
IPA_IMM_CMD_DMA_SHARED_MEM, &mem_cmd, false);
if (!cmd_pyld) {
@@ -1054,10 +1101,9 @@ int ipa3_nat_mdfy_pdn(struct ipa_ioc_nat_pdn_entry *mdfy_pdn)
ipa3_init_imm_cmd_desc(&desc, cmd_pyld);
IPADBG("sending PDN table copy cmd\n");
- if (ipa3_send_cmd(1, &desc)) {
- IPAERR("Fail to send immediate command\n");
- result = -EPERM;
- }
+ result = ipa3_send_cmd(1, &desc);
+ if (result)
+ IPAERR("Fail to send PDN table copy immediate command\n");
ipahal_destroy_imm_cmd(cmd_pyld);
@@ -1233,7 +1279,7 @@ int ipa3_table_dma_cmd(struct ipa_ioc_nat_dma_cmd *dma)
++num_cmd;
}
result = ipa3_send_cmd(num_cmd, desc);
- if (result == -EPERM)
+ if (result)
IPAERR("Fail to send table_dma immediate command\n");
IPADBG("return\n");
@@ -1292,104 +1338,26 @@ static void ipa3_nat_ipv6ct_free_mem(struct ipa3_nat_ipv6ct_common_mem *dev)
IPADBG("return\n");
}
-/**
- * ipa3_nat_free_mem() - free the NAT memory
- *
- * Called by NAT client driver to free the NAT memory
- */
-static int ipa3_nat_free_mem(void)
-{
- struct ipahal_imm_cmd_dma_shared_mem mem_cmd = { 0 };
- struct ipa3_desc desc;
- struct ipahal_imm_cmd_pyld *cmd_pyld;
- int result = 0;
-
- IPADBG("\n");
- mutex_lock(&ipa3_ctx->nat_mem.dev.lock);
-
- ipa3_nat_ipv6ct_free_mem(&ipa3_ctx->nat_mem.dev);
-
- if (ipa3_ctx->ipa_hw_type >= IPA_HW_v4_0) {
- size_t pdn_entry_size;
-
- ipahal_nat_entry_size(IPAHAL_NAT_IPV4_PDN, &pdn_entry_size);
-
- /* zero the PDN table and copy the PDN config table to SRAM */
- IPADBG("zeroing the PDN config table\n");
- memset(ipa3_ctx->nat_mem.pdn_mem.base, 0,
- pdn_entry_size * IPA_MAX_PDN_NUM);
- mem_cmd.is_read = false;
- mem_cmd.skip_pipeline_clear = false;
- mem_cmd.pipeline_clear_options = IPAHAL_HPS_CLEAR;
- mem_cmd.size = pdn_entry_size * IPA_MAX_PDN_NUM;
- mem_cmd.system_addr = ipa3_ctx->nat_mem.pdn_mem.phys_base;
- mem_cmd.local_addr = ipa3_ctx->smem_restricted_bytes +
- IPA_MEM_PART(pdn_config_ofst);
- cmd_pyld = ipahal_construct_imm_cmd(
- IPA_IMM_CMD_DMA_SHARED_MEM, &mem_cmd, false);
- if (!cmd_pyld) {
- IPAERR(
- "fail construct dma_shared_mem cmd: for pdn table");
- result = -ENOMEM;
- goto lbl_free_pdn;
- }
- ipa3_init_imm_cmd_desc(&desc, cmd_pyld);
-
- IPADBG("sending PDN table copy cmd\n");
- if (ipa3_send_cmd(1, &desc)) {
- IPAERR("Fail to send immediate command\n");
- result = -ENOMEM;
- }
-
- ipahal_destroy_imm_cmd(cmd_pyld);
-lbl_free_pdn:
- IPADBG("freeing the PDN memory\n");
- dma_free_coherent(ipa3_ctx->pdev,
- ipa3_ctx->nat_mem.pdn_mem.size,
- ipa3_ctx->nat_mem.pdn_mem.base,
- ipa3_ctx->nat_mem.pdn_mem.phys_base);
- }
-
- mutex_unlock(&ipa3_ctx->nat_mem.dev.lock);
- IPADBG("return\n");
- return result;
-}
-
-static int ipa3_nat_ipv6ct_send_del_table_cmd(
+static int ipa3_nat_ipv6ct_create_del_table_cmd(
uint8_t tbl_index,
u32 base_addr,
- bool mem_type_shared,
struct ipa3_nat_ipv6ct_common_mem *dev,
- enum ipahal_imm_cmd_name cmd_name,
- struct ipahal_imm_cmd_nat_ipv6ct_init_common *table_init_cmd,
- const void *cmd)
+ struct ipahal_imm_cmd_nat_ipv6ct_init_common *table_init_cmd)
{
- struct ipahal_imm_cmd_pyld *nop_cmd_pyld = NULL, *cmd_pyld = NULL;
- struct ipa3_desc desc[IPA_MAX_NUM_OF_DEL_TABLE_CMD_DESC];
- int result = 0;
+ bool mem_type_shared = true;
IPADBG("\n");
- if (!dev->is_hw_init) {
- IPADBG("attempt to delete %s before HW int\n", dev->name);
- /* Deletion of partly initialized table is not an error */
- return 0;
- }
-
if (tbl_index >= 1) {
IPAERR_RL("Unsupported table index %d\n", tbl_index);
return -EPERM;
}
- /* NO-OP IC for ensuring that IPA pipeline is empty */
- nop_cmd_pyld =
- ipahal_construct_nop_imm_cmd(false, IPAHAL_HPS_CLEAR, false);
- if (!nop_cmd_pyld) {
- IPAERR("Failed to construct NOP imm cmd\n");
- result = -ENOMEM;
- goto bail;
+ if (dev->tmp_mem != NULL) {
+ IPADBG("using temp memory during %s del\n", dev->name);
+ mem_type_shared = false;
+ base_addr = dev->tmp_mem->dma_handle;
}
- ipa3_init_imm_cmd_desc(&desc[0], nop_cmd_pyld);
table_init_cmd->table_index = tbl_index;
table_init_cmd->base_table_addr = base_addr;
@@ -1398,29 +1366,73 @@ static int ipa3_nat_ipv6ct_send_del_table_cmd(
table_init_cmd->expansion_table_addr_shared = mem_type_shared;
table_init_cmd->size_base_table = 0;
table_init_cmd->size_expansion_table = 0;
- cmd_pyld = ipahal_construct_imm_cmd(cmd_name, &cmd, false);
- if (!cmd_pyld) {
- IPAERR_RL("Fail to construct table init imm cmd for %s\n",
- dev->name);
- result = -EPERM;
- goto destroy_nop_imm_cmd;
- }
- ipa3_init_imm_cmd_desc(&desc[1], cmd_pyld);
+ IPADBG("return\n");
- if (ipa3_send_cmd(IPA_MAX_NUM_OF_DEL_TABLE_CMD_DESC, desc)) {
- IPAERR("Fail to send immediate command\n");
- result = -EPERM;
- goto destroy_imm_cmd;
+ return 0;
+}
+
+static int ipa3_nat_send_del_table_cmd(uint8_t tbl_index)
+{
+ struct ipahal_imm_cmd_ip_v4_nat_init cmd;
+ int result;
+
+ IPADBG("\n");
+
+ result = ipa3_nat_ipv6ct_create_del_table_cmd(
+ tbl_index,
+ IPA_NAT_PHYS_MEM_OFFSET,
+ &ipa3_ctx->nat_mem.dev,
+ &cmd.table_init);
+ if (result) {
+ IPAERR(
+ "Fail to create immediate command to delete NAT table\n");
+ return result;
+ }
+
+ cmd.index_table_addr = cmd.table_init.base_table_addr;
+ cmd.index_table_addr_shared = cmd.table_init.base_table_addr_shared;
+ cmd.index_table_expansion_addr = cmd.index_table_addr;
+ cmd.index_table_expansion_addr_shared = cmd.index_table_addr_shared;
+ cmd.public_addr_info = 0;
+
+ IPADBG("posting NAT delete command\n");
+ result = ipa3_nat_send_init_cmd(&cmd, true);
+ if (result) {
+ IPAERR("Fail to send NAT delete immediate command\n");
+ return result;
}
IPADBG("return\n");
+ return 0;
+}
-destroy_imm_cmd:
- ipahal_destroy_imm_cmd(cmd_pyld);
-destroy_nop_imm_cmd:
- ipahal_destroy_imm_cmd(nop_cmd_pyld);
-bail:
- return result;
+static int ipa3_ipv6ct_send_del_table_cmd(uint8_t tbl_index)
+{
+ struct ipahal_imm_cmd_ip_v6_ct_init cmd;
+ int result;
+
+ IPADBG("\n");
+
+ result = ipa3_nat_ipv6ct_create_del_table_cmd(
+ tbl_index,
+ IPA_IPV6CT_PHYS_MEM_OFFSET,
+ &ipa3_ctx->ipv6ct_mem.dev,
+ &cmd.table_init);
+ if (result) {
+ IPAERR(
+ "Fail to create immediate command to delete IPv6CT table\n");
+ return result;
+ }
+
+ IPADBG("posting IPv6CT delete command\n");
+ result = ipa3_ipv6ct_send_init_cmd(&cmd);
+ if (result) {
+ IPAERR("Fail to send IPv6CT delete immediate command\n");
+ return result;
+ }
+
+ IPADBG("return\n");
+ return 0;
}
/**
@@ -1456,10 +1468,7 @@ int ipa3_nat_del_cmd(struct ipa_ioc_v4_nat_del *del)
*/
int ipa3_del_nat_table(struct ipa_ioc_nat_ipv6ct_table_del *del)
{
- struct ipahal_imm_cmd_ip_v4_nat_init cmd;
- bool mem_type_shared = true;
- u32 base_addr = IPA_NAT_PHYS_MEM_OFFSET;
- int result;
+ int result = 0;
IPADBG("\n");
if (!ipa3_ctx->nat_mem.dev.is_dev_init) {
@@ -1467,36 +1476,35 @@ int ipa3_del_nat_table(struct ipa_ioc_nat_ipv6ct_table_del *del)
return -EPERM;
}
- if (ipa3_ctx->nat_mem.dev.tmp_mem != NULL) {
- IPADBG("using temp memory during nat del\n");
- mem_type_shared = false;
- base_addr = ipa3_ctx->nat_mem.dev.tmp_mem->dma_handle;
+ mutex_lock(&ipa3_ctx->nat_mem.dev.lock);
+
+ if (ipa3_ctx->nat_mem.dev.is_hw_init) {
+ result = ipa3_nat_send_del_table_cmd(del->table_index);
+ if (result) {
+ IPAERR(
+ "Fail to send immediate command to delete NAT table\n");
+ goto bail;
+ }
}
- cmd.index_table_addr = base_addr;
- cmd.index_table_addr_shared = mem_type_shared;
- cmd.index_table_expansion_addr = base_addr;
- cmd.index_table_expansion_addr_shared = mem_type_shared;
- cmd.public_addr_info = 0;
-
- result = ipa3_nat_ipv6ct_send_del_table_cmd(
- del->table_index,
- base_addr,
- mem_type_shared,
- &ipa3_ctx->nat_mem.dev,
- IPA_IMM_CMD_IP_V4_NAT_INIT,
- &cmd.table_init,
- &cmd);
- if (result)
- goto bail;
-
ipa3_ctx->nat_mem.public_ip_addr = 0;
ipa3_ctx->nat_mem.index_table_addr = 0;
ipa3_ctx->nat_mem.index_table_expansion_addr = 0;
- result = ipa3_nat_free_mem();
+ if (ipa3_ctx->ipa_hw_type >= IPA_HW_v4_0 &&
+ ipa3_ctx->nat_mem.dev.is_mem_allocated) {
+ IPADBG("freeing the PDN memory\n");
+ dma_free_coherent(ipa3_ctx->pdev,
+ ipa3_ctx->nat_mem.pdn_mem.size,
+ ipa3_ctx->nat_mem.pdn_mem.base,
+ ipa3_ctx->nat_mem.pdn_mem.phys_base);
+ }
+
+ ipa3_nat_ipv6ct_free_mem(&ipa3_ctx->nat_mem.dev);
IPADBG("return\n");
+
bail:
+ mutex_unlock(&ipa3_ctx->nat_mem.dev.lock);
return result;
}
@@ -1510,10 +1518,7 @@ int ipa3_del_nat_table(struct ipa_ioc_nat_ipv6ct_table_del *del)
*/
int ipa3_del_ipv6ct_table(struct ipa_ioc_nat_ipv6ct_table_del *del)
{
- struct ipahal_imm_cmd_ip_v6_ct_init cmd;
- bool mem_type_shared = true;
- u32 base_addr = IPA_IPV6CT_PHYS_MEM_OFFSET;
- int result;
+ int result = 0;
IPADBG("\n");
@@ -1527,28 +1532,22 @@ int ipa3_del_ipv6ct_table(struct ipa_ioc_nat_ipv6ct_table_del *del)
return -EPERM;
}
- if (ipa3_ctx->ipv6ct_mem.dev.tmp_mem != NULL) {
- IPADBG("using temp memory during IPv6CT del\n");
- mem_type_shared = false;
- base_addr = ipa3_ctx->ipv6ct_mem.dev.tmp_mem->dma_handle;
+ mutex_lock(&ipa3_ctx->ipv6ct_mem.dev.lock);
+
+ if (ipa3_ctx->ipv6ct_mem.dev.is_hw_init) {
+ result = ipa3_ipv6ct_send_del_table_cmd(del->table_index);
+ if (result) {
+ IPAERR(
+ "Fail to send immediate command to delete IPv6CT table\n");
+ goto bail;
+ }
}
- result = ipa3_nat_ipv6ct_send_del_table_cmd(
- del->table_index,
- base_addr,
- mem_type_shared,
- &ipa3_ctx->ipv6ct_mem.dev,
- IPA_IMM_CMD_IP_V6_CT_INIT,
- &cmd.table_init,
- &cmd);
- if (result)
- goto bail;
-
- mutex_lock(&ipa3_ctx->ipv6ct_mem.dev.lock);
ipa3_nat_ipv6ct_free_mem(&ipa3_ctx->ipv6ct_mem.dev);
- mutex_unlock(&ipa3_ctx->ipv6ct_mem.dev.lock);
IPADBG("return\n");
+
bail:
+ mutex_unlock(&ipa3_ctx->ipv6ct_mem.dev.lock);
return result;
}
diff --git a/drivers/platform/msm/ipa/ipa_v3/ipa_pm.h b/drivers/platform/msm/ipa/ipa_v3/ipa_pm.h
index ca022b5..b2f203a 100644
--- a/drivers/platform/msm/ipa/ipa_v3/ipa_pm.h
+++ b/drivers/platform/msm/ipa/ipa_v3/ipa_pm.h
@@ -18,7 +18,7 @@
/* internal to ipa */
#define IPA_PM_MAX_CLIENTS 10
#define IPA_PM_MAX_EX_CL 64
-#define IPA_PM_THRESHOLD_MAX 2
+#define IPA_PM_THRESHOLD_MAX 5
#define IPA_PM_EXCEPTION_MAX 2
#define IPA_PM_DEFERRED_TIMEOUT 100
#define IPA_PM_STATE_MAX 7
diff --git a/drivers/platform/msm/ipa/ipa_v3/ipa_rt.c b/drivers/platform/msm/ipa/ipa_v3/ipa_rt.c
index 2536bf4..fc76604 100644
--- a/drivers/platform/msm/ipa/ipa_v3/ipa_rt.c
+++ b/drivers/platform/msm/ipa/ipa_v3/ipa_rt.c
@@ -59,15 +59,15 @@ static int ipa_generate_rt_hw_rule(enum ipa_ip_type ip,
gen_params.ipt = ip;
gen_params.dst_pipe_idx = ipa3_get_ep_mapping(entry->rule.dst);
if (gen_params.dst_pipe_idx == -1) {
- IPAERR("Wrong destination pipe specified in RT rule\n");
- WARN_ON(1);
+ IPAERR_RL("Wrong destination pipe specified in RT rule\n");
+ WARN_ON_RATELIMIT_IPA(1);
return -EPERM;
}
if (!IPA_CLIENT_IS_CONS(entry->rule.dst)) {
- IPAERR("No RT rule on IPA_client_producer pipe.\n");
- IPAERR("pipe_idx: %d dst_pipe: %d\n",
+ IPAERR_RL("No RT rule on IPA_client_producer pipe.\n");
+ IPAERR_RL("pipe_idx: %d dst_pipe: %d\n",
gen_params.dst_pipe_idx, entry->rule.dst);
- WARN_ON(1);
+ WARN_ON_RATELIMIT_IPA(1);
return -EPERM;
}
@@ -145,14 +145,14 @@ static int ipa_translate_rt_tbl_to_hw_fmt(enum ipa_ip_type ip,
tbl_mem.size = tbl->sz[rlt] -
ipahal_get_hw_tbl_hdr_width();
if (ipahal_fltrt_allocate_hw_sys_tbl(&tbl_mem)) {
- IPAERR("fail to alloc sys tbl of size %d\n",
+ IPAERR_RL("fail to alloc sys tbl of size %d\n",
tbl_mem.size);
goto err;
}
if (ipahal_fltrt_write_addr_to_hdr(tbl_mem.phys_base,
hdr, tbl->idx - apps_start_idx, true)) {
- IPAERR("fail to wrt sys tbl addr to hdr\n");
+ IPAERR_RL("fail to wrt sys tbl addr to hdr\n");
goto hdr_update_fail;
}
@@ -166,7 +166,7 @@ static int ipa_translate_rt_tbl_to_hw_fmt(enum ipa_ip_type ip,
res = ipa_generate_rt_hw_rule(ip, entry,
tbl_mem_buf);
if (res) {
- IPAERR("failed to gen HW RT rule\n");
+ IPAERR_RL("failed to gen HW RT rule\n");
goto hdr_update_fail;
}
tbl_mem_buf += entry->hw_len;
@@ -183,7 +183,7 @@ static int ipa_translate_rt_tbl_to_hw_fmt(enum ipa_ip_type ip,
/* update the hdr at the right index */
if (ipahal_fltrt_write_addr_to_hdr(offset, hdr,
tbl->idx - apps_start_idx, true)) {
- IPAERR("fail to wrt lcl tbl ofst to hdr\n");
+ IPAERR_RL("fail to wrt lcl tbl ofst to hdr\n");
goto hdr_update_fail;
}
@@ -195,7 +195,7 @@ static int ipa_translate_rt_tbl_to_hw_fmt(enum ipa_ip_type ip,
res = ipa_generate_rt_hw_rule(ip, entry,
body_i);
if (res) {
- IPAERR("failed to gen HW RT rule\n");
+ IPAERR_RL("failed to gen HW RT rule\n");
goto err;
}
body_i += entry->hw_len;
@@ -296,7 +296,7 @@ static int ipa_prep_rt_tbl_for_cmt(enum ipa_ip_type ip,
res = ipa_generate_rt_hw_rule(ip, entry, NULL);
if (res) {
- IPAERR("failed to calculate HW RT rule size\n");
+ IPAERR_RL("failed to calculate HW RT rule size\n");
return -EPERM;
}
@@ -311,8 +311,8 @@ static int ipa_prep_rt_tbl_for_cmt(enum ipa_ip_type ip,
if ((tbl->sz[IPA_RULE_HASHABLE] +
tbl->sz[IPA_RULE_NON_HASHABLE]) == 0) {
- WARN_ON(1);
- IPAERR("rt tbl %s is with zero total size\n", tbl->name);
+ WARN_ON_RATELIMIT_IPA(1);
+ IPAERR_RL("rt tbl %s has zero total size\n", tbl->name);
}
hdr_width = ipahal_get_hw_tbl_hdr_width();
@@ -819,8 +819,8 @@ static struct ipa3_rt_tbl *__ipa_add_rt_tbl(enum ipa_ip_type ip,
id = ipa3_id_alloc(entry);
if (id < 0) {
- IPAERR("failed to add to tree\n");
- WARN_ON(1);
+ IPAERR_RL("failed to add to tree\n");
+ WARN_ON_RATELIMIT_IPA(1);
goto ipa_insert_failed;
}
entry->id = id;
@@ -859,7 +859,7 @@ static int __ipa_del_rt_tbl(struct ipa3_rt_tbl *entry)
else if (entry->set == &ipa3_ctx->rt_tbl_set[IPA_IP_v6])
ip = IPA_IP_v6;
else {
- WARN_ON(1);
+ WARN_ON_RATELIMIT_IPA(1);
return -EPERM;
}
@@ -892,14 +892,14 @@ static int __ipa_rt_validate_hndls(const struct ipa_rt_rule *rule,
struct ipa3_hdr_proc_ctx_entry **proc_ctx)
{
if (rule->hdr_hdl && rule->hdr_proc_ctx_hdl) {
- IPAERR("rule contains both hdr_hdl and hdr_proc_ctx_hdl\n");
+ IPAERR_RL("rule contains both hdr_hdl and hdr_proc_ctx_hdl\n");
return -EPERM;
}
if (rule->hdr_hdl) {
*hdr = ipa3_id_find(rule->hdr_hdl);
if ((*hdr == NULL) || ((*hdr)->cookie != IPA_HDR_COOKIE)) {
- IPAERR("rt rule does not point to valid hdr\n");
+ IPAERR_RL("rt rule does not point to valid hdr\n");
return -EPERM;
}
} else if (rule->hdr_proc_ctx_hdl) {
@@ -907,7 +907,7 @@ static int __ipa_rt_validate_hndls(const struct ipa_rt_rule *rule,
if ((*proc_ctx == NULL) ||
((*proc_ctx)->cookie != IPA_PROC_HDR_COOKIE)) {
- IPAERR("rt rule does not point to valid proc ctx\n");
+ IPAERR_RL("rt rule does not point to valid proc ctx\n");
return -EPERM;
}
}
@@ -940,8 +940,8 @@ static int __ipa_create_rt_entry(struct ipa3_rt_entry **entry,
} else {
id = ipa3_alloc_rule_id(tbl->rule_ids);
if (id < 0) {
- IPAERR("failed to allocate rule id\n");
- WARN_ON(1);
+ IPAERR_RL("failed to allocate rule id\n");
+ WARN_ON_RATELIMIT_IPA(1);
goto alloc_rule_id_fail;
}
}
@@ -967,8 +967,8 @@ static int __ipa_finish_rt_rule_add(struct ipa3_rt_entry *entry, u32 *rule_hdl,
entry->proc_ctx->ref_cnt++;
id = ipa3_id_alloc(entry);
if (id < 0) {
- IPAERR("failed to add to tree\n");
- WARN_ON(1);
+ IPAERR_RL("failed to add to tree\n");
+ WARN_ON_RATELIMIT_IPA(1);
goto ipa_insert_failed;
}
IPADBG("add rt rule tbl_idx=%d rule_cnt=%d rule_id=%d\n",
@@ -1433,7 +1433,7 @@ int ipa3_reset_rt(enum ipa_ip_type ip)
list_for_each_entry_safe(rule, rule_next,
&tbl->head_rt_rule_list, link) {
if (ipa3_id_find(rule->id) == NULL) {
- WARN_ON(1);
+ WARN_ON_RATELIMIT_IPA(1);
mutex_unlock(&ipa3_ctx->lock);
return -EFAULT;
}
@@ -1461,7 +1461,7 @@ int ipa3_reset_rt(enum ipa_ip_type ip)
}
if (ipa3_id_find(tbl->id) == NULL) {
- WARN_ON(1);
+ WARN_ON_RATELIMIT_IPA(1);
mutex_unlock(&ipa3_ctx->lock);
return -EFAULT;
}
@@ -1520,7 +1520,7 @@ int ipa3_get_rt_tbl(struct ipa_ioc_get_rt_tbl *lookup)
entry = __ipa3_find_rt_tbl(lookup->ip, lookup->name);
if (entry && entry->cookie == IPA_RT_TBL_COOKIE) {
if (entry->ref_cnt == U32_MAX) {
- IPAERR("fail: ref count crossed limit\n");
+ IPAERR_RL("fail: ref count crossed limit\n");
goto ret;
}
entry->ref_cnt++;
@@ -1572,7 +1572,7 @@ int ipa3_put_rt_tbl(u32 rt_tbl_hdl)
else if (entry->set == &ipa3_ctx->rt_tbl_set[IPA_IP_v6])
ip = IPA_IP_v6;
else {
- WARN_ON(1);
+ WARN_ON_RATELIMIT_IPA(1);
result = -EINVAL;
goto ret;
}
diff --git a/drivers/platform/msm/ipa/ipa_v3/ipa_uc_wdi.c b/drivers/platform/msm/ipa/ipa_v3/ipa_uc_wdi.c
index b8928da..941e489 100644
--- a/drivers/platform/msm/ipa/ipa_v3/ipa_uc_wdi.c
+++ b/drivers/platform/msm/ipa/ipa_v3/ipa_uc_wdi.c
@@ -620,8 +620,9 @@ static void ipa_save_uc_smmu_mapping_pa(int res_idx, phys_addr_t pa,
unsigned long iova, size_t len)
{
IPADBG("--res_idx=%d pa=0x%pa iova=0x%lx sz=0x%zx\n", res_idx,
- &pa, iova, len);
- wdi_res[res_idx].res = kzalloc(sizeof(struct ipa_wdi_res), GFP_KERNEL);
+ &pa, iova, len);
+ wdi_res[res_idx].res = kzalloc(sizeof(*wdi_res[res_idx].res),
+ GFP_KERNEL);
if (!wdi_res[res_idx].res)
BUG();
wdi_res[res_idx].nents = 1;
@@ -647,7 +648,8 @@ static void ipa_save_uc_smmu_mapping_sgt(int res_idx, struct sg_table *sgt,
return;
}
- wdi_res[res_idx].res = kcalloc(sgt->nents, sizeof(struct ipa_wdi_res),
+ wdi_res[res_idx].res = kcalloc(sgt->nents,
+ sizeof(*wdi_res[res_idx].res),
GFP_KERNEL);
if (!wdi_res[res_idx].res)
BUG();
diff --git a/drivers/platform/msm/ipa/ipa_v3/ipa_utils.c b/drivers/platform/msm/ipa/ipa_v3/ipa_utils.c
index 979369a..fb29d00 100644
--- a/drivers/platform/msm/ipa/ipa_v3/ipa_utils.c
+++ b/drivers/platform/msm/ipa/ipa_v3/ipa_utils.c
@@ -1113,12 +1113,6 @@ static const struct ipa_ep_configuration ipa3_ep_mapping
{ 31, 31, 8, 8, IPA_EE_AP } },
/* IPA_4_0 */
- [IPA_4_0][IPA_CLIENT_WLAN1_PROD] = {
- true, IPA_v4_0_GROUP_UL_DL,
- true,
- IPA_DPS_HPS_SEQ_TYPE_2ND_PKT_PROCESS_PASS_NO_DEC_UCP,
- QMB_MASTER_SELECT_DDR,
- { 7, 9, 8, 16, IPA_EE_AP } },
[IPA_4_0][IPA_CLIENT_USB_PROD] = {
true, IPA_v4_0_GROUP_UL_DL,
true,
@@ -1348,13 +1342,13 @@ static const struct ipa_ep_configuration ipa3_ep_mapping
true,
IPA_DPS_HPS_SEQ_TYPE_2ND_PKT_PROCESS_PASS_NO_DEC_UCP,
QMB_MASTER_SELECT_DDR,
- { 3, 0, 16, 32, IPA_EE_Q6 } },
+ { 6, 2, 12, 24, IPA_EE_Q6 } },
[IPA_4_0_MHI][IPA_CLIENT_Q6_WAN_PROD] = {
true, IPA_v4_0_GROUP_UL_DL,
true,
IPA_DPS_HPS_SEQ_TYPE_PKT_PROCESS_NO_DEC_UCP,
QMB_MASTER_SELECT_DDR,
- { 6, 2, 12, 24, IPA_EE_Q6 } },
+ { 3, 0, 16, 32, IPA_EE_Q6 } },
[IPA_4_0_MHI][IPA_CLIENT_Q6_CMD_PROD] = {
true, IPA_v4_0_MHI_GROUP_PCIE,
false,
@@ -4194,7 +4188,9 @@ void ipa3_proxy_clk_unvote(void)
mutex_lock(&ipa3_ctx->q6_proxy_clk_vote_mutex);
if (ipa3_ctx->q6_proxy_clk_vote_valid) {
IPA_ACTIVE_CLIENTS_DEC_SPECIAL("PROXY_CLK_VOTE");
- ipa3_ctx->q6_proxy_clk_vote_valid = false;
+ ipa3_ctx->q6_proxy_clk_vote_cnt--;
+ if (ipa3_ctx->q6_proxy_clk_vote_cnt == 0)
+ ipa3_ctx->q6_proxy_clk_vote_valid = false;
}
mutex_unlock(&ipa3_ctx->q6_proxy_clk_vote_mutex);
}
@@ -4210,8 +4206,10 @@ void ipa3_proxy_clk_vote(void)
return;
mutex_lock(&ipa3_ctx->q6_proxy_clk_vote_mutex);
- if (!ipa3_ctx->q6_proxy_clk_vote_valid) {
+ if (!ipa3_ctx->q6_proxy_clk_vote_valid ||
+ (ipa3_ctx->q6_proxy_clk_vote_cnt > 0)) {
IPA_ACTIVE_CLIENTS_INC_SPECIAL("PROXY_CLK_VOTE");
+ ipa3_ctx->q6_proxy_clk_vote_cnt++;
ipa3_ctx->q6_proxy_clk_vote_valid = true;
}
mutex_unlock(&ipa3_ctx->q6_proxy_clk_vote_mutex);
@@ -4505,6 +4503,7 @@ int ipa3_bind_api_controller(enum ipa_hw_type ipa_hw_type,
api_ctrl->ipa_enable_wdi3_pipes = ipa3_enable_wdi3_pipes;
api_ctrl->ipa_disable_wdi3_pipes = ipa3_disable_wdi3_pipes;
api_ctrl->ipa_tz_unlock_reg = ipa3_tz_unlock_reg;
+ api_ctrl->ipa_get_smmu_params = ipa3_get_smmu_params;
return 0;
}
diff --git a/drivers/platform/msm/ipa/ipa_v3/ipahal/ipahal_fltrt.c b/drivers/platform/msm/ipa/ipa_v3/ipahal/ipahal_fltrt.c
index d6dbc85..a677046 100644
--- a/drivers/platform/msm/ipa/ipa_v3/ipahal/ipahal_fltrt.c
+++ b/drivers/platform/msm/ipa/ipa_v3/ipahal/ipahal_fltrt.c
@@ -187,17 +187,17 @@ static int ipa_fltrt_rule_generation_err_check(
if (attrib->attrib_mask & IPA_FLT_NEXT_HDR ||
attrib->attrib_mask & IPA_FLT_TC ||
attrib->attrib_mask & IPA_FLT_FLOW_LABEL) {
- IPAHAL_ERR("v6 attrib's specified for v4 rule\n");
+ IPAHAL_ERR_RL("v6 attrib's specified for v4 rule\n");
return -EPERM;
}
} else if (ipt == IPA_IP_v6) {
if (attrib->attrib_mask & IPA_FLT_TOS ||
attrib->attrib_mask & IPA_FLT_PROTOCOL) {
- IPAHAL_ERR("v4 attrib's specified for v6 rule\n");
+ IPAHAL_ERR_RL("v4 attrib's specified for v6 rule\n");
return -EPERM;
}
} else {
- IPAHAL_ERR("unsupported ip %d\n", ipt);
+ IPAHAL_ERR_RL("unsupported ip %d\n", ipt);
return -EPERM;
}
@@ -236,7 +236,7 @@ static int ipa_rt_gen_hw_rule(struct ipahal_rt_rule_gen_params *params,
break;
default:
IPAHAL_ERR("Invalid HDR type %d\n", params->hdr_type);
- WARN_ON(1);
+ WARN_ON_RATELIMIT_IPA(1);
return -EINVAL;
};
@@ -294,8 +294,8 @@ static int ipa_flt_gen_hw_rule(struct ipahal_flt_rule_gen_params *params,
rule_hdr->u.hdr.action = 0x3;
break;
default:
- IPAHAL_ERR("Invalid Rule Action %d\n", params->rule->action);
- WARN_ON(1);
+ IPAHAL_ERR_RL("Invalid Rule Action %d\n", params->rule->action);
+ WARN_ON_RATELIMIT_IPA(1);
return -EINVAL;
}
ipa_assert_on(params->rt_tbl_idx & ~0x1F);
@@ -316,14 +316,14 @@ static int ipa_flt_gen_hw_rule(struct ipahal_flt_rule_gen_params *params,
if (params->rule->eq_attrib_type) {
if (ipa_fltrt_generate_hw_rule_bdy_from_eq(
¶ms->rule->eq_attrib, &buf)) {
- IPAHAL_ERR("fail to generate hw rule from eq\n");
+ IPAHAL_ERR_RL("fail to generate hw rule from eq\n");
return -EPERM;
}
en_rule = params->rule->eq_attrib.rule_eq_bitmap;
} else {
if (ipa_fltrt_generate_hw_rule_bdy(params->ipt,
¶ms->rule->attrib, &buf, &en_rule)) {
- IPAHAL_ERR("fail to generate hw rule\n");
+ IPAHAL_ERR_RL("fail to generate hw rule\n");
return -EPERM;
}
}
@@ -343,7 +343,7 @@ static int ipa_flt_gen_hw_rule(struct ipahal_flt_rule_gen_params *params,
if (*hw_len == 0) {
*hw_len = buf - start;
} else if (*hw_len != (buf - start)) {
- IPAHAL_ERR("hw_len differs b/w passed=0x%x calc=%td\n",
+ IPAHAL_ERR_RL("hw_len differs b/w passed=0x%x calc=%td\n",
*hw_len, (buf - start));
return -EPERM;
}
@@ -376,7 +376,7 @@ static int ipa_flt_gen_hw_rule_ipav4(struct ipahal_flt_rule_gen_params *params,
break;
default:
IPAHAL_ERR("Invalid Rule Action %d\n", params->rule->action);
- WARN_ON(1);
+ WARN_ON_RATELIMIT_IPA(1);
return -EINVAL;
}
@@ -1381,7 +1381,7 @@ static int ipa_fltrt_generate_hw_rule_bdy(enum ipa_ip_type ipt,
sz = IPA3_0_HW_TBL_WIDTH * 2 + IPA3_0_HW_RULE_START_ALIGNMENT;
extra_wrd_buf = kzalloc(sz, GFP_KERNEL);
if (!extra_wrd_buf) {
- IPAHAL_ERR("failed to allocate %d bytes\n", sz);
+ IPAHAL_ERR_RL("failed to allocate %d bytes\n", sz);
rc = -ENOMEM;
goto fail_extra_alloc;
}
@@ -1389,7 +1389,7 @@ static int ipa_fltrt_generate_hw_rule_bdy(enum ipa_ip_type ipt,
sz = IPA3_0_HW_RULE_BUF_SIZE + IPA3_0_HW_RULE_START_ALIGNMENT;
rest_wrd_buf = kzalloc(sz, GFP_KERNEL);
if (!rest_wrd_buf) {
- IPAHAL_ERR("failed to allocate %d bytes\n", sz);
+ IPAHAL_ERR_RL("failed to allocate %d bytes\n", sz);
rc = -ENOMEM;
goto fail_rest_alloc;
}
@@ -1407,14 +1407,14 @@ static int ipa_fltrt_generate_hw_rule_bdy(enum ipa_ip_type ipt,
rc = ipa_fltrt_rule_generation_err_check(ipt, attrib);
if (rc) {
- IPAHAL_ERR("rule generation err check failed\n");
+ IPAHAL_ERR_RL("rule generation err check failed\n");
goto fail_err_check;
}
if (ipt == IPA_IP_v4) {
if (ipa_fltrt_generate_hw_rule_bdy_ip4(en_rule, attrib,
&extra_wrd_i, &rest_wrd_i)) {
- IPAHAL_ERR("failed to build ipv4 hw rule\n");
+ IPAHAL_ERR_RL("failed to build ipv4 hw rule\n");
rc = -EPERM;
goto fail_err_check;
}
@@ -1422,12 +1422,12 @@ static int ipa_fltrt_generate_hw_rule_bdy(enum ipa_ip_type ipt,
} else if (ipt == IPA_IP_v6) {
if (ipa_fltrt_generate_hw_rule_bdy_ip6(en_rule, attrib,
&extra_wrd_i, &rest_wrd_i)) {
- IPAHAL_ERR("failed to build ipv6 hw rule\n");
+ IPAHAL_ERR_RL("failed to build ipv6 hw rule\n");
rc = -EPERM;
goto fail_err_check;
}
} else {
- IPAHAL_ERR("unsupported ip %d\n", ipt);
+ IPAHAL_ERR_RL("unsupported ip %d\n", ipt);
goto fail_err_check;
}
@@ -1514,7 +1514,7 @@ static int ipa_fltrt_generate_hw_rule_bdy_from_eq(
* of equations that needs extra word param
*/
if (extra_bytes > 13) {
- IPAHAL_ERR("too much extra bytes\n");
+ IPAHAL_ERR_RL("too many extra bytes\n");
return -EPERM;
} else if (extra_bytes > IPA3_0_HW_TBL_HDR_WIDTH) {
/* two extra words */
@@ -2041,7 +2041,7 @@ static int ipa_flt_generate_eq_ip6(enum ipa_ip_type ip,
if (attrib->attrib_mask & IPA_FLT_SRC_ADDR) {
if (IPA_IS_RAN_OUT_OF_EQ(ipa3_0_ofst_meq128, ofst_meq128)) {
- IPAHAL_ERR("ran out of meq128 eq\n");
+ IPAHAL_ERR_RL("ran out of meq128 eq\n");
return -EPERM;
}
*en_rule |= IPA_GET_RULE_EQ_BIT_PTRN(
@@ -2069,7 +2069,7 @@ static int ipa_flt_generate_eq_ip6(enum ipa_ip_type ip,
if (attrib->attrib_mask & IPA_FLT_DST_ADDR) {
if (IPA_IS_RAN_OUT_OF_EQ(ipa3_0_ofst_meq128, ofst_meq128)) {
- IPAHAL_ERR("ran out of meq128 eq\n");
+ IPAHAL_ERR_RL("ran out of meq128 eq\n");
return -EPERM;
}
*en_rule |= IPA_GET_RULE_EQ_BIT_PTRN(
@@ -2097,7 +2097,7 @@ static int ipa_flt_generate_eq_ip6(enum ipa_ip_type ip,
if (attrib->attrib_mask & IPA_FLT_TOS_MASKED) {
if (IPA_IS_RAN_OUT_OF_EQ(ipa3_0_ofst_meq128, ofst_meq128)) {
- IPAHAL_ERR("ran out of meq128 eq\n");
+ IPAHAL_ERR_RL("ran out of meq128 eq\n");
return -EPERM;
}
*en_rule |= IPA_GET_RULE_EQ_BIT_PTRN(
@@ -2114,7 +2114,7 @@ static int ipa_flt_generate_eq_ip6(enum ipa_ip_type ip,
if (attrib->attrib_mask & IPA_FLT_MAC_DST_ADDR_ETHER_II) {
if (IPA_IS_RAN_OUT_OF_EQ(ipa3_0_ofst_meq128, ofst_meq128)) {
- IPAHAL_ERR("ran out of meq128 eq\n");
+ IPAHAL_ERR_RL("ran out of meq128 eq\n");
return -EPERM;
}
*en_rule |= IPA_GET_RULE_EQ_BIT_PTRN(
@@ -2130,7 +2130,7 @@ static int ipa_flt_generate_eq_ip6(enum ipa_ip_type ip,
if (attrib->attrib_mask & IPA_FLT_MAC_SRC_ADDR_ETHER_II) {
if (IPA_IS_RAN_OUT_OF_EQ(ipa3_0_ofst_meq128, ofst_meq128)) {
- IPAHAL_ERR("ran out of meq128 eq\n");
+ IPAHAL_ERR_RL("ran out of meq128 eq\n");
return -EPERM;
}
*en_rule |= IPA_GET_RULE_EQ_BIT_PTRN(
@@ -2146,7 +2146,7 @@ static int ipa_flt_generate_eq_ip6(enum ipa_ip_type ip,
if (attrib->attrib_mask & IPA_FLT_MAC_DST_ADDR_802_3) {
if (IPA_IS_RAN_OUT_OF_EQ(ipa3_0_ofst_meq128, ofst_meq128)) {
- IPAHAL_ERR("ran out of meq128 eq\n");
+ IPAHAL_ERR_RL("ran out of meq128 eq\n");
return -EPERM;
}
*en_rule |= IPA_GET_RULE_EQ_BIT_PTRN(
@@ -2162,7 +2162,7 @@ static int ipa_flt_generate_eq_ip6(enum ipa_ip_type ip,
if (attrib->attrib_mask & IPA_FLT_MAC_SRC_ADDR_802_3) {
if (IPA_IS_RAN_OUT_OF_EQ(ipa3_0_ofst_meq128, ofst_meq128)) {
- IPAHAL_ERR("ran out of meq128 eq\n");
+ IPAHAL_ERR_RL("ran out of meq128 eq\n");
return -EPERM;
}
*en_rule |= IPA_GET_RULE_EQ_BIT_PTRN(
@@ -2180,7 +2180,7 @@ static int ipa_flt_generate_eq_ip6(enum ipa_ip_type ip,
if (IPA_IS_RAN_OUT_OF_EQ(ipa3_0_ihl_ofst_meq32,
ihl_ofst_meq32) || IPA_IS_RAN_OUT_OF_EQ(
ipa3_0_ihl_ofst_meq32, ihl_ofst_meq32 + 1)) {
- IPAHAL_ERR("ran out of ihl_meq32 eq\n");
+ IPAHAL_ERR_RL("ran out of ihl_meq32 eq\n");
return -EPERM;
}
*en_rule |= IPA_GET_RULE_EQ_BIT_PTRN(
@@ -2213,7 +2213,7 @@ static int ipa_flt_generate_eq_ip6(enum ipa_ip_type ip,
if (attrib->attrib_mask & IPA_FLT_TCP_SYN) {
if (IPA_IS_RAN_OUT_OF_EQ(ipa3_0_ihl_ofst_meq32,
ihl_ofst_meq32)) {
- IPAHAL_ERR("ran out of ihl_meq32 eq\n");
+ IPAHAL_ERR_RL("ran out of ihl_meq32 eq\n");
return -EPERM;
}
*en_rule |= IPA_GET_RULE_EQ_BIT_PTRN(
@@ -2229,7 +2229,7 @@ static int ipa_flt_generate_eq_ip6(enum ipa_ip_type ip,
if (IPA_IS_RAN_OUT_OF_EQ(ipa3_0_ihl_ofst_meq32,
ihl_ofst_meq32) || IPA_IS_RAN_OUT_OF_EQ(
ipa3_0_ihl_ofst_meq32, ihl_ofst_meq32 + 1)) {
- IPAHAL_ERR("ran out of ihl_meq32 eq\n");
+ IPAHAL_ERR_RL("ran out of ihl_meq32 eq\n");
return -EPERM;
}
*en_rule |= IPA_GET_RULE_EQ_BIT_PTRN(
@@ -2271,7 +2271,7 @@ static int ipa_flt_generate_eq_ip6(enum ipa_ip_type ip,
if (attrib->attrib_mask & IPA_FLT_MAC_ETHER_TYPE) {
if (IPA_IS_RAN_OUT_OF_EQ(ipa3_0_ofst_meq32, ofst_meq32)) {
- IPAHAL_ERR("ran out of meq32 eq\n");
+ IPAHAL_ERR_RL("ran out of meq32 eq\n");
return -EPERM;
}
*en_rule |= IPA_GET_RULE_EQ_BIT_PTRN(
@@ -2287,7 +2287,7 @@ static int ipa_flt_generate_eq_ip6(enum ipa_ip_type ip,
if (attrib->attrib_mask & IPA_FLT_TYPE) {
if (IPA_IS_RAN_OUT_OF_EQ(ipa3_0_ihl_ofst_meq32,
ihl_ofst_meq32)) {
- IPAHAL_ERR("ran out of ihl_meq32 eq\n");
+ IPAHAL_ERR_RL("ran out of ihl_meq32 eq\n");
return -EPERM;
}
*en_rule |= IPA_GET_RULE_EQ_BIT_PTRN(
@@ -2302,7 +2302,7 @@ static int ipa_flt_generate_eq_ip6(enum ipa_ip_type ip,
if (attrib->attrib_mask & IPA_FLT_CODE) {
if (IPA_IS_RAN_OUT_OF_EQ(ipa3_0_ihl_ofst_meq32,
ihl_ofst_meq32)) {
- IPAHAL_ERR("ran out of ihl_meq32 eq\n");
+ IPAHAL_ERR_RL("ran out of ihl_meq32 eq\n");
return -EPERM;
}
*en_rule |= IPA_GET_RULE_EQ_BIT_PTRN(
@@ -2317,7 +2317,7 @@ static int ipa_flt_generate_eq_ip6(enum ipa_ip_type ip,
if (attrib->attrib_mask & IPA_FLT_SPI) {
if (IPA_IS_RAN_OUT_OF_EQ(ipa3_0_ihl_ofst_meq32,
ihl_ofst_meq32)) {
- IPAHAL_ERR("ran out of ihl_meq32 eq\n");
+ IPAHAL_ERR_RL("ran out of ihl_meq32 eq\n");
return -EPERM;
}
*en_rule |= IPA_GET_RULE_EQ_BIT_PTRN(
@@ -2342,7 +2342,7 @@ static int ipa_flt_generate_eq_ip6(enum ipa_ip_type ip,
if (attrib->attrib_mask & IPA_FLT_SRC_PORT) {
if (IPA_IS_RAN_OUT_OF_EQ(ipa3_0_ihl_ofst_rng16,
ihl_ofst_rng16)) {
- IPAHAL_ERR("ran out of ihl_rng16 eq\n");
+ IPAHAL_ERR_RL("ran out of ihl_rng16 eq\n");
return -EPERM;
}
*en_rule |= IPA_GET_RULE_EQ_BIT_PTRN(
@@ -2358,7 +2358,7 @@ static int ipa_flt_generate_eq_ip6(enum ipa_ip_type ip,
if (attrib->attrib_mask & IPA_FLT_DST_PORT) {
if (IPA_IS_RAN_OUT_OF_EQ(ipa3_0_ihl_ofst_rng16,
ihl_ofst_rng16)) {
- IPAHAL_ERR("ran out of ihl_rng16 eq\n");
+ IPAHAL_ERR_RL("ran out of ihl_rng16 eq\n");
return -EPERM;
}
*en_rule |= IPA_GET_RULE_EQ_BIT_PTRN(
@@ -2374,11 +2374,11 @@ static int ipa_flt_generate_eq_ip6(enum ipa_ip_type ip,
if (attrib->attrib_mask & IPA_FLT_SRC_PORT_RANGE) {
if (IPA_IS_RAN_OUT_OF_EQ(ipa3_0_ihl_ofst_rng16,
ihl_ofst_rng16)) {
- IPAHAL_ERR("ran out of ihl_rng16 eq\n");
+ IPAHAL_ERR_RL("ran out of ihl_rng16 eq\n");
return -EPERM;
}
if (attrib->src_port_hi < attrib->src_port_lo) {
- IPAHAL_ERR("bad src port range param\n");
+ IPAHAL_ERR_RL("bad src port range param\n");
return -EPERM;
}
*en_rule |= IPA_GET_RULE_EQ_BIT_PTRN(
@@ -2394,11 +2394,11 @@ static int ipa_flt_generate_eq_ip6(enum ipa_ip_type ip,
if (attrib->attrib_mask & IPA_FLT_DST_PORT_RANGE) {
if (IPA_IS_RAN_OUT_OF_EQ(ipa3_0_ihl_ofst_rng16,
ihl_ofst_rng16)) {
- IPAHAL_ERR("ran out of ihl_rng16 eq\n");
+ IPAHAL_ERR_RL("ran out of ihl_rng16 eq\n");
return -EPERM;
}
if (attrib->dst_port_hi < attrib->dst_port_lo) {
- IPAHAL_ERR("bad dst port range param\n");
+ IPAHAL_ERR_RL("bad dst port range param\n");
return -EPERM;
}
*en_rule |= IPA_GET_RULE_EQ_BIT_PTRN(
@@ -2414,7 +2414,7 @@ static int ipa_flt_generate_eq_ip6(enum ipa_ip_type ip,
if (attrib->attrib_mask & IPA_FLT_TCP_SYN_L2TP) {
if (IPA_IS_RAN_OUT_OF_EQ(ipa3_0_ihl_ofst_rng16,
ihl_ofst_rng16)) {
- IPAHAL_ERR("ran out of ihl_rng16 eq\n");
+ IPAHAL_ERR_RL("ran out of ihl_rng16 eq\n");
return -EPERM;
}
*en_rule |= IPA_GET_RULE_EQ_BIT_PTRN(
@@ -2713,7 +2713,7 @@ static int ipa_flt_parse_hw_rule(u8 *addr, struct ipahal_flt_rule_entry *rule)
break;
default:
IPAHAL_ERR("Invalid Rule Action %d\n", rule_hdr->u.hdr.action);
- WARN_ON(1);
+ WARN_ON_RATELIMIT_IPA(1);
rule->rule.action = rule_hdr->u.hdr.action;
}
@@ -2760,7 +2760,7 @@ static int ipa_flt_parse_hw_rule_ipav4(u8 *addr,
break;
default:
IPAHAL_ERR("Invalid Rule Action %d\n", rule_hdr->u.hdr.action);
- WARN_ON(1);
+ WARN_ON_RATELIMIT_IPA(1);
rule->rule.action = rule_hdr->u.hdr.action;
}
@@ -3221,7 +3221,7 @@ static int ipa_fltrt_alloc_init_tbl_hdr(
obj = &ipahal_fltrt_objs[ipahal_ctx->hw_type];
if (!params) {
- IPAHAL_ERR("Input error: params=%p\n", params);
+ IPAHAL_ERR_RL("Input error: params=%p\n", params);
return -EINVAL;
}
@@ -3230,7 +3230,7 @@ static int ipa_fltrt_alloc_init_tbl_hdr(
params->nhash_hdr.size,
&params->nhash_hdr.phys_base, GFP_KERNEL);
if (!params->nhash_hdr.base) {
- IPAHAL_ERR("fail to alloc DMA buff of size %d\n",
+ IPAHAL_ERR_RL("fail to alloc DMA buff of size %d\n",
params->nhash_hdr.size);
goto nhash_alloc_fail;
}
@@ -3241,7 +3241,7 @@ static int ipa_fltrt_alloc_init_tbl_hdr(
params->hash_hdr.size, &params->hash_hdr.phys_base,
GFP_KERNEL);
if (!params->hash_hdr.base) {
- IPAHAL_ERR("fail to alloc DMA buff of size %d\n",
+ IPAHAL_ERR_RL("fail to alloc DMA buff of size %d\n",
params->hash_hdr.size);
goto hash_alloc_fail;
}
@@ -3374,21 +3374,21 @@ int ipahal_fltrt_allocate_hw_tbl_imgs(
/* Input validation */
if (!params) {
- IPAHAL_ERR("Input err: no params\n");
+ IPAHAL_ERR_RL("Input err: no params\n");
return -EINVAL;
}
if (params->ipt >= IPA_IP_MAX) {
- IPAHAL_ERR("Input err: Invalid ip type %d\n", params->ipt);
+ IPAHAL_ERR_RL("Input err: Invalid ip type %d\n", params->ipt);
return -EINVAL;
}
if (ipa_fltrt_alloc_init_tbl_hdr(params)) {
- IPAHAL_ERR("fail to alloc and init tbl hdr\n");
+ IPAHAL_ERR_RL("fail to alloc and init tbl hdr\n");
return -ENOMEM;
}
if (ipa_fltrt_alloc_lcl_bdy(params)) {
- IPAHAL_ERR("fail to alloc tbl bodies\n");
+ IPAHAL_ERR_RL("fail to alloc tbl bodies\n");
goto bdy_alloc_fail;
}
@@ -3649,12 +3649,12 @@ int ipahal_flt_generate_equation(enum ipa_ip_type ipt,
IPAHAL_DBG_LOW("Entry\n");
if (ipt >= IPA_IP_MAX) {
- IPAHAL_ERR("Input err: Invalid ip type %d\n", ipt);
+ IPAHAL_ERR_RL("Input err: Invalid ip type %d\n", ipt);
return -EINVAL;
}
if (!attrib || !eq_atrb) {
- IPAHAL_ERR("Input err: attrib=%p eq_atrb=%p\n",
+ IPAHAL_ERR_RL("Input err: attrib=%p eq_atrb=%p\n",
attrib, eq_atrb);
return -EINVAL;
}
diff --git a/drivers/platform/msm/ipa/ipa_v3/ipahal/ipahal_i.h b/drivers/platform/msm/ipa/ipa_v3/ipahal/ipahal_i.h
index 4ccb7e0..8f78d56 100644
--- a/drivers/platform/msm/ipa/ipa_v3/ipahal/ipahal_i.h
+++ b/drivers/platform/msm/ipa/ipa_v3/ipahal/ipahal_i.h
@@ -46,6 +46,16 @@
IPAHAL_DRV_NAME " %s:%d " fmt, ## args); \
} while (0)
+#define IPAHAL_ERR_RL(fmt, args...) \
+ do { \
+ pr_err_ratelimited_ipa(IPAHAL_DRV_NAME " %s:%d " fmt, \
+ __func__, __LINE__, ## args); \
+ IPA_IPC_LOGGING(ipa_get_ipc_logbuf(), \
+ IPAHAL_DRV_NAME " %s:%d " fmt, ## args); \
+ IPA_IPC_LOGGING(ipa_get_ipc_logbuf_low(), \
+ IPAHAL_DRV_NAME " %s:%d " fmt, ## args); \
+ } while (0)
+
#define IPAHAL_MEM_ALLOC(__size, __is_atomic_ctx) \
(kzalloc((__size), ((__is_atomic_ctx) ? GFP_ATOMIC : GFP_KERNEL)))
diff --git a/drivers/platform/msm/ipa/ipa_v3/ipahal/ipahal_reg.c b/drivers/platform/msm/ipa/ipa_v3/ipahal/ipahal_reg.c
index 74f5bbd..1d8eb13 100644
--- a/drivers/platform/msm/ipa/ipa_v3/ipahal/ipahal_reg.c
+++ b/drivers/platform/msm/ipa/ipa_v3/ipahal/ipahal_reg.c
@@ -1910,6 +1910,8 @@ void ipahal_get_aggr_force_close_valmask(int ep_idx,
return;
}
+ memset(valmask, 0, sizeof(struct ipahal_reg_valmask));
+
if (ipahal_ctx->hw_type <= IPA_HW_v3_1) {
shft = IPA_AGGR_FORCE_CLOSE_AGGR_FORCE_CLOSE_PIPE_BITMAP_SHFT;
bmsk = IPA_AGGR_FORCE_CLOSE_AGGR_FORCE_CLOSE_PIPE_BITMAP_BMSK;
diff --git a/drivers/platform/msm/msm_11ad/msm_11ad.c b/drivers/platform/msm/msm_11ad/msm_11ad.c
index d55e655..f64e9de 100644
--- a/drivers/platform/msm/msm_11ad/msm_11ad.c
+++ b/drivers/platform/msm/msm_11ad/msm_11ad.c
@@ -1086,6 +1086,10 @@ static int msm_11ad_probe(struct platform_device *pdev)
ctx->keep_radio_on_during_sleep = of_property_read_bool(of_node,
"qcom,keep-radio-on-during-sleep");
ctx->bus_scale = msm_bus_cl_get_pdata(pdev);
+ if (!ctx->bus_scale) {
+ dev_err(ctx->dev, "Unable to read bus-scaling from DT\n");
+ return -EINVAL;
+ }
ctx->smmu_s1_en = of_property_read_bool(of_node, "qcom,smmu-s1-en");
if (ctx->smmu_s1_en) {
@@ -1114,7 +1118,7 @@ static int msm_11ad_probe(struct platform_device *pdev)
rc = msm_11ad_init_vregs(ctx);
if (rc) {
dev_err(ctx->dev, "msm_11ad_init_vregs failed: %d\n", rc);
- return rc;
+ goto out_bus_scale;
}
rc = msm_11ad_enable_vregs(ctx);
if (rc) {
@@ -1173,6 +1177,18 @@ static int msm_11ad_probe(struct platform_device *pdev)
}
ctx->pcidev = pcidev;
+ rc = msm_pcie_pm_control(MSM_PCIE_RESUME, pcidev->bus->number,
+ pcidev, NULL, 0);
+ if (rc) {
+ dev_err(ctx->dev, "msm_pcie_pm_control(RESUME) failed:%d\n",
+ rc);
+ goto out_rc;
+ }
+
+ pci_set_power_state(pcidev, PCI_D0);
+
+ pci_restore_state(ctx->pcidev);
+
/* Read current state */
rc = pci_read_config_dword(pcidev,
PCIE20_CAP_LINKCTRLSTATUS, &val);
@@ -1180,7 +1196,7 @@ static int msm_11ad_probe(struct platform_device *pdev)
dev_err(ctx->dev,
"reading PCIE20_CAP_LINKCTRLSTATUS failed:%d\n",
rc);
- goto out_rc;
+ goto out_suspend;
}
ctx->l1_enabled_in_enum = val & PCI_EXP_LNKCTL_ASPM_L1;
@@ -1193,7 +1209,7 @@ static int msm_11ad_probe(struct platform_device *pdev)
if (rc) {
dev_err(ctx->dev,
"failed to disable L1, rc %d\n", rc);
- goto out_rc;
+ goto out_suspend;
}
}
@@ -1213,7 +1229,7 @@ static int msm_11ad_probe(struct platform_device *pdev)
rc = msm_11ad_ssr_init(ctx);
if (rc) {
dev_err(ctx->dev, "msm_11ad_ssr_init failed: %d\n", rc);
- goto out_rc;
+ goto out_suspend;
}
msm_11ad_init_cpu_boost(ctx);
@@ -1235,6 +1251,9 @@ static int msm_11ad_probe(struct platform_device *pdev)
msm_11ad_suspend_power_off(ctx);
return 0;
+out_suspend:
+ msm_pcie_pm_control(MSM_PCIE_SUSPEND, pcidev->bus->number,
+ pcidev, NULL, 0);
out_rc:
if (ctx->gpio_en >= 0)
gpio_direction_output(ctx->gpio_en, 0);
@@ -1248,6 +1267,8 @@ static int msm_11ad_probe(struct platform_device *pdev)
msm_11ad_release_clocks(ctx);
msm_11ad_disable_vregs(ctx);
msm_11ad_release_vregs(ctx);
+out_bus_scale:
+ msm_bus_cl_clear_pdata(ctx->bus_scale);
return rc;
}
@@ -1262,7 +1283,6 @@ static int msm_11ad_remove(struct platform_device *pdev)
ctx->pcidev);
kfree(ctx->pristine_state);
- msm_bus_cl_clear_pdata(ctx->bus_scale);
pci_dev_put(ctx->pcidev);
if (ctx->gpio_en >= 0) {
gpio_direction_output(ctx->gpio_en, 0);
@@ -1423,6 +1443,7 @@ static int msm_11ad_notify_crash(struct msm11ad_ctx *ctx)
dev_info(ctx->dev, "SSR requested\n");
(void)msm_11ad_ssr_copy_ramdump(ctx);
ctx->recovery_in_progress = true;
+ subsys_set_crash_status(ctx->subsys, CRASH_STATUS_ERR_FATAL);
rc = subsystem_restart_dev(ctx->subsys);
if (rc) {
dev_err(ctx->dev,
diff --git a/drivers/platform/msm/msm_ext_display.c b/drivers/platform/msm/msm_ext_display.c
index 73bf935..bc4df04 100644
--- a/drivers/platform/msm/msm_ext_display.c
+++ b/drivers/platform/msm/msm_ext_display.c
@@ -147,6 +147,12 @@ static int msm_ext_disp_process_audio(struct msm_ext_disp *ext_disp,
int ret = 0;
int state;
+ if (!ext_disp->ops) {
+ pr_err("codec not registered, skip notification\n");
+ ret = -EPERM;
+ goto end;
+ }
+
state = ext_disp->audio_sdev.state;
ret = extcon_set_state_sync(&ext_disp->audio_sdev,
ext_disp->current_disp, !!new_state);
@@ -155,7 +161,7 @@ static int msm_ext_disp_process_audio(struct msm_ext_disp *ext_disp,
ext_disp->audio_sdev.state == state ?
"is same" : "switched to",
ext_disp->audio_sdev.state);
-
+end:
return ret;
}
@@ -218,15 +224,10 @@ static int msm_ext_disp_update_audio_ops(struct msm_ext_disp *ext_disp,
goto end;
}
- if (!ext_disp->ops) {
- pr_err("codec ops not registered\n");
- ret = -EINVAL;
- goto end;
- }
-
if (state == EXT_DISPLAY_CABLE_CONNECT) {
/* connect codec with interface */
- *ext_disp->ops = data->codec_ops;
+ if (ext_disp->ops)
+ *ext_disp->ops = data->codec_ops;
/* update pdev for interface to use */
ext_disp->ext_disp_data.intf_pdev = data->pdev;
@@ -236,7 +237,10 @@ static int msm_ext_disp_update_audio_ops(struct msm_ext_disp *ext_disp,
pr_debug("codec ops set for %s\n", msm_ext_disp_name(type));
} else if (state == EXT_DISPLAY_CABLE_DISCONNECT) {
- *ext_disp->ops = (struct msm_ext_disp_audio_codec_ops){NULL};
+ if (ext_disp->ops)
+ *ext_disp->ops =
+ (struct msm_ext_disp_audio_codec_ops){NULL};
+
ext_disp->current_disp = EXT_DISPLAY_TYPE_MAX;
pr_debug("codec ops cleared for %s\n", msm_ext_disp_name(type));
@@ -285,6 +289,28 @@ static int msm_ext_disp_audio_notify(struct platform_device *pdev,
return ret;
}
+static void msm_ext_disp_ready_for_display(struct msm_ext_disp *ext_disp)
+{
+ int ret;
+ struct msm_ext_disp_init_data *data = NULL;
+
+ if (!ext_disp) {
+ pr_err("invalid input\n");
+ return;
+ }
+
+ ret = msm_ext_disp_get_intf_data(ext_disp,
+ ext_disp->current_disp, &data);
+ if (ret) {
+ pr_err("%s not found\n",
+ msm_ext_disp_name(ext_disp->current_disp));
+ return;
+ }
+
+ *ext_disp->ops = data->codec_ops;
+ data->codec_ops.ready(ext_disp->pdev);
+}
+
int msm_hdmi_register_audio_codec(struct platform_device *pdev,
struct msm_ext_disp_audio_codec_ops *ops)
{
@@ -334,6 +360,8 @@ int msm_ext_disp_register_audio_codec(struct platform_device *pdev,
end:
mutex_unlock(&ext_disp->lock);
+ if (ext_disp->current_disp != EXT_DISPLAY_TYPE_MAX)
+ msm_ext_disp_ready_for_display(ext_disp);
return ret;
}
@@ -341,6 +369,8 @@ EXPORT_SYMBOL(msm_ext_disp_register_audio_codec);
static int msm_ext_disp_validate_intf(struct msm_ext_disp_init_data *init_data)
{
+ struct msm_ext_disp_audio_codec_ops *ops;
+
if (!init_data) {
pr_err("Invalid init_data\n");
return -EINVAL;
@@ -351,9 +381,15 @@ static int msm_ext_disp_validate_intf(struct msm_ext_disp_init_data *init_data)
return -EINVAL;
}
- if (!init_data->codec_ops.get_audio_edid_blk ||
- !init_data->codec_ops.cable_status ||
- !init_data->codec_ops.audio_info_setup) {
+ ops = &init_data->codec_ops;
+
+ if (!ops->audio_info_setup ||
+ !ops->get_audio_edid_blk ||
+ !ops->cable_status ||
+ !ops->get_intf_id ||
+ !ops->teardown_done ||
+ !ops->acknowledge ||
+ !ops->ready) {
pr_err("Invalid codec operation pointers\n");
return -EINVAL;
}
diff --git a/drivers/platform/msm/qcom-geni-se.c b/drivers/platform/msm/qcom-geni-se.c
index 94736d4..bec16dd 100644
--- a/drivers/platform/msm/qcom-geni-se.c
+++ b/drivers/platform/msm/qcom-geni-se.c
@@ -402,7 +402,7 @@ EXPORT_SYMBOL(geni_setup_s_cmd);
*/
void geni_cancel_m_cmd(void __iomem *base)
{
- geni_write_reg(M_GENI_CMD_CANCEL, base, SE_GENI_S_CMD_CTRL_REG);
+ geni_write_reg(M_GENI_CMD_CANCEL, base, SE_GENI_M_CMD_CTRL_REG);
}
EXPORT_SYMBOL(geni_cancel_m_cmd);
@@ -684,16 +684,14 @@ int se_geni_resources_off(struct se_geni_rsc *rsc)
if (unlikely(!geni_se_dev || !geni_se_dev->bus_bw))
return -ENODEV;
- ret = pinctrl_select_state(rsc->geni_pinctrl, rsc->geni_gpio_sleep);
- if (ret) {
- GENI_SE_ERR(geni_se_dev->log_ctx, false, NULL,
- "%s: Error %d pinctrl_select_state\n", __func__, ret);
- return ret;
- }
ret = se_geni_clks_off(rsc);
if (ret)
GENI_SE_ERR(geni_se_dev->log_ctx, false, NULL,
"%s: Error %d turning off clocks\n", __func__, ret);
+ ret = pinctrl_select_state(rsc->geni_pinctrl, rsc->geni_gpio_sleep);
+ if (ret)
+ GENI_SE_ERR(geni_se_dev->log_ctx, false, NULL,
+ "%s: Error %d pinctrl_select_state\n", __func__, ret);
return ret;
}
EXPORT_SYMBOL(se_geni_resources_off);
@@ -802,19 +800,20 @@ int se_geni_resources_on(struct se_geni_rsc *rsc)
if (unlikely(!geni_se_dev))
return -EPROBE_DEFER;
- ret = se_geni_clks_on(rsc);
- if (ret) {
- GENI_SE_ERR(geni_se_dev->log_ctx, false, NULL,
- "%s: Error %d during clks_on\n", __func__, ret);
- return ret;
- }
-
ret = pinctrl_select_state(rsc->geni_pinctrl, rsc->geni_gpio_active);
if (ret) {
GENI_SE_ERR(geni_se_dev->log_ctx, false, NULL,
"%s: Error %d pinctrl_select_state\n", __func__, ret);
- se_geni_clks_off(rsc);
+ return ret;
}
+
+ ret = se_geni_clks_on(rsc);
+ if (ret) {
+ GENI_SE_ERR(geni_se_dev->log_ctx, false, NULL,
+ "%s: Error %d during clks_on\n", __func__, ret);
+ pinctrl_select_state(rsc->geni_pinctrl, rsc->geni_gpio_sleep);
+ }
+
return ret;
}
EXPORT_SYMBOL(se_geni_resources_on);
diff --git a/drivers/platform/msm/sps/bam.c b/drivers/platform/msm/sps/bam.c
index 8d8af1b..c9c52f7 100644
--- a/drivers/platform/msm/sps/bam.c
+++ b/drivers/platform/msm/sps/bam.c
@@ -707,7 +707,7 @@ static inline u32 bam_get_register_offset(void *base, enum bam_regs reg,
struct sps_bam *dev = to_sps_bam_dev(base);
if ((dev == NULL) || (&dev->base != base)) {
- SPS_ERR(sps, "%s:Failed to get dev for base addr 0x%p\n",
+ SPS_ERR(sps, "%s:Failed to get dev for base addr 0x%pK\n",
__func__, base);
return SPS_ERROR;
}
@@ -756,7 +756,7 @@ static inline u32 bam_read_reg(void *base, enum bam_regs reg, u32 param)
struct sps_bam *dev = to_sps_bam_dev(base);
if ((dev == NULL) || (&dev->base != base)) {
- SPS_ERR(sps, "%s:Failed to get dev for base addr 0x%p\n",
+ SPS_ERR(sps, "%s:Failed to get dev for base addr 0x%pK\n",
__func__, base);
return SPS_ERROR;
}
@@ -767,7 +767,7 @@ static inline u32 bam_read_reg(void *base, enum bam_regs reg, u32 param)
return offset;
}
val = ioread32(dev->base + offset);
- SPS_DBG(dev, "sps:bam 0x%p(va) offset 0x%x reg 0x%x r_val 0x%x.\n",
+ SPS_DBG(dev, "sps:bam 0x%pK(va) offset 0x%x reg 0x%x r_val 0x%x.\n",
dev->base, offset, reg, val);
return val;
}
@@ -788,7 +788,7 @@ static inline u32 bam_read_reg_field(void *base, enum bam_regs reg, u32 param,
struct sps_bam *dev = to_sps_bam_dev(base);
if ((dev == NULL) || (&dev->base != base)) {
- SPS_ERR(sps, "%s:Failed to get dev for base addr 0x%p\n",
+ SPS_ERR(sps, "%s:Failed to get dev for base addr 0x%pK\n",
__func__, base);
return SPS_ERROR;
}
@@ -802,7 +802,7 @@ static inline u32 bam_read_reg_field(void *base, enum bam_regs reg, u32 param,
val = ioread32(dev->base + offset);
val &= mask; /* clear other bits */
val >>= shift;
- SPS_DBG(dev, "sps:bam 0x%p(va) read reg 0x%x mask 0x%x r_val 0x%x.\n",
+ SPS_DBG(dev, "sps:bam 0x%pK(va) read reg 0x%x mask 0x%x r_val 0x%x.\n",
dev->base, offset, mask, val);
return val;
}
@@ -823,7 +823,7 @@ static inline void bam_write_reg(void *base, enum bam_regs reg,
struct sps_bam *dev = to_sps_bam_dev(base);
if ((dev == NULL) || (&dev->base != base)) {
- SPS_ERR(sps, "%s:Failed to get dev for base addr 0x%p\n",
+ SPS_ERR(sps, "%s:Failed to get dev for base addr 0x%pK\n",
__func__, base);
return;
}
@@ -834,7 +834,7 @@ static inline void bam_write_reg(void *base, enum bam_regs reg,
return;
}
iowrite32(val, dev->base + offset);
- SPS_DBG(dev, "sps:bam 0x%p(va) write reg 0x%x w_val 0x%x.\n",
+ SPS_DBG(dev, "sps:bam 0x%pK(va) write reg 0x%x w_val 0x%x.\n",
dev->base, offset, val);
}
@@ -854,7 +854,7 @@ static inline void bam_write_reg_field(void *base, enum bam_regs reg,
struct sps_bam *dev = to_sps_bam_dev(base);
if ((dev == NULL) || (&dev->base != base)) {
- SPS_ERR(sps, "%s:Failed to get dev for base addr 0x%p\n",
+ SPS_ERR(sps, "%s:Failed to get dev for base addr 0x%pK\n",
__func__, base);
return;
}
@@ -870,7 +870,7 @@ static inline void bam_write_reg_field(void *base, enum bam_regs reg,
tmp &= ~mask; /* clear written bits */
val = tmp | (val << shift);
iowrite32(val, dev->base + offset);
- SPS_DBG(dev, "sps:bam 0x%p(va) write reg 0x%x w_val 0x%x.\n",
+ SPS_DBG(dev, "sps:bam 0x%pK(va) write reg 0x%x w_val 0x%x.\n",
dev->base, offset, val);
}
@@ -888,29 +888,29 @@ int bam_init(void *base, u32 ee,
struct sps_bam *dev = to_sps_bam_dev(base);
if ((dev == NULL) || (&dev->base != base)) {
- SPS_ERR(sps, "%s:Failed to get dev for base addr 0x%p\n",
+ SPS_ERR(sps, "%s:Failed to get dev for base addr 0x%pK\n",
__func__, base);
return SPS_ERROR;
}
- SPS_DBG3(dev, "sps:%s:bam=%pa 0x%p(va).ee=%d.", __func__,
+ SPS_DBG3(dev, "sps:%s:bam=%pa 0x%pK(va).ee=%d.", __func__,
BAM_ID(dev), dev->base, ee);
ver = bam_read_reg_field(base, REVISION, 0, BAM_REVISION);
if ((ver < BAM_MIN_VERSION) || (ver > BAM_MAX_VERSION)) {
- SPS_ERR(dev, "sps:bam 0x%p(va) Invalid BAM REVISION 0x%x.\n",
+ SPS_ERR(dev, "sps:bam 0x%pK(va) Invalid BAM REVISION 0x%x.\n",
dev->base, ver);
return -ENODEV;
}
- SPS_DBG(dev, "sps:REVISION of BAM 0x%p is 0x%x.\n",
+ SPS_DBG(dev, "sps:REVISION of BAM 0x%pK is 0x%x.\n",
dev->base, ver);
if (summing_threshold == 0) {
summing_threshold = 4;
SPS_ERR(dev,
- "sps:bam 0x%p(va) summing_threshold is zero,use default 4.\n",
+ "sps:bam 0x%pK(va) summing_threshold is zero,use default 4.\n",
dev->base);
}
@@ -1010,12 +1010,12 @@ int bam_security_init(void *base, u32 ee, u32 vmid, u32 pipe_mask)
struct sps_bam *dev = to_sps_bam_dev(base);
if ((dev == NULL) || (&dev->base != base)) {
- SPS_ERR(sps, "%s:Failed to get dev for base addr 0x%p\n",
+ SPS_ERR(sps, "%s:Failed to get dev for base addr 0x%pK\n",
__func__, base);
return SPS_ERROR;
}
- SPS_DBG3(dev, "sps:%s:bam=%pa 0x%p(va).", __func__,
+ SPS_DBG3(dev, "sps:%s:bam=%pa 0x%pK(va).", __func__,
BAM_ID(dev), dev->base);
/*
@@ -1026,14 +1026,14 @@ int bam_security_init(void *base, u32 ee, u32 vmid, u32 pipe_mask)
num_pipes = bam_read_reg_field(base, NUM_PIPES, 0, BAM_NUM_PIPES);
if (version < 3 || version > 0x1F) {
SPS_ERR(dev,
- "sps:bam 0x%p(va) security is not supported for this BAM version 0x%x.\n",
+ "sps:bam 0x%pK(va) security is not supported for this BAM version 0x%x.\n",
dev->base, version);
return -ENODEV;
}
if (num_pipes > BAM_MAX_PIPES) {
SPS_ERR(dev,
- "sps:bam 0x%p(va) the number of pipes is more than the maximum number allowed.\n",
+ "sps:bam 0x%pK(va) the number of pipes is more than the maximum number allowed.\n",
dev->base);
return -ENODEV;
}
@@ -1081,12 +1081,12 @@ int bam_check(void *base, u32 *version, u32 ee, u32 *num_pipes)
struct sps_bam *dev = to_sps_bam_dev(base);
if ((dev == NULL) || (&dev->base != base)) {
- SPS_ERR(sps, "%s:Failed to get dev for base addr 0x%p\n",
+ SPS_ERR(sps, "%s:Failed to get dev for base addr 0x%pK\n",
__func__, base);
return SPS_ERROR;
}
- SPS_DBG3(dev, "sps:%s:bam=%pa 0x%p(va).",
+ SPS_DBG3(dev, "sps:%s:bam=%pa 0x%pK(va).",
__func__, BAM_ID(dev), dev->base);
if (!enhd_pipe)
@@ -1095,7 +1095,7 @@ int bam_check(void *base, u32 *version, u32 ee, u32 *num_pipes)
enabled = bam_get_pipe_attr(base, ee, true);
if (!enabled) {
- SPS_ERR(dev, "sps:%s:bam 0x%p(va) is not enabled.\n",
+ SPS_ERR(dev, "sps:%s:bam 0x%pK(va) is not enabled.\n",
__func__, dev->base);
return -ENODEV;
}
@@ -1111,7 +1111,7 @@ int bam_check(void *base, u32 *version, u32 ee, u32 *num_pipes)
/* Check BAM version */
if ((ver < BAM_MIN_VERSION) || (ver > BAM_MAX_VERSION)) {
- SPS_ERR(dev, "sps:%s:bam 0x%p(va) Invalid BAM version 0x%x.\n",
+ SPS_ERR(dev, "sps:%s:bam 0x%pK(va) Invalid BAM version 0x%x.\n",
__func__, dev->base, ver);
return -ENODEV;
}
@@ -1128,11 +1128,11 @@ void bam_exit(void *base, u32 ee)
struct sps_bam *dev = to_sps_bam_dev(base);
if ((dev == NULL) || (&dev->base != base)) {
- SPS_ERR(sps, "%s:Failed to get dev for base addr 0x%p\n",
+ SPS_ERR(sps, "%s:Failed to get dev for base addr 0x%pK\n",
__func__, base);
return;
}
- SPS_DBG3(dev, "sps:%s:bam=%pa 0x%p(va).ee=%d.",
+ SPS_DBG3(dev, "sps:%s:bam=%pa 0x%pK(va).ee=%d.",
__func__, BAM_ID(dev), dev->base, ee);
bam_write_reg_field(base, IRQ_SRCS_MSK_EE, ee, BAM_IRQ, 0);
@@ -1156,7 +1156,7 @@ void bam_output_register_content(void *base, u32 ee)
struct sps_bam *dev = to_sps_bam_dev(base);
if ((dev == NULL) || (&dev->base != base)) {
- SPS_ERR(sps, "%s:Failed to get dev for base addr 0x%p\n",
+ SPS_ERR(sps, "%s:Failed to get dev for base addr 0x%pK\n",
__func__, base);
return;
}
@@ -1167,7 +1167,7 @@ void bam_output_register_content(void *base, u32 ee)
num_pipes = bam_read_reg_field(base, NUM_PIPES, 0,
BAM_NUM_PIPES);
- SPS_INFO(dev, "sps:bam %pa 0x%p(va) has %d pipes.",
+ SPS_INFO(dev, "sps:bam %pa 0x%pK(va) has %d pipes.",
BAM_ID(dev), dev->base, num_pipes);
pipe_attr = enhd_pipe ?
@@ -1194,7 +1194,7 @@ u32 bam_check_irq_source(void *base, u32 ee, u32 mask,
struct sps_bam *dev = to_sps_bam_dev(base);
if ((dev == NULL) || (&dev->base != base)) {
- SPS_ERR(sps, "%s:Failed to get dev for base addr 0x%p\n",
+ SPS_ERR(sps, "%s:Failed to get dev for base addr 0x%pK\n",
__func__, base);
return SPS_ERROR;
}
@@ -1208,26 +1208,26 @@ u32 bam_check_irq_source(void *base, u32 ee, u32 mask,
if (status & IRQ_STTS_BAM_ERROR_IRQ) {
SPS_ERR(dev,
- "sps:bam %pa 0x%p(va);bam irq status=0x%x.\nsps: BAM_ERROR_IRQ\n",
+ "sps:bam %pa 0x%pK(va);bam irq status=0x%x.\nsps: BAM_ERROR_IRQ\n",
BAM_ID(dev), dev->base, status);
bam_output_register_content(base, ee);
*cb_case = SPS_CALLBACK_BAM_ERROR_IRQ;
} else if (status & IRQ_STTS_BAM_HRESP_ERR_IRQ) {
SPS_ERR(dev,
- "sps:bam %pa 0x%p(va);bam irq status=0x%x.\nsps: BAM_HRESP_ERR_IRQ\n",
+ "sps:bam %pa 0x%pK(va);bam irq status=0x%x.\nsps: BAM_HRESP_ERR_IRQ\n",
BAM_ID(dev), dev->base, status);
bam_output_register_content(base, ee);
*cb_case = SPS_CALLBACK_BAM_HRESP_ERR_IRQ;
#ifdef CONFIG_SPS_SUPPORT_NDP_BAM
} else if (status & IRQ_STTS_BAM_TIMER_IRQ) {
SPS_DBG1(dev,
- "sps:bam 0x%p(va);receive BAM_TIMER_IRQ\n",
+ "sps:bam 0x%pK(va);receive BAM_TIMER_IRQ\n",
dev->base);
*cb_case = SPS_CALLBACK_BAM_TIMER_IRQ;
#endif
} else
SPS_INFO(dev,
- "sps:bam %pa 0x%p(va);bam irq status=0x%x.\n",
+ "sps:bam %pa 0x%pK(va);bam irq status=0x%x.\n",
BAM_ID(dev), dev->base, status);
bam_write_reg(base, IRQ_CLR, 0, status);
@@ -1245,11 +1245,11 @@ void bam_pipe_reset(void *base, u32 pipe)
struct sps_bam *dev = to_sps_bam_dev(base);
if ((dev == NULL) || (&dev->base != base)) {
- SPS_ERR(sps, "%s:Failed to get dev for base addr 0x%p\n",
+ SPS_ERR(sps, "%s:Failed to get dev for base addr 0x%pK\n",
__func__, base);
return;
}
- SPS_DBG2(dev, "sps:%s:bam=%pa 0x%p(va).pipe=%d.",
+ SPS_DBG2(dev, "sps:%s:bam=%pa 0x%pK(va).pipe=%d.",
__func__, BAM_ID(dev), dev->base, pipe);
bam_write_reg(base, P_RST, pipe, 1);
@@ -1266,11 +1266,11 @@ void bam_disable_pipe(void *base, u32 pipe)
struct sps_bam *dev = to_sps_bam_dev(base);
if ((dev == NULL) || (&dev->base != base)) {
- SPS_ERR(sps, "%s:Failed to get dev for base addr 0x%p\n",
+ SPS_ERR(sps, "%s:Failed to get dev for base addr 0x%pK\n",
__func__, base);
return;
}
- SPS_DBG2(dev, "sps:%s:bam=0x%p(va).pipe=%d.", __func__, base, pipe);
+ SPS_DBG2(dev, "sps:%s:bam=0x%pK(va).pipe=%d.", __func__, base, pipe);
bam_write_reg_field(base, P_CTRL, pipe, P_EN, 0);
wmb(); /* ensure pipe is disabled */
}
@@ -1283,20 +1283,20 @@ bool bam_pipe_check_zlt(void *base, u32 pipe)
struct sps_bam *dev = to_sps_bam_dev(base);
if ((dev == NULL) || (&dev->base != base)) {
- SPS_ERR(sps, "%s:Failed to get dev for base addr 0x%p\n",
+ SPS_ERR(sps, "%s:Failed to get dev for base addr 0x%pK\n",
__func__, base);
return false;
}
if (bam_read_reg_field(base, P_HALT, pipe, P_HALT_P_LAST_DESC_ZLT)) {
SPS_DBG(dev,
- "sps:%s:bam=0x%p(va).pipe=%d: the last desc is ZLT.",
+ "sps:%s:bam=0x%pK(va).pipe=%d: the last desc is ZLT.",
__func__, base, pipe);
return true;
}
SPS_DBG(dev,
- "sps:%s:bam=0x%p(va).pipe=%d: the last desc is not ZLT.",
+ "sps:%s:bam=0x%pK(va).pipe=%d: the last desc is not ZLT.",
__func__, base, pipe);
return false;
}
@@ -1309,20 +1309,20 @@ bool bam_pipe_check_pipe_empty(void *base, u32 pipe)
struct sps_bam *dev = to_sps_bam_dev(base);
if ((dev == NULL) || (&dev->base != base)) {
- SPS_ERR(sps, "%s:Failed to get dev for base addr 0x%p\n",
+ SPS_ERR(sps, "%s:Failed to get dev for base addr 0x%pK\n",
__func__, base);
return false;
}
if (bam_read_reg_field(base, P_HALT, pipe, P_HALT_P_PIPE_EMPTY)) {
SPS_DBG(dev,
- "sps:%s:bam=0x%p(va).pipe=%d: desc FIFO is empty.",
+ "sps:%s:bam=0x%pK(va).pipe=%d: desc FIFO is empty.",
__func__, base, pipe);
return true;
}
SPS_DBG(dev,
- "sps:%s:bam=0x%p(va).pipe=%d: desc FIFO is not empty.",
+ "sps:%s:bam=0x%pK(va).pipe=%d: desc FIFO is not empty.",
__func__, base, pipe);
return false;
}
@@ -1336,11 +1336,11 @@ int bam_pipe_init(void *base, u32 pipe, struct bam_pipe_parameters *param,
struct sps_bam *dev = to_sps_bam_dev(base);
if ((dev == NULL) || (&dev->base != base)) {
- SPS_ERR(sps, "%s:Failed to get dev for base addr 0x%p\n",
+ SPS_ERR(sps, "%s:Failed to get dev for base addr 0x%pK\n",
__func__, base);
return SPS_ERROR;
}
- SPS_DBG2(dev, "sps:%s:bam=%pa 0x%p(va).pipe=%d.",
+ SPS_DBG2(dev, "sps:%s:bam=%pa 0x%pK(va).pipe=%d.",
__func__, BAM_ID(dev), dev->base, pipe);
/* Reset the BAM pipe */
@@ -1374,7 +1374,7 @@ int bam_pipe_init(void *base, u32 pipe, struct bam_pipe_parameters *param,
bam_write_reg_field(base, P_CTRL, pipe, P_LOCK_GROUP,
param->lock_group);
- SPS_DBG(dev, "sps:bam=0x%p(va).pipe=%d.lock_group=%d.\n",
+ SPS_DBG(dev, "sps:bam=0x%pK(va).pipe=%d.lock_group=%d.\n",
dev->base, pipe, param->lock_group);
#endif
@@ -1391,7 +1391,7 @@ int bam_pipe_init(void *base, u32 pipe, struct bam_pipe_parameters *param,
bam_write_reg(base, P_EVNT_DEST_ADDR, pipe, peer_dest_addr);
SPS_DBG2(dev,
- "sps:bam=0x%p(va).pipe=%d.peer_bam=0x%x.peer_pipe=%d.\n",
+ "sps:bam=0x%pK(va).pipe=%d.peer_bam=0x%x.peer_pipe=%d.\n",
dev->base, pipe,
(u32) param->peer_phys_addr,
param->peer_pipe);
@@ -1426,11 +1426,11 @@ void bam_pipe_exit(void *base, u32 pipe, u32 ee)
struct sps_bam *dev = to_sps_bam_dev(base);
if ((dev == NULL) || (&dev->base != base)) {
- SPS_ERR(sps, "%s:Failed to get dev for base addr 0x%p\n",
+ SPS_ERR(sps, "%s:Failed to get dev for base addr 0x%pK\n",
__func__, base);
return;
}
- SPS_DBG2(dev, "sps:%s:bam=%pa 0x%p(va).pipe=%d.",
+ SPS_DBG2(dev, "sps:%s:bam=%pa 0x%pK(va).pipe=%d.",
__func__, BAM_ID(dev), dev->base, pipe);
bam_write_reg(base, P_IRQ_EN, pipe, 0);
@@ -1451,15 +1451,15 @@ void bam_pipe_enable(void *base, u32 pipe)
struct sps_bam *dev = to_sps_bam_dev(base);
if ((dev == NULL) || (&dev->base != base)) {
- SPS_ERR(sps, "%s:Failed to get dev for base addr 0x%p\n",
+ SPS_ERR(sps, "%s:Failed to get dev for base addr 0x%pK\n",
__func__, base);
return;
}
- SPS_DBG2(dev, "sps:%s:bam=%pa 0x%p(va).pipe=%d.",
+ SPS_DBG2(dev, "sps:%s:bam=%pa 0x%pK(va).pipe=%d.",
__func__, BAM_ID(dev), dev->base, pipe);
if (bam_read_reg_field(base, P_CTRL, pipe, P_EN))
- SPS_DBG2(dev, "sps:bam=0x%p(va).pipe=%d is already enabled.\n",
+ SPS_DBG2(dev, "sps:bam=0x%pK(va).pipe=%d is already enabled.\n",
dev->base, pipe);
else
bam_write_reg_field(base, P_CTRL, pipe, P_EN, 1);
@@ -1474,11 +1474,11 @@ void bam_pipe_disable(void *base, u32 pipe)
struct sps_bam *dev = to_sps_bam_dev(base);
if ((dev == NULL) || (&dev->base != base)) {
- SPS_ERR(sps, "%s:Failed to get dev for base addr 0x%p\n",
+ SPS_ERR(sps, "%s:Failed to get dev for base addr 0x%pK\n",
__func__, base);
return;
}
- SPS_DBG2(dev, "sps:%s:bam=%pa 0x%p(va).pipe=%d.",
+ SPS_DBG2(dev, "sps:%s:bam=%pa 0x%pK(va).pipe=%d.",
__func__, BAM_ID(dev), dev->base, pipe);
bam_write_reg_field(base, P_CTRL, pipe, P_EN, 0);
@@ -1503,12 +1503,12 @@ void bam_pipe_set_irq(void *base, u32 pipe, enum bam_enable irq_en,
struct sps_bam *dev = to_sps_bam_dev(base);
if ((dev == NULL) || (&dev->base != base)) {
- SPS_ERR(sps, "%s:Failed to get dev for base addr 0x%p\n",
+ SPS_ERR(sps, "%s:Failed to get dev for base addr 0x%pK\n",
__func__, base);
return;
}
SPS_DBG2(dev,
- "sps:%s:bam=%pa 0x%p(va).pipe=%d; irq_en:%d; src_mask:0x%x; ee:%d.\n",
+ "sps:%s:bam=%pa 0x%pK(va).pipe=%d; irq_en:%d; src_mask:0x%x; ee:%d.\n",
__func__, BAM_ID(dev), dev->base, pipe,
irq_en, src_mask, ee);
if (src_mask & BAM_PIPE_IRQ_RST_ERROR) {
diff --git a/drivers/platform/msm/sps/sps.c b/drivers/platform/msm/sps/sps.c
index f9ba30e..e01839f 100644
--- a/drivers/platform/msm/sps/sps.c
+++ b/drivers/platform/msm/sps/sps.c
@@ -945,7 +945,7 @@ static int sps_device_init(void)
goto exit_err;
}
- SPS_DBG3(sps, "sps:bamdma_bam.phys=%pa.virt=0x%p.",
+ SPS_DBG3(sps, "sps:bamdma_bam.phys=%pa.virt=0x%pK.",
&bamdma_props.phys_addr,
bamdma_props.virt_addr);
@@ -960,7 +960,7 @@ static int sps_device_init(void)
goto exit_err;
}
- SPS_DBG3(sps, "sps:bamdma_dma.phys=%pa.virt=0x%p.",
+ SPS_DBG3(sps, "sps:bamdma_dma.phys=%pa.virt=0x%pK.",
&bamdma_props.periph_phys_addr,
bamdma_props.periph_virt_addr);
diff --git a/drivers/platform/msm/sps/sps_bam.c b/drivers/platform/msm/sps/sps_bam.c
index be4a2cc..c1ab20c 100644
--- a/drivers/platform/msm/sps/sps_bam.c
+++ b/drivers/platform/msm/sps/sps_bam.c
@@ -512,12 +512,12 @@ int sps_bam_enable(struct sps_bam *dev)
if (dev->props.logging_number > 0)
dev->props.logging_number--;
SPS_INFO(dev,
- "sps:BAM %pa (va:0x%p) enabled: ver:0x%x, number of pipes:%d\n",
+ "sps:BAM %pa (va:0x%pK) enabled: ver:0x%x, number of pipes:%d\n",
BAM_ID(dev), dev->base, dev->version,
dev->props.num_pipes);
} else
SPS_DBG3(dev,
- "sps:BAM %pa (va:0x%p) enabled: ver:0x%x, number of pipes:%d\n",
+ "sps:BAM %pa (va:0x%pK) enabled: ver:0x%x, number of pipes:%d\n",
BAM_ID(dev), dev->base, dev->version,
dev->props.num_pipes);
@@ -2134,7 +2134,7 @@ int sps_bam_pipe_get_event(struct sps_bam *dev,
if (pipe->sys.no_queue) {
SPS_ERR(dev,
- "sps:Invalid connection for event: BAM %pa pipe %d context 0x%p\n",
+ "sps:Invalid connection for event: BAM %pa pipe %d context 0x%pK\n",
BAM_ID(dev), pipe_index, pipe);
notify->event_id = SPS_EVENT_INVALID;
return SPS_ERROR;
diff --git a/drivers/platform/msm/sps/sps_mem.c b/drivers/platform/msm/sps/sps_mem.c
index 16556bd..105135a0 100644
--- a/drivers/platform/msm/sps/sps_mem.c
+++ b/drivers/platform/msm/sps/sps_mem.c
@@ -1,5 +1,5 @@
-/* Copyright (c) 2011-2013, 2015, 2017, The Linux Foundation. All rights
- * reserved.
+/* Copyright (c) 2011-2013, 2015, 2017, The Linux Foundation.
+ * All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -129,7 +129,7 @@ int sps_mem_init(phys_addr_t pipemem_phys_base, u32 pipemem_size)
iomem_offset = 0;
SPS_DBG(sps,
- "sps:sps_mem_init.iomem_phys=%pa,iomem_virt=0x%p.",
+ "sps:sps_mem_init.iomem_phys=%pa,iomem_virt=0x%pK.",
&iomem_phys, iomem_virt);
}
diff --git a/drivers/platform/msm/sps/sps_rm.c b/drivers/platform/msm/sps/sps_rm.c
index 602a256..276b847 100644
--- a/drivers/platform/msm/sps/sps_rm.c
+++ b/drivers/platform/msm/sps/sps_rm.c
@@ -724,7 +724,7 @@ int sps_rm_state_change(struct sps_pipe *pipe, u32 state)
state == SPS_STATE_ALLOCATE) {
if (sps_rm_alloc(pipe)) {
SPS_ERR(pipe->bam,
- "sps:Fail to allocate resource for BAM 0x%p pipe %d.\n",
+ "sps:Fail to allocate resource for BAM 0x%pK pipe %d.\n",
pipe->bam, pipe->pipe_index);
return SPS_ERROR;
}
@@ -746,7 +746,7 @@ int sps_rm_state_change(struct sps_pipe *pipe, u32 state)
result = sps_bam_pipe_connect(pipe, &params);
if (result) {
SPS_ERR(pipe->bam,
- "sps:Failed to connect BAM 0x%p pipe %d",
+ "sps:Failed to connect BAM 0x%pK pipe %d",
pipe->bam, pipe->pipe_index);
return SPS_ERROR;
}
diff --git a/drivers/platform/x86/hp-wmi.c b/drivers/platform/x86/hp-wmi.c
index 96ffda4..454cb2e 100644
--- a/drivers/platform/x86/hp-wmi.c
+++ b/drivers/platform/x86/hp-wmi.c
@@ -248,7 +248,7 @@ static int hp_wmi_display_state(void)
int ret = hp_wmi_perform_query(HPWMI_DISPLAY_QUERY, 0, &state,
sizeof(state), sizeof(state));
if (ret)
- return -EINVAL;
+ return ret < 0 ? ret : -EINVAL;
return state;
}
@@ -258,7 +258,7 @@ static int hp_wmi_hddtemp_state(void)
int ret = hp_wmi_perform_query(HPWMI_HDDTEMP_QUERY, 0, &state,
sizeof(state), sizeof(state));
if (ret)
- return -EINVAL;
+ return ret < 0 ? ret : -EINVAL;
return state;
}
@@ -268,7 +268,7 @@ static int hp_wmi_als_state(void)
int ret = hp_wmi_perform_query(HPWMI_ALS_QUERY, 0, &state,
sizeof(state), sizeof(state));
if (ret)
- return -EINVAL;
+ return ret < 0 ? ret : -EINVAL;
return state;
}
@@ -279,7 +279,7 @@ static int hp_wmi_dock_state(void)
sizeof(state), sizeof(state));
if (ret)
- return -EINVAL;
+ return ret < 0 ? ret : -EINVAL;
return state & 0x1;
}
@@ -290,7 +290,7 @@ static int hp_wmi_tablet_state(void)
int ret = hp_wmi_perform_query(HPWMI_HARDWARE_QUERY, 0, &state,
sizeof(state), sizeof(state));
if (ret)
- return ret;
+ return ret < 0 ? ret : -EINVAL;
return (state & 0x4) ? 1 : 0;
}
@@ -323,7 +323,7 @@ static int __init hp_wmi_enable_hotkeys(void)
int ret = hp_wmi_perform_query(HPWMI_BIOS_QUERY, 1, &value,
sizeof(value), 0);
if (ret)
- return -EINVAL;
+ return ret < 0 ? ret : -EINVAL;
return 0;
}
@@ -336,7 +336,7 @@ static int hp_wmi_set_block(void *data, bool blocked)
ret = hp_wmi_perform_query(HPWMI_WIRELESS_QUERY, 1,
&query, sizeof(query), 0);
if (ret)
- return -EINVAL;
+ return ret < 0 ? ret : -EINVAL;
return 0;
}
@@ -428,7 +428,7 @@ static int hp_wmi_post_code_state(void)
int ret = hp_wmi_perform_query(HPWMI_POSTCODEERROR_QUERY, 0, &state,
sizeof(state), sizeof(state));
if (ret)
- return -EINVAL;
+ return ret < 0 ? ret : -EINVAL;
return state;
}
@@ -494,7 +494,7 @@ static ssize_t set_als(struct device *dev, struct device_attribute *attr,
int ret = hp_wmi_perform_query(HPWMI_ALS_QUERY, 1, &tmp,
sizeof(tmp), sizeof(tmp));
if (ret)
- return -EINVAL;
+ return ret < 0 ? ret : -EINVAL;
return count;
}
@@ -515,7 +515,7 @@ static ssize_t set_postcode(struct device *dev, struct device_attribute *attr,
ret = hp_wmi_perform_query(HPWMI_POSTCODEERROR_QUERY, 1, &tmp,
sizeof(tmp), sizeof(tmp));
if (ret)
- return -EINVAL;
+ return ret < 0 ? ret : -EINVAL;
return count;
}
@@ -572,10 +572,12 @@ static void hp_wmi_notify(u32 value, void *context)
switch (event_id) {
case HPWMI_DOCK_EVENT:
- input_report_switch(hp_wmi_input_dev, SW_DOCK,
- hp_wmi_dock_state());
- input_report_switch(hp_wmi_input_dev, SW_TABLET_MODE,
- hp_wmi_tablet_state());
+ if (test_bit(SW_DOCK, hp_wmi_input_dev->swbit))
+ input_report_switch(hp_wmi_input_dev, SW_DOCK,
+ hp_wmi_dock_state());
+ if (test_bit(SW_TABLET_MODE, hp_wmi_input_dev->swbit))
+ input_report_switch(hp_wmi_input_dev, SW_TABLET_MODE,
+ hp_wmi_tablet_state());
input_sync(hp_wmi_input_dev);
break;
case HPWMI_PARK_HDD:
@@ -644,6 +646,7 @@ static int __init hp_wmi_input_setup(void)
{
acpi_status status;
int err;
+ int val;
hp_wmi_input_dev = input_allocate_device();
if (!hp_wmi_input_dev)
@@ -654,17 +657,26 @@ static int __init hp_wmi_input_setup(void)
hp_wmi_input_dev->id.bustype = BUS_HOST;
__set_bit(EV_SW, hp_wmi_input_dev->evbit);
- __set_bit(SW_DOCK, hp_wmi_input_dev->swbit);
- __set_bit(SW_TABLET_MODE, hp_wmi_input_dev->swbit);
+
+ /* Dock */
+ val = hp_wmi_dock_state();
+ if (val >= 0) {
+ __set_bit(SW_DOCK, hp_wmi_input_dev->swbit);
+ input_report_switch(hp_wmi_input_dev, SW_DOCK, val);
+ }
+
+ /* Tablet mode */
+ val = hp_wmi_tablet_state();
+ if (val >= 0) {
+ __set_bit(SW_TABLET_MODE, hp_wmi_input_dev->swbit);
+ input_report_switch(hp_wmi_input_dev, SW_TABLET_MODE, val);
+ }
err = sparse_keymap_setup(hp_wmi_input_dev, hp_wmi_keymap, NULL);
if (err)
goto err_free_dev;
/* Set initial hardware state */
- input_report_switch(hp_wmi_input_dev, SW_DOCK, hp_wmi_dock_state());
- input_report_switch(hp_wmi_input_dev, SW_TABLET_MODE,
- hp_wmi_tablet_state());
input_sync(hp_wmi_input_dev);
if (!hp_wmi_bios_2009_later() && hp_wmi_bios_2008_later())
@@ -950,10 +962,12 @@ static int hp_wmi_resume_handler(struct device *device)
* changed.
*/
if (hp_wmi_input_dev) {
- input_report_switch(hp_wmi_input_dev, SW_DOCK,
- hp_wmi_dock_state());
- input_report_switch(hp_wmi_input_dev, SW_TABLET_MODE,
- hp_wmi_tablet_state());
+ if (test_bit(SW_DOCK, hp_wmi_input_dev->swbit))
+ input_report_switch(hp_wmi_input_dev, SW_DOCK,
+ hp_wmi_dock_state());
+ if (test_bit(SW_TABLET_MODE, hp_wmi_input_dev->swbit))
+ input_report_switch(hp_wmi_input_dev, SW_TABLET_MODE,
+ hp_wmi_tablet_state());
input_sync(hp_wmi_input_dev);
}
diff --git a/drivers/platform/x86/intel_mid_thermal.c b/drivers/platform/x86/intel_mid_thermal.c
index 9f713b8..5c768c4 100644
--- a/drivers/platform/x86/intel_mid_thermal.c
+++ b/drivers/platform/x86/intel_mid_thermal.c
@@ -550,6 +550,7 @@ static const struct platform_device_id therm_id_table[] = {
{ "msic_thermal", 1 },
{ }
};
+MODULE_DEVICE_TABLE(platform, therm_id_table);
static struct platform_driver mid_thermal_driver = {
.driver = {
diff --git a/drivers/power/reset/msm-poweroff.c b/drivers/power/reset/msm-poweroff.c
index c090b2a..bfc401a 100644
--- a/drivers/power/reset/msm-poweroff.c
+++ b/drivers/power/reset/msm-poweroff.c
@@ -33,6 +33,7 @@
#include <soc/qcom/scm.h>
#include <soc/qcom/restart.h>
#include <soc/qcom/watchdog.h>
+#include <soc/qcom/minidump.h>
#define EMERGENCY_DLOAD_MAGIC1 0x322A4F99
#define EMERGENCY_DLOAD_MAGIC2 0xC67E4350
@@ -42,10 +43,11 @@
#define SCM_IO_DISABLE_PMIC_ARBITER 1
#define SCM_IO_DEASSERT_PS_HOLD 2
#define SCM_WDOG_DEBUG_BOOT_PART 0x9
-#define SCM_DLOAD_MODE 0X10
+#define SCM_DLOAD_FULLDUMP 0X10
#define SCM_EDLOAD_MODE 0X01
#define SCM_DLOAD_CMD 0x10
-
+#define SCM_DLOAD_MINIDUMP 0X20
+#define SCM_DLOAD_BOTHDUMPS (SCM_DLOAD_MINIDUMP | SCM_DLOAD_FULLDUMP)
static int restart_mode;
static void __iomem *restart_reason, *dload_type_addr;
@@ -65,6 +67,7 @@ static struct kobject dload_kobj;
#endif
static int in_panic;
+static int dload_type = SCM_DLOAD_FULLDUMP;
static void *dload_mode_addr;
static bool dload_mode_enabled;
static void *emergency_dload_mode_addr;
@@ -137,7 +140,7 @@ static void set_dload_mode(int on)
mb();
}
- ret = scm_set_dload_mode(on ? SCM_DLOAD_MODE : 0, 0);
+ ret = scm_set_dload_mode(on ? dload_type : 0, 0);
if (ret)
pr_err("Failed to set secure DLOAD mode: %d\n", ret);
@@ -452,6 +455,9 @@ static ssize_t show_emmc_dload(struct kobject *kobj, struct attribute *attr,
{
uint32_t read_val, show_val;
+ if (!dload_type_addr)
+ return -ENODEV;
+
read_val = __raw_readl(dload_type_addr);
if (read_val == EMMC_DLOAD_TYPE)
show_val = 1;
@@ -467,6 +473,9 @@ static size_t store_emmc_dload(struct kobject *kobj, struct attribute *attr,
uint32_t enabled;
int ret;
+ if (!dload_type_addr)
+ return -ENODEV;
+
ret = kstrtouint(buf, 0, &enabled);
if (ret < 0)
return ret;
@@ -481,10 +490,57 @@ static size_t store_emmc_dload(struct kobject *kobj, struct attribute *attr,
return count;
}
+
+#ifdef CONFIG_QCOM_MINIDUMP
+static DEFINE_MUTEX(tcsr_lock);
+
+static ssize_t show_dload_mode(struct kobject *kobj, struct attribute *attr,
+ char *buf)
+{
+ return scnprintf(buf, PAGE_SIZE, "DLOAD dump type: %s\n",
+ (dload_type == SCM_DLOAD_BOTHDUMPS) ? "both" :
+ ((dload_type == SCM_DLOAD_MINIDUMP) ? "mini" : "full"));
+}
+
+static ssize_t store_dload_mode(struct kobject *kobj, struct attribute *attr,
+ const char *buf, size_t count)
+{
+ if (sysfs_streq(buf, "full")) {
+ dload_type = SCM_DLOAD_FULLDUMP;
+ } else if (sysfs_streq(buf, "mini")) {
+ if (!msm_minidump_enabled()) {
+ pr_err("Minidump is not enabled\n");
+ return -ENODEV;
+ }
+ dload_type = SCM_DLOAD_MINIDUMP;
+ } else if (sysfs_streq(buf, "both")) {
+ if (!msm_minidump_enabled()) {
+ pr_err("Minidump not enabled, setting fulldump only\n");
+ dload_type = SCM_DLOAD_FULLDUMP;
+ return count;
+ }
+ dload_type = SCM_DLOAD_BOTHDUMPS;
+ } else {
+ pr_err("Invalid dump setup request\n");
+ pr_err("Supported dumps: 'full', 'mini', or 'both'\n");
+ return -EINVAL;
+ }
+
+ mutex_lock(&tcsr_lock);
+ /* Overwrite TCSR reg */
+ set_dload_mode(dload_type);
+ mutex_unlock(&tcsr_lock);
+ return count;
+}
+RESET_ATTR(dload_mode, 0644, show_dload_mode, store_dload_mode);
+#endif
RESET_ATTR(emmc_dload, 0644, show_emmc_dload, store_emmc_dload);
static struct attribute *reset_attrs[] = {
&reset_attr_emmc_dload.attr,
+#ifdef CONFIG_QCOM_MINIDUMP
+ &reset_attr_dload_mode.attr,
+#endif
NULL
};
diff --git a/drivers/power/supply/axp288_fuel_gauge.c b/drivers/power/supply/axp288_fuel_gauge.c
index f62f9df..089056c 100644
--- a/drivers/power/supply/axp288_fuel_gauge.c
+++ b/drivers/power/supply/axp288_fuel_gauge.c
@@ -29,6 +29,7 @@
#include <linux/iio/consumer.h>
#include <linux/debugfs.h>
#include <linux/seq_file.h>
+#include <asm/unaligned.h>
#define CHRG_STAT_BAT_SAFE_MODE (1 << 3)
#define CHRG_STAT_BAT_VALID (1 << 4)
@@ -73,17 +74,15 @@
#define FG_CNTL_CC_EN (1 << 6)
#define FG_CNTL_GAUGE_EN (1 << 7)
+#define FG_15BIT_WORD_VALID (1 << 15)
+#define FG_15BIT_VAL_MASK 0x7fff
+
#define FG_REP_CAP_VALID (1 << 7)
#define FG_REP_CAP_VAL_MASK 0x7F
#define FG_DES_CAP1_VALID (1 << 7)
-#define FG_DES_CAP1_VAL_MASK 0x7F
-#define FG_DES_CAP0_VAL_MASK 0xFF
#define FG_DES_CAP_RES_LSB 1456 /* 1.456mAhr */
-#define FG_CC_MTR1_VALID (1 << 7)
-#define FG_CC_MTR1_VAL_MASK 0x7F
-#define FG_CC_MTR0_VAL_MASK 0xFF
#define FG_DES_CC_RES_LSB 1456 /* 1.456mAhr */
#define FG_OCV_CAP_VALID (1 << 7)
@@ -189,6 +188,44 @@ static int fuel_gauge_reg_writeb(struct axp288_fg_info *info, int reg, u8 val)
return ret;
}
+static int fuel_gauge_read_15bit_word(struct axp288_fg_info *info, int reg)
+{
+ unsigned char buf[2];
+ int ret;
+
+ ret = regmap_bulk_read(info->regmap, reg, buf, 2);
+ if (ret < 0) {
+ dev_err(&info->pdev->dev, "Error reading reg 0x%02x err: %d\n",
+ reg, ret);
+ return ret;
+ }
+
+ ret = get_unaligned_be16(buf);
+ if (!(ret & FG_15BIT_WORD_VALID)) {
+ dev_err(&info->pdev->dev, "Error reg 0x%02x contents not valid\n",
+ reg);
+ return -ENXIO;
+ }
+
+ return ret & FG_15BIT_VAL_MASK;
+}
+
+static int fuel_gauge_read_12bit_word(struct axp288_fg_info *info, int reg)
+{
+ unsigned char buf[2];
+ int ret;
+
+ ret = regmap_bulk_read(info->regmap, reg, buf, 2);
+ if (ret < 0) {
+ dev_err(&info->pdev->dev, "Error reading reg 0x%02x err: %d\n",
+ reg, ret);
+ return ret;
+ }
+
+ /* 12-bit value: upper 8 bits in buf[0], lower 4 in high nibble of buf[1] */
+ return (buf[0] << 4) | ((buf[1] >> 4) & 0x0f);
+}
+
static int pmic_read_adc_val(const char *name, int *raw_val,
struct axp288_fg_info *info)
{
@@ -249,24 +286,15 @@ static int fuel_gauge_debug_show(struct seq_file *s, void *data)
seq_printf(s, " FG_RDC0[%02x] : %02x\n",
AXP288_FG_RDC0_REG,
fuel_gauge_reg_readb(info, AXP288_FG_RDC0_REG));
- seq_printf(s, " FG_OCVH[%02x] : %02x\n",
+ seq_printf(s, " FG_OCV[%02x] : %04x\n",
AXP288_FG_OCVH_REG,
- fuel_gauge_reg_readb(info, AXP288_FG_OCVH_REG));
- seq_printf(s, " FG_OCVL[%02x] : %02x\n",
- AXP288_FG_OCVL_REG,
- fuel_gauge_reg_readb(info, AXP288_FG_OCVL_REG));
- seq_printf(s, "FG_DES_CAP1[%02x] : %02x\n",
+ fuel_gauge_read_12bit_word(info, AXP288_FG_OCVH_REG));
+ seq_printf(s, " FG_DES_CAP[%02x] : %04x\n",
AXP288_FG_DES_CAP1_REG,
- fuel_gauge_reg_readb(info, AXP288_FG_DES_CAP1_REG));
- seq_printf(s, "FG_DES_CAP0[%02x] : %02x\n",
- AXP288_FG_DES_CAP0_REG,
- fuel_gauge_reg_readb(info, AXP288_FG_DES_CAP0_REG));
- seq_printf(s, " FG_CC_MTR1[%02x] : %02x\n",
+ fuel_gauge_read_15bit_word(info, AXP288_FG_DES_CAP1_REG));
+ seq_printf(s, " FG_CC_MTR[%02x] : %04x\n",
AXP288_FG_CC_MTR1_REG,
- fuel_gauge_reg_readb(info, AXP288_FG_CC_MTR1_REG));
- seq_printf(s, " FG_CC_MTR0[%02x] : %02x\n",
- AXP288_FG_CC_MTR0_REG,
- fuel_gauge_reg_readb(info, AXP288_FG_CC_MTR0_REG));
+ fuel_gauge_read_15bit_word(info, AXP288_FG_CC_MTR1_REG));
seq_printf(s, " FG_OCV_CAP[%02x] : %02x\n",
AXP288_FG_OCV_CAP_REG,
fuel_gauge_reg_readb(info, AXP288_FG_OCV_CAP_REG));
@@ -517,21 +545,12 @@ static int fuel_gauge_get_btemp(struct axp288_fg_info *info, int *btemp)
static int fuel_gauge_get_vocv(struct axp288_fg_info *info, int *vocv)
{
- int ret, value;
+ int ret;
- /* 12-bit data value, upper 8 in OCVH, lower 4 in OCVL */
- ret = fuel_gauge_reg_readb(info, AXP288_FG_OCVH_REG);
- if (ret < 0)
- goto vocv_read_fail;
- value = ret << 4;
+ ret = fuel_gauge_read_12bit_word(info, AXP288_FG_OCVH_REG);
+ if (ret >= 0)
+ *vocv = VOLTAGE_FROM_ADC(ret);
- ret = fuel_gauge_reg_readb(info, AXP288_FG_OCVL_REG);
- if (ret < 0)
- goto vocv_read_fail;
- value |= (ret & 0xf);
-
- *vocv = VOLTAGE_FROM_ADC(value);
-vocv_read_fail:
return ret;
}
@@ -663,28 +682,18 @@ static int fuel_gauge_get_property(struct power_supply *ps,
val->intval = POWER_SUPPLY_TECHNOLOGY_LION;
break;
case POWER_SUPPLY_PROP_CHARGE_NOW:
- ret = fuel_gauge_reg_readb(info, AXP288_FG_CC_MTR1_REG);
+ ret = fuel_gauge_read_15bit_word(info, AXP288_FG_CC_MTR1_REG);
if (ret < 0)
goto fuel_gauge_read_err;
- value = (ret & FG_CC_MTR1_VAL_MASK) << 8;
- ret = fuel_gauge_reg_readb(info, AXP288_FG_CC_MTR0_REG);
- if (ret < 0)
- goto fuel_gauge_read_err;
- value |= (ret & FG_CC_MTR0_VAL_MASK);
- val->intval = value * FG_DES_CAP_RES_LSB;
+ val->intval = ret * FG_DES_CAP_RES_LSB;
break;
case POWER_SUPPLY_PROP_CHARGE_FULL:
- ret = fuel_gauge_reg_readb(info, AXP288_FG_DES_CAP1_REG);
+ ret = fuel_gauge_read_15bit_word(info, AXP288_FG_DES_CAP1_REG);
if (ret < 0)
goto fuel_gauge_read_err;
- value = (ret & FG_DES_CAP1_VAL_MASK) << 8;
- ret = fuel_gauge_reg_readb(info, AXP288_FG_DES_CAP0_REG);
- if (ret < 0)
- goto fuel_gauge_read_err;
- value |= (ret & FG_DES_CAP0_VAL_MASK);
- val->intval = value * FG_DES_CAP_RES_LSB;
+ val->intval = ret * FG_DES_CAP_RES_LSB;
break;
case POWER_SUPPLY_PROP_CHARGE_FULL_DESIGN:
val->intval = PROP_CURR(info->pdata->design_cap);
diff --git a/drivers/power/supply/power_supply_sysfs.c b/drivers/power/supply/power_supply_sysfs.c
index b929d8b..785cf23 100644
--- a/drivers/power/supply/power_supply_sysfs.c
+++ b/drivers/power/supply/power_supply_sysfs.c
@@ -319,6 +319,7 @@ static struct device_attribute power_supply_attrs[] = {
POWER_SUPPLY_ATTR(pd_voltage_max),
POWER_SUPPLY_ATTR(pd_voltage_min),
POWER_SUPPLY_ATTR(sdp_current_max),
+ POWER_SUPPLY_ATTR(connector_type),
/* Local extensions of type int64_t */
POWER_SUPPLY_ATTR(charge_counter_ext),
/* Properties of type `const char *' */
diff --git a/drivers/power/supply/qcom/battery.c b/drivers/power/supply/qcom/battery.c
index 4b900e2..aa5b1b0 100644
--- a/drivers/power/supply/qcom/battery.c
+++ b/drivers/power/supply/qcom/battery.c
@@ -150,22 +150,52 @@ static void split_settled(struct pl_data *chip)
total_current_ua = pval.intval;
}
- pval.intval = total_current_ua - slave_ua;
- /* Set ICL on main charger */
- rc = power_supply_set_property(chip->main_psy,
+ /*
+ * If there is an increase in slave share
+ * (Also handles parallel enable case)
+ * Set Main ICL then slave ICL
+ * else
+ * (Also handles parallel disable case)
+ * Set slave ICL then main ICL.
+ */
+ if (slave_ua > chip->pl_settled_ua) {
+ pval.intval = total_current_ua - slave_ua;
+ /* Set ICL on main charger */
+ rc = power_supply_set_property(chip->main_psy,
POWER_SUPPLY_PROP_CURRENT_MAX, &pval);
- if (rc < 0) {
- pr_err("Couldn't change slave suspend state rc=%d\n", rc);
- return;
- }
+ if (rc < 0) {
+ pr_err("Couldn't change slave suspend state rc=%d\n",
+ rc);
+ return;
+ }
- /* set parallel's ICL could be 0mA when pl is disabled */
- pval.intval = slave_ua;
- rc = power_supply_set_property(chip->pl_psy,
- POWER_SUPPLY_PROP_CURRENT_MAX, &pval);
- if (rc < 0) {
- pr_err("Couldn't set parallel icl, rc=%d\n", rc);
- return;
+ /* set parallel's ICL could be 0mA when pl is disabled */
+ pval.intval = slave_ua;
+ rc = power_supply_set_property(chip->pl_psy,
+ POWER_SUPPLY_PROP_CURRENT_MAX, &pval);
+ if (rc < 0) {
+ pr_err("Couldn't set parallel icl, rc=%d\n", rc);
+ return;
+ }
+ } else {
+ /* set parallel's ICL could be 0mA when pl is disabled */
+ pval.intval = slave_ua;
+ rc = power_supply_set_property(chip->pl_psy,
+ POWER_SUPPLY_PROP_CURRENT_MAX, &pval);
+ if (rc < 0) {
+ pr_err("Couldn't set parallel icl, rc=%d\n", rc);
+ return;
+ }
+
+ pval.intval = total_current_ua - slave_ua;
+ /* Set ICL on main charger */
+ rc = power_supply_set_property(chip->main_psy,
+ POWER_SUPPLY_PROP_CURRENT_MAX, &pval);
+ if (rc < 0) {
+ pr_err("Couldn't change slave suspend state rc=%d\n",
+ rc);
+ return;
+ }
}
chip->total_settled_ua = total_settled_ua;
@@ -626,24 +656,56 @@ static int pl_disable_vote_callback(struct votable *votable,
get_fcc_split(chip, total_fcc_ua, &master_fcc_ua,
&slave_fcc_ua);
- chip->slave_fcc_ua = slave_fcc_ua;
-
- pval.intval = master_fcc_ua;
- rc = power_supply_set_property(chip->main_psy,
+ /*
+ * If there is an increase in slave share
+ * (Also handles parallel enable case)
+ * Set Main ICL then slave FCC
+ * else
+ * (Also handles parallel disable case)
+ * Set slave ICL then main FCC.
+ */
+ if (slave_fcc_ua > chip->slave_fcc_ua) {
+ pval.intval = master_fcc_ua;
+ rc = power_supply_set_property(chip->main_psy,
POWER_SUPPLY_PROP_CONSTANT_CHARGE_CURRENT_MAX,
&pval);
- if (rc < 0) {
- pr_err("Could not set main fcc, rc=%d\n", rc);
- return rc;
- }
+ if (rc < 0) {
+ pr_err("Could not set main fcc, rc=%d\n", rc);
+ return rc;
+ }
- pval.intval = slave_fcc_ua;
- rc = power_supply_set_property(chip->pl_psy,
+ pval.intval = slave_fcc_ua;
+ rc = power_supply_set_property(chip->pl_psy,
POWER_SUPPLY_PROP_CONSTANT_CHARGE_CURRENT_MAX,
&pval);
- if (rc < 0) {
- pr_err("Couldn't set parallel fcc, rc=%d\n", rc);
- return rc;
+ if (rc < 0) {
+ pr_err("Couldn't set parallel fcc, rc=%d\n",
+ rc);
+ return rc;
+ }
+
+ chip->slave_fcc_ua = slave_fcc_ua;
+ } else {
+ pval.intval = slave_fcc_ua;
+ rc = power_supply_set_property(chip->pl_psy,
+ POWER_SUPPLY_PROP_CONSTANT_CHARGE_CURRENT_MAX,
+ &pval);
+ if (rc < 0) {
+ pr_err("Couldn't set parallel fcc, rc=%d\n",
+ rc);
+ return rc;
+ }
+
+ chip->slave_fcc_ua = slave_fcc_ua;
+
+ pval.intval = master_fcc_ua;
+ rc = power_supply_set_property(chip->main_psy,
+ POWER_SUPPLY_PROP_CONSTANT_CHARGE_CURRENT_MAX,
+ &pval);
+ if (rc < 0) {
+ pr_err("Could not set main fcc, rc=%d\n", rc);
+ return rc;
+ }
}
/*
diff --git a/drivers/power/supply/qcom/fg-core.h b/drivers/power/supply/qcom/fg-core.h
index 9179325..99120f4 100644
--- a/drivers/power/supply/qcom/fg-core.h
+++ b/drivers/power/supply/qcom/fg-core.h
@@ -458,7 +458,6 @@ struct fg_chip {
bool qnovo_enable;
struct completion soc_update;
struct completion soc_ready;
- struct completion mem_grant;
struct delayed_work profile_load_work;
struct work_struct status_change_work;
struct delayed_work ttf_work;
diff --git a/drivers/power/supply/qcom/fg-memif.c b/drivers/power/supply/qcom/fg-memif.c
index 279b097..d9b5ad7 100644
--- a/drivers/power/supply/qcom/fg-memif.c
+++ b/drivers/power/supply/qcom/fg-memif.c
@@ -746,15 +746,12 @@ int fg_interleaved_mem_write(struct fg_chip *chip, u16 address, u8 offset,
return rc;
}
-#define MEM_GRANT_WAIT_MS 200
+#define MEM_GNT_WAIT_TIME_US 10000
+#define MEM_GNT_RETRIES 20
static int fg_direct_mem_request(struct fg_chip *chip, bool request)
{
- int rc, ret;
+ int rc, ret, i = 0;
u8 val, mask;
- bool tried_again = false;
-
- if (request)
- reinit_completion(&chip->mem_grant);
mask = MEM_ACCESS_REQ_BIT | IACS_SLCT_BIT;
val = request ? MEM_ACCESS_REQ_BIT : 0;
@@ -769,7 +766,7 @@ static int fg_direct_mem_request(struct fg_chip *chip, bool request)
rc = fg_masked_write(chip, MEM_IF_MEM_ARB_CFG(chip), mask, val);
if (rc < 0) {
pr_err("failed to configure mem_if_mem_arb_cfg rc:%d\n", rc);
- return rc;
+ goto release;
}
if (request)
@@ -780,43 +777,39 @@ static int fg_direct_mem_request(struct fg_chip *chip, bool request)
if (!request)
return 0;
-wait:
- ret = wait_for_completion_interruptible_timeout(
- &chip->mem_grant, msecs_to_jiffies(MEM_GRANT_WAIT_MS));
- /* If we were interrupted wait again one more time. */
- if (ret <= 0) {
- if ((ret == -ERESTARTSYS || ret == 0) && !tried_again) {
- pr_debug("trying again, ret=%d\n", ret);
- tried_again = true;
- goto wait;
- } else {
- pr_err("wait for mem_grant timed out ret=%d\n",
- ret);
- fg_dump_regs(chip);
+ while (i < MEM_GNT_RETRIES) {
+ rc = fg_read(chip, MEM_IF_INT_RT_STS(chip), &val, 1);
+ if (rc < 0) {
+ pr_err("Error in reading MEM_IF_INT_RT_STS, rc=%d\n",
+ rc);
+ goto release;
}
+
+ if (val & MEM_GNT_BIT)
+ return 0;
+
+ usleep_range(MEM_GNT_WAIT_TIME_US, MEM_GNT_WAIT_TIME_US + 1);
+ i++;
}
- if (ret <= 0) {
- val = 0;
- mask = MEM_ACCESS_REQ_BIT | IACS_SLCT_BIT;
- rc = fg_masked_write(chip, MEM_IF_MEM_INTF_CFG(chip), mask,
- val);
- if (rc < 0) {
- pr_err("failed to configure mem_if_mem_intf_cfg rc=%d\n",
- rc);
- return rc;
- }
+ rc = -ETIMEDOUT;
+ pr_err("wait for mem_grant timed out, val=0x%x\n", val);
+ fg_dump_regs(chip);
- mask = MEM_ARB_LO_LATENCY_EN_BIT | MEM_ARB_REQ_BIT;
- rc = fg_masked_write(chip, MEM_IF_MEM_ARB_CFG(chip), mask,
- val);
- if (rc < 0) {
- pr_err("failed to configure mem_if_mem_arb_cfg rc:%d\n",
- rc);
- return rc;
- }
+release:
+ val = 0;
+ mask = MEM_ACCESS_REQ_BIT | IACS_SLCT_BIT;
+ ret = fg_masked_write(chip, MEM_IF_MEM_INTF_CFG(chip), mask, val);
+ if (ret < 0) {
+ pr_err("failed to configure mem_if_mem_intf_cfg rc=%d\n", ret);
+ return ret;
+ }
- return -ETIMEDOUT;
+ mask = MEM_ARB_LO_LATENCY_EN_BIT | MEM_ARB_REQ_BIT;
+ ret = fg_masked_write(chip, MEM_IF_MEM_ARB_CFG(chip), mask, val);
+ if (ret < 0) {
+ pr_err("failed to configure mem_if_mem_arb_cfg rc:%d\n", ret);
+ return ret;
}
return rc;
diff --git a/drivers/power/supply/qcom/qpnp-fg-gen3.c b/drivers/power/supply/qcom/qpnp-fg-gen3.c
index 2044657..8c53b2e 100644
--- a/drivers/power/supply/qcom/qpnp-fg-gen3.c
+++ b/drivers/power/supply/qcom/qpnp-fg-gen3.c
@@ -881,7 +881,7 @@ static int fg_get_prop_capacity(struct fg_chip *chip, int *val)
return 0;
}
- if (chip->battery_missing) {
+ if (chip->battery_missing || !chip->soc_reporting_ready) {
*val = BATT_MISS_SOC;
return 0;
}
@@ -2567,6 +2567,11 @@ static void status_change_work(struct work_struct *work)
goto out;
}
+ if (!chip->soc_reporting_ready) {
+ fg_dbg(chip, FG_STATUS, "Profile load is not complete yet\n");
+ goto out;
+ }
+
rc = power_supply_get_property(chip->batt_psy, POWER_SUPPLY_PROP_STATUS,
&prop);
if (rc < 0) {
@@ -2630,7 +2635,7 @@ static void status_change_work(struct work_struct *work)
fg_ttf_update(chip);
chip->prev_charge_status = chip->charge_status;
out:
- fg_dbg(chip, FG_POWER_SUPPLY, "charge_status:%d charge_type:%d charge_done:%d\n",
+ fg_dbg(chip, FG_STATUS, "charge_status:%d charge_type:%d charge_done:%d\n",
chip->charge_status, chip->charge_type, chip->charge_done);
pm_relax(chip->dev);
}
@@ -2733,6 +2738,49 @@ static bool is_profile_load_required(struct fg_chip *chip)
return true;
}
+static void fg_update_batt_profile(struct fg_chip *chip)
+{
+ int rc, offset;
+ u8 val;
+
+ rc = fg_sram_read(chip, PROFILE_INTEGRITY_WORD,
+ SW_CONFIG_OFFSET, &val, 1, FG_IMA_DEFAULT);
+ if (rc < 0) {
+ pr_err("Error in reading SW_CONFIG_OFFSET, rc=%d\n", rc);
+ return;
+ }
+
+ /*
+ * If the RCONN had not been updated, no need to update battery
+ * profile. Else, update the battery profile so that the profile
+ * modified by bootloader or HLOS matches with the profile read
+ * from device tree.
+ */
+
+ if (!(val & RCONN_CONFIG_BIT))
+ return;
+
+ rc = fg_sram_read(chip, ESR_RSLOW_CHG_WORD,
+ ESR_RSLOW_CHG_OFFSET, &val, 1, FG_IMA_DEFAULT);
+ if (rc < 0) {
+ pr_err("Error in reading ESR_RSLOW_CHG_OFFSET, rc=%d\n", rc);
+ return;
+ }
+ offset = (ESR_RSLOW_CHG_WORD - PROFILE_LOAD_WORD) * 4
+ + ESR_RSLOW_CHG_OFFSET;
+ chip->batt_profile[offset] = val;
+
+ rc = fg_sram_read(chip, ESR_RSLOW_DISCHG_WORD,
+ ESR_RSLOW_DISCHG_OFFSET, &val, 1, FG_IMA_DEFAULT);
+ if (rc < 0) {
+ pr_err("Error in reading ESR_RSLOW_DISCHG_OFFSET, rc=%d\n", rc);
+ return;
+ }
+ offset = (ESR_RSLOW_DISCHG_WORD - PROFILE_LOAD_WORD) * 4
+ + ESR_RSLOW_DISCHG_OFFSET;
+ chip->batt_profile[offset] = val;
+}
+
static void clear_battery_profile(struct fg_chip *chip)
{
u8 val = 0;
@@ -2826,6 +2874,8 @@ static void profile_load_work(struct work_struct *work)
if (!chip->profile_available)
goto out;
+ fg_update_batt_profile(chip);
+
if (!is_profile_load_required(chip))
goto done;
@@ -2887,6 +2937,10 @@ static void profile_load_work(struct work_struct *work)
rc);
}
+ rc = fg_rconn_config(chip);
+ if (rc < 0)
+ pr_err("Error in configuring Rconn, rc=%d\n", rc);
+
batt_psy_initialized(chip);
fg_notify_charger(chip);
chip->profile_loaded = true;
@@ -2896,6 +2950,10 @@ static void profile_load_work(struct work_struct *work)
vote(chip->awake_votable, ESR_FCC_VOTER, true, 0);
schedule_delayed_work(&chip->pl_enable_work, msecs_to_jiffies(5000));
vote(chip->awake_votable, PROFILE_LOAD, false, 0);
+ if (!work_pending(&chip->status_change_work)) {
+ pm_stay_awake(chip->dev);
+ schedule_work(&chip->status_change_work);
+ }
}
static void sram_dump_work(struct work_struct *work)
@@ -4083,12 +4141,6 @@ static int fg_hw_init(struct fg_chip *chip)
return rc;
}
- rc = fg_rconn_config(chip);
- if (rc < 0) {
- pr_err("Error in configuring Rconn, rc=%d\n", rc);
- return rc;
- }
-
fg_encode(chip->sp, FG_SRAM_ESR_TIGHT_FILTER,
chip->dt.esr_tight_flt_upct, buf);
rc = fg_sram_write(chip, chip->sp[FG_SRAM_ESR_TIGHT_FILTER].addr_word,
@@ -4184,25 +4236,6 @@ static int fg_adjust_timebase(struct fg_chip *chip)
/* INTERRUPT HANDLERS STAY HERE */
-static irqreturn_t fg_dma_grant_irq_handler(int irq, void *data)
-{
- struct fg_chip *chip = data;
- u8 status;
- int rc;
-
- rc = fg_read(chip, MEM_IF_INT_RT_STS(chip), &status, 1);
- if (rc < 0) {
- pr_err("failed to read addr=0x%04x, rc=%d\n",
- MEM_IF_INT_RT_STS(chip), rc);
- return IRQ_HANDLED;
- }
-
- fg_dbg(chip, FG_IRQ, "irq %d triggered, status:%d\n", irq, status);
- complete_all(&chip->mem_grant);
-
- return IRQ_HANDLED;
-}
-
static irqreturn_t fg_mem_xcp_irq_handler(int irq, void *data)
{
struct fg_chip *chip = data;
@@ -4490,7 +4523,7 @@ static struct fg_irq_info fg_irqs[FG_IRQ_MAX] = {
/* MEM_IF irqs */
[DMA_GRANT_IRQ] = {
.name = "dma-grant",
- .handler = fg_dma_grant_irq_handler,
+ .handler = fg_dummy_irq_handler,
.wakeable = true,
},
[MEM_XCP_IRQ] = {
@@ -5167,7 +5200,6 @@ static int fg_gen3_probe(struct platform_device *pdev)
mutex_init(&chip->qnovo_esr_ctrl_lock);
init_completion(&chip->soc_update);
init_completion(&chip->soc_ready);
- init_completion(&chip->mem_grant);
INIT_DELAYED_WORK(&chip->profile_load_work, profile_load_work);
INIT_DELAYED_WORK(&chip->pl_enable_work, pl_enable_work);
INIT_WORK(&chip->status_change_work, status_change_work);
@@ -5183,23 +5215,6 @@ static int fg_gen3_probe(struct platform_device *pdev)
platform_set_drvdata(pdev, chip);
- rc = fg_register_interrupts(chip);
- if (rc < 0) {
- dev_err(chip->dev, "Error in registering interrupts, rc:%d\n",
- rc);
- goto exit;
- }
-
- /* Keep SOC_UPDATE irq disabled until we require it */
- if (fg_irqs[SOC_UPDATE_IRQ].irq)
- disable_irq_nosync(fg_irqs[SOC_UPDATE_IRQ].irq);
-
- /* Keep BSOC_DELTA_IRQ disabled until we require it */
- vote(chip->delta_bsoc_irq_en_votable, DELTA_BSOC_IRQ_VOTER, false, 0);
-
- /* Keep BATT_MISSING_IRQ disabled until we require it */
- vote(chip->batt_miss_irq_en_votable, BATT_MISS_IRQ_VOTER, false, 0);
-
rc = fg_hw_init(chip);
if (rc < 0) {
dev_err(chip->dev, "Error in initializing FG hardware, rc:%d\n",
@@ -5227,6 +5242,23 @@ static int fg_gen3_probe(struct platform_device *pdev)
goto exit;
}
+ rc = fg_register_interrupts(chip);
+ if (rc < 0) {
+ dev_err(chip->dev, "Error in registering interrupts, rc:%d\n",
+ rc);
+ goto exit;
+ }
+
+ /* Keep SOC_UPDATE irq disabled until we require it */
+ if (fg_irqs[SOC_UPDATE_IRQ].irq)
+ disable_irq_nosync(fg_irqs[SOC_UPDATE_IRQ].irq);
+
+ /* Keep BSOC_DELTA_IRQ disabled until we require it */
+ vote(chip->delta_bsoc_irq_en_votable, DELTA_BSOC_IRQ_VOTER, false, 0);
+
+ /* Keep BATT_MISSING_IRQ disabled until we require it */
+ vote(chip->batt_miss_irq_en_votable, BATT_MISS_IRQ_VOTER, false, 0);
+
rc = fg_debugfs_create(chip);
if (rc < 0) {
dev_err(chip->dev, "Error in creating debugfs entries, rc:%d\n",
diff --git a/drivers/power/supply/qcom/qpnp-smb2.c b/drivers/power/supply/qcom/qpnp-smb2.c
index 1ab0357..8536a61 100644
--- a/drivers/power/supply/qcom/qpnp-smb2.c
+++ b/drivers/power/supply/qcom/qpnp-smb2.c
@@ -193,6 +193,11 @@ module_param_named(
try_sink_enabled, __try_sink_enabled, int, 0600
);
+static int __audio_headset_drp_wait_ms = 100;
+module_param_named(
+ audio_headset_drp_wait_ms, __audio_headset_drp_wait_ms, int, 0600
+);
+
#define MICRO_1P5A 1500000
#define MICRO_P1A 100000
#define OTG_DEFAULT_DEGLITCH_TIME_MS 50
@@ -313,8 +318,6 @@ static int smb2_parse_dt(struct smb2 *chip)
chip->dt.auto_recharge_soc = of_property_read_bool(node,
"qcom,auto-recharge-soc");
- chg->micro_usb_mode = of_property_read_bool(node, "qcom,micro-usb");
-
chg->dcp_icl_ua = chip->dt.usb_icl_ua;
chg->suspend_input_on_debug_batt = of_property_read_bool(node,
@@ -356,6 +359,7 @@ static enum power_supply_property smb2_usb_props[] = {
POWER_SUPPLY_PROP_PD_VOLTAGE_MAX,
POWER_SUPPLY_PROP_PD_VOLTAGE_MIN,
POWER_SUPPLY_PROP_SDP_CURRENT_MAX,
+ POWER_SUPPLY_PROP_CONNECTOR_TYPE,
};
static int smb2_usb_get_prop(struct power_supply *psy,
@@ -378,9 +382,9 @@ static int smb2_usb_get_prop(struct power_supply *psy,
if (!val->intval)
break;
- if ((chg->typec_mode == POWER_SUPPLY_TYPEC_SOURCE_DEFAULT ||
- chg->micro_usb_mode) &&
- chg->real_charger_type == POWER_SUPPLY_TYPE_USB)
+ if (((chg->typec_mode == POWER_SUPPLY_TYPEC_SOURCE_DEFAULT)
+ || (chg->connector_type == POWER_SUPPLY_CONNECTOR_MICRO_USB))
+ && (chg->real_charger_type == POWER_SUPPLY_TYPE_USB))
val->intval = 0;
else
val->intval = 1;
@@ -409,7 +413,7 @@ static int smb2_usb_get_prop(struct power_supply *psy,
val->intval = chg->real_charger_type;
break;
case POWER_SUPPLY_PROP_TYPEC_MODE:
- if (chg->micro_usb_mode)
+ if (chg->connector_type == POWER_SUPPLY_CONNECTOR_MICRO_USB)
val->intval = POWER_SUPPLY_TYPEC_NONE;
else if (chip->bad_part)
val->intval = POWER_SUPPLY_TYPEC_SOURCE_DEFAULT;
@@ -417,13 +421,13 @@ static int smb2_usb_get_prop(struct power_supply *psy,
val->intval = chg->typec_mode;
break;
case POWER_SUPPLY_PROP_TYPEC_POWER_ROLE:
- if (chg->micro_usb_mode)
+ if (chg->connector_type == POWER_SUPPLY_CONNECTOR_MICRO_USB)
val->intval = POWER_SUPPLY_TYPEC_PR_NONE;
else
rc = smblib_get_prop_typec_power_role(chg, val);
break;
case POWER_SUPPLY_PROP_TYPEC_CC_ORIENTATION:
- if (chg->micro_usb_mode)
+ if (chg->connector_type == POWER_SUPPLY_CONNECTOR_MICRO_USB)
val->intval = 0;
else
rc = smblib_get_prop_typec_cc_orientation(chg, val);
@@ -471,6 +475,9 @@ static int smb2_usb_get_prop(struct power_supply *psy,
val->intval = get_client_vote(chg->usb_icl_votable,
USB_PSY_VOTER);
break;
+ case POWER_SUPPLY_PROP_CONNECTOR_TYPE:
+ val->intval = chg->connector_type;
+ break;
default:
pr_err("get prop %d is not supported in usb\n", psp);
rc = -EINVAL;
@@ -609,9 +616,9 @@ static int smb2_usb_port_get_prop(struct power_supply *psy,
if (!val->intval)
break;
- if ((chg->typec_mode == POWER_SUPPLY_TYPEC_SOURCE_DEFAULT ||
- chg->micro_usb_mode) &&
- chg->real_charger_type == POWER_SUPPLY_TYPE_USB)
+ if (((chg->typec_mode == POWER_SUPPLY_TYPEC_SOURCE_DEFAULT)
+ || (chg->connector_type == POWER_SUPPLY_CONNECTOR_MICRO_USB))
+ && (chg->real_charger_type == POWER_SUPPLY_TYPE_USB))
val->intval = 1;
else
val->intval = 0;
@@ -1268,7 +1275,7 @@ static int smb2_init_vconn_regulator(struct smb2 *chip)
struct regulator_config cfg = {};
int rc = 0;
- if (chg->micro_usb_mode)
+ if (chg->connector_type == POWER_SUPPLY_CONNECTOR_MICRO_USB)
return 0;
chg->vconn_vreg = devm_kzalloc(chg->dev, sizeof(*chg->vconn_vreg),
@@ -1563,9 +1570,9 @@ static int smb2_init_hw(struct smb2 *chip)
vote(chg->pd_disallowed_votable_indirect, HVDCP_TIMEOUT_VOTER,
true, 0);
vote(chg->pd_disallowed_votable_indirect, MICRO_USB_VOTER,
- chg->micro_usb_mode, 0);
+ (chg->connector_type == POWER_SUPPLY_CONNECTOR_MICRO_USB), 0);
vote(chg->hvdcp_enable_votable, MICRO_USB_VOTER,
- chg->micro_usb_mode, 0);
+ (chg->connector_type == POWER_SUPPLY_CONNECTOR_MICRO_USB), 0);
/*
* AICL configuration:
@@ -1595,7 +1602,17 @@ static int smb2_init_hw(struct smb2 *chip)
return rc;
}
- if (chg->micro_usb_mode)
+ /* Check USB connector type (typeC/microUSB) */
+ rc = smblib_read(chg, RID_CC_CONTROL_7_0_REG, &val);
+ if (rc < 0) {
+ dev_err(chg->dev, "Couldn't read RID_CC_CONTROL_7_0 rc=%d\n",
+ rc);
+ return rc;
+ }
+ chg->connector_type = (val & EN_MICRO_USB_MODE_BIT) ?
+ POWER_SUPPLY_CONNECTOR_MICRO_USB
+ : POWER_SUPPLY_CONNECTOR_TYPEC;
+ if (chg->connector_type == POWER_SUPPLY_CONNECTOR_MICRO_USB)
rc = smb2_disable_typec(chg);
else
rc = smb2_configure_typec(chg);
@@ -2260,6 +2277,7 @@ static int smb2_probe(struct platform_device *pdev)
chg->irq_info = smb2_irqs;
chg->die_health = -EINVAL;
chg->name = "PMI";
+ chg->audio_headset_drp_wait_ms = &__audio_headset_drp_wait_ms;
chg->regmap = dev_get_regmap(chg->dev->parent, NULL);
if (!chg->regmap) {
diff --git a/drivers/power/supply/qcom/smb-lib.c b/drivers/power/supply/qcom/smb-lib.c
index 6525d12..ec165b3 100644
--- a/drivers/power/supply/qcom/smb-lib.c
+++ b/drivers/power/supply/qcom/smb-lib.c
@@ -687,6 +687,7 @@ static void smblib_uusb_removal(struct smb_charger *chg)
vote(chg->pl_enable_votable_indirect, USBIN_I_VOTER, false, 0);
vote(chg->pl_enable_votable_indirect, USBIN_V_VOTER, false, 0);
vote(chg->usb_icl_votable, SW_QC3_VOTER, false, 0);
+ vote(chg->usb_icl_votable, USBIN_USBIN_BOOST_VOTER, false, 0);
cancel_delayed_work_sync(&chg->hvdcp_detect_work);
@@ -979,8 +980,8 @@ int smblib_get_icl_current(struct smb_charger *chg, int *icl_ua)
u8 load_cfg;
bool override;
- if ((chg->typec_mode == POWER_SUPPLY_TYPEC_SOURCE_DEFAULT
- || chg->micro_usb_mode)
+ if (((chg->typec_mode == POWER_SUPPLY_TYPEC_SOURCE_DEFAULT)
+ || (chg->connector_type == POWER_SUPPLY_CONNECTOR_MICRO_USB))
&& (chg->usb_psy_desc.type == POWER_SUPPLY_TYPE_USB)) {
rc = get_sdp_current(chg, icl_ua);
if (rc < 0) {
@@ -1469,6 +1470,8 @@ int smblib_vbus_regulator_enable(struct regulator_dev *rdev)
rc = _smblib_vbus_regulator_enable(rdev);
if (rc >= 0)
chg->otg_en = true;
+ else
+ vote(chg->usb_icl_votable, USBIN_USBIN_BOOST_VOTER, false, 0);
unlock:
mutex_unlock(&chg->otg_oc_lock);
@@ -2054,6 +2057,18 @@ static int smblib_dm_pulse(struct smb_charger *chg)
return rc;
}
+static int smblib_force_vbus_voltage(struct smb_charger *chg, u8 val)
+{
+ int rc;
+
+ rc = smblib_masked_write(chg, CMD_HVDCP_2_REG, val, val);
+ if (rc < 0)
+ smblib_err(chg, "Couldn't write to CMD_HVDCP_2_REG rc=%d\n",
+ rc);
+
+ return rc;
+}
+
int smblib_dp_dm(struct smb_charger *chg, int val)
{
int target_icl_ua, rc = 0;
@@ -2105,6 +2120,21 @@ int smblib_dp_dm(struct smb_charger *chg, int val)
smblib_dbg(chg, PR_PARALLEL, "ICL DOWN ICL=%d reduction=%d\n",
target_icl_ua, chg->usb_icl_delta_ua);
break;
+ case POWER_SUPPLY_DP_DM_FORCE_5V:
+ rc = smblib_force_vbus_voltage(chg, FORCE_5V_BIT);
+ if (rc < 0)
+ pr_err("Failed to force 5V\n");
+ break;
+ case POWER_SUPPLY_DP_DM_FORCE_9V:
+ rc = smblib_force_vbus_voltage(chg, FORCE_9V_BIT);
+ if (rc < 0)
+ pr_err("Failed to force 9V\n");
+ break;
+ case POWER_SUPPLY_DP_DM_FORCE_12V:
+ rc = smblib_force_vbus_voltage(chg, FORCE_12V_BIT);
+ if (rc < 0)
+ pr_err("Failed to force 12V\n");
+ break;
case POWER_SUPPLY_DP_DM_ICL_UP:
default:
break;
@@ -2252,6 +2282,7 @@ int smblib_get_prop_usb_voltage_max(struct smb_charger *chg,
{
switch (chg->real_charger_type) {
case POWER_SUPPLY_TYPE_USB_HVDCP:
+ case POWER_SUPPLY_TYPE_USB_HVDCP_3:
case POWER_SUPPLY_TYPE_USB_PD:
if (chg->smb_version == PM660_SUBTYPE)
val->intval = MICRO_9V;
@@ -2512,23 +2543,16 @@ int smblib_get_prop_die_health(struct smb_charger *chg,
return rc;
}
- /* TEMP_RANGE bits are mutually exclusive */
- switch (stat & TEMP_RANGE_MASK) {
- case TEMP_BELOW_RANGE_BIT:
- val->intval = POWER_SUPPLY_HEALTH_COOL;
- break;
- case TEMP_WITHIN_RANGE_BIT:
- val->intval = POWER_SUPPLY_HEALTH_WARM;
- break;
- case TEMP_ABOVE_RANGE_BIT:
- val->intval = POWER_SUPPLY_HEALTH_HOT;
- break;
- case ALERT_LEVEL_BIT:
+ if (stat & ALERT_LEVEL_BIT)
val->intval = POWER_SUPPLY_HEALTH_OVERHEAT;
- break;
- default:
+ else if (stat & TEMP_ABOVE_RANGE_BIT)
+ val->intval = POWER_SUPPLY_HEALTH_HOT;
+ else if (stat & TEMP_WITHIN_RANGE_BIT)
+ val->intval = POWER_SUPPLY_HEALTH_WARM;
+ else if (stat & TEMP_BELOW_RANGE_BIT)
+ val->intval = POWER_SUPPLY_HEALTH_COOL;
+ else
val->intval = POWER_SUPPLY_HEALTH_UNKNOWN;
- }
return 0;
}
@@ -2825,7 +2849,9 @@ static int __smblib_set_prop_pd_active(struct smb_charger *chg, bool pd_active)
* more, but it may impact compliance.
*/
sink_attached = chg->typec_status[3] & UFP_DFP_MODE_STATUS_BIT;
- if (!chg->typec_legacy_valid && !sink_attached && hvdcp)
+ if ((chg->connector_type != POWER_SUPPLY_CONNECTOR_MICRO_USB)
+ && !chg->typec_legacy_valid
+ && !sink_attached && hvdcp)
schedule_work(&chg->legacy_detection_work);
}
@@ -3404,7 +3430,7 @@ void smblib_usb_plugin_locked(struct smb_charger *chg)
smblib_err(chg, "Couldn't disable DPDM rc=%d\n", rc);
}
- if (chg->micro_usb_mode)
+ if (chg->connector_type == POWER_SUPPLY_CONNECTOR_MICRO_USB)
smblib_micro_usb_plugin(chg, vbus_rising);
power_supply_changed(chg->usb_psy);
@@ -3566,16 +3592,6 @@ static void smblib_handle_hvdcp_3p0_auth_done(struct smb_charger *chg,
/* the APSD done handler will set the USB supply type */
apsd_result = smblib_get_apsd_result(chg);
- if (get_effective_result(chg->hvdcp_hw_inov_dis_votable)) {
- if (apsd_result->pst == POWER_SUPPLY_TYPE_USB_HVDCP) {
- /* force HVDCP2 to 9V if INOV is disabled */
- rc = smblib_masked_write(chg, CMD_HVDCP_2_REG,
- FORCE_9V_BIT, FORCE_9V_BIT);
- if (rc < 0)
- smblib_err(chg,
- "Couldn't force 9V HVDCP rc=%d\n", rc);
- }
- }
smblib_dbg(chg, PR_INTERRUPT, "IRQ: hvdcp-3p0-auth-done rising; %s detected\n",
apsd_result->name);
@@ -3723,7 +3739,7 @@ static void smblib_handle_apsd_done(struct smb_charger *chg, bool rising)
switch (apsd_result->bit) {
case SDP_CHARGER_BIT:
case CDP_CHARGER_BIT:
- if (chg->micro_usb_mode)
+ if (chg->connector_type == POWER_SUPPLY_CONNECTOR_MICRO_USB)
extcon_set_cable_state_(chg->extcon, EXTCON_USB,
true);
/* if not DCP then no hvdcp timeout happens. Enable pd here */
@@ -3765,7 +3781,8 @@ irqreturn_t smblib_handle_usb_source_change(int irq, void *data)
}
smblib_dbg(chg, PR_REGISTER, "APSD_STATUS = 0x%02x\n", stat);
- if (chg->micro_usb_mode && (stat & APSD_DTC_STATUS_DONE_BIT)
+ if ((chg->connector_type == POWER_SUPPLY_CONNECTOR_MICRO_USB)
+ && (stat & APSD_DTC_STATUS_DONE_BIT)
&& !chg->uusb_apsd_rerun_done) {
/*
* Force re-run APSD to handle slow insertion related
@@ -3815,6 +3832,20 @@ static int typec_try_sink(struct smb_charger *chg)
bool debounce_done, vbus_detected, sink;
u8 stat;
int exit_mode = ATTACHED_SRC, rc;
+ int typec_mode;
+
+ if (!(*chg->try_sink_enabled))
+ return ATTACHED_SRC;
+
+ typec_mode = smblib_get_prop_typec_mode(chg);
+ if (typec_mode == POWER_SUPPLY_TYPEC_SINK_AUDIO_ADAPTER
+ || typec_mode == POWER_SUPPLY_TYPEC_SINK_DEBUG_ACCESSORY)
+ return ATTACHED_SRC;
+
+ /*
+ * Try.SNK entry status - ATTACHWAIT.SRC state and detected Rd-open
+ * or RD-Ra for TccDebounce time.
+ */
/* ignore typec interrupt while try.snk WIP */
chg->try_sink_active = true;
@@ -3953,21 +3984,19 @@ static int typec_try_sink(struct smb_charger *chg)
static void typec_sink_insertion(struct smb_charger *chg)
{
int exit_mode;
+ int typec_mode;
- /*
- * Try.SNK entry status - ATTACHWAIT.SRC state and detected Rd-open
- * or RD-Ra for TccDebounce time.
- */
+ exit_mode = typec_try_sink(chg);
- if (*chg->try_sink_enabled) {
- exit_mode = typec_try_sink(chg);
-
- if (exit_mode != ATTACHED_SRC) {
- smblib_usb_typec_change(chg);
- return;
- }
+ if (exit_mode != ATTACHED_SRC) {
+ smblib_usb_typec_change(chg);
+ return;
}
+ typec_mode = smblib_get_prop_typec_mode(chg);
+ if (typec_mode == POWER_SUPPLY_TYPEC_SINK_AUDIO_ADAPTER)
+ chg->is_audio_adapter = true;
+
/* when a sink is inserted we should not wait on hvdcp timeout to
* enable pd
*/
@@ -4046,6 +4075,7 @@ static void smblib_handle_typec_removal(struct smb_charger *chg)
vote(chg->pl_enable_votable_indirect, USBIN_V_VOTER, false, 0);
vote(chg->awake_votable, PL_DELAY_VOTER, false, 0);
+ vote(chg->usb_icl_votable, USBIN_USBIN_BOOST_VOTER, false, 0);
chg->vconn_attempts = 0;
chg->otg_attempts = 0;
chg->pulse_cnt = 0;
@@ -4091,6 +4121,12 @@ static void smblib_handle_typec_removal(struct smb_charger *chg)
smblib_err(chg, "Couldn't set USBIN_ADAPTER_ALLOW_5V_OR_9V_TO_12V rc=%d\n",
rc);
+ if (chg->is_audio_adapter)
+ /* wait for the audio driver to lower its en gpio */
+ msleep(*chg->audio_headset_drp_wait_ms);
+
+ chg->is_audio_adapter = false;
+
/* enable DRP */
rc = smblib_masked_write(chg, TYPE_C_INTRPT_ENB_SOFTWARE_CTRL_REG,
TYPEC_POWER_ROLE_CMD_MASK, 0);
@@ -4262,7 +4298,7 @@ irqreturn_t smblib_handle_usb_typec_change(int irq, void *data)
struct smb_irq_data *irq_data = data;
struct smb_charger *chg = irq_data->parent_data;
- if (chg->micro_usb_mode) {
+ if (chg->connector_type == POWER_SUPPLY_CONNECTOR_MICRO_USB) {
cancel_delayed_work_sync(&chg->uusb_otg_work);
vote(chg->awake_votable, OTG_DELAY_VOTER, true, 0);
smblib_dbg(chg, PR_INTERRUPT, "Scheduling OTG work\n");
@@ -4674,7 +4710,7 @@ static void smblib_vconn_oc_work(struct work_struct *work)
int rc, i;
u8 stat;
- if (chg->micro_usb_mode)
+ if (chg->connector_type == POWER_SUPPLY_CONNECTOR_MICRO_USB)
return;
smblib_err(chg, "over-current detected on VCONN\n");
diff --git a/drivers/power/supply/qcom/smb-lib.h b/drivers/power/supply/qcom/smb-lib.h
index 1046b27..1154b09 100644
--- a/drivers/power/supply/qcom/smb-lib.h
+++ b/drivers/power/supply/qcom/smb-lib.h
@@ -239,6 +239,7 @@ struct smb_charger {
struct smb_iio iio;
int *debug_mask;
int *try_sink_enabled;
+ int *audio_headset_drp_wait_ms;
enum smb_mode mode;
struct smb_chg_freq chg_freq;
int smb_version;
@@ -324,7 +325,7 @@ struct smb_charger {
bool sw_jeita_enabled;
bool is_hdc;
bool chg_done;
- bool micro_usb_mode;
+ int connector_type;
bool otg_en;
bool vconn_en;
bool suspend_input_on_debug_batt;
@@ -345,6 +346,7 @@ struct smb_charger {
u8 float_cfg;
bool use_extcon;
bool otg_present;
+ bool is_audio_adapter;
/* workaround flag */
u32 wa_flags;
diff --git a/drivers/power/supply/qcom/smb1355-charger.c b/drivers/power/supply/qcom/smb1355-charger.c
index 59f2466..a279e98 100644
--- a/drivers/power/supply/qcom/smb1355-charger.c
+++ b/drivers/power/supply/qcom/smb1355-charger.c
@@ -26,10 +26,9 @@
#include <linux/regulator/machine.h>
#include <linux/regulator/of_regulator.h>
#include <linux/power_supply.h>
+#include <linux/workqueue.h>
#include <linux/pmic-voter.h>
-#define SMB1355_DEFAULT_FCC_UA 1000000
-
/* SMB1355 registers, different than mentioned in smb-reg.h */
#define CHGR_BASE 0x1000
@@ -72,8 +71,12 @@
#define BATIF_CFG_SMISC_BATID_REG (BATIF_BASE + 0x73)
#define CFG_SMISC_RBIAS_EXT_CTRL_BIT BIT(2)
+#define SMB2CHGS_BATIF_ENG_SMISC_DIETEMP (BATIF_BASE + 0xC0)
+#define TDIE_COMPARATOR_THRESHOLD GENMASK(5, 0)
+
#define BATIF_ENG_SCMISC_SPARE1_REG (BATIF_BASE + 0xC2)
#define EXT_BIAS_PIN_BIT BIT(2)
+#define DIE_TEMP_COMP_HYST_BIT BIT(1)
#define TEMP_COMP_STATUS_REG (MISC_BASE + 0x07)
#define SKIN_TEMP_RST_HOT_BIT BIT(6)
@@ -147,7 +150,6 @@ struct smb_irq_info {
};
struct smb_iio {
- struct iio_channel *temp_chan;
struct iio_channel *temp_max_chan;
};
@@ -170,6 +172,10 @@ struct smb1355 {
struct pmic_revid_data *pmic_rev_id;
int c_health;
+ int die_temp_deciDegC;
+ bool exit_die_temp;
+ struct delayed_work die_temp_work;
+ bool disabled;
};
static bool is_secure(struct smb1355 *chip, int addr)
@@ -269,6 +275,48 @@ static int smb1355_get_charge_param(struct smb1355 *chip,
return rc;
}
+#define UB_COMP_OFFSET_DEGC 34
+#define DIE_TEMP_MEAS_PERIOD_MS 10000
+static void die_temp_work(struct work_struct *work)
+{
+ struct smb1355 *chip = container_of(work, struct smb1355,
+ die_temp_work.work);
+ int rc, i;
+ u8 temp_stat;
+
+ for (i = 0; i < BIT(5); i++) {
+ rc = smb1355_masked_write(chip,
+ SMB2CHGS_BATIF_ENG_SMISC_DIETEMP,
+ TDIE_COMPARATOR_THRESHOLD, i);
+ if (rc < 0) {
+ pr_err("Couldn't set temp comp threshold rc=%d\n", rc);
+ continue;
+ }
+
+ if (chip->exit_die_temp)
+ return;
+
+ /* wait for the comparator output to deglitch */
+ msleep(100);
+
+ rc = smb1355_read(chip, TEMP_COMP_STATUS_REG, &temp_stat);
+ if (rc < 0) {
+ pr_err("Couldn't read temp comp status rc=%d\n", rc);
+ continue;
+ }
+
+ if (!(temp_stat & DIE_TEMP_UB_HOT_BIT)) {
+ /* found the temp */
+ break;
+ }
+ }
+
+ chip->die_temp_deciDegC = 10 * (i + UB_COMP_OFFSET_DEGC);
+
+ schedule_delayed_work(&chip->die_temp_work,
+ msecs_to_jiffies(DIE_TEMP_MEAS_PERIOD_MS));
+}
+
static irqreturn_t smb1355_handle_chg_state_change(int irq, void *data)
{
struct smb1355 *chip = data;
@@ -366,25 +414,6 @@ static int smb1355_get_prop_batt_charge_type(struct smb1355 *chip,
return rc;
}
-static int smb1355_get_parallel_charging(struct smb1355 *chip, int *disabled)
-{
- int rc;
- u8 cfg2;
-
- rc = smb1355_read(chip, CHGR_CFG2_REG, &cfg2);
- if (rc < 0) {
- pr_err("Couldn't read en_cmg_reg rc=%d\n", rc);
- return rc;
- }
-
- if (cfg2 & CHG_EN_SRC_BIT)
- *disabled = 0;
- else
- *disabled = 1;
-
- return 0;
-}
-
static int smb1355_get_prop_connector_health(struct smb1355 *chip)
{
u8 temp;
@@ -409,24 +438,6 @@ static int smb1355_get_prop_connector_health(struct smb1355 *chip)
}
-static int smb1355_get_prop_charger_temp(struct smb1355 *chip,
- union power_supply_propval *val)
-{
- int rc;
-
- if (!chip->iio.temp_chan ||
- PTR_ERR(chip->iio.temp_chan) == -EPROBE_DEFER)
- chip->iio.temp_chan = devm_iio_channel_get(chip->dev,
- "charger_temp");
-
- if (IS_ERR(chip->iio.temp_chan))
- return PTR_ERR(chip->iio.temp_chan);
-
- rc = iio_read_channel_processed(chip->iio.temp_chan, &val->intval);
- val->intval /= 100;
- return rc;
-}
-
static int smb1355_get_prop_charger_temp_max(struct smb1355 *chip,
union power_supply_propval *val)
{
@@ -467,13 +478,13 @@ static int smb1355_parallel_get_prop(struct power_supply *psy,
val->intval = !(stat & DISABLE_CHARGING_BIT);
break;
case POWER_SUPPLY_PROP_CHARGER_TEMP:
- rc = smb1355_get_prop_charger_temp(chip, val);
+ val->intval = chip->die_temp_deciDegC;
break;
case POWER_SUPPLY_PROP_CHARGER_TEMP_MAX:
rc = smb1355_get_prop_charger_temp_max(chip, val);
break;
case POWER_SUPPLY_PROP_INPUT_SUSPEND:
- rc = smb1355_get_parallel_charging(chip, &val->intval);
+ val->intval = chip->disabled;
break;
case POWER_SUPPLY_PROP_VOLTAGE_MAX:
rc = smb1355_get_charge_param(chip, &chip->param.ov,
@@ -513,6 +524,9 @@ static int smb1355_set_parallel_charging(struct smb1355 *chip, bool disable)
{
int rc;
+ if (chip->disabled == disable)
+ return 0;
+
rc = smb1355_masked_write(chip, WD_CFG_REG, WDOG_TIMER_EN_BIT,
disable ? 0 : WDOG_TIMER_EN_BIT);
if (rc < 0) {
@@ -531,9 +545,21 @@ static int smb1355_set_parallel_charging(struct smb1355 *chip, bool disable)
disable ? 0 : CHG_EN_SRC_BIT);
if (rc < 0) {
pr_err("Couldn't configure charge enable source rc=%d\n", rc);
- return rc;
+ disable = true;
}
+ chip->die_temp_deciDegC = -EINVAL;
+ if (disable) {
+ chip->exit_die_temp = true;
+ cancel_delayed_work_sync(&chip->die_temp_work);
+ } else {
+ /* start the work to measure temperature */
+ chip->exit_die_temp = false;
+ schedule_delayed_work(&chip->die_temp_work, 0);
+ }
+
+ chip->disabled = disable;
+
return 0;
}
@@ -769,18 +795,29 @@ static int smb1355_init_hw(struct smb1355 *chip)
}
/*
- * Disable thermal Die temperature comparator source and hw mitigation
- * for skin/die
+ * Enable thermal Die temperature comparator source and disable hw
+ * mitigation for skin/die
*/
rc = smb1355_masked_write(chip, MISC_THERMREG_SRC_CFG_REG,
THERMREG_DIE_CMP_SRC_EN_BIT | BYP_THERM_CHG_CURR_ADJUST_BIT,
- BYP_THERM_CHG_CURR_ADJUST_BIT);
+ THERMREG_DIE_CMP_SRC_EN_BIT | BYP_THERM_CHG_CURR_ADJUST_BIT);
if (rc < 0) {
pr_err("Couldn't set Skin temperature comparator src rc=%d\n",
rc);
return rc;
}
+ /*
+ * Disable hysteresis for die temperature so that sw can run the
+ * stepping scheme quickly.
+ */
+ rc = smb1355_masked_write(chip, BATIF_ENG_SCMISC_SPARE1_REG,
+ DIE_TEMP_COMP_HYST_BIT, 0);
+ if (rc < 0) {
+ pr_err("Couldn't disable hyst. for die rc=%d\n", rc);
+ return rc;
+ }
+
rc = smb1355_tskin_sensor_config(chip);
if (rc < 0) {
pr_err("Couldn't configure tskin regs rc=%d\n", rc);
@@ -905,6 +942,9 @@ static int smb1355_probe(struct platform_device *pdev)
chip->c_health = -EINVAL;
chip->name = "smb1355";
mutex_init(&chip->write_lock);
+ INIT_DELAYED_WORK(&chip->die_temp_work, die_temp_work);
+ chip->disabled = true;
+ chip->die_temp_deciDegC = -EINVAL;
chip->regmap = dev_get_regmap(chip->dev->parent, NULL);
if (!chip->regmap) {
diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
index ec492af..e463117 100644
--- a/drivers/regulator/core.c
+++ b/drivers/regulator/core.c
@@ -4197,6 +4197,10 @@ static void rdev_init_debugfs(struct regulator_dev *rdev)
const struct regulator_ops *ops;
mode_t mode;
+ /* Check if debugfs directory already exists */
+ if (rdev->debugfs)
+ return;
+
/* Avoid duplicate debugfs directory names */
if (parent && rname == rdev->desc->name) {
snprintf(name, sizeof(name), "%s-%s", dev_name(parent),
@@ -4221,6 +4225,7 @@ static void rdev_init_debugfs(struct regulator_dev *rdev)
regulator = regulator_get(NULL, rdev_get_name(rdev));
if (IS_ERR(regulator)) {
+ debugfs_remove_recursive(rdev->debugfs);
rdev_err(rdev, "regulator get failed, ret=%ld\n",
PTR_ERR(regulator));
return;
@@ -4291,6 +4296,8 @@ static int regulator_register_resolve_supply(struct device *dev, void *data)
if (regulator_resolve_supply(rdev))
rdev_dbg(rdev, "unable to resolve supply\n");
+ else
+ rdev_init_debugfs(rdev);
return 0;
}
diff --git a/drivers/regulator/qpnp-labibb-regulator.c b/drivers/regulator/qpnp-labibb-regulator.c
index d672d5f..f457eea 100644
--- a/drivers/regulator/qpnp-labibb-regulator.c
+++ b/drivers/regulator/qpnp-labibb-regulator.c
@@ -17,6 +17,7 @@
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/kernel.h>
+#include <linux/ktime.h>
#include <linux/regmap.h>
#include <linux/module.h>
#include <linux/notifier.h>
@@ -37,6 +38,7 @@
#define REG_REVISION_2 0x01
#define REG_PERPH_TYPE 0x04
+#define REG_INT_RT_STS 0x10
#define QPNP_LAB_TYPE 0x24
#define QPNP_IBB_TYPE 0x20
@@ -77,8 +79,8 @@
/* LAB register bits definitions */
/* REG_LAB_STATUS1 */
-#define LAB_STATUS1_VREG_OK_MASK BIT(7)
-#define LAB_STATUS1_VREG_OK BIT(7)
+#define LAB_STATUS1_VREG_OK_BIT BIT(7)
+#define LAB_STATUS1_SC_DETECT_BIT BIT(6)
/* REG_LAB_SWIRE_PGM_CTL */
#define LAB_EN_SWIRE_PGM_VOUT BIT(7)
@@ -188,8 +190,8 @@
/* IBB register bits definition */
/* REG_IBB_STATUS1 */
-#define IBB_STATUS1_VREG_OK_MASK BIT(7)
-#define IBB_STATUS1_VREG_OK BIT(7)
+#define IBB_STATUS1_VREG_OK_BIT BIT(7)
+#define IBB_STATUS1_SC_DETECT_BIT BIT(6)
/* REG_IBB_VOLTAGE */
#define IBB_VOLTAGE_OVERRIDE_EN BIT(7)
@@ -557,12 +559,15 @@ struct lab_regulator {
struct mutex lab_mutex;
int lab_vreg_ok_irq;
+ int lab_sc_irq;
+
int curr_volt;
int min_volt;
int step_size;
int slew_rate;
int soft_start;
+ int sc_wait_time_ms;
int vreg_enabled;
};
@@ -572,6 +577,8 @@ struct ibb_regulator {
struct regulator_dev *rdev;
struct mutex ibb_mutex;
+ int ibb_sc_irq;
+
int curr_volt;
int min_volt;
@@ -602,6 +609,9 @@ struct qpnp_labibb {
struct mutex bus_mutex;
enum qpnp_labibb_mode mode;
struct work_struct lab_vreg_ok_work;
+ struct delayed_work sc_err_recovery_work;
+ struct hrtimer sc_err_check_timer;
+ int sc_err_count;
bool standalone;
bool ttw_en;
bool in_ttw_mode;
@@ -612,6 +622,8 @@ struct qpnp_labibb {
bool skip_2nd_swire_cmd;
bool pfm_enable;
bool notify_lab_vreg_ok_sts;
+ bool detect_lab_sc;
+ bool sc_detected;
u32 swire_2nd_cmd_delay;
u32 swire_ibb_ps_enable_delay;
};
@@ -2178,8 +2190,10 @@ static void qpnp_lab_vreg_notifier_work(struct work_struct *work)
u8 val;
struct qpnp_labibb *labibb = container_of(work, struct qpnp_labibb,
lab_vreg_ok_work);
+ if (labibb->lab_vreg.sc_wait_time_ms != -EINVAL)
+ retries = labibb->lab_vreg.sc_wait_time_ms / 5;
- while (retries--) {
+ while (retries) {
rc = qpnp_labibb_read(labibb, labibb->lab_base +
REG_LAB_STATUS1, &val, 1);
if (rc < 0) {
@@ -2188,17 +2202,105 @@ static void qpnp_lab_vreg_notifier_work(struct work_struct *work)
return;
}
- if (val & LAB_STATUS1_VREG_OK) {
+ if (val & LAB_STATUS1_VREG_OK_BIT) {
raw_notifier_call_chain(&labibb_notifier,
LAB_VREG_OK, NULL);
break;
}
usleep_range(dly, dly + 100);
+ retries--;
}
- if (!retries)
- pr_err("LAB_VREG_OK not set, failed to notify\n");
+ if (!retries) {
+ if (labibb->detect_lab_sc) {
+ pr_crit("short circuit detected on LAB rail, disabling the LAB/IBB/OLEDB modules\n");
+ /* Disable LAB module */
+ val = 0;
+ rc = qpnp_labibb_write(labibb, labibb->lab_base +
+ REG_LAB_MODULE_RDY, &val, 1);
+ if (rc < 0) {
+ pr_err("write register %x failed rc = %d\n",
+ REG_LAB_MODULE_RDY, rc);
+ return;
+ }
+ raw_notifier_call_chain(&labibb_notifier,
+ LAB_VREG_NOT_OK, NULL);
+ labibb->sc_detected = true;
+ labibb->lab_vreg.vreg_enabled = 0;
+ labibb->ibb_vreg.vreg_enabled = 0;
+ } else {
+ pr_err("LAB_VREG_OK not set, failed to notify\n");
+ }
+ }
+}
+
+static int qpnp_lab_enable_standalone(struct qpnp_labibb *labibb)
+{
+ int rc;
+ u8 val;
+
+ val = LAB_ENABLE_CTL_EN;
+ rc = qpnp_labibb_write(labibb,
+ labibb->lab_base + REG_LAB_ENABLE_CTL, &val, 1);
+ if (rc < 0) {
+ pr_err("Write register %x failed rc = %d\n",
+ REG_LAB_ENABLE_CTL, rc);
+ return rc;
+ }
+
+ udelay(labibb->lab_vreg.soft_start);
+
+ rc = qpnp_labibb_read(labibb, labibb->lab_base +
+ REG_LAB_STATUS1, &val, 1);
+ if (rc < 0) {
+ pr_err("Read register %x failed rc = %d\n",
+ REG_LAB_STATUS1, rc);
+ return rc;
+ }
+
+ if (!(val & LAB_STATUS1_VREG_OK_BIT)) {
+ pr_err("Can't enable LAB standalone\n");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int qpnp_ibb_enable_standalone(struct qpnp_labibb *labibb)
+{
+ int rc, delay, retries = 10;
+ u8 val;
+
+ rc = qpnp_ibb_set_mode(labibb, IBB_SW_CONTROL_EN);
+ if (rc < 0) {
+ pr_err("Unable to set IBB_MODULE_EN rc = %d\n", rc);
+ return rc;
+ }
+
+ delay = labibb->ibb_vreg.soft_start;
+ while (retries--) {
+ /* Wait for a small period before reading IBB_STATUS1 */
+ usleep_range(delay, delay + 100);
+
+ rc = qpnp_labibb_read(labibb, labibb->ibb_base +
+ REG_IBB_STATUS1, &val, 1);
+ if (rc < 0) {
+ pr_err("Read register %x failed rc = %d\n",
+ REG_IBB_STATUS1, rc);
+ return rc;
+ }
+
+ if (val & IBB_STATUS1_VREG_OK_BIT)
+ break;
+ }
+
+ if (!(val & IBB_STATUS1_VREG_OK_BIT)) {
+ pr_err("Can't enable IBB standalone\n");
+ return -EINVAL;
+ }
+
+ return 0;
}
static int qpnp_labibb_regulator_enable(struct qpnp_labibb *labibb)
@@ -2242,7 +2344,7 @@ static int qpnp_labibb_regulator_enable(struct qpnp_labibb *labibb)
labibb->lab_vreg.soft_start, labibb->ibb_vreg.soft_start,
labibb->ibb_vreg.pwrup_dly, dly);
- if (!(val & LAB_STATUS1_VREG_OK)) {
+ if (!(val & LAB_STATUS1_VREG_OK_BIT)) {
pr_err("failed for LAB %x\n", val);
goto err_out;
}
@@ -2259,7 +2361,7 @@ static int qpnp_labibb_regulator_enable(struct qpnp_labibb *labibb)
goto err_out;
}
- if (val & IBB_STATUS1_VREG_OK) {
+ if (val & IBB_STATUS1_VREG_OK_BIT) {
enabled = true;
break;
}
@@ -2330,7 +2432,7 @@ static int qpnp_labibb_regulator_disable(struct qpnp_labibb *labibb)
return rc;
}
- if (!(val & IBB_STATUS1_VREG_OK)) {
+ if (!(val & IBB_STATUS1_VREG_OK_BIT)) {
disabled = true;
break;
}
@@ -2359,10 +2461,13 @@ static int qpnp_labibb_regulator_disable(struct qpnp_labibb *labibb)
static int qpnp_lab_regulator_enable(struct regulator_dev *rdev)
{
int rc;
- u8 val;
-
struct qpnp_labibb *labibb = rdev_get_drvdata(rdev);
+ if (labibb->sc_detected) {
+ pr_info("Short circuit detected: disabled LAB/IBB rails\n");
+ return 0;
+ }
+
if (labibb->skip_2nd_swire_cmd) {
rc = qpnp_ibb_ps_config(labibb, false);
if (rc < 0) {
@@ -2372,38 +2477,18 @@ static int qpnp_lab_regulator_enable(struct regulator_dev *rdev)
}
if (!labibb->lab_vreg.vreg_enabled && !labibb->swire_control) {
-
if (!labibb->standalone)
return qpnp_labibb_regulator_enable(labibb);
- val = LAB_ENABLE_CTL_EN;
- rc = qpnp_labibb_write(labibb,
- labibb->lab_base + REG_LAB_ENABLE_CTL, &val, 1);
- if (rc < 0) {
- pr_err("qpnp_lab_regulator_enable write register %x failed rc = %d\n",
- REG_LAB_ENABLE_CTL, rc);
+ rc = qpnp_lab_enable_standalone(labibb);
+ if (rc) {
+ pr_err("enable lab standalone failed, rc=%d\n", rc);
return rc;
}
-
- udelay(labibb->lab_vreg.soft_start);
-
- rc = qpnp_labibb_read(labibb, labibb->lab_base +
- REG_LAB_STATUS1, &val, 1);
- if (rc < 0) {
- pr_err("qpnp_lab_regulator_enable read register %x failed rc = %d\n",
- REG_LAB_STATUS1, rc);
- return rc;
- }
-
- if ((val & LAB_STATUS1_VREG_OK_MASK) != LAB_STATUS1_VREG_OK) {
- pr_err("qpnp_lab_regulator_enable failed\n");
- return -EINVAL;
- }
-
labibb->lab_vreg.vreg_enabled = 1;
}
- if (labibb->notify_lab_vreg_ok_sts)
+ if (labibb->notify_lab_vreg_ok_sts || labibb->detect_lab_sc)
schedule_work(&labibb->lab_vreg_ok_work);
return 0;
@@ -2444,6 +2529,163 @@ static int qpnp_lab_regulator_is_enabled(struct regulator_dev *rdev)
return labibb->lab_vreg.vreg_enabled;
}
+static int qpnp_labibb_force_enable(struct qpnp_labibb *labibb)
+{
+ int rc;
+
+ if (labibb->skip_2nd_swire_cmd) {
+ rc = qpnp_ibb_ps_config(labibb, false);
+ if (rc < 0) {
+ pr_err("Failed to disable IBB PS rc=%d\n", rc);
+ return rc;
+ }
+ }
+
+ if (!labibb->swire_control) {
+ if (!labibb->standalone)
+ return qpnp_labibb_regulator_enable(labibb);
+
+ rc = qpnp_ibb_enable_standalone(labibb);
+ if (rc < 0) {
+ pr_err("enable ibb standalone failed, rc=%d\n", rc);
+ return rc;
+ }
+ labibb->ibb_vreg.vreg_enabled = 1;
+
+ rc = qpnp_lab_enable_standalone(labibb);
+ if (rc < 0) {
+ pr_err("enable lab standalone failed, rc=%d\n", rc);
+ return rc;
+ }
+ labibb->lab_vreg.vreg_enabled = 1;
+ }
+
+ return 0;
+}
+
+#define SC_ERR_RECOVERY_DELAY_MS 250
+#define SC_ERR_COUNT_INTERVAL_SEC 1
+#define POLLING_SCP_DONE_COUNT 2
+#define POLLING_SCP_DONE_INTERVAL_MS 5
+static irqreturn_t labibb_sc_err_handler(int irq, void *_labibb)
+{
+ int rc;
+ u16 reg;
+ u8 sc_err_mask, val;
+ char *str;
+ struct qpnp_labibb *labibb = (struct qpnp_labibb *)_labibb;
+ bool in_sc_err, lab_en, ibb_en, scp_done = false;
+ int count;
+
+ if (irq == labibb->lab_vreg.lab_sc_irq) {
+ reg = labibb->lab_base + REG_LAB_STATUS1;
+ sc_err_mask = LAB_STATUS1_SC_DETECT_BIT;
+ str = "LAB";
+ } else if (irq == labibb->ibb_vreg.ibb_sc_irq) {
+ reg = labibb->ibb_base + REG_IBB_STATUS1;
+ sc_err_mask = IBB_STATUS1_SC_DETECT_BIT;
+ str = "IBB";
+ } else {
+ return IRQ_HANDLED;
+ }
+
+ rc = qpnp_labibb_read(labibb, reg, &val, 1);
+ if (rc < 0) {
+ pr_err("Read 0x%x failed, rc=%d\n", reg, rc);
+ return IRQ_HANDLED;
+ }
+ pr_debug("%s SC error triggered! %s_STATUS1 = %d\n", str, str, val);
+
+ in_sc_err = !!(val & sc_err_mask);
+
+ /*
+ * The SC fault would trigger PBS to disable regulators
+ * for protection. This clears the SC_DETECT status, so the SC
+ * fault status can no longer be read back directly. Instead,
+ * check whether the LAB/IBB regulators are enabled in the driver
+ * but disabled in hardware; if so, an SC fault happened and SCP
+ * handling was completed by PBS.
+ */
+ if (!in_sc_err) {
+ count = POLLING_SCP_DONE_COUNT;
+ do {
+ reg = labibb->lab_base + REG_LAB_ENABLE_CTL;
+ rc = qpnp_labibb_read(labibb, reg, &val, 1);
+ if (rc < 0) {
+ pr_err("Read 0x%x failed, rc=%d\n", reg, rc);
+ return IRQ_HANDLED;
+ }
+ lab_en = !!(val & LAB_ENABLE_CTL_EN);
+
+ reg = labibb->ibb_base + REG_IBB_ENABLE_CTL;
+ rc = qpnp_labibb_read(labibb, reg, &val, 1);
+ if (rc < 0) {
+ pr_err("Read 0x%x failed, rc=%d\n", reg, rc);
+ return IRQ_HANDLED;
+ }
+ ibb_en = !!(val & IBB_ENABLE_CTL_MODULE_EN);
+ if (lab_en || ibb_en)
+ msleep(POLLING_SCP_DONE_INTERVAL_MS);
+ else
+ break;
+ } while ((lab_en || ibb_en) && count--);
+
+ if (labibb->lab_vreg.vreg_enabled
+ && labibb->ibb_vreg.vreg_enabled
+ && !lab_en && !ibb_en) {
+ pr_debug("LAB/IBB has been disabled by SCP\n");
+ scp_done = true;
+ }
+ }
+
+ if (in_sc_err || scp_done) {
+ if (hrtimer_active(&labibb->sc_err_check_timer) ||
+ hrtimer_callback_running(&labibb->sc_err_check_timer)) {
+ labibb->sc_err_count++;
+ } else {
+ labibb->sc_err_count = 1;
+ hrtimer_start(&labibb->sc_err_check_timer,
+ ktime_set(SC_ERR_COUNT_INTERVAL_SEC, 0),
+ HRTIMER_MODE_REL);
+ }
+ schedule_delayed_work(&labibb->sc_err_recovery_work,
+ msecs_to_jiffies(SC_ERR_RECOVERY_DELAY_MS));
+ }
+
+ return IRQ_HANDLED;
+}
+
+#define SC_FAULT_COUNT_MAX 4
+static enum hrtimer_restart labibb_check_sc_err_count(struct hrtimer *timer)
+{
+ struct qpnp_labibb *labibb = container_of(timer,
+ struct qpnp_labibb, sc_err_check_timer);
+ /*
+ * If the SC fault triggers more than 4 times in 1 second,
+ * disable the IRQs and leave them disabled.
+ */
+ if (labibb->sc_err_count > SC_FAULT_COUNT_MAX) {
+ disable_irq(labibb->lab_vreg.lab_sc_irq);
+ disable_irq(labibb->ibb_vreg.ibb_sc_irq);
+ }
+
+ return HRTIMER_NORESTART;
+}
+
+static void labibb_sc_err_recovery_work(struct work_struct *work)
+{
+ struct qpnp_labibb *labibb = container_of(work, struct qpnp_labibb,
+ sc_err_recovery_work.work);
+ int rc;
+
+ labibb->ibb_vreg.vreg_enabled = 0;
+ labibb->lab_vreg.vreg_enabled = 0;
+ rc = qpnp_labibb_force_enable(labibb);
+ if (rc < 0)
+ pr_err("force enable labibb failed, rc=%d\n", rc);
+
+}
+
static int qpnp_lab_regulator_set_voltage(struct regulator_dev *rdev,
int min_uV, int max_uV, unsigned int *selector)
{
@@ -2505,7 +2747,7 @@ static int qpnp_skip_swire_command(struct qpnp_labibb *labibb)
pr_err("Failed to read ibb_status1 reg rc=%d\n", rc);
return rc;
}
- if ((reg & IBB_STATUS1_VREG_OK_MASK) == IBB_STATUS1_VREG_OK)
+ if (reg & IBB_STATUS1_VREG_OK_BIT)
break;
/* poll delay */
@@ -2661,6 +2903,12 @@ static int register_qpnp_lab_regulator(struct qpnp_labibb *labibb,
labibb->notify_lab_vreg_ok_sts = of_property_read_bool(of_node,
"qcom,notify-lab-vreg-ok-sts");
+ labibb->lab_vreg.sc_wait_time_ms = -EINVAL;
+ if (labibb->pmic_rev_id->pmic_subtype == PM660L_SUBTYPE &&
+ labibb->detect_lab_sc)
+ of_property_read_u32(of_node, "qcom,qpnp-lab-sc-wait-time-ms",
+ &labibb->lab_vreg.sc_wait_time_ms);
+
rc = of_property_read_u32(of_node, "qcom,qpnp-lab-soft-start",
&(labibb->lab_vreg.soft_start));
if (!rc) {
@@ -2833,6 +3081,18 @@ static int register_qpnp_lab_regulator(struct qpnp_labibb *labibb,
}
}
+ if (labibb->lab_vreg.lab_sc_irq != -EINVAL) {
+ rc = devm_request_threaded_irq(labibb->dev,
+ labibb->lab_vreg.lab_sc_irq, NULL,
+ labibb_sc_err_handler,
+ IRQF_ONESHOT | IRQF_TRIGGER_RISING,
+ "lab-sc-err", labibb);
+ if (rc) {
+ pr_err("Failed to register 'lab-sc-err' irq rc=%d\n",
+ rc);
+ return rc;
+ }
+ }
rc = qpnp_labibb_read(labibb, labibb->lab_base + REG_LAB_MODULE_RDY,
&val, 1);
if (rc < 0) {
@@ -3302,45 +3562,26 @@ static int qpnp_ibb_dt_init(struct qpnp_labibb *labibb,
static int qpnp_ibb_regulator_enable(struct regulator_dev *rdev)
{
- int rc, delay, retries = 10;
- u8 val;
+ int rc = 0;
struct qpnp_labibb *labibb = rdev_get_drvdata(rdev);
- if (!labibb->ibb_vreg.vreg_enabled && !labibb->swire_control) {
+ if (labibb->sc_detected) {
+ pr_info("Short circuit detected: disabled LAB/IBB rails\n");
+ return 0;
+ }
+ if (!labibb->ibb_vreg.vreg_enabled && !labibb->swire_control) {
if (!labibb->standalone)
return qpnp_labibb_regulator_enable(labibb);
- rc = qpnp_ibb_set_mode(labibb, IBB_SW_CONTROL_EN);
+ rc = qpnp_ibb_enable_standalone(labibb);
if (rc < 0) {
- pr_err("Unable to set IBB_MODULE_EN rc = %d\n", rc);
+ pr_err("enable ibb standalone failed, rc=%d\n", rc);
return rc;
}
-
- delay = labibb->ibb_vreg.soft_start;
- while (retries--) {
- /* Wait for a small period before reading IBB_STATUS1 */
- usleep_range(delay, delay + 100);
-
- rc = qpnp_labibb_read(labibb, labibb->ibb_base +
- REG_IBB_STATUS1, &val, 1);
- if (rc < 0) {
- pr_err("qpnp_ibb_regulator_enable read register %x failed rc = %d\n",
- REG_IBB_STATUS1, rc);
- return rc;
- }
-
- if (val & IBB_STATUS1_VREG_OK)
- break;
- }
-
- if (!(val & IBB_STATUS1_VREG_OK)) {
- pr_err("qpnp_ibb_regulator_enable failed\n");
- return -EINVAL;
- }
-
labibb->ibb_vreg.vreg_enabled = 1;
}
+
return 0;
}
@@ -3389,7 +3630,6 @@ static int qpnp_ibb_regulator_set_voltage(struct regulator_dev *rdev,
return rc;
}
-
static int qpnp_ibb_regulator_get_voltage(struct regulator_dev *rdev)
{
struct qpnp_labibb *labibb = rdev_get_drvdata(rdev);
@@ -3611,6 +3851,19 @@ static int register_qpnp_ibb_regulator(struct qpnp_labibb *labibb,
labibb->ibb_vreg.pwrdn_dly = 0;
}
+ if (labibb->ibb_vreg.ibb_sc_irq != -EINVAL) {
+ rc = devm_request_threaded_irq(labibb->dev,
+ labibb->ibb_vreg.ibb_sc_irq, NULL,
+ labibb_sc_err_handler,
+ IRQF_ONESHOT | IRQF_TRIGGER_RISING,
+ "ibb-sc-err", labibb);
+ if (rc) {
+ pr_err("Failed to register 'ibb-sc-err' irq rc=%d\n",
+ rc);
+ return rc;
+ }
+ }
+
rc = qpnp_labibb_read(labibb, labibb->ibb_base + REG_IBB_MODULE_RDY,
&val, 1);
if (rc < 0) {
@@ -3684,15 +3937,39 @@ static int register_qpnp_ibb_regulator(struct qpnp_labibb *labibb,
static int qpnp_lab_register_irq(struct device_node *child,
struct qpnp_labibb *labibb)
{
+ int rc = 0;
+
if (is_lab_vreg_ok_irq_available(labibb)) {
- labibb->lab_vreg.lab_vreg_ok_irq =
- of_irq_get_byname(child, "lab-vreg-ok");
- if (labibb->lab_vreg.lab_vreg_ok_irq < 0) {
+ rc = of_irq_get_byname(child, "lab-vreg-ok");
+ if (rc < 0) {
pr_err("Invalid lab-vreg-ok irq\n");
- return -EINVAL;
+ return rc;
}
+ labibb->lab_vreg.lab_vreg_ok_irq = rc;
}
+ labibb->lab_vreg.lab_sc_irq = -EINVAL;
+ rc = of_irq_get_byname(child, "lab-sc-err");
+ if (rc < 0)
+ pr_debug("Unable to get lab-sc-err, rc = %d\n", rc);
+ else
+ labibb->lab_vreg.lab_sc_irq = rc;
+
+ return 0;
+}
+
+static int qpnp_ibb_register_irq(struct device_node *child,
+ struct qpnp_labibb *labibb)
+{
+ int rc;
+
+ labibb->ibb_vreg.ibb_sc_irq = -EINVAL;
+ rc = of_irq_get_byname(child, "ibb-sc-err");
+ if (rc < 0)
+ pr_debug("Unable to get ibb-sc-err, rc = %d\n", rc);
+ else
+ labibb->ibb_vreg.ibb_sc_irq = rc;
+
return 0;
}
@@ -3788,6 +4065,8 @@ static int qpnp_labibb_regulator_probe(struct platform_device *pdev)
if (labibb->pmic_rev_id->pmic_subtype == PM660L_SUBTYPE) {
labibb->mode = QPNP_LABIBB_AMOLED_MODE;
+ /* Enable polling for LAB short circuit detection for PM660A */
+ labibb->detect_lab_sc = true;
} else {
rc = of_property_read_string(labibb->dev->of_node,
"qcom,qpnp-labibb-mode", &mode_name);
@@ -3896,6 +4175,7 @@ static int qpnp_labibb_regulator_probe(struct platform_device *pdev)
case QPNP_IBB_TYPE:
labibb->ibb_base = base;
labibb->ibb_dig_major = revision;
+ qpnp_ibb_register_irq(child, labibb);
rc = register_qpnp_ibb_regulator(labibb, child);
if (rc < 0)
goto fail_registration;
@@ -3919,6 +4199,11 @@ static int qpnp_labibb_regulator_probe(struct platform_device *pdev)
}
INIT_WORK(&labibb->lab_vreg_ok_work, qpnp_lab_vreg_notifier_work);
+ INIT_DELAYED_WORK(&labibb->sc_err_recovery_work,
+ labibb_sc_err_recovery_work);
+ hrtimer_init(&labibb->sc_err_check_timer,
+ CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+ labibb->sc_err_check_timer.function = labibb_check_sc_err_count;
dev_set_drvdata(&pdev->dev, labibb);
pr_info("LAB/IBB registered successfully, lab_vreg enable=%d ibb_vreg enable=%d swire_control=%d\n",
labibb->lab_vreg.vreg_enabled,
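The short-circuit handling wired up above (SC IRQ handler, `sc_err_check_timer`, `sc_err_recovery_work`, and the early return on `sc_detected`) amounts to a count-and-latch retry policy. The following is a hypothetical user-space model of that policy, not the driver code: the retry limit `SC_ERR_MAX` is a placeholder, and the kernel's hrtimer/workqueue plumbing is collapsed into direct calls.

```c
#include <assert.h>
#include <stdbool.h>

/* Placeholder retry limit, standing in for the driver's SC fault count
 * threshold within one sc_err_check_timer window. */
#define SC_ERR_MAX 3

struct sc_state {
	int sc_count;       /* SC interrupts seen in the current window */
	bool sc_detected;   /* latched: rails permanently disabled */
	bool vreg_enabled;
};

/* Models labibb_sc_err_handler plus labibb_sc_err_recovery_work: count
 * the fault, then force a re-enable attempt unless we latch off. */
static void sc_irq_fired(struct sc_state *s)
{
	if (++s->sc_count > SC_ERR_MAX) {
		s->sc_detected = true;   /* give up: latch rails off */
		s->vreg_enabled = false;
		return;
	}
	s->vreg_enabled = true;          /* force-enable retry */
}

/* Models qpnp_ibb_regulator_enable's early return on sc_detected. */
static int regulator_enable(struct sc_state *s)
{
	if (s->sc_detected)
		return 0;                /* rails stay disabled */
	s->vreg_enabled = true;
	return 0;
}
```

Repeated faults beyond the limit leave the rails off even through later enable calls, which is exactly the behavior the `pr_info("Short circuit detected: ...")` branch implements.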
diff --git a/drivers/regulator/qpnp-oledb-regulator.c b/drivers/regulator/qpnp-oledb-regulator.c
index c012f37..bee9a3d 100644
--- a/drivers/regulator/qpnp-oledb-regulator.c
+++ b/drivers/regulator/qpnp-oledb-regulator.c
@@ -27,6 +27,7 @@
#include <linux/regulator/of_regulator.h>
#include <linux/regulator/qpnp-labibb-regulator.h>
#include <linux/qpnp/qpnp-pbs.h>
+#include <linux/qpnp/qpnp-revid.h>
#define QPNP_OLEDB_REGULATOR_DRIVER_NAME "qcom,qpnp-oledb-regulator"
#define OLEDB_VOUT_STEP_MV 100
@@ -162,6 +163,7 @@ struct qpnp_oledb {
struct notifier_block oledb_nb;
struct mutex bus_lock;
struct device_node *pbs_dev_node;
+ struct pmic_revid_data *pmic_rev_id;
u32 base;
u8 mod_enable;
@@ -181,6 +183,8 @@ struct qpnp_oledb {
bool dynamic_ext_pinctl_config;
bool pbs_control;
bool force_pd_control;
+ bool handle_lab_sc_notification;
+ bool lab_sc_detected;
};
static const u16 oledb_warmup_dly_ns[] = {6700, 13300, 26700, 53400};
@@ -275,6 +279,11 @@ static int qpnp_oledb_regulator_enable(struct regulator_dev *rdev)
struct qpnp_oledb *oledb = rdev_get_drvdata(rdev);
+ if (oledb->lab_sc_detected == true) {
+ pr_info("Short circuit detected: Disabled OLEDB rail\n");
+ return 0;
+ }
+
if (oledb->ext_pin_control) {
rc = qpnp_oledb_read(oledb, oledb->base + OLEDB_EXT_PIN_CTL,
&val, 1);
@@ -368,12 +377,19 @@ static int qpnp_oledb_regulator_disable(struct regulator_dev *rdev)
}
if (val & OLEDB_FORCE_PD_CTL_SPARE_BIT) {
- rc = qpnp_pbs_trigger_event(oledb->pbs_dev_node,
- trigger_bitmap);
+ rc = qpnp_oledb_sec_masked_write(oledb, oledb->base +
+ OLEDB_SPARE_CTL,
+ OLEDB_FORCE_PD_CTL_SPARE_BIT, 0);
if (rc < 0) {
- pr_err("Failed to trigger the PBS sequence\n");
+ pr_err("Failed to write SPARE_CTL rc=%d\n", rc);
return rc;
}
+
+ rc = qpnp_pbs_trigger_event(oledb->pbs_dev_node,
+ trigger_bitmap);
+ if (rc < 0)
+ pr_err("Failed to trigger the PBS sequence\n");
+
pr_debug("PBS event triggered\n");
} else {
pr_debug("OLEDB_SPARE_CTL register bit not set\n");
@@ -1085,8 +1101,22 @@ static int qpnp_oledb_parse_fast_precharge(struct qpnp_oledb *oledb)
static int qpnp_oledb_parse_dt(struct qpnp_oledb *oledb)
{
int rc = 0;
+ struct device_node *revid_dev_node;
struct device_node *of_node = oledb->dev->of_node;
+ revid_dev_node = of_parse_phandle(oledb->dev->of_node,
+ "qcom,pmic-revid", 0);
+ if (!revid_dev_node) {
+ pr_err("Missing qcom,pmic-revid property - driver failed\n");
+ return -EINVAL;
+ }
+
+ oledb->pmic_rev_id = get_revid_data(revid_dev_node);
+ if (IS_ERR(oledb->pmic_rev_id)) {
+ pr_debug("Unable to get revid data\n");
+ return -EPROBE_DEFER;
+ }
+
oledb->swire_control =
of_property_read_bool(of_node, "qcom,swire-control");
@@ -1100,8 +1130,14 @@ static int qpnp_oledb_parse_dt(struct qpnp_oledb *oledb)
oledb->pbs_control =
of_property_read_bool(of_node, "qcom,pbs-control");
- oledb->force_pd_control =
- of_property_read_bool(of_node, "qcom,force-pd-control");
+ /* Use the force_pd_control only for PM660A versions <= v2.0 */
+ if (oledb->pmic_rev_id->pmic_subtype == PM660L_SUBTYPE &&
+ oledb->pmic_rev_id->rev4 <= PM660L_V2P0_REV4) {
+ if (!(oledb->pmic_rev_id->rev4 == PM660L_V2P0_REV4 &&
+ oledb->pmic_rev_id->rev2 > PM660L_V2P0_REV2)) {
+ oledb->force_pd_control = true;
+ }
+ }
if (oledb->force_pd_control) {
oledb->pbs_dev_node = of_parse_phandle(of_node,
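The revision gate above enables `force_pd_control` only for PM660L parts at or below v2.0, i.e. when `(rev4, rev2)` is lexicographically at most `(PM660L_V2P0_REV4, PM660L_V2P0_REV2)`. A sketch of that predicate, with placeholder constants (the real subtype and revision values live in `qpnp-revid.h` and are not reproduced here):

```c
#include <assert.h>
#include <stdbool.h>

/* Placeholder values, NOT the real qpnp-revid constants. */
#define PM660L_SUBTYPE   0x1b
#define PM660L_V2P0_REV4 2
#define PM660L_V2P0_REV2 0

/* Mirrors the nested if in qpnp_oledb_parse_dt: PM660L only, rev4 at
 * most v2.0's, and later spins of the same rev4 excluded. */
static bool want_force_pd(int subtype, int rev4, int rev2)
{
	if (subtype != PM660L_SUBTYPE || rev4 > PM660L_V2P0_REV4)
		return false;
	return !(rev4 == PM660L_V2P0_REV4 && rev2 > PM660L_V2P0_REV2);
}
```

Writing the gate this way makes the cutoff explicit: everything strictly below the v2.0 `rev4` passes, and at exactly v2.0's `rev4` only `rev2` values up to v2.0's qualify.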
@@ -1199,13 +1235,6 @@ static int qpnp_oledb_force_pulldown_config(struct qpnp_oledb *oledb)
int rc = 0;
u8 val;
- rc = qpnp_oledb_sec_masked_write(oledb, oledb->base +
- OLEDB_SPARE_CTL, OLEDB_FORCE_PD_CTL_SPARE_BIT, 0);
- if (rc < 0) {
- pr_err("Failed to write SPARE_CTL rc=%d\n", rc);
- return rc;
- }
-
val = 1;
rc = qpnp_oledb_write(oledb, oledb->base + OLEDB_PD_CTL,
&val, 1);
@@ -1227,14 +1256,31 @@ static int qpnp_labibb_notifier_cb(struct notifier_block *nb,
unsigned long action, void *data)
{
int rc = 0;
+ u8 val;
struct qpnp_oledb *oledb = container_of(nb, struct qpnp_oledb,
oledb_nb);
+ if (action == LAB_VREG_NOT_OK) {
+ /* short circuit detected. Disable OLEDB module */
+ val = 0;
+ rc = qpnp_oledb_write(oledb, oledb->base + OLEDB_MODULE_RDY,
+ &val, 1);
+ if (rc < 0) {
+ pr_err("Failed to write MODULE_RDY rc=%d\n", rc);
+ return NOTIFY_STOP;
+ }
+ oledb->lab_sc_detected = true;
+ oledb->mod_enable = false;
+ pr_crit("LAB SC detected, disabling OLEDB forever!\n");
+ }
+
if (action == LAB_VREG_OK) {
/* Disable SWIRE pull down control and enable via spmi mode */
rc = qpnp_oledb_force_pulldown_config(oledb);
- if (rc < 0)
+ if (rc < 0) {
+ pr_err("Failed to config force pull down\n");
return NOTIFY_STOP;
+ }
}
return NOTIFY_OK;
@@ -1281,7 +1327,11 @@ static int qpnp_oledb_regulator_probe(struct platform_device *pdev)
return rc;
}
- if (oledb->force_pd_control) {
+ /* Enable LAB short circuit notification support */
+ if (oledb->pmic_rev_id->pmic_subtype == PM660L_SUBTYPE)
+ oledb->handle_lab_sc_notification = true;
+
+ if (oledb->force_pd_control || oledb->handle_lab_sc_notification) {
oledb->oledb_nb.notifier_call = qpnp_labibb_notifier_cb;
rc = qpnp_labibb_notifier_register(&oledb->oledb_nb);
if (rc < 0) {
diff --git a/drivers/regulator/refgen.c b/drivers/regulator/refgen.c
index 629fee0..830e1b0 100644
--- a/drivers/regulator/refgen.c
+++ b/drivers/regulator/refgen.c
@@ -31,7 +31,7 @@
#define REFGEN_BIAS_EN_DISABLE 0x6
#define REFGEN_REG_BG_CTRL 0x14
-#define REFGEN_BG_CTRL_MASK GENMASK(2, 0)
+#define REFGEN_BG_CTRL_MASK GENMASK(2, 1)
#define REFGEN_BG_CTRL_ENABLE 0x6
#define REFGEN_BG_CTRL_DISABLE 0x4
@@ -41,11 +41,21 @@ struct refgen {
void __iomem *addr;
};
+static void masked_writel(u32 val, u32 mask, void __iomem *addr)
+{
+ u32 reg;
+
+ reg = readl_relaxed(addr);
+ reg = (reg & ~mask) | (val & mask);
+ writel_relaxed(reg, addr);
+}
+
static int refgen_enable(struct regulator_dev *rdev)
{
struct refgen *vreg = rdev_get_drvdata(rdev);
- writel_relaxed(REFGEN_BG_CTRL_ENABLE, vreg->addr + REFGEN_REG_BG_CTRL);
+ masked_writel(REFGEN_BG_CTRL_ENABLE, REFGEN_BG_CTRL_MASK,
+ vreg->addr + REFGEN_REG_BG_CTRL);
writel_relaxed(REFGEN_BIAS_EN_ENABLE, vreg->addr + REFGEN_REG_BIAS_EN);
return 0;
@@ -56,7 +66,8 @@ static int refgen_disable(struct regulator_dev *rdev)
struct refgen *vreg = rdev_get_drvdata(rdev);
writel_relaxed(REFGEN_BIAS_EN_DISABLE, vreg->addr + REFGEN_REG_BIAS_EN);
- writel_relaxed(REFGEN_BG_CTRL_DISABLE, vreg->addr + REFGEN_REG_BG_CTRL);
+ masked_writel(REFGEN_BG_CTRL_DISABLE, REFGEN_BG_CTRL_MASK,
+ vreg->addr + REFGEN_REG_BG_CTRL);
return 0;
}
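The refgen change does two related things: it narrows `REFGEN_BG_CTRL_MASK` from `GENMASK(2, 0)` to `GENMASK(2, 1)` so bit 0 is no longer claimed, and it replaces the plain `writel_relaxed()` with a masked read-modify-write so bits outside the mask survive an enable/disable. A user-space sketch, with a plain variable standing in for the MMIO register and a simplified 32-bit `GENMASK`:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified 32-bit GENMASK(h, l): bits h..l inclusive set. */
#define GENMASK(h, l) (((~0u) >> (31 - (h))) & ((~0u) << (l)))

static uint32_t fake_reg;   /* stands in for the mapped register */

/* Same shape as the driver's masked_writel(): read, merge under the
 * mask, write back — untouched bits are preserved. */
static void masked_write(uint32_t val, uint32_t mask)
{
	uint32_t reg = fake_reg;              /* readl_relaxed(addr)     */
	reg = (reg & ~mask) | (val & mask);
	fake_reg = reg;                       /* writel_relaxed(reg, ..) */
}
```

With the narrowed mask, writing `REFGEN_BG_CTRL_ENABLE` (0x6) no longer clears whatever bit 0 held, which is the point of the fix.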
diff --git a/drivers/rtc/rtc-rx8010.c b/drivers/rtc/rtc-rx8010.c
index 7163b91..d08da37 100644
--- a/drivers/rtc/rtc-rx8010.c
+++ b/drivers/rtc/rtc-rx8010.c
@@ -63,7 +63,6 @@ struct rx8010_data {
struct i2c_client *client;
struct rtc_device *rtc;
u8 ctrlreg;
- spinlock_t flags_lock;
};
static irqreturn_t rx8010_irq_1_handler(int irq, void *dev_id)
@@ -72,12 +71,12 @@ static irqreturn_t rx8010_irq_1_handler(int irq, void *dev_id)
struct rx8010_data *rx8010 = i2c_get_clientdata(client);
int flagreg;
- spin_lock(&rx8010->flags_lock);
+ mutex_lock(&rx8010->rtc->ops_lock);
flagreg = i2c_smbus_read_byte_data(client, RX8010_FLAG);
if (flagreg <= 0) {
- spin_unlock(&rx8010->flags_lock);
+ mutex_unlock(&rx8010->rtc->ops_lock);
return IRQ_NONE;
}
@@ -101,7 +100,7 @@ static irqreturn_t rx8010_irq_1_handler(int irq, void *dev_id)
i2c_smbus_write_byte_data(client, RX8010_FLAG, flagreg);
- spin_unlock(&rx8010->flags_lock);
+ mutex_unlock(&rx8010->rtc->ops_lock);
return IRQ_HANDLED;
}
@@ -143,7 +142,6 @@ static int rx8010_set_time(struct device *dev, struct rtc_time *dt)
u8 date[7];
int ctrl, flagreg;
int ret;
- unsigned long irqflags;
if ((dt->tm_year < 100) || (dt->tm_year > 199))
return -EINVAL;
@@ -181,11 +179,8 @@ static int rx8010_set_time(struct device *dev, struct rtc_time *dt)
if (ret < 0)
return ret;
- spin_lock_irqsave(&rx8010->flags_lock, irqflags);
-
flagreg = i2c_smbus_read_byte_data(rx8010->client, RX8010_FLAG);
if (flagreg < 0) {
- spin_unlock_irqrestore(&rx8010->flags_lock, irqflags);
return flagreg;
}
@@ -193,8 +188,6 @@ static int rx8010_set_time(struct device *dev, struct rtc_time *dt)
ret = i2c_smbus_write_byte_data(rx8010->client, RX8010_FLAG,
flagreg & ~RX8010_FLAG_VLF);
- spin_unlock_irqrestore(&rx8010->flags_lock, irqflags);
-
return 0;
}
@@ -288,12 +281,9 @@ static int rx8010_set_alarm(struct device *dev, struct rtc_wkalrm *t)
u8 alarmvals[3];
int extreg, flagreg;
int err;
- unsigned long irqflags;
- spin_lock_irqsave(&rx8010->flags_lock, irqflags);
flagreg = i2c_smbus_read_byte_data(client, RX8010_FLAG);
if (flagreg < 0) {
- spin_unlock_irqrestore(&rx8010->flags_lock, irqflags);
return flagreg;
}
@@ -302,14 +292,12 @@ static int rx8010_set_alarm(struct device *dev, struct rtc_wkalrm *t)
err = i2c_smbus_write_byte_data(rx8010->client, RX8010_CTRL,
rx8010->ctrlreg);
if (err < 0) {
- spin_unlock_irqrestore(&rx8010->flags_lock, irqflags);
return err;
}
}
flagreg &= ~RX8010_FLAG_AF;
err = i2c_smbus_write_byte_data(rx8010->client, RX8010_FLAG, flagreg);
- spin_unlock_irqrestore(&rx8010->flags_lock, irqflags);
if (err < 0)
return err;
@@ -404,7 +392,6 @@ static int rx8010_ioctl(struct device *dev, unsigned int cmd, unsigned long arg)
struct rx8010_data *rx8010 = dev_get_drvdata(dev);
int ret, tmp;
int flagreg;
- unsigned long irqflags;
switch (cmd) {
case RTC_VL_READ:
@@ -419,16 +406,13 @@ static int rx8010_ioctl(struct device *dev, unsigned int cmd, unsigned long arg)
return 0;
case RTC_VL_CLR:
- spin_lock_irqsave(&rx8010->flags_lock, irqflags);
flagreg = i2c_smbus_read_byte_data(rx8010->client, RX8010_FLAG);
if (flagreg < 0) {
- spin_unlock_irqrestore(&rx8010->flags_lock, irqflags);
return flagreg;
}
flagreg &= ~RX8010_FLAG_VLF;
ret = i2c_smbus_write_byte_data(client, RX8010_FLAG, flagreg);
- spin_unlock_irqrestore(&rx8010->flags_lock, irqflags);
if (ret < 0)
return ret;
@@ -466,8 +450,6 @@ static int rx8010_probe(struct i2c_client *client,
rx8010->client = client;
i2c_set_clientdata(client, rx8010);
- spin_lock_init(&rx8010->flags_lock);
-
err = rx8010_init_client(client);
if (err)
return err;
diff --git a/drivers/s390/block/dasd.c b/drivers/s390/block/dasd.c
index 1de0890..5ecd408 100644
--- a/drivers/s390/block/dasd.c
+++ b/drivers/s390/block/dasd.c
@@ -1704,8 +1704,11 @@ void dasd_int_handler(struct ccw_device *cdev, unsigned long intparm,
/* check for attention message */
if (scsw_dstat(&irb->scsw) & DEV_STAT_ATTENTION) {
device = dasd_device_from_cdev_locked(cdev);
- device->discipline->check_attention(device, irb->esw.esw1.lpum);
- dasd_put_device(device);
+ if (!IS_ERR(device)) {
+ device->discipline->check_attention(device,
+ irb->esw.esw1.lpum);
+ dasd_put_device(device);
+ }
}
if (!cqr)
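The dasd fix guards against dereferencing an error-encoded pointer: `dasd_device_from_cdev_locked()` returns an `ERR_PTR` on failure, so the result must pass `IS_ERR()` before use. A simplified reimplementation of the kernel's convention (the top `MAX_ERRNO` addresses encode a negative errno) — not the actual `linux/err.h` headers:

```c
#include <assert.h>
#include <stddef.h>

#define MAX_ERRNO 4095

/* Encode a negative errno as a pointer, kernel-style. */
static void *ERR_PTR(long error) { return (void *)error; }

/* True when the pointer is really an encoded errno, i.e. it falls in
 * the top MAX_ERRNO addresses of the address space. */
static int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

static int some_device;   /* stands in for a real struct dasd_device */
```

Skipping the `IS_ERR()` check — as the pre-patch interrupt handler did — means calling through `device->discipline` on what is actually `(void *)-ENODEV` or similar.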
diff --git a/drivers/s390/net/qeth_core.h b/drivers/s390/net/qeth_core.h
index f3756ca..d55e643 100644
--- a/drivers/s390/net/qeth_core.h
+++ b/drivers/s390/net/qeth_core.h
@@ -921,7 +921,6 @@ void qeth_clear_thread_running_bit(struct qeth_card *, unsigned long);
int qeth_core_hardsetup_card(struct qeth_card *);
void qeth_print_status_message(struct qeth_card *);
int qeth_init_qdio_queues(struct qeth_card *);
-int qeth_send_startlan(struct qeth_card *);
int qeth_send_ipa_cmd(struct qeth_card *, struct qeth_cmd_buffer *,
int (*reply_cb)
(struct qeth_card *, struct qeth_reply *, unsigned long),
diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
index e8c4830..21ef802 100644
--- a/drivers/s390/net/qeth_core_main.c
+++ b/drivers/s390/net/qeth_core_main.c
@@ -2944,7 +2944,7 @@ int qeth_send_ipa_cmd(struct qeth_card *card, struct qeth_cmd_buffer *iob,
}
EXPORT_SYMBOL_GPL(qeth_send_ipa_cmd);
-int qeth_send_startlan(struct qeth_card *card)
+static int qeth_send_startlan(struct qeth_card *card)
{
int rc;
struct qeth_cmd_buffer *iob;
@@ -2957,7 +2957,6 @@ int qeth_send_startlan(struct qeth_card *card)
rc = qeth_send_ipa_cmd(card, iob, NULL, NULL);
return rc;
}
-EXPORT_SYMBOL_GPL(qeth_send_startlan);
static int qeth_default_setadapterparms_cb(struct qeth_card *card,
struct qeth_reply *reply, unsigned long data)
@@ -5091,6 +5090,20 @@ int qeth_core_hardsetup_card(struct qeth_card *card)
goto out;
}
+ rc = qeth_send_startlan(card);
+ if (rc) {
+ QETH_DBF_TEXT_(SETUP, 2, "6err%d", rc);
+ if (rc == IPA_RC_LAN_OFFLINE) {
+ dev_warn(&card->gdev->dev,
+ "The LAN is offline\n");
+ card->lan_online = 0;
+ } else {
+ rc = -ENODEV;
+ goto out;
+ }
+ } else
+ card->lan_online = 1;
+
card->options.ipa4.supported_funcs = 0;
card->options.ipa6.supported_funcs = 0;
card->options.adp.supported_funcs = 0;
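The hunk above hoists the STARTLAN call out of the l2/l3 disciplines into `qeth_core_hardsetup_card()`, keeping the same two-way outcome: `IPA_RC_LAN_OFFLINE` is recoverable (mark the LAN offline and continue), anything else fails setup with `-ENODEV`. A sketch of that return-code mapping; `IPA_RC_LAN_OFFLINE` is 0xe080, the literal the removed per-discipline copies tested:

```c
#include <assert.h>
#include <errno.h>

#define IPA_RC_LAN_OFFLINE 0xe080

/* Returns 0 when hardsetup may proceed, -ENODEV on a fatal STARTLAN
 * failure; *lan_online mirrors card->lan_online. */
static int handle_startlan_rc(int rc, int *lan_online)
{
	if (!rc) {
		*lan_online = 1;
		return 0;
	}
	if (rc == IPA_RC_LAN_OFFLINE) {
		*lan_online = 0;   /* keep going; the LAN may come back */
		return 0;
	}
	return -ENODEV;
}
```

Centralizing this removes the duplicated `contin:` blocks that the l2 and l3 hunks below delete.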
@@ -5102,14 +5115,14 @@ int qeth_core_hardsetup_card(struct qeth_card *card)
if (qeth_is_supported(card, IPA_SETADAPTERPARMS)) {
rc = qeth_query_setadapterparms(card);
if (rc < 0) {
- QETH_DBF_TEXT_(SETUP, 2, "6err%d", rc);
+ QETH_DBF_TEXT_(SETUP, 2, "7err%d", rc);
goto out;
}
}
if (qeth_adp_supported(card, IPA_SETADP_SET_DIAG_ASSIST)) {
rc = qeth_query_setdiagass(card);
if (rc < 0) {
- QETH_DBF_TEXT_(SETUP, 2, "7err%d", rc);
+ QETH_DBF_TEXT_(SETUP, 2, "8err%d", rc);
goto out;
}
}
diff --git a/drivers/s390/net/qeth_l2_main.c b/drivers/s390/net/qeth_l2_main.c
index 5d010aa..8530477 100644
--- a/drivers/s390/net/qeth_l2_main.c
+++ b/drivers/s390/net/qeth_l2_main.c
@@ -1204,21 +1204,6 @@ static int __qeth_l2_set_online(struct ccwgroup_device *gdev, int recovery_mode)
/* softsetup */
QETH_DBF_TEXT(SETUP, 2, "softsetp");
- rc = qeth_send_startlan(card);
- if (rc) {
- QETH_DBF_TEXT_(SETUP, 2, "1err%d", rc);
- if (rc == 0xe080) {
- dev_warn(&card->gdev->dev,
- "The LAN is offline\n");
- card->lan_online = 0;
- goto contin;
- }
- rc = -ENODEV;
- goto out_remove;
- } else
- card->lan_online = 1;
-
-contin:
if ((card->info.type == QETH_CARD_TYPE_OSD) ||
(card->info.type == QETH_CARD_TYPE_OSX)) {
rc = qeth_l2_start_ipassists(card);
diff --git a/drivers/s390/net/qeth_l3_main.c b/drivers/s390/net/qeth_l3_main.c
index 171be5e..03a2619 100644
--- a/drivers/s390/net/qeth_l3_main.c
+++ b/drivers/s390/net/qeth_l3_main.c
@@ -3230,21 +3230,6 @@ static int __qeth_l3_set_online(struct ccwgroup_device *gdev, int recovery_mode)
/* softsetup */
QETH_DBF_TEXT(SETUP, 2, "softsetp");
- rc = qeth_send_startlan(card);
- if (rc) {
- QETH_DBF_TEXT_(SETUP, 2, "1err%d", rc);
- if (rc == 0xe080) {
- dev_warn(&card->gdev->dev,
- "The LAN is offline\n");
- card->lan_online = 0;
- goto contin;
- }
- rc = -ENODEV;
- goto out_remove;
- } else
- card->lan_online = 1;
-
-contin:
rc = qeth_l3_setadapter_parms(card);
if (rc)
QETH_DBF_TEXT_(SETUP, 2, "2err%04x", rc);
diff --git a/drivers/s390/net/qeth_l3_sys.c b/drivers/s390/net/qeth_l3_sys.c
index 0e00a5c..cffe42f 100644
--- a/drivers/s390/net/qeth_l3_sys.c
+++ b/drivers/s390/net/qeth_l3_sys.c
@@ -692,15 +692,15 @@ static ssize_t qeth_l3_dev_vipa_add_show(char *buf, struct qeth_card *card,
enum qeth_prot_versions proto)
{
struct qeth_ipaddr *ipaddr;
- struct hlist_node *tmp;
char addr_str[40];
+ int str_len = 0;
int entry_len; /* length of 1 entry string, differs between v4 and v6 */
- int i = 0;
+ int i;
entry_len = (proto == QETH_PROT_IPV4)? 12 : 40;
entry_len += 2; /* \n + terminator */
spin_lock_bh(&card->ip_lock);
- hash_for_each_safe(card->ip_htable, i, tmp, ipaddr, hnode) {
+ hash_for_each(card->ip_htable, i, ipaddr, hnode) {
if (ipaddr->proto != proto)
continue;
if (ipaddr->type != QETH_IP_TYPE_VIPA)
@@ -708,16 +708,17 @@ static ssize_t qeth_l3_dev_vipa_add_show(char *buf, struct qeth_card *card,
/* String must not be longer than PAGE_SIZE. So we check if
* string length gets near PAGE_SIZE. Then we can safely display
* the next IPv6 address (worst case, compared to IPv4) */
- if ((PAGE_SIZE - i) <= entry_len)
+ if ((PAGE_SIZE - str_len) <= entry_len)
break;
qeth_l3_ipaddr_to_string(proto, (const u8 *)&ipaddr->u,
addr_str);
- i += snprintf(buf + i, PAGE_SIZE - i, "%s\n", addr_str);
+ str_len += snprintf(buf + str_len, PAGE_SIZE - str_len, "%s\n",
+ addr_str);
}
spin_unlock_bh(&card->ip_lock);
- i += snprintf(buf + i, PAGE_SIZE - i, "\n");
+ str_len += snprintf(buf + str_len, PAGE_SIZE - str_len, "\n");
- return i;
+ return str_len;
}
static ssize_t qeth_l3_dev_vipa_add4_show(struct device *dev,
@@ -854,15 +855,15 @@ static ssize_t qeth_l3_dev_rxip_add_show(char *buf, struct qeth_card *card,
enum qeth_prot_versions proto)
{
struct qeth_ipaddr *ipaddr;
- struct hlist_node *tmp;
char addr_str[40];
+ int str_len = 0;
int entry_len; /* length of 1 entry string, differs between v4 and v6 */
- int i = 0;
+ int i;
entry_len = (proto == QETH_PROT_IPV4)? 12 : 40;
entry_len += 2; /* \n + terminator */
spin_lock_bh(&card->ip_lock);
- hash_for_each_safe(card->ip_htable, i, tmp, ipaddr, hnode) {
+ hash_for_each(card->ip_htable, i, ipaddr, hnode) {
if (ipaddr->proto != proto)
continue;
if (ipaddr->type != QETH_IP_TYPE_RXIP)
@@ -870,16 +871,17 @@ static ssize_t qeth_l3_dev_rxip_add_show(char *buf, struct qeth_card *card,
/* String must not be longer than PAGE_SIZE. So we check if
* string length gets near PAGE_SIZE. Then we can safely display
* the next IPv6 address (worst case, compared to IPv4) */
- if ((PAGE_SIZE - i) <= entry_len)
+ if ((PAGE_SIZE - str_len) <= entry_len)
break;
qeth_l3_ipaddr_to_string(proto, (const u8 *)&ipaddr->u,
addr_str);
- i += snprintf(buf + i, PAGE_SIZE - i, "%s\n", addr_str);
+ str_len += snprintf(buf + str_len, PAGE_SIZE - str_len, "%s\n",
+ addr_str);
}
spin_unlock_bh(&card->ip_lock);
- i += snprintf(buf + i, PAGE_SIZE - i, "\n");
+ str_len += snprintf(buf + str_len, PAGE_SIZE - str_len, "\n");
- return i;
+ return str_len;
}
static ssize_t qeth_l3_dev_rxip_add4_show(struct device *dev,
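Both sysfs show fixes above cure the same bug: `hash_for_each_safe()` advances `i` through hash-bucket indices, yet the old code also used `i` as the output offset, corrupting the position and the returned length. The fix keeps a dedicated `str_len` accumulator. A user-space sketch of the corrected loop, with an array standing in for the hash table and a small buffer standing in for `PAGE_SIZE`:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define BUF_SZ 256   /* stands in for PAGE_SIZE */

/* Models the fixed show handler: str_len is the output offset, fully
 * separate from any iteration counter. */
static int format_entries(char *buf, const char *const *addrs, int n)
{
	int str_len = 0;
	int i;

	for (i = 0; i < n; i++) {
		if (BUF_SZ - str_len <= 42)   /* worst-case IPv6 entry */
			break;
		str_len += snprintf(buf + str_len, BUF_SZ - str_len,
				    "%s\n", addrs[i]);
	}
	str_len += snprintf(buf + str_len, BUF_SZ - str_len, "\n");
	return str_len;
}
```

With the bucket index doing double duty, the offset jumped with every bucket visited, so entries overwrote each other and the `ssize_t` returned to sysfs was wrong.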
diff --git a/drivers/scsi/aacraid/aachba.c b/drivers/scsi/aacraid/aachba.c
index 6678d1f..065f11a 100644
--- a/drivers/scsi/aacraid/aachba.c
+++ b/drivers/scsi/aacraid/aachba.c
@@ -2954,16 +2954,11 @@ static void aac_srb_callback(void *context, struct fib * fibptr)
return;
BUG_ON(fibptr == NULL);
+
dev = fibptr->dev;
- scsi_dma_unmap(scsicmd);
-
- /* expose physical device if expose_physicald flag is on */
- if (scsicmd->cmnd[0] == INQUIRY && !(scsicmd->cmnd[1] & 0x01)
- && expose_physicals > 0)
- aac_expose_phy_device(scsicmd);
-
srbreply = (struct aac_srb_reply *) fib_data(fibptr);
+
scsicmd->sense_buffer[0] = '\0'; /* Initialize sense valid flag to false */
if (fibptr->flags & FIB_CONTEXT_FLAG_FASTRESP) {
@@ -2976,158 +2971,176 @@ static void aac_srb_callback(void *context, struct fib * fibptr)
*/
scsi_set_resid(scsicmd, scsi_bufflen(scsicmd)
- le32_to_cpu(srbreply->data_xfer_length));
- /*
- * First check the fib status
- */
+ }
- if (le32_to_cpu(srbreply->status) != ST_OK) {
- int len;
- printk(KERN_WARNING "aac_srb_callback: srb failed, status = %d\n", le32_to_cpu(srbreply->status));
- len = min_t(u32, le32_to_cpu(srbreply->sense_data_size),
- SCSI_SENSE_BUFFERSIZE);
+ scsi_dma_unmap(scsicmd);
+
+ /* expose physical device if expose_physicals flag is on */
+ if (scsicmd->cmnd[0] == INQUIRY && !(scsicmd->cmnd[1] & 0x01)
+ && expose_physicals > 0)
+ aac_expose_phy_device(scsicmd);
+
+ /*
+ * First check the fib status
+ */
+
+ if (le32_to_cpu(srbreply->status) != ST_OK) {
+ int len;
+
+ pr_warn("aac_srb_callback: srb failed, status = %d\n",
+ le32_to_cpu(srbreply->status));
+ len = min_t(u32, le32_to_cpu(srbreply->sense_data_size),
+ SCSI_SENSE_BUFFERSIZE);
+ scsicmd->result = DID_ERROR << 16
+ | COMMAND_COMPLETE << 8
+ | SAM_STAT_CHECK_CONDITION;
+ memcpy(scsicmd->sense_buffer,
+ srbreply->sense_data, len);
+ }
+
+ /*
+ * Next check the srb status
+ */
+ switch ((le32_to_cpu(srbreply->srb_status))&0x3f) {
+ case SRB_STATUS_ERROR_RECOVERY:
+ case SRB_STATUS_PENDING:
+ case SRB_STATUS_SUCCESS:
+ scsicmd->result = DID_OK << 16 | COMMAND_COMPLETE << 8;
+ break;
+ case SRB_STATUS_DATA_OVERRUN:
+ switch (scsicmd->cmnd[0]) {
+ case READ_6:
+ case WRITE_6:
+ case READ_10:
+ case WRITE_10:
+ case READ_12:
+ case WRITE_12:
+ case READ_16:
+ case WRITE_16:
+ if (le32_to_cpu(srbreply->data_xfer_length)
+ < scsicmd->underflow)
+ pr_warn("aacraid: SCSI CMD underflow\n");
+ else
+ pr_warn("aacraid: SCSI CMD Data Overrun\n");
scsicmd->result = DID_ERROR << 16
- | COMMAND_COMPLETE << 8
- | SAM_STAT_CHECK_CONDITION;
- memcpy(scsicmd->sense_buffer,
- srbreply->sense_data, len);
- }
-
- /*
- * Next check the srb status
- */
- switch ((le32_to_cpu(srbreply->srb_status))&0x3f) {
- case SRB_STATUS_ERROR_RECOVERY:
- case SRB_STATUS_PENDING:
- case SRB_STATUS_SUCCESS:
+ | COMMAND_COMPLETE << 8;
+ break;
+ case INQUIRY:
+ scsicmd->result = DID_OK << 16
+ | COMMAND_COMPLETE << 8;
+ break;
+ default:
scsicmd->result = DID_OK << 16 | COMMAND_COMPLETE << 8;
break;
- case SRB_STATUS_DATA_OVERRUN:
- switch (scsicmd->cmnd[0]) {
- case READ_6:
- case WRITE_6:
- case READ_10:
- case WRITE_10:
- case READ_12:
- case WRITE_12:
- case READ_16:
- case WRITE_16:
- if (le32_to_cpu(srbreply->data_xfer_length)
- < scsicmd->underflow)
- printk(KERN_WARNING"aacraid: SCSI CMD underflow\n");
- else
- printk(KERN_WARNING"aacraid: SCSI CMD Data Overrun\n");
- scsicmd->result = DID_ERROR << 16
- | COMMAND_COMPLETE << 8;
- break;
- case INQUIRY: {
- scsicmd->result = DID_OK << 16
- | COMMAND_COMPLETE << 8;
- break;
- }
- default:
- scsicmd->result = DID_OK << 16 | COMMAND_COMPLETE << 8;
- break;
- }
- break;
- case SRB_STATUS_ABORTED:
- scsicmd->result = DID_ABORT << 16 | ABORT << 8;
- break;
- case SRB_STATUS_ABORT_FAILED:
- /*
- * Not sure about this one - but assuming the
- * hba was trying to abort for some reason
- */
- scsicmd->result = DID_ERROR << 16 | ABORT << 8;
- break;
- case SRB_STATUS_PARITY_ERROR:
- scsicmd->result = DID_PARITY << 16
- | MSG_PARITY_ERROR << 8;
- break;
- case SRB_STATUS_NO_DEVICE:
- case SRB_STATUS_INVALID_PATH_ID:
- case SRB_STATUS_INVALID_TARGET_ID:
- case SRB_STATUS_INVALID_LUN:
- case SRB_STATUS_SELECTION_TIMEOUT:
- scsicmd->result = DID_NO_CONNECT << 16
- | COMMAND_COMPLETE << 8;
- break;
+ }
+ break;
+ case SRB_STATUS_ABORTED:
+ scsicmd->result = DID_ABORT << 16 | ABORT << 8;
+ break;
+ case SRB_STATUS_ABORT_FAILED:
+ /*
+ * Not sure about this one - but assuming the
+ * hba was trying to abort for some reason
+ */
+ scsicmd->result = DID_ERROR << 16 | ABORT << 8;
+ break;
+ case SRB_STATUS_PARITY_ERROR:
+ scsicmd->result = DID_PARITY << 16
+ | MSG_PARITY_ERROR << 8;
+ break;
+ case SRB_STATUS_NO_DEVICE:
+ case SRB_STATUS_INVALID_PATH_ID:
+ case SRB_STATUS_INVALID_TARGET_ID:
+ case SRB_STATUS_INVALID_LUN:
+ case SRB_STATUS_SELECTION_TIMEOUT:
+ scsicmd->result = DID_NO_CONNECT << 16
+ | COMMAND_COMPLETE << 8;
+ break;
- case SRB_STATUS_COMMAND_TIMEOUT:
- case SRB_STATUS_TIMEOUT:
- scsicmd->result = DID_TIME_OUT << 16
- | COMMAND_COMPLETE << 8;
- break;
+ case SRB_STATUS_COMMAND_TIMEOUT:
+ case SRB_STATUS_TIMEOUT:
+ scsicmd->result = DID_TIME_OUT << 16
+ | COMMAND_COMPLETE << 8;
+ break;
- case SRB_STATUS_BUSY:
- scsicmd->result = DID_BUS_BUSY << 16
- | COMMAND_COMPLETE << 8;
- break;
+ case SRB_STATUS_BUSY:
+ scsicmd->result = DID_BUS_BUSY << 16
+ | COMMAND_COMPLETE << 8;
+ break;
- case SRB_STATUS_BUS_RESET:
- scsicmd->result = DID_RESET << 16
- | COMMAND_COMPLETE << 8;
- break;
+ case SRB_STATUS_BUS_RESET:
+ scsicmd->result = DID_RESET << 16
+ | COMMAND_COMPLETE << 8;
+ break;
- case SRB_STATUS_MESSAGE_REJECTED:
- scsicmd->result = DID_ERROR << 16
- | MESSAGE_REJECT << 8;
- break;
- case SRB_STATUS_REQUEST_FLUSHED:
- case SRB_STATUS_ERROR:
- case SRB_STATUS_INVALID_REQUEST:
- case SRB_STATUS_REQUEST_SENSE_FAILED:
- case SRB_STATUS_NO_HBA:
- case SRB_STATUS_UNEXPECTED_BUS_FREE:
- case SRB_STATUS_PHASE_SEQUENCE_FAILURE:
- case SRB_STATUS_BAD_SRB_BLOCK_LENGTH:
- case SRB_STATUS_DELAYED_RETRY:
- case SRB_STATUS_BAD_FUNCTION:
- case SRB_STATUS_NOT_STARTED:
- case SRB_STATUS_NOT_IN_USE:
- case SRB_STATUS_FORCE_ABORT:
- case SRB_STATUS_DOMAIN_VALIDATION_FAIL:
- default:
+ case SRB_STATUS_MESSAGE_REJECTED:
+ scsicmd->result = DID_ERROR << 16
+ | MESSAGE_REJECT << 8;
+ break;
+ case SRB_STATUS_REQUEST_FLUSHED:
+ case SRB_STATUS_ERROR:
+ case SRB_STATUS_INVALID_REQUEST:
+ case SRB_STATUS_REQUEST_SENSE_FAILED:
+ case SRB_STATUS_NO_HBA:
+ case SRB_STATUS_UNEXPECTED_BUS_FREE:
+ case SRB_STATUS_PHASE_SEQUENCE_FAILURE:
+ case SRB_STATUS_BAD_SRB_BLOCK_LENGTH:
+ case SRB_STATUS_DELAYED_RETRY:
+ case SRB_STATUS_BAD_FUNCTION:
+ case SRB_STATUS_NOT_STARTED:
+ case SRB_STATUS_NOT_IN_USE:
+ case SRB_STATUS_FORCE_ABORT:
+ case SRB_STATUS_DOMAIN_VALIDATION_FAIL:
+ default:
#ifdef AAC_DETAILED_STATUS_INFO
- printk(KERN_INFO "aacraid: SRB ERROR(%u) %s scsi cmd 0x%x - scsi status 0x%x\n",
- le32_to_cpu(srbreply->srb_status) & 0x3F,
- aac_get_status_string(
- le32_to_cpu(srbreply->srb_status) & 0x3F),
- scsicmd->cmnd[0],
- le32_to_cpu(srbreply->scsi_status));
+ pr_info("aacraid: SRB ERROR(%u) %s scsi cmd 0x%x - scsi status 0x%x\n",
+ le32_to_cpu(srbreply->srb_status) & 0x3F,
+ aac_get_status_string(
+ le32_to_cpu(srbreply->srb_status) & 0x3F),
+ scsicmd->cmnd[0],
+ le32_to_cpu(srbreply->scsi_status));
#endif
- if ((scsicmd->cmnd[0] == ATA_12)
- || (scsicmd->cmnd[0] == ATA_16)) {
- if (scsicmd->cmnd[2] & (0x01 << 5)) {
- scsicmd->result = DID_OK << 16
- | COMMAND_COMPLETE << 8;
- break;
- } else {
- scsicmd->result = DID_ERROR << 16
- | COMMAND_COMPLETE << 8;
- break;
- }
+ /*
+ * When the CC bit is SET by the host in ATA pass thru CDB,
+ * driver is supposed to return DID_OK
+ *
+ * When the CC bit is RESET by the host, driver should
+ * return DID_ERROR
+ */
+ if ((scsicmd->cmnd[0] == ATA_12)
+ || (scsicmd->cmnd[0] == ATA_16)) {
+
+ if (scsicmd->cmnd[2] & (0x01 << 5)) {
+ scsicmd->result = DID_OK << 16
+ | COMMAND_COMPLETE << 8;
+ break;
} else {
scsicmd->result = DID_ERROR << 16
| COMMAND_COMPLETE << 8;
- break;
+ break;
}
- }
- if (le32_to_cpu(srbreply->scsi_status)
- == SAM_STAT_CHECK_CONDITION) {
- int len;
-
- scsicmd->result |= SAM_STAT_CHECK_CONDITION;
- len = min_t(u32, le32_to_cpu(srbreply->sense_data_size),
- SCSI_SENSE_BUFFERSIZE);
-#ifdef AAC_DETAILED_STATUS_INFO
- printk(KERN_WARNING "aac_srb_callback: check condition, status = %d len=%d\n",
- le32_to_cpu(srbreply->status), len);
-#endif
- memcpy(scsicmd->sense_buffer,
- srbreply->sense_data, len);
+ } else {
+ scsicmd->result = DID_ERROR << 16
+ | COMMAND_COMPLETE << 8;
+ break;
}
}
+ if (le32_to_cpu(srbreply->scsi_status)
+ == SAM_STAT_CHECK_CONDITION) {
+ int len;
+
+ scsicmd->result |= SAM_STAT_CHECK_CONDITION;
+ len = min_t(u32, le32_to_cpu(srbreply->sense_data_size),
+ SCSI_SENSE_BUFFERSIZE);
+#ifdef AAC_DETAILED_STATUS_INFO
+ pr_warn("aac_srb_callback: check condition, status = %d len=%d\n",
+ le32_to_cpu(srbreply->status), len);
+#endif
+ memcpy(scsicmd->sense_buffer,
+ srbreply->sense_data, len);
+ }
+
/*
* OR in the scsi status (already shifted up a bit)
*/
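The restructured `aac_srb_callback()` keeps the classic SCSI result encoding: host byte in bits 16-23, message byte in bits 8-15, SCSI status in the low byte, with only the low six bits of `srb_status` selecting the switch case. A sketch of that encoding using the traditional `scsi.h` values:

```c
#include <assert.h>
#include <stdint.h>

/* Classic scsi.h host/message/status values used in the callback. */
#define DID_OK                   0x00
#define DID_TIME_OUT             0x03
#define DID_ERROR                0x07
#define COMMAND_COMPLETE         0x00
#define SAM_STAT_CHECK_CONDITION 0x02

/* Pack host byte, message byte and SCSI status exactly as the driver's
 * `DID_x << 16 | MSG << 8 | STAT` expressions do. */
static uint32_t scsi_result(uint8_t host, uint8_t msg, uint8_t status)
{
	return ((uint32_t)host << 16) | ((uint32_t)msg << 8) | status;
}
```

The `& 0x3f` on `srbreply->srb_status` strips controller-private flag bits from the upper part of the field before the dispatch.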
diff --git a/drivers/scsi/lpfc/lpfc_attr.c b/drivers/scsi/lpfc/lpfc_attr.c
index f101990..4532990 100644
--- a/drivers/scsi/lpfc/lpfc_attr.c
+++ b/drivers/scsi/lpfc/lpfc_attr.c
@@ -5131,6 +5131,19 @@ lpfc_free_sysfs_attr(struct lpfc_vport *vport)
*/
/**
+ * lpfc_get_host_symbolic_name - Copy symbolic name into the scsi host
+ * @shost: kernel scsi host pointer.
+ **/
+static void
+lpfc_get_host_symbolic_name(struct Scsi_Host *shost)
+{
+ struct lpfc_vport *vport = (struct lpfc_vport *)shost->hostdata;
+
+ lpfc_vport_symbolic_node_name(vport, fc_host_symbolic_name(shost),
+ sizeof fc_host_symbolic_name(shost));
+}
+
+/**
* lpfc_get_host_port_id - Copy the vport DID into the scsi host port id
* @shost: kernel scsi host pointer.
**/
@@ -5667,6 +5680,8 @@ struct fc_function_template lpfc_transport_functions = {
.show_host_supported_fc4s = 1,
.show_host_supported_speeds = 1,
.show_host_maxframe_size = 1,
+
+ .get_host_symbolic_name = lpfc_get_host_symbolic_name,
.show_host_symbolic_name = 1,
/* dynamic attributes the driver supports */
@@ -5734,6 +5749,8 @@ struct fc_function_template lpfc_vport_transport_functions = {
.show_host_supported_fc4s = 1,
.show_host_supported_speeds = 1,
.show_host_maxframe_size = 1,
+
+ .get_host_symbolic_name = lpfc_get_host_symbolic_name,
.show_host_symbolic_name = 1,
/* dynamic attributes the driver supports */
diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c
index 7b696d1..4df3cdc 100644
--- a/drivers/scsi/lpfc/lpfc_els.c
+++ b/drivers/scsi/lpfc/lpfc_els.c
@@ -1999,6 +1999,9 @@ lpfc_issue_els_plogi(struct lpfc_vport *vport, uint32_t did, uint8_t retry)
if (sp->cmn.fcphHigh < FC_PH3)
sp->cmn.fcphHigh = FC_PH3;
+ sp->cmn.valid_vendor_ver_level = 0;
+ memset(sp->vendorVersion, 0, sizeof(sp->vendorVersion));
+
lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_CMD,
"Issue PLOGI: did:x%x",
did, 0, 0);
@@ -3990,6 +3993,9 @@ lpfc_els_rsp_acc(struct lpfc_vport *vport, uint32_t flag,
} else {
memcpy(pcmd, &vport->fc_sparam,
sizeof(struct serv_parm));
+
+ sp->cmn.valid_vendor_ver_level = 0;
+ memset(sp->vendorVersion, 0, sizeof(sp->vendorVersion));
}
lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_RSP,
diff --git a/drivers/scsi/lpfc/lpfc_hw.h b/drivers/scsi/lpfc/lpfc_hw.h
index 8226543..3b970d37 100644
--- a/drivers/scsi/lpfc/lpfc_hw.h
+++ b/drivers/scsi/lpfc/lpfc_hw.h
@@ -360,6 +360,12 @@ struct csp {
* Word 1 Bit 30 in PLOGI request is random offset
*/
#define virtual_fabric_support randomOffset /* Word 1, bit 30 */
+/*
+ * Word 1 Bit 29 in common service parameter is overloaded.
+ * Word 1 Bit 29 in FLOGI response is multiple NPort assignment
+ * Word 1 Bit 29 in FLOGI/PLOGI request is Valid Vendor Version Level
+ */
+#define valid_vendor_ver_level response_multiple_NPort /* Word 1, bit 29 */
#ifdef __BIG_ENDIAN_BITFIELD
uint16_t request_multiple_Nport:1; /* FC Word 1, bit 31 */
uint16_t randomOffset:1; /* FC Word 1, bit 30 */
diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
index 2d4f4b5..8f1df76 100644
--- a/drivers/scsi/lpfc/lpfc_sli.c
+++ b/drivers/scsi/lpfc/lpfc_sli.c
@@ -119,6 +119,8 @@ lpfc_sli4_wq_put(struct lpfc_queue *q, union lpfc_wqe *wqe)
if (q->phba->sli3_options & LPFC_SLI4_PHWQ_ENABLED)
bf_set(wqe_wqid, &wqe->generic.wqe_com, q->queue_id);
lpfc_sli_pcimem_bcopy(wqe, temp_wqe, q->entry_size);
+ /* ensure WQE bcopy flushed before doorbell write */
+ wmb();
/* Update the host index before invoking device */
host_index = q->host_index;
@@ -10004,6 +10006,7 @@ lpfc_sli_abort_iotag_issue(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
iabt->ulpCommand = CMD_CLOSE_XRI_CN;
abtsiocbp->iocb_cmpl = lpfc_sli_abort_els_cmpl;
+ abtsiocbp->vport = vport;
lpfc_printf_vlog(vport, KERN_INFO, LOG_SLI,
"0339 Abort xri x%x, original iotag x%x, "
diff --git a/drivers/scsi/lpfc/lpfc_vport.c b/drivers/scsi/lpfc/lpfc_vport.c
index c27f4b7..e18bbc6 100644
--- a/drivers/scsi/lpfc/lpfc_vport.c
+++ b/drivers/scsi/lpfc/lpfc_vport.c
@@ -537,6 +537,12 @@ enable_vport(struct fc_vport *fc_vport)
spin_lock_irq(shost->host_lock);
vport->load_flag |= FC_LOADING;
+ if (vport->fc_flag & FC_VPORT_NEEDS_INIT_VPI) {
+ spin_unlock_irq(shost->host_lock);
+ lpfc_issue_init_vpi(vport);
+ goto out;
+ }
+
vport->fc_flag |= FC_VPORT_NEEDS_REG_VPI;
spin_unlock_irq(shost->host_lock);
@@ -557,6 +563,8 @@ enable_vport(struct fc_vport *fc_vport)
} else {
lpfc_vport_set_state(vport, FC_VPORT_FAILED);
}
+
+out:
lpfc_printf_vlog(vport, KERN_ERR, LOG_VPORT,
"1827 Vport Enabled.\n");
return VPORT_OK;
diff --git a/drivers/scsi/megaraid/megaraid_sas_fusion.c b/drivers/scsi/megaraid/megaraid_sas_fusion.c
index bd04bd0..a156451 100644
--- a/drivers/scsi/megaraid/megaraid_sas_fusion.c
+++ b/drivers/scsi/megaraid/megaraid_sas_fusion.c
@@ -1960,7 +1960,8 @@ static void megasas_build_ld_nonrw_fusion(struct megasas_instance *instance,
*/
static void
megasas_build_syspd_fusion(struct megasas_instance *instance,
- struct scsi_cmnd *scmd, struct megasas_cmd_fusion *cmd, u8 fp_possible)
+ struct scsi_cmnd *scmd, struct megasas_cmd_fusion *cmd,
+ bool fp_possible)
{
u32 device_id;
struct MPI2_RAID_SCSI_IO_REQUEST *io_request;
@@ -2064,6 +2065,8 @@ megasas_build_io_fusion(struct megasas_instance *instance,
u16 sge_count;
u8 cmd_type;
struct MPI2_RAID_SCSI_IO_REQUEST *io_request = cmd->io_request;
+ struct MR_PRIV_DEVICE *mr_device_priv_data;
+ mr_device_priv_data = scp->device->hostdata;
/* Zero out some fields so they don't get reused */
memset(io_request->LUN, 0x0, 8);
@@ -2092,12 +2095,14 @@ megasas_build_io_fusion(struct megasas_instance *instance,
megasas_build_ld_nonrw_fusion(instance, scp, cmd);
break;
case READ_WRITE_SYSPDIO:
+ megasas_build_syspd_fusion(instance, scp, cmd, true);
+ break;
case NON_READ_WRITE_SYSPDIO:
- if (instance->secure_jbod_support &&
- (cmd_type == NON_READ_WRITE_SYSPDIO))
- megasas_build_syspd_fusion(instance, scp, cmd, 0);
+ if (instance->secure_jbod_support ||
+ mr_device_priv_data->is_tm_capable)
+ megasas_build_syspd_fusion(instance, scp, cmd, false);
else
- megasas_build_syspd_fusion(instance, scp, cmd, 1);
+ megasas_build_syspd_fusion(instance, scp, cmd, true);
break;
default:
break;
diff --git a/drivers/scsi/qla2xxx/tcm_qla2xxx.c b/drivers/scsi/qla2xxx/tcm_qla2xxx.c
index 6643f6f..0ad8ece 100644
--- a/drivers/scsi/qla2xxx/tcm_qla2xxx.c
+++ b/drivers/scsi/qla2xxx/tcm_qla2xxx.c
@@ -484,7 +484,6 @@ static int tcm_qla2xxx_handle_cmd(scsi_qla_host_t *vha, struct qla_tgt_cmd *cmd,
static void tcm_qla2xxx_handle_data_work(struct work_struct *work)
{
struct qla_tgt_cmd *cmd = container_of(work, struct qla_tgt_cmd, work);
- unsigned long flags;
/*
* Ensure that the complete FCP WRITE payload has been received.
@@ -492,17 +491,6 @@ static void tcm_qla2xxx_handle_data_work(struct work_struct *work)
*/
cmd->cmd_in_wq = 0;
- spin_lock_irqsave(&cmd->cmd_lock, flags);
- cmd->cmd_flags |= CMD_FLAG_DATA_WORK;
- if (cmd->aborted) {
- cmd->cmd_flags |= CMD_FLAG_DATA_WORK_FREE;
- spin_unlock_irqrestore(&cmd->cmd_lock, flags);
-
- tcm_qla2xxx_free_cmd(cmd);
- return;
- }
- spin_unlock_irqrestore(&cmd->cmd_lock, flags);
-
cmd->vha->tgt_counters.qla_core_ret_ctio++;
if (!cmd->write_data_transferred) {
/*
@@ -682,34 +670,13 @@ static void tcm_qla2xxx_queue_tm_rsp(struct se_cmd *se_cmd)
qlt_xmit_tm_rsp(mcmd);
}
-
-#define DATA_WORK_NOT_FREE(_flags) \
- (( _flags & (CMD_FLAG_DATA_WORK|CMD_FLAG_DATA_WORK_FREE)) == \
- CMD_FLAG_DATA_WORK)
static void tcm_qla2xxx_aborted_task(struct se_cmd *se_cmd)
{
struct qla_tgt_cmd *cmd = container_of(se_cmd,
struct qla_tgt_cmd, se_cmd);
- unsigned long flags;
if (qlt_abort_cmd(cmd))
return;
-
- spin_lock_irqsave(&cmd->cmd_lock, flags);
- if ((cmd->state == QLA_TGT_STATE_NEW)||
- ((cmd->state == QLA_TGT_STATE_DATA_IN) &&
- DATA_WORK_NOT_FREE(cmd->cmd_flags)) ) {
-
- cmd->cmd_flags |= CMD_FLAG_DATA_WORK_FREE;
- spin_unlock_irqrestore(&cmd->cmd_lock, flags);
- /* Cmd have not reached firmware.
- * Use this trigger to free it. */
- tcm_qla2xxx_free_cmd(cmd);
- return;
- }
- spin_unlock_irqrestore(&cmd->cmd_lock, flags);
- return;
-
}
static void tcm_qla2xxx_clear_sess_lookup(struct tcm_qla2xxx_lport *,
diff --git a/drivers/scsi/ufs/ufs-qcom-debugfs.c b/drivers/scsi/ufs/ufs-qcom-debugfs.c
index 494ecd1..db4ecec 100644
--- a/drivers/scsi/ufs/ufs-qcom-debugfs.c
+++ b/drivers/scsi/ufs/ufs-qcom-debugfs.c
@@ -121,7 +121,8 @@ static ssize_t ufs_qcom_dbg_testbus_cfg_write(struct file *file,
struct ufs_hba *hba = host->hba;
- ret = simple_write_to_buffer(configuration, TESTBUS_CFG_BUFF_LINE_SIZE,
+ ret = simple_write_to_buffer(configuration,
+ TESTBUS_CFG_BUFF_LINE_SIZE - 1,
&buff_pos, ubuf, cnt);
if (ret < 0) {
dev_err(host->hba->dev, "%s: failed to read user data\n",
diff --git a/drivers/scsi/ufs/ufs-qcom-ice.c b/drivers/scsi/ufs/ufs-qcom-ice.c
index 0c86263..84765b1 100644
--- a/drivers/scsi/ufs/ufs-qcom-ice.c
+++ b/drivers/scsi/ufs/ufs-qcom-ice.c
@@ -170,17 +170,15 @@ int ufs_qcom_ice_get_dev(struct ufs_qcom_host *qcom_host)
static void ufs_qcom_ice_cfg_work(struct work_struct *work)
{
unsigned long flags;
- struct ice_data_setting ice_set;
struct ufs_qcom_host *qcom_host =
container_of(work, struct ufs_qcom_host, ice_cfg_work);
- struct request *req_pending = NULL;
if (!qcom_host->ice.vops->config_start)
return;
spin_lock_irqsave(&qcom_host->ice_work_lock, flags);
- req_pending = qcom_host->req_pending;
- if (!req_pending) {
+ if (!qcom_host->req_pending) {
+ qcom_host->work_pending = false;
spin_unlock_irqrestore(&qcom_host->ice_work_lock, flags);
return;
}
@@ -189,24 +187,15 @@ static void ufs_qcom_ice_cfg_work(struct work_struct *work)
/*
* config_start is called again as previous attempt returned -EAGAIN,
* this call shall now take care of the necessary key setup.
- * 'ice_set' will not actually be used, instead the next call to
- * config_start() for this request, in the normal call flow, will
- * succeed as the key has now been setup.
*/
qcom_host->ice.vops->config_start(qcom_host->ice.pdev,
- qcom_host->req_pending, &ice_set, false);
+ qcom_host->req_pending, NULL, false);
spin_lock_irqsave(&qcom_host->ice_work_lock, flags);
qcom_host->req_pending = NULL;
+ qcom_host->work_pending = false;
spin_unlock_irqrestore(&qcom_host->ice_work_lock, flags);
- /*
- * Resume with requests processing. We assume config_start has been
- * successful, but even if it wasn't we still must resume in order to
- * allow for the request to be retried.
- */
- ufshcd_scsi_unblock_requests(qcom_host->hba);
-
}
/**
@@ -285,18 +274,14 @@ int ufs_qcom_ice_req_setup(struct ufs_qcom_host *qcom_host,
* requires a non-atomic context, this means we should
* call the function again from the worker thread to do
* the configuration. For this request the error will
- * propagate so it will be re-queued and until the
- * configuration is is completed we block further
- * request processing.
+ * propagate so it will be re-queued.
*/
if (err == -EAGAIN) {
dev_dbg(qcom_host->hba->dev,
"%s: scheduling task for ice setup\n",
__func__);
- if (!qcom_host->req_pending) {
- ufshcd_scsi_block_requests(
- qcom_host->hba);
+ if (!qcom_host->work_pending) {
qcom_host->req_pending = cmd->request;
if (!schedule_work(
@@ -307,10 +292,9 @@ int ufs_qcom_ice_req_setup(struct ufs_qcom_host *qcom_host,
&qcom_host->ice_work_lock,
flags);
- ufshcd_scsi_unblock_requests(
- qcom_host->hba);
return err;
}
+ qcom_host->work_pending = true;
}
} else {
@@ -409,9 +393,7 @@ int ufs_qcom_ice_cfg_start(struct ufs_qcom_host *qcom_host,
* requires a non-atomic context, this means we should
* call the function again from the worker thread to do
* the configuration. For this request the error will
- * propagate so it will be re-queued and until the
- * configuration is is completed we block further
- * request processing.
+ * propagate so it will be re-queued.
*/
if (err == -EAGAIN) {
@@ -419,9 +401,8 @@ int ufs_qcom_ice_cfg_start(struct ufs_qcom_host *qcom_host,
"%s: scheduling task for ice setup\n",
__func__);
- if (!qcom_host->req_pending) {
- ufshcd_scsi_block_requests(
- qcom_host->hba);
+ if (!qcom_host->work_pending) {
+
qcom_host->req_pending = cmd->request;
if (!schedule_work(
&qcom_host->ice_cfg_work)) {
@@ -431,10 +412,9 @@ int ufs_qcom_ice_cfg_start(struct ufs_qcom_host *qcom_host,
&qcom_host->ice_work_lock,
flags);
- ufshcd_scsi_unblock_requests(
- qcom_host->hba);
return err;
}
+ qcom_host->work_pending = true;
}
} else {
diff --git a/drivers/scsi/ufs/ufs-qcom.c b/drivers/scsi/ufs/ufs-qcom.c
index 1ad191e..ba44523 100644
--- a/drivers/scsi/ufs/ufs-qcom.c
+++ b/drivers/scsi/ufs/ufs-qcom.c
@@ -2742,6 +2742,7 @@ static const struct of_device_id ufs_qcom_of_match[] = {
{ .compatible = "qcom,ufshc"},
{},
};
+MODULE_DEVICE_TABLE(of, ufs_qcom_of_match);
static const struct dev_pm_ops ufs_qcom_pm_ops = {
.suspend = ufshcd_pltfrm_suspend,
diff --git a/drivers/scsi/ufs/ufs-qcom.h b/drivers/scsi/ufs/ufs-qcom.h
index 0ab656e..9da3d19 100644
--- a/drivers/scsi/ufs/ufs-qcom.h
+++ b/drivers/scsi/ufs/ufs-qcom.h
@@ -375,6 +375,7 @@ struct ufs_qcom_host {
struct work_struct ice_cfg_work;
struct request *req_pending;
struct ufs_vreg *vddp_ref_clk;
+ bool work_pending;
};
static inline u32
diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index 4a43695..a6bc1da 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -908,7 +908,6 @@ static void ufshcd_print_host_state(struct ufs_hba *hba)
hba->capabilities, hba->caps);
dev_err(hba->dev, "quirks=0x%x, dev. quirks=0x%x\n", hba->quirks,
hba->dev_info.quirks);
- ufshcd_print_fsm_state(hba);
}
/**
@@ -7033,6 +7032,7 @@ static int ufshcd_abort(struct scsi_cmnd *cmd)
*/
scsi_print_command(cmd);
if (!hba->req_abort_count) {
+ ufshcd_print_fsm_state(hba);
ufshcd_print_host_regs(hba);
ufshcd_print_host_state(hba);
ufshcd_print_pwr_info(hba);
@@ -9260,7 +9260,6 @@ static int ufshcd_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op)
goto enable_gating;
}
- flush_work(&hba->eeh_work);
ret = ufshcd_link_state_transition(hba, req_link_state, 1);
if (ret)
goto set_dev_active;
diff --git a/drivers/soc/qcom/Kconfig b/drivers/soc/qcom/Kconfig
index 62306bad..22b7236 100644
--- a/drivers/soc/qcom/Kconfig
+++ b/drivers/soc/qcom/Kconfig
@@ -141,6 +141,22 @@
Say M here if you want to include support for the Qualcomm RPM as a
module. This will build a module called "qcom-smd-rpm".
+config MSM_SPM
+ bool "Driver support for SPM and AVS wrapper hardware"
+ help
+ Enables support for the SPM and AVS wrapper hardware on MSMs. The
+ SPM hardware is used to manage processor power during sleep. The
+ driver allows configuring the SPM to enable different low power
+ modes for both the core and L2.
+
+config MSM_L2_SPM
+ bool "SPM support for L2 cache"
+ help
+ Enable SPM driver support for the L2 cache. Some MSM chipsets allow
+ control of the L2 cache low power mode with a Subsystem Power Manager.
+ Enabling this driver allows configuring L2 SPM for low power modes
+ on supported chipsets.
+
config QCOM_SCM
bool "Secure Channel Manager (SCM) support"
default n
@@ -251,6 +267,32 @@
of deadlocks or cpu hangs these dump regions are captured to
give a snapshot of the system at the time of the crash.
+config QCOM_MINIDUMP
+ bool "QCOM Minidump Support"
+ depends on MSM_SMEM && QCOM_DLOAD_MODE
+ help
+ This enables the minidump feature. It allows various clients to
+ register their state to be dumped when the system enters a bad
+ state (panic, WDT, etc.). Minidump dumps all registered entries,
+ but only when DLOAD mode is enabled.
+
+config MINIDUMP_MAX_ENTRIES
+ int "Minidump Maximum num of entries"
+ default 200
+ depends on QCOM_MINIDUMP
+ help
+ This defines the maximum number of entries to be allocated for the
+ application subsystem in the Minidump table.
+
+config MSM_RPM_SMD
+ bool "RPM driver using SMD protocol"
+ help
+ RPM is the dedicated hardware engine for managing shared SoC
+ resources. This config adds driver support for using SMD as the
+ transport layer for communication with the RPM hardware. It also
+ selects the MSM_MPM config that programs the MPM module to monitor
+ interrupts during sleep modes.
+
config QCOM_BUS_SCALING
bool "Bus scaling driver"
help
@@ -293,6 +335,27 @@
processors in the System on a Chip (SoC) which allows basic
inter-processor communication.
+config MSM_SMD
+ depends on MSM_SMEM
+ bool "MSM Shared Memory Driver (SMD)"
+ help
+ Support for the shared memory interprocessor communication protocol
+ which provides virtual point-to-point serial channels between processes
+ on the apps processor and processes on other processors in the SoC.
+ Also includes support for the Shared Memory State Machine (SMSM)
+ protocol which provides a mechanism to publish single bit state
+ information to one or more processors in the SoC.
+
+config MSM_SMD_DEBUG
+ depends on MSM_SMD
+ bool "MSM SMD debug support"
+ help
+ Support for debugging SMD and SMSM communication between apps and
+ other processors in the SoC. Debug support primarily consists of
+ logs containing information such as which interrupts were processed,
+ which channels caused interrupt activity, and when internal state
+ change events occurred.
+
config MSM_GLINK
bool "Generic Link (G-Link)"
help
@@ -686,3 +749,11 @@
and ETM registers are saved and restored across power collapse.
If unsure, say 'N' here to avoid potential power, performance and
memory penalty.
+
+config QCOM_QDSS_BRIDGE
+ bool "Configure bridge driver for QTI/Qualcomm Technologies, Inc. MDM"
+ depends on MSM_MHI
+ help
+ This driver helps route diag traffic from the modem side over the QDSS
+ sub-system to USB on the APSS side. It acts as a bridge between the
+ MHI and USB interfaces. If unsure, say N.
diff --git a/drivers/soc/qcom/Makefile b/drivers/soc/qcom/Makefile
index 9a4e010..6deadc0 100644
--- a/drivers/soc/qcom/Makefile
+++ b/drivers/soc/qcom/Makefile
@@ -13,6 +13,7 @@
obj-$(CONFIG_QCOM_SMD) += smd.o
obj-$(CONFIG_QCOM_SMD_RPM) += smd-rpm.o
obj-$(CONFIG_QCOM_SMEM) += smem.o
+obj-$(CONFIG_MSM_SPM) += msm-spm.o spm_devices.o
obj-$(CONFIG_QCOM_SMEM_STATE) += smem_state.o
obj-$(CONFIG_QCOM_SMP2P) += smp2p.o
obj-$(CONFIG_QCOM_SMSM) += smsm.o
@@ -28,9 +29,11 @@
obj-$(CONFIG_QCOM_EUD) += eud.o
obj-$(CONFIG_QCOM_WATCHDOG_V2) += watchdog_v2.o
obj-$(CONFIG_QCOM_MEMORY_DUMP_V2) += memory_dump_v2.o
+obj-$(CONFIG_QCOM_MINIDUMP) += msm_minidump.o minidump_log.o
obj-$(CONFIG_QCOM_RUN_QUEUE_STATS) += rq_stats.o
obj-$(CONFIG_QCOM_SECURE_BUFFER) += secure_buffer.o
obj-$(CONFIG_MSM_SMEM) += msm_smem.o smem_debug.o
+obj-$(CONFIG_MSM_SMD) += msm_smd.o smd_debug.o smd_private.o smd_init_dt.o smsm_debug.o
obj-$(CONFIG_MSM_GLINK) += glink.o glink_debugfs.o glink_ssr.o
obj-$(CONFIG_MSM_GLINK_LOOPBACK_SERVER) += glink_loopback_server.o
obj-$(CONFIG_MSM_GLINK_SMEM_NATIVE_XPRT) += glink_smem_native_xprt.o
@@ -40,6 +43,10 @@
obj-$(CONFIG_TRACER_PKT) += tracer_pkt.o
obj-$(CONFIG_QCOM_BUS_SCALING) += msm_bus/
obj-$(CONFIG_QTI_RPMH_API) += rpmh.o
+obj-$(CONFIG_MSM_RPM_SMD) += rpm-smd.o
+ifdef CONFIG_DEBUG_FS
+obj-$(CONFIG_MSM_RPM_SMD) += rpm-smd-debug.o
+endif
obj-$(CONFIG_QTI_SYSTEM_PM) += system_pm.o
obj-$(CONFIG_MSM_SERVICE_NOTIFIER) += service-notifier.o
obj-$(CONFIG_MSM_SERVICE_LOCATOR) += service-locator.o
@@ -79,3 +86,4 @@
obj-$(CONFIG_QMP_DEBUGFS_CLIENT) += qmp-debugfs-client.o
obj-$(CONFIG_MSM_REMOTEQDSS) += remoteqdss.o
obj-$(CONFIG_QSEE_IPC_IRQ_BRIDGE) += qsee_ipc_irq_bridge.o
+obj-$(CONFIG_QCOM_QDSS_BRIDGE) += qdss_bridge.o
diff --git a/drivers/soc/qcom/cmd-db.c b/drivers/soc/qcom/cmd-db.c
index 252bd21..72abf50 100644
--- a/drivers/soc/qcom/cmd-db.c
+++ b/drivers/soc/qcom/cmd-db.c
@@ -197,6 +197,7 @@ int cmd_db_get_aux_data(const char *resource_id, u8 *data, int len)
len);
return len;
}
+EXPORT_SYMBOL(cmd_db_get_aux_data);
int cmd_db_get_aux_data_len(const char *resource_id)
{
@@ -208,6 +209,7 @@ int cmd_db_get_aux_data_len(const char *resource_id)
return ret < 0 ? 0 : ent.len;
}
+EXPORT_SYMBOL(cmd_db_get_aux_data_len);
u16 cmd_db_get_version(const char *resource_id)
{
diff --git a/drivers/soc/qcom/cpuss_dump.c b/drivers/soc/qcom/cpuss_dump.c
index 886a32f..eba1128 100644
--- a/drivers/soc/qcom/cpuss_dump.c
+++ b/drivers/soc/qcom/cpuss_dump.c
@@ -1,4 +1,4 @@
-/* Copyright (c) 2014-2016, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2014-2017, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -74,6 +74,8 @@ static int cpuss_dump_probe(struct platform_device *pdev)
dump_data->addr = dump_addr;
dump_data->len = size;
+ scnprintf(dump_data->name, sizeof(dump_data->name),
+ "KCPUSS%X", id);
dump_entry.id = id;
dump_entry.addr = virt_to_phys(dump_data);
ret = msm_dump_data_register(MSM_DUMP_TABLE_APPS, &dump_entry);
diff --git a/drivers/soc/qcom/dcc_v2.c b/drivers/soc/qcom/dcc_v2.c
index 457dc5f..cff407e 100644
--- a/drivers/soc/qcom/dcc_v2.c
+++ b/drivers/soc/qcom/dcc_v2.c
@@ -1610,6 +1610,14 @@ static struct platform_driver dcc_driver = {
static int __init dcc_init(void)
{
+ int ret;
+
+ ret = scm_is_secure_device();
+ if (ret == 0) {
+ pr_info("DCC is not available\n");
+ return -ENODEV;
+ }
+
return platform_driver_register(&dcc_driver);
}
pure_initcall(dcc_init);
diff --git a/drivers/soc/qcom/glink.c b/drivers/soc/qcom/glink.c
index ebed4d2..e6fd52e 100644
--- a/drivers/soc/qcom/glink.c
+++ b/drivers/soc/qcom/glink.c
@@ -1669,6 +1669,8 @@ void ch_purge_intent_lists(struct channel_ctx *ctx)
&ctx->local_rx_intent_list, list) {
ctx->notify_rx_abort(ctx, ctx->user_priv,
ptr_intent->pkt_priv);
+ ctx->transport_ptr->ops->deallocate_rx_intent(
+ ctx->transport_ptr->ops, ptr_intent);
list_del(&ptr_intent->list);
kfree(ptr_intent);
}
@@ -3767,6 +3769,8 @@ static void glink_dummy_xprt_ctx_release(struct rwref_lock *xprt_st_lock)
GLINK_INFO("%s: freeing transport [%s->%s]context\n", __func__,
xprt_ctx->name,
xprt_ctx->edge);
+ kfree(xprt_ctx->ops);
+ xprt_ctx->ops = NULL;
kfree(xprt_ctx);
}
@@ -4158,6 +4162,7 @@ static void glink_core_link_down(struct glink_transport_if *if_ptr)
rwref_write_get(&xprt_ptr->xprt_state_lhb0);
xprt_ptr->next_lcid = 1;
xprt_ptr->local_state = GLINK_XPRT_DOWN;
+ xprt_ptr->curr_qos_rate_kBps = 0;
xprt_ptr->local_version_idx = xprt_ptr->versions_entries - 1;
xprt_ptr->remote_version_idx = xprt_ptr->versions_entries - 1;
xprt_ptr->l_features =
@@ -4292,6 +4297,12 @@ static void glink_core_channel_cleanup(struct glink_core_xprt_ctx *xprt_ptr)
rwref_read_get(&xprt_ptr->xprt_state_lhb0);
ctx = get_first_ch_ctx(xprt_ptr);
while (ctx) {
+ spin_lock_irqsave(&xprt_ptr->tx_ready_lock_lhb3, flags);
+ spin_lock(&ctx->tx_lists_lock_lhc3);
+ if (!list_empty(&ctx->tx_active))
+ glink_qos_done_ch_tx(ctx);
+ spin_unlock(&ctx->tx_lists_lock_lhc3);
+ spin_unlock_irqrestore(&xprt_ptr->tx_ready_lock_lhb3, flags);
rwref_write_get_atomic(&ctx->ch_state_lhb2, true);
if (ctx->local_open_state == GLINK_CHANNEL_OPENED ||
ctx->local_open_state == GLINK_CHANNEL_OPENING) {
diff --git a/drivers/soc/qcom/glink_loopback_server.c b/drivers/soc/qcom/glink_loopback_server.c
index 94a3d8c..abd140e 100644
--- a/drivers/soc/qcom/glink_loopback_server.c
+++ b/drivers/soc/qcom/glink_loopback_server.c
@@ -140,7 +140,7 @@ static struct ctl_ch_info ctl_ch_tbl[] = {
{"LOOPBACK_CTL_APSS", "mpss", "smem"},
{"LOOPBACK_CTL_APSS", "lpass", "smem"},
{"LOOPBACK_CTL_APSS", "dsps", "smem"},
- {"LOOPBACK_CTL_APPS", "cdsp", "smem"},
+ {"LOOPBACK_CTL_APSS", "cdsp", "smem"},
{"LOOPBACK_CTL_APSS", "spss", "mailbox"},
{"LOOPBACK_CTL_APSS", "wdsp", "spi"},
};
diff --git a/drivers/soc/qcom/glink_private.h b/drivers/soc/qcom/glink_private.h
index 9810207..31c1721 100644
--- a/drivers/soc/qcom/glink_private.h
+++ b/drivers/soc/qcom/glink_private.h
@@ -699,7 +699,6 @@ enum ssr_command {
* received.
* edge: The G-Link edge name for the channel associated with
* this callback data
- * do_cleanup_data: Structure containing the G-Link SSR do_cleanup message.
* cb_kref: Kref object to maintain cb_data reference.
*/
struct ssr_notify_data {
@@ -707,7 +706,6 @@ struct ssr_notify_data {
unsigned int event;
bool responded;
const char *edge;
- struct do_cleanup_msg *do_cleanup_data;
struct kref cb_kref;
};
@@ -752,7 +750,6 @@ struct subsys_info {
* ssr_name: Name of the subsystem recognized by the SSR framework
* edge: Name of the G-Link edge
* xprt: Name of the G-Link transport
- * restarted: Indicates whether a restart has been triggered for this edge
* cb_data: Private callback data structure for notification functions
* notify_list_node: used to chain this structure in the notify list
*/
@@ -760,7 +757,6 @@ struct subsys_info_leaf {
const char *ssr_name;
const char *edge;
const char *xprt;
- bool restarted;
struct ssr_notify_data *cb_data;
struct list_head notify_list_node;
};
diff --git a/drivers/soc/qcom/glink_ssr.c b/drivers/soc/qcom/glink_ssr.c
index 4737288..dd436da 100644
--- a/drivers/soc/qcom/glink_ssr.c
+++ b/drivers/soc/qcom/glink_ssr.c
@@ -22,7 +22,6 @@
#include <linux/random.h>
#include <soc/qcom/glink.h>
#include <soc/qcom/subsystem_notif.h>
-#include <soc/qcom/subsystem_restart.h>
#include "glink_private.h"
#define GLINK_SSR_REPLY_TIMEOUT HZ
@@ -254,6 +253,8 @@ static void glink_ssr_link_state_cb(struct glink_link_state_cb_info *cb_info,
void glink_ssr_notify_rx(void *handle, const void *priv, const void *pkt_priv,
const void *ptr, size_t size)
{
+ struct do_cleanup_msg *do_cleanup_data =
+ (struct do_cleanup_msg *)pkt_priv;
struct ssr_notify_data *cb_data = (struct ssr_notify_data *)priv;
struct cleanup_done_msg *resp = (struct cleanup_done_msg *)ptr;
struct rx_done_ch_work *rx_done_work;
@@ -264,15 +265,15 @@ void glink_ssr_notify_rx(void *handle, const void *priv, const void *pkt_priv,
__func__);
return;
}
+ if (unlikely(!do_cleanup_data))
+ goto missing_do_cleanup_data;
if (unlikely(!cb_data))
goto missing_cb_data;
- if (unlikely(!cb_data->do_cleanup_data))
- goto missing_do_cleanup_data;
if (unlikely(!resp))
goto missing_response;
- if (unlikely(resp->version != cb_data->do_cleanup_data->version))
+ if (unlikely(resp->version != do_cleanup_data->version))
goto version_mismatch;
- if (unlikely(resp->seq_num != cb_data->do_cleanup_data->seq_num))
+ if (unlikely(resp->seq_num != do_cleanup_data->seq_num))
goto invalid_seq_number;
if (unlikely(resp->response != GLINK_SSR_CLEANUP_DONE))
goto wrong_response;
@@ -284,10 +285,9 @@ void glink_ssr_notify_rx(void *handle, const void *priv, const void *pkt_priv,
"<SSR> %s: Response from %s resp[%d] version[%d] seq_num[%d] restarted[%s]\n",
__func__, cb_data->edge, resp->response,
resp->version, resp->seq_num,
- cb_data->do_cleanup_data->name);
+ do_cleanup_data->name);
- kfree(cb_data->do_cleanup_data);
- cb_data->do_cleanup_data = NULL;
+ kfree(do_cleanup_data);
rx_done_work->ptr = ptr;
rx_done_work->handle = handle;
INIT_WORK(&rx_done_work->work, rx_done_cb_worker);
@@ -306,13 +306,13 @@ void glink_ssr_notify_rx(void *handle, const void *priv, const void *pkt_priv,
return;
version_mismatch:
GLINK_SSR_ERR("<SSR> %s: Version mismatch. %s[%d], %s[%d]\n", __func__,
- "do_cleanup version", cb_data->do_cleanup_data->version,
+ "do_cleanup version", do_cleanup_data->version,
"cleanup_done version", resp->version);
return;
invalid_seq_number:
GLINK_SSR_ERR("<SSR> %s: Invalid seq. number. %s[%d], %s[%d]\n",
__func__, "do_cleanup seq num",
- cb_data->do_cleanup_data->seq_num,
+ do_cleanup_data->seq_num,
"cleanup_done seq_num", resp->seq_num);
return;
wrong_response:
@@ -595,10 +595,8 @@ int notify_for_subsystem(struct subsys_info *ss_info)
do_cleanup_data->name_len = strlen(ss_info->edge);
strlcpy(do_cleanup_data->name, ss_info->edge,
do_cleanup_data->name_len + 1);
- ss_leaf_entry->cb_data->do_cleanup_data = do_cleanup_data;
- ret = glink_queue_rx_intent(handle,
- (void *)ss_leaf_entry->cb_data,
+ ret = glink_queue_rx_intent(handle, do_cleanup_data,
sizeof(struct cleanup_done_msg));
if (ret) {
GLINK_SSR_ERR(
@@ -607,15 +605,10 @@ int notify_for_subsystem(struct subsys_info *ss_info)
"queue_rx_intent failed", ret,
atomic_read(&responses_remaining));
kfree(do_cleanup_data);
- ss_leaf_entry->cb_data->do_cleanup_data = NULL;
- if (strcmp(ss_leaf_entry->ssr_name, "rpm")) {
- subsystem_restart(ss_leaf_entry->ssr_name);
- ss_leaf_entry->restarted = true;
- } else {
+ if (!strcmp(ss_leaf_entry->ssr_name, "rpm"))
panic("%s: Could not queue intent for RPM!\n",
__func__);
- }
atomic_dec(&responses_remaining);
kref_put(&ss_leaf_entry->cb_data->cb_kref,
cb_data_release);
@@ -623,12 +616,12 @@ int notify_for_subsystem(struct subsys_info *ss_info)
}
if (strcmp(ss_leaf_entry->ssr_name, "rpm"))
- ret = glink_tx(handle, ss_leaf_entry->cb_data,
+ ret = glink_tx(handle, do_cleanup_data,
do_cleanup_data,
sizeof(*do_cleanup_data),
GLINK_TX_REQ_INTENT);
else
- ret = glink_tx(handle, ss_leaf_entry->cb_data,
+ ret = glink_tx(handle, do_cleanup_data,
do_cleanup_data,
sizeof(*do_cleanup_data),
GLINK_TX_SINGLE_THREADED);
@@ -638,15 +631,10 @@ int notify_for_subsystem(struct subsys_info *ss_info)
__func__, ret, "resp. remaining",
atomic_read(&responses_remaining));
kfree(do_cleanup_data);
- ss_leaf_entry->cb_data->do_cleanup_data = NULL;
- if (strcmp(ss_leaf_entry->ssr_name, "rpm")) {
- subsystem_restart(ss_leaf_entry->ssr_name);
- ss_leaf_entry->restarted = true;
- } else {
+ if (!strcmp(ss_leaf_entry->ssr_name, "rpm"))
panic("%s: glink_tx() to RPM failed!\n",
__func__);
- }
atomic_dec(&responses_remaining);
kref_put(&ss_leaf_entry->cb_data->cb_kref,
cb_data_release);
@@ -688,11 +676,7 @@ int notify_for_subsystem(struct subsys_info *ss_info)
/* Check for RPM, as it can't be restarted */
if (!strcmp(ss_leaf_entry->ssr_name, "rpm"))
panic("%s: RPM failed to respond!\n", __func__);
- else if (!ss_leaf_entry->restarted)
- subsystem_restart(ss_leaf_entry->ssr_name);
}
- ss_leaf_entry->restarted = false;
-
if (!IS_ERR_OR_NULL(ss_leaf_entry->cb_data))
ss_leaf_entry->cb_data->responded = false;
kref_put(&ss_leaf_entry->cb_data->cb_kref, cb_data_release);
@@ -1012,7 +996,6 @@ static int glink_ssr_probe(struct platform_device *pdev)
ss_info_leaf->ssr_name = subsys_name;
ss_info_leaf->edge = edge;
ss_info_leaf->xprt = xprt;
- ss_info_leaf->restarted = false;
list_add_tail(&ss_info_leaf->notify_list_node,
&ss_info->notify_list);
ss_info->notify_list_len++;
diff --git a/drivers/soc/qcom/icnss.c b/drivers/soc/qcom/icnss.c
index 6d008d8..e475041 100644
--- a/drivers/soc/qcom/icnss.c
+++ b/drivers/soc/qcom/icnss.c
@@ -2156,6 +2156,12 @@ static int icnss_pd_restart_complete(struct icnss_priv *priv)
if (!priv->ops || !priv->ops->reinit)
goto out;
+ if (test_bit(ICNSS_FW_DOWN, &priv->state)) {
+ icnss_pr_err("FW is in bad state, state: 0x%lx\n",
+ priv->state);
+ goto out;
+ }
+
if (!test_bit(ICNSS_DRIVER_PROBED, &priv->state))
goto call_probe;
@@ -2228,6 +2234,12 @@ static int icnss_driver_event_register_driver(void *data)
if (test_bit(SKIP_QMI, &quirks))
set_bit(ICNSS_FW_READY, &penv->state);
+ if (test_bit(ICNSS_FW_DOWN, &penv->state)) {
+ icnss_pr_err("FW is in bad state, state: 0x%lx\n",
+ penv->state);
+ return -ENODEV;
+ }
+
if (!test_bit(ICNSS_FW_READY, &penv->state)) {
icnss_pr_dbg("FW is not ready yet, state: 0x%lx\n",
penv->state);
@@ -2476,7 +2488,7 @@ static int icnss_modem_notifier_nb(struct notifier_block *nb,
icnss_pr_vdbg("Modem-Notify: event %lu\n", code);
if (code == SUBSYS_AFTER_SHUTDOWN &&
- notif->crashed != CRASH_STATUS_WDOG_BITE) {
+ notif->crashed == CRASH_STATUS_ERR_FATAL) {
ret = icnss_assign_msa_perm_all(priv,
ICNSS_MSA_PERM_HLOS_ALL);
if (!ret) {
@@ -2494,8 +2506,17 @@ static int icnss_modem_notifier_nb(struct notifier_block *nb,
if (code != SUBSYS_BEFORE_SHUTDOWN)
return NOTIFY_OK;
- if (test_bit(ICNSS_PDR_REGISTERED, &priv->state))
+ if (test_bit(ICNSS_PDR_REGISTERED, &priv->state)) {
+ set_bit(ICNSS_FW_DOWN, &priv->state);
+ icnss_ignore_qmi_timeout(true);
+
+ fw_down_data.crashed = !!notif->crashed;
+ if (test_bit(ICNSS_FW_READY, &priv->state))
+ icnss_call_driver_uevent(priv,
+ ICNSS_UEVENT_FW_DOWN,
+ &fw_down_data);
return NOTIFY_OK;
+ }
icnss_pr_info("Modem went down, state: 0x%lx, crashed: %d\n",
priv->state, notif->crashed);
@@ -2629,14 +2650,18 @@ static int icnss_service_notifier_notify(struct notifier_block *nb,
icnss_pr_info("PD service down, pd_state: %d, state: 0x%lx: cause: %s\n",
*state, priv->state, icnss_pdr_cause[cause]);
event_post:
- set_bit(ICNSS_FW_DOWN, &priv->state);
- icnss_ignore_qmi_timeout(true);
- clear_bit(ICNSS_HOST_TRIGGERED_PDR, &priv->state);
+ if (!test_bit(ICNSS_FW_DOWN, &priv->state)) {
+ set_bit(ICNSS_FW_DOWN, &priv->state);
+ icnss_ignore_qmi_timeout(true);
- fw_down_data.crashed = event_data->crashed;
- if (test_bit(ICNSS_FW_READY, &priv->state))
- icnss_call_driver_uevent(priv, ICNSS_UEVENT_FW_DOWN,
- &fw_down_data);
+ fw_down_data.crashed = event_data->crashed;
+ if (test_bit(ICNSS_FW_READY, &priv->state))
+ icnss_call_driver_uevent(priv,
+ ICNSS_UEVENT_FW_DOWN,
+ &fw_down_data);
+ }
+
+ clear_bit(ICNSS_HOST_TRIGGERED_PDR, &priv->state);
icnss_driver_event_post(ICNSS_DRIVER_EVENT_PD_SERVICE_DOWN,
ICNSS_EVENT_SYNC, event_data);
done:
@@ -2788,7 +2813,8 @@ static int icnss_enable_recovery(struct icnss_priv *priv)
return 0;
}
-int icnss_register_driver(struct icnss_driver_ops *ops)
+int __icnss_register_driver(struct icnss_driver_ops *ops,
+ struct module *owner, const char *mod_name)
{
int ret = 0;
@@ -2819,7 +2845,7 @@ int icnss_register_driver(struct icnss_driver_ops *ops)
out:
return ret;
}
-EXPORT_SYMBOL(icnss_register_driver);
+EXPORT_SYMBOL(__icnss_register_driver);
int icnss_unregister_driver(struct icnss_driver_ops *ops)
{
@@ -2845,7 +2871,7 @@ int icnss_unregister_driver(struct icnss_driver_ops *ops)
}
EXPORT_SYMBOL(icnss_unregister_driver);
-int icnss_ce_request_irq(unsigned int ce_id,
+int icnss_ce_request_irq(struct device *dev, unsigned int ce_id,
irqreturn_t (*handler)(int, void *),
unsigned long flags, const char *name, void *ctx)
{
@@ -2853,7 +2879,7 @@ int icnss_ce_request_irq(unsigned int ce_id,
unsigned int irq;
struct ce_irq_list *irq_entry;
- if (!penv || !penv->pdev) {
+ if (!penv || !penv->pdev || !dev) {
ret = -ENODEV;
goto out;
}
@@ -2892,13 +2918,13 @@ int icnss_ce_request_irq(unsigned int ce_id,
}
EXPORT_SYMBOL(icnss_ce_request_irq);
-int icnss_ce_free_irq(unsigned int ce_id, void *ctx)
+int icnss_ce_free_irq(struct device *dev, unsigned int ce_id, void *ctx)
{
int ret = 0;
unsigned int irq;
struct ce_irq_list *irq_entry;
- if (!penv || !penv->pdev) {
+ if (!penv || !penv->pdev || !dev) {
ret = -ENODEV;
goto out;
}
@@ -2928,11 +2954,11 @@ int icnss_ce_free_irq(unsigned int ce_id, void *ctx)
}
EXPORT_SYMBOL(icnss_ce_free_irq);
-void icnss_enable_irq(unsigned int ce_id)
+void icnss_enable_irq(struct device *dev, unsigned int ce_id)
{
unsigned int irq;
- if (!penv || !penv->pdev) {
+ if (!penv || !penv->pdev || !dev) {
icnss_pr_err("Platform driver not initialized\n");
return;
}
@@ -2952,11 +2978,11 @@ void icnss_enable_irq(unsigned int ce_id)
}
EXPORT_SYMBOL(icnss_enable_irq);
-void icnss_disable_irq(unsigned int ce_id)
+void icnss_disable_irq(struct device *dev, unsigned int ce_id)
{
unsigned int irq;
- if (!penv || !penv->pdev) {
+ if (!penv || !penv->pdev || !dev) {
icnss_pr_err("Platform driver not initialized\n");
return;
}
@@ -2977,9 +3003,9 @@ void icnss_disable_irq(unsigned int ce_id)
}
EXPORT_SYMBOL(icnss_disable_irq);
-int icnss_get_soc_info(struct icnss_soc_info *info)
+int icnss_get_soc_info(struct device *dev, struct icnss_soc_info *info)
{
- if (!penv) {
+ if (!penv || !dev) {
icnss_pr_err("Platform driver not initialized\n");
return -EINVAL;
}
@@ -2999,10 +3025,13 @@ int icnss_get_soc_info(struct icnss_soc_info *info)
}
EXPORT_SYMBOL(icnss_get_soc_info);
-int icnss_set_fw_log_mode(uint8_t fw_log_mode)
+int icnss_set_fw_log_mode(struct device *dev, uint8_t fw_log_mode)
{
int ret;
+ if (!dev)
+ return -ENODEV;
+
icnss_pr_dbg("FW log mode: %u\n", fw_log_mode);
ret = wlfw_ini_send_sync_msg(fw_log_mode);
@@ -3085,7 +3114,7 @@ int icnss_athdiag_write(struct device *dev, uint32_t offset,
}
EXPORT_SYMBOL(icnss_athdiag_write);
-int icnss_wlan_enable(struct icnss_wlan_enable_cfg *config,
+int icnss_wlan_enable(struct device *dev, struct icnss_wlan_enable_cfg *config,
enum icnss_driver_mode mode,
const char *host_version)
{
@@ -3093,6 +3122,9 @@ int icnss_wlan_enable(struct icnss_wlan_enable_cfg *config,
u32 i;
int ret;
+ if (!dev)
+ return -ENODEV;
+
icnss_pr_dbg("Mode: %d, config: %p, host_version: %s\n",
mode, config, host_version);
@@ -3159,23 +3191,26 @@ int icnss_wlan_enable(struct icnss_wlan_enable_cfg *config,
}
EXPORT_SYMBOL(icnss_wlan_enable);
-int icnss_wlan_disable(enum icnss_driver_mode mode)
+int icnss_wlan_disable(struct device *dev, enum icnss_driver_mode mode)
{
+ if (!dev)
+ return -ENODEV;
+
return wlfw_wlan_mode_send_sync_msg(QMI_WLFW_OFF_V01);
}
EXPORT_SYMBOL(icnss_wlan_disable);
-bool icnss_is_qmi_disable(void)
+bool icnss_is_qmi_disable(struct device *dev)
{
return test_bit(SKIP_QMI, &quirks) ? true : false;
}
EXPORT_SYMBOL(icnss_is_qmi_disable);
-int icnss_get_ce_id(int irq)
+int icnss_get_ce_id(struct device *dev, int irq)
{
int i;
- if (!penv || !penv->pdev)
+ if (!penv || !penv->pdev || !dev)
return -ENODEV;
for (i = 0; i < ICNSS_MAX_IRQ_REGISTRATIONS; i++) {
@@ -3189,11 +3224,11 @@ int icnss_get_ce_id(int irq)
}
EXPORT_SYMBOL(icnss_get_ce_id);
-int icnss_get_irq(int ce_id)
+int icnss_get_irq(struct device *dev, int ce_id)
{
int irq;
- if (!penv || !penv->pdev)
+ if (!penv || !penv->pdev || !dev)
return -ENODEV;
if (ce_id >= ICNSS_MAX_IRQ_REGISTRATIONS)
@@ -3569,7 +3604,7 @@ static int icnss_test_mode_fw_test_off(struct icnss_priv *priv)
goto out;
}
- icnss_wlan_disable(ICNSS_OFF);
+ icnss_wlan_disable(&priv->pdev->dev, ICNSS_OFF);
ret = icnss_hw_power_off(priv);
@@ -3610,7 +3645,7 @@ static int icnss_test_mode_fw_test(struct icnss_priv *priv,
set_bit(ICNSS_FW_TEST_MODE, &priv->state);
- ret = icnss_wlan_enable(NULL, mode, NULL);
+ ret = icnss_wlan_enable(&priv->pdev->dev, NULL, mode, NULL);
if (ret)
goto power_off;
diff --git a/drivers/soc/qcom/llcc_perfmon.c b/drivers/soc/qcom/llcc_perfmon.c
index 39276a9..8c86e7d 100644
--- a/drivers/soc/qcom/llcc_perfmon.c
+++ b/drivers/soc/qcom/llcc_perfmon.c
@@ -127,8 +127,11 @@ static void perfmon_counter_dump(struct llcc_perfmon_private *llcc_priv)
unsigned int i, j;
unsigned long long total;
+ if (!llcc_priv->configured_counters)
+ return;
+
llcc_bcast_write(llcc_priv, PERFMON_DUMP, MONITOR_DUMP);
- for (i = 0; i < llcc_priv->configured_counters - 1; i++) {
+ for (i = 0; i < llcc_priv->configured_counters; i++) {
total = 0;
for (j = 0; j < llcc_priv->num_banks; j++) {
regmap_read(llcc_priv->llcc_map, llcc_priv->bank_off[j]
@@ -138,15 +141,6 @@ static void perfmon_counter_dump(struct llcc_perfmon_private *llcc_priv)
llcc_priv->configured[i].counter_dump += total;
}
-
- total = 0;
- for (j = 0; j < llcc_priv->num_banks; j++) {
- regmap_read(llcc_priv->llcc_map, llcc_priv->bank_off[j] +
- LLCC_COUNTER_n_VALUE(i), &val);
- total += val;
- }
-
- llcc_priv->configured[i].counter_dump += total;
}
static ssize_t perfmon_counter_dump_show(struct device *dev,
@@ -288,8 +282,8 @@ static ssize_t perfmon_configure_store(struct device *dev,
llcc_priv->configured[j].port_sel = port_sel;
llcc_priv->configured[j].event_sel = event_sel;
port_ops = llcc_priv->port_ops[port_sel];
- pr_info("configured event %ld counter %d on port %ld\n",
- event_sel, j, port_sel);
+ pr_info("counter %d configured for event %ld from port %ld\n",
+ j, event_sel, port_sel);
port_ops->event_config(llcc_priv, event_sel, j++, true);
if (!(llcc_priv->enables_port & (1 << port_sel)))
if (port_ops->event_enable)
@@ -355,8 +349,8 @@ static ssize_t perfmon_remove_store(struct device *dev,
llcc_priv->configured[j].port_sel = MAX_NUMBER_OF_PORTS;
llcc_priv->configured[j].event_sel = 100;
port_ops = llcc_priv->port_ops[port_sel];
- pr_info("Removed event %ld counter %d from port %ld\n",
- event_sel, j, port_sel);
+ pr_info("removed counter %d for event %ld from port %ld\n",
+ j, event_sel, port_sel);
port_ops->event_config(llcc_priv, event_sel, j++, false);
if (llcc_priv->enables_port & (1 << port_sel))
@@ -531,13 +525,13 @@ static ssize_t perfmon_start_store(struct device *dev,
val = MANUAL_MODE | MONITOR_EN;
if (llcc_priv->expires.tv64) {
- if (hrtimer_is_queued(&llcc_priv->hrtimer))
- hrtimer_forward_now(&llcc_priv->hrtimer,
- llcc_priv->expires);
- else
- hrtimer_start(&llcc_priv->hrtimer,
- llcc_priv->expires,
- HRTIMER_MODE_REL_PINNED);
+ if (hrtimer_is_queued(&llcc_priv->hrtimer))
+ hrtimer_forward_now(&llcc_priv->hrtimer,
+ llcc_priv->expires);
+ else
+ hrtimer_start(&llcc_priv->hrtimer,
+ llcc_priv->expires,
+ HRTIMER_MODE_REL_PINNED);
}
} else {
diff --git a/drivers/soc/qcom/lpm-stats.c b/drivers/soc/qcom/lpm-stats.c
index b37a4ec..4a41eee 100644
--- a/drivers/soc/qcom/lpm-stats.c
+++ b/drivers/soc/qcom/lpm-stats.c
@@ -46,7 +46,7 @@ struct level_stats {
int64_t max_time[CONFIG_MSM_IDLE_STATS_BUCKET_COUNT];
int success_count;
int failed_count;
- int64_t total_time;
+ uint64_t total_time;
uint64_t enter_time;
};
@@ -105,7 +105,7 @@ static void level_stats_print(struct seq_file *m, struct level_stats *stats)
int i = 0;
int64_t bucket_time = 0;
char seqs[MAX_STR_LEN] = {0};
- int64_t s = stats->total_time;
+ uint64_t s = stats->total_time;
uint32_t ns = do_div(s, NSEC_PER_SEC);
snprintf(seqs, MAX_STR_LEN,
diff --git a/drivers/soc/qcom/memory_dump_v2.c b/drivers/soc/qcom/memory_dump_v2.c
index 5ed66bf..b76fe86 100644
--- a/drivers/soc/qcom/memory_dump_v2.c
+++ b/drivers/soc/qcom/memory_dump_v2.c
@@ -18,6 +18,7 @@
#include <linux/of.h>
#include <linux/of_address.h>
#include <soc/qcom/memory_dump.h>
+#include <soc/qcom/minidump.h>
#include <soc/qcom/scm.h>
#include <linux/of_device.h>
#include <linux/dma-mapping.h>
@@ -38,7 +39,18 @@ struct msm_memory_dump {
struct msm_dump_table *table;
};
+struct dump_vaddr_entry {
+ uint32_t id;
+ void *dump_vaddr;
+};
+
+struct msm_mem_dump_vaddr_tbl {
+ uint8_t num_node;
+ struct dump_vaddr_entry *entries;
+};
+
static struct msm_memory_dump memdump;
+static struct msm_mem_dump_vaddr_tbl vaddr_tbl;
uint32_t msm_dump_table_version(void)
{
@@ -89,6 +101,33 @@ static struct msm_dump_table *msm_dump_get_table(enum msm_dump_table_ids id)
return table;
}
+static int msm_dump_data_add_minidump(struct msm_dump_entry *entry)
+{
+ struct msm_dump_data *data;
+ struct md_region md_entry;
+
+ data = (struct msm_dump_data *)(phys_to_virt(entry->addr));
+
+ if (!data->addr || !data->len)
+ return -EINVAL;
+
+ if (!strcmp(data->name, "")) {
+ pr_debug("Entry name is NULL; using ID %d for minidump\n",
+ entry->id);
+ snprintf(md_entry.name, sizeof(md_entry.name), "KMDT0x%X",
+ entry->id);
+ } else {
+ strlcpy(md_entry.name, data->name, sizeof(md_entry.name));
+ }
+
+ md_entry.phys_addr = data->addr;
+ md_entry.virt_addr = (uintptr_t)phys_to_virt(data->addr);
+ md_entry.size = data->len;
+ md_entry.id = entry->id;
+
+ return msm_minidump_add_region(&md_entry);
+}
+
int msm_dump_data_register(enum msm_dump_table_ids id,
struct msm_dump_entry *entry)
{
@@ -109,10 +148,36 @@ int msm_dump_data_register(enum msm_dump_table_ids id,
table->num_entries++;
dmac_flush_range(table, (void *)table + sizeof(struct msm_dump_table));
+
+ if (msm_dump_data_add_minidump(entry))
+ pr_err("Failed to add entry in Minidump table\n");
+
return 0;
}
EXPORT_SYMBOL(msm_dump_data_register);
+void *get_msm_dump_ptr(enum msm_dump_data_ids id)
+{
+ int i;
+
+ if (!vaddr_tbl.entries)
+ return NULL;
+
+ if (id > MSM_DUMP_DATA_MAX)
+ return NULL;
+
+ for (i = 0; i < vaddr_tbl.num_node; i++) {
+ if (vaddr_tbl.entries[i].id == id)
+ break;
+ }
+
+ if (i == vaddr_tbl.num_node)
+ return NULL;
+
+ return (void *)vaddr_tbl.entries[i].dump_vaddr;
+}
+EXPORT_SYMBOL(get_msm_dump_ptr);
+
static int __init init_memory_dump(void)
{
struct msm_dump_table *table;
@@ -209,6 +274,14 @@ static int mem_dump_probe(struct platform_device *pdev)
struct msm_dump_entry dump_entry;
int ret;
u32 size, id;
+ int i = 0;
+
+ vaddr_tbl.num_node = of_get_child_count(node);
+ vaddr_tbl.entries = devm_kcalloc(&pdev->dev, vaddr_tbl.num_node,
+ sizeof(struct dump_vaddr_entry),
+ GFP_KERNEL);
+ if (!vaddr_tbl.entries)
+ dev_err(&pdev->dev, "Unable to allocate mem for ptr addr\n");
for_each_available_child_of_node(node, child_node) {
ret = of_property_read_u32(child_node, "qcom,dump-size", &size);
@@ -245,6 +318,9 @@ static int mem_dump_probe(struct platform_device *pdev)
dump_data->addr = dump_addr;
dump_data->len = size;
+ strlcpy(dump_data->name, child_node->name,
+ strlen(child_node->name) + 1);
+
dump_entry.id = id;
dump_entry.addr = virt_to_phys(dump_data);
ret = msm_dump_data_register(MSM_DUMP_TABLE_APPS, &dump_entry);
@@ -254,6 +330,10 @@ static int mem_dump_probe(struct platform_device *pdev)
dma_free_coherent(&pdev->dev, size, dump_vaddr,
dump_addr);
devm_kfree(&pdev->dev, dump_data);
+ } else if (vaddr_tbl.entries) {
+ vaddr_tbl.entries[i].id = id;
+ vaddr_tbl.entries[i].dump_vaddr = dump_vaddr;
+ i++;
}
}
return 0;
diff --git a/drivers/soc/qcom/minidump_log.c b/drivers/soc/qcom/minidump_log.c
new file mode 100644
index 0000000..c65dfd9
--- /dev/null
+++ b/drivers/soc/qcom/minidump_log.c
@@ -0,0 +1,104 @@
+/* Copyright (c) 2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/kallsyms.h>
+#include <linux/slab.h>
+#include <linux/thread_info.h>
+#include <soc/qcom/minidump.h>
+#include <asm/sections.h>
+
+static void __init register_log_buf(void)
+{
+ char **log_bufp;
+ uint32_t *log_buf_lenp;
+ struct md_region md_entry;
+
+ log_bufp = (char **)kallsyms_lookup_name("log_buf");
+ log_buf_lenp = (uint32_t *)kallsyms_lookup_name("log_buf_len");
+ if (!log_bufp || !log_buf_lenp) {
+ pr_err("Unable to find log_buf by kallsyms!\n");
+ return;
+ }
+ /* Register logbuf to minidump, first idx would be from bss section */
+ strlcpy(md_entry.name, "KLOGBUF", sizeof(md_entry.name));
+ md_entry.virt_addr = (uintptr_t) (*log_bufp);
+ md_entry.phys_addr = virt_to_phys(*log_bufp);
+ md_entry.size = *log_buf_lenp;
+ if (msm_minidump_add_region(&md_entry))
+ pr_err("Failed to add logbuf in Minidump\n");
+}
+
+static void __init register_kernel_sections(void)
+{
+ struct md_region ksec_entry;
+ char *data_name = "KDATABSS";
+ const size_t static_size = __per_cpu_end - __per_cpu_start;
+ void __percpu *base = (void __percpu *)__per_cpu_start;
+ unsigned int cpu;
+
+ strlcpy(ksec_entry.name, data_name, sizeof(ksec_entry.name));
+ ksec_entry.virt_addr = (uintptr_t)_sdata;
+ ksec_entry.phys_addr = virt_to_phys(_sdata);
+ ksec_entry.size = roundup((__bss_stop - _sdata), 4);
+ if (msm_minidump_add_region(&ksec_entry))
+ pr_err("Failed to add data section in Minidump\n");
+
+ /* Add percpu static sections */
+ for_each_possible_cpu(cpu) {
+ void *start = per_cpu_ptr(base, cpu);
+
+ memset(&ksec_entry, 0, sizeof(ksec_entry));
+ scnprintf(ksec_entry.name, sizeof(ksec_entry.name),
+ "KSPERCPU%d", cpu);
+ ksec_entry.virt_addr = (uintptr_t)start;
+ ksec_entry.phys_addr = per_cpu_ptr_to_phys(start);
+ ksec_entry.size = static_size;
+ if (msm_minidump_add_region(&ksec_entry))
+ pr_err("Failed to add percpu sections in Minidump\n");
+ }
+}
+
+void dump_stack_minidump(u64 sp)
+{
+ struct md_region ksp_entry, ktsk_entry;
+ u32 cpu = smp_processor_id();
+
+ if (sp < KIMAGE_VADDR || sp > -256UL)
+ sp = current_stack_pointer;
+
+ sp &= ~(THREAD_SIZE - 1);
+ scnprintf(ksp_entry.name, sizeof(ksp_entry.name), "KSTACK%d", cpu);
+ ksp_entry.virt_addr = sp;
+ ksp_entry.phys_addr = virt_to_phys((uintptr_t *)sp);
+ ksp_entry.size = THREAD_SIZE;
+ if (msm_minidump_add_region(&ksp_entry))
+ pr_err("Failed to add stack of cpu %d in Minidump\n", cpu);
+
+ scnprintf(ktsk_entry.name, sizeof(ktsk_entry.name), "KTASK%d", cpu);
+ ktsk_entry.virt_addr = (u64)current;
+ ktsk_entry.phys_addr = virt_to_phys((uintptr_t *)current);
+ ktsk_entry.size = sizeof(struct task_struct);
+ if (msm_minidump_add_region(&ktsk_entry))
+ pr_err("Failed to add current task %d in Minidump\n", cpu);
+}
+
+static int __init msm_minidump_log_init(void)
+{
+ register_kernel_sections();
+ register_log_buf();
+ return 0;
+}
+late_initcall(msm_minidump_log_init);
diff --git a/drivers/soc/qcom/minidump_private.h b/drivers/soc/qcom/minidump_private.h
new file mode 100644
index 0000000..81ebb1c
--- /dev/null
+++ b/drivers/soc/qcom/minidump_private.h
@@ -0,0 +1,85 @@
+/* Copyright (c) 2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+#ifndef __MINIDUMP_PRIVATE_H
+#define __MINIDUMP_PRIVATE_H
+
+#define MD_REVISION 1
+#define SBL_MINIDUMP_SMEM_ID 602
+#define MAX_NUM_OF_SS 10
+#define MD_SS_HLOS_ID 0
+#define SMEM_ENTRY_SIZE 40
+
+/* Bootloader has 16 byte support, 4 bytes reserved for itself */
+#define MAX_REGION_NAME_LENGTH 16
+
+#define MD_REGION_VALID ('V' << 24 | 'A' << 16 | 'L' << 8 | 'I' << 0)
+#define MD_REGION_INVALID ('I' << 24 | 'N' << 16 | 'V' << 8 | 'A' << 0)
+#define MD_REGION_INIT ('I' << 24 | 'N' << 16 | 'I' << 8 | 'T' << 0)
+#define MD_REGION_NOINIT 0
+
+#define MD_SS_ENCR_REQ (0 << 24 | 'Y' << 16 | 'E' << 8 | 'S' << 0)
+#define MD_SS_ENCR_NOTREQ (0 << 24 | 0 << 16 | 'N' << 8 | 'R' << 0)
+#define MD_SS_ENCR_NONE ('N' << 24 | 'O' << 16 | 'N' << 8 | 'E' << 0)
+#define MD_SS_ENCR_DONE ('D' << 24 | 'O' << 16 | 'N' << 8 | 'E' << 0)
+#define MD_SS_ENCR_START ('S' << 24 | 'T' << 16 | 'R' << 8 | 'T' << 0)
+#define MD_SS_ENABLED ('E' << 24 | 'N' << 16 | 'B' << 8 | 'L' << 0)
+#define MD_SS_DISABLED ('D' << 24 | 'S' << 16 | 'B' << 8 | 'L' << 0)
+
+/**
+ * md_ss_region - Minidump region
+ * @name : Name of the region to be dumped
+ * @seq_num : Used to differentiate regions with the same name.
+ * @md_valid : If set to 1, this entry is to be dumped
+ * @region_base_address : Physical address of region to be dumped
+ * @region_size : Size of the region
+ */
+struct md_ss_region {
+ char name[MAX_REGION_NAME_LENGTH];
+ u32 seq_num;
+ u32 md_valid;
+ u64 region_base_address;
+ u64 region_size;
+};
+
+/**
+ * md_ss_toc: Subsystem SMEM table of contents
+ * @md_ss_toc_init : SS toc init status
+ * @md_ss_enable_status : if set to 1, the bootloader will dump these SS regions
+ * @encryption_status: Encryption status for this subsystem
+ * @encryption_required : Decides whether to encrypt the SS regions
+ * @ss_region_count : Number of regions added in this SS toc
+ * @md_ss_smem_regions_baseptr : base pointer of the subsystem's regions
+ */
+struct md_ss_toc {
+ u32 md_ss_toc_init;
+ u32 md_ss_enable_status;
+ u32 encryption_status;
+ u32 encryption_required;
+ u32 ss_region_count;
+ struct md_ss_region *md_ss_smem_regions_baseptr;
+};
+
+/**
+ * md_global_toc: Global table of contents
+ * @md_toc_init : Global Minidump init status
+ * @md_revision : Minidump revision
+ * @md_enable_status : Minidump enable status
+ * @md_ss_toc : Array of subsystems toc
+ */
+struct md_global_toc {
+ u32 md_toc_init;
+ u32 md_revision;
+ u32 md_enable_status;
+ struct md_ss_toc md_ss_toc[MAX_NUM_OF_SS];
+};
+
+#endif
diff --git a/drivers/soc/qcom/msm-spm.c b/drivers/soc/qcom/msm-spm.c
index bffe3f3..3033a4a 100644
--- a/drivers/soc/qcom/msm-spm.c
+++ b/drivers/soc/qcom/msm-spm.c
@@ -170,7 +170,7 @@ static inline uint32_t msm_spm_drv_get_num_spm_entry(
struct msm_spm_driver_data *dev)
{
if (!dev)
- return;
+ return -ENODEV;
msm_spm_drv_load_shadow(dev, MSM_SPM_REG_SAW_ID);
return (dev->reg_shadow[MSM_SPM_REG_SAW_ID] >> 24) & 0xFF;
@@ -198,7 +198,7 @@ static inline void msm_spm_drv_set_vctl2(struct msm_spm_driver_data *dev,
* Ensure that vctl_port is always set to 0.
*/
if (dev->vctl_port) {
- WARN();
+ __WARN();
return;
}
diff --git a/drivers/soc/qcom/msm_bus/msm_bus_dbg_voter.c b/drivers/soc/qcom/msm_bus/msm_bus_dbg_voter.c
index 6c69bec..8e1fc0a 100644
--- a/drivers/soc/qcom/msm_bus/msm_bus_dbg_voter.c
+++ b/drivers/soc/qcom/msm_bus/msm_bus_dbg_voter.c
@@ -27,6 +27,7 @@ struct msm_bus_floor_client_type {
};
static struct class *bus_floor_class;
+static DEFINE_RT_MUTEX(msm_bus_floor_vote_lock);
#define MAX_VOTER_NAME (50)
#define DEFAULT_NODE_WIDTH (8)
#define DBG_NAME(s) (strnstr(s, "-", 7) + 1)
@@ -64,18 +65,22 @@ static ssize_t bus_floor_active_only_store(struct device *dev,
{
struct msm_bus_floor_client_type *cl;
+ rt_mutex_lock(&msm_bus_floor_vote_lock);
cl = dev_get_drvdata(dev);
if (!cl) {
pr_err("%s: Can't find cl", __func__);
+ rt_mutex_unlock(&msm_bus_floor_vote_lock);
return 0;
}
if (kstrtoint(buf, 10, &cl->active_only) != 0) {
pr_err("%s:return error", __func__);
+ rt_mutex_unlock(&msm_bus_floor_vote_lock);
return -EINVAL;
}
+ rt_mutex_unlock(&msm_bus_floor_vote_lock);
return n;
}
@@ -100,20 +105,24 @@ static ssize_t bus_floor_vote_store(struct device *dev,
struct msm_bus_floor_client_type *cl;
int ret = 0;
+ rt_mutex_lock(&msm_bus_floor_vote_lock);
cl = dev_get_drvdata(dev);
if (!cl) {
pr_err("%s: Can't find cl", __func__);
+ rt_mutex_unlock(&msm_bus_floor_vote_lock);
return 0;
}
if (kstrtoull(buf, 10, &cl->cur_vote_hz) != 0) {
pr_err("%s:return error", __func__);
+ rt_mutex_unlock(&msm_bus_floor_vote_lock);
return -EINVAL;
}
ret = msm_bus_floor_vote_context(dev_name(dev), cl->cur_vote_hz,
cl->active_only);
+ rt_mutex_unlock(&msm_bus_floor_vote_lock);
return n;
}
@@ -126,15 +135,18 @@ static ssize_t bus_floor_vote_store_api(struct device *dev,
char name[10];
u64 vote_khz = 0;
+ rt_mutex_lock(&msm_bus_floor_vote_lock);
cl = dev_get_drvdata(dev);
if (!cl) {
pr_err("%s: Can't find cl", __func__);
+ rt_mutex_unlock(&msm_bus_floor_vote_lock);
return 0;
}
if (sscanf(buf, "%9s %llu", name, &vote_khz) != 2) {
pr_err("%s:return error", __func__);
+ rt_mutex_unlock(&msm_bus_floor_vote_lock);
return -EINVAL;
}
@@ -142,6 +154,7 @@ static ssize_t bus_floor_vote_store_api(struct device *dev,
__func__, name, vote_khz);
ret = msm_bus_floor_vote(name, vote_khz);
+ rt_mutex_unlock(&msm_bus_floor_vote_lock);
return n;
}
diff --git a/drivers/soc/qcom/msm_minidump.c b/drivers/soc/qcom/msm_minidump.c
new file mode 100644
index 0000000..3fe62f1
--- /dev/null
+++ b/drivers/soc/qcom/msm_minidump.c
@@ -0,0 +1,380 @@
+/* Copyright (c) 2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+#define pr_fmt(fmt) "Minidump: " fmt
+
+#include <linux/init.h>
+#include <linux/export.h>
+#include <linux/kernel.h>
+#include <linux/err.h>
+#include <linux/elf.h>
+#include <linux/errno.h>
+#include <linux/string.h>
+#include <linux/slab.h>
+#include <soc/qcom/smem.h>
+#include <soc/qcom/minidump.h>
+#include "minidump_private.h"
+
+#define MAX_NUM_ENTRIES (CONFIG_MINIDUMP_MAX_ENTRIES + 1)
+#define MAX_STRTBL_SIZE (MAX_NUM_ENTRIES * MAX_REGION_NAME_LENGTH)
+
+/**
+ * md_table : Local Minidump toc holder
+ * @revision : Minidump table revision
+ * @num_regions : Number of regions requested
+ * @md_ss_toc : HLOS toc pointer
+ * @md_gbl_toc : Global toc pointer
+ * @md_regions : HLOS regions base pointer
+ * @entry : array of HLOS regions requested
+ */
+struct md_table {
+ u32 revision;
+ u32 num_regions;
+ struct md_ss_toc *md_ss_toc;
+ struct md_global_toc *md_gbl_toc;
+ struct md_ss_region *md_regions;
+ struct md_region entry[MAX_NUM_ENTRIES];
+};
+
+/**
+ * md_elfhdr: Minidump table elf header
+ * @ehdr: elf main header
+ * @shdr: Section header
+ * @phdr: Program header
+ * @elf_offset: section offset in elf
+ * @strtable_idx: string table current index position
+ */
+struct md_elfhdr {
+ struct elfhdr *ehdr;
+ struct elf_shdr *shdr;
+ struct elf_phdr *phdr;
+ u64 elf_offset;
+ u64 strtable_idx;
+};
+
+/* Protect elfheader and smem table from deferred calls contention */
+static DEFINE_SPINLOCK(mdt_lock);
+static struct md_table minidump_table;
+static struct md_elfhdr minidump_elfheader;
+
+/* Number of pending entries to be added in ToC regions */
+static unsigned int pendings;
+
+static inline char *elf_lookup_string(struct elfhdr *hdr, int offset)
+{
+ char *strtab = elf_str_table(hdr);
+
+ if ((strtab == NULL) || (minidump_elfheader.strtable_idx < offset))
+ return NULL;
+ return strtab + offset;
+}
+
+static inline unsigned int set_section_name(const char *name)
+{
+ char *strtab = elf_str_table(minidump_elfheader.ehdr);
+ int idx = minidump_elfheader.strtable_idx;
+ int ret = 0;
+
+ if ((strtab == NULL) || (name == NULL))
+ return 0;
+
+ ret = idx;
+ idx += strlcpy((strtab + idx), name, MAX_REGION_NAME_LENGTH);
+ minidump_elfheader.strtable_idx = idx + 1;
+
+ return ret;
+}
+
+static inline bool md_check_name(const char *name)
+{
+ struct md_region *mde = minidump_table.entry;
+ int i, regno = minidump_table.num_regions;
+
+ for (i = 0; i < regno; i++, mde++)
+ if (!strcmp(mde->name, name))
+ return true;
+ return false;
+}
+
+/* Return next seq no, if name already exists in the table */
+static inline int md_get_seq_num(const char *name)
+{
+ struct md_ss_region *mde = minidump_table.md_regions;
+ int i, regno = minidump_table.md_ss_toc->ss_region_count;
+ int seqno = 0;
+
+ for (i = 0; i < (regno - 1); i++, mde++) {
+ if (!strcmp(mde->name, name)) {
+ if (mde->seq_num >= seqno)
+ seqno = mde->seq_num + 1;
+ }
+ }
+ return seqno;
+}
+
+/* Update Mini dump table in SMEM */
+static void md_update_ss_toc(const struct md_region *entry)
+{
+ struct md_ss_region *mdr;
+ struct elfhdr *hdr = minidump_elfheader.ehdr;
+ struct elf_shdr *shdr = elf_section(hdr, hdr->e_shnum++);
+ struct elf_phdr *phdr = elf_program(hdr, hdr->e_phnum++);
+ int reg_cnt = minidump_table.md_ss_toc->ss_region_count++;
+
+ mdr = &minidump_table.md_regions[reg_cnt];
+
+ strlcpy(mdr->name, entry->name, sizeof(mdr->name));
+ mdr->region_base_address = entry->phys_addr;
+ mdr->region_size = entry->size;
+ mdr->seq_num = md_get_seq_num(entry->name);
+
+ /* Update elf header */
+ shdr->sh_type = SHT_PROGBITS;
+ shdr->sh_name = set_section_name(mdr->name);
+ shdr->sh_addr = (elf_addr_t)entry->virt_addr;
+ shdr->sh_size = mdr->region_size;
+ shdr->sh_flags = SHF_WRITE;
+ shdr->sh_offset = minidump_elfheader.elf_offset;
+ shdr->sh_entsize = 0;
+
+ phdr->p_type = PT_LOAD;
+ phdr->p_offset = minidump_elfheader.elf_offset;
+ phdr->p_vaddr = entry->virt_addr;
+ phdr->p_paddr = entry->phys_addr;
+ phdr->p_filesz = phdr->p_memsz = mdr->region_size;
+ phdr->p_flags = PF_R | PF_W;
+
+ minidump_elfheader.elf_offset += shdr->sh_size;
+ mdr->md_valid = MD_REGION_VALID;
+}
+
+bool msm_minidump_enabled(void)
+{
+ bool ret = false;
+
+ spin_lock(&mdt_lock);
+ if (minidump_table.md_ss_toc &&
+ (minidump_table.md_ss_toc->md_ss_enable_status ==
+ MD_SS_ENABLED))
+ ret = true;
+ spin_unlock(&mdt_lock);
+ return ret;
+}
+EXPORT_SYMBOL(msm_minidump_enabled);
+
+int msm_minidump_add_region(const struct md_region *entry)
+{
+ u32 entries;
+ struct md_region *mdr;
+ int ret = 0;
+
+ if (!entry)
+ return -EINVAL;
+
+ if ((strlen(entry->name) > MAX_NAME_LENGTH) ||
+ md_check_name(entry->name) || !entry->virt_addr) {
+ pr_err("Invalid entry details\n");
+ return -EINVAL;
+ }
+
+ if (!IS_ALIGNED(entry->size, 4)) {
+ pr_err("size should be 4 byte aligned\n");
+ return -EINVAL;
+ }
+
+ spin_lock(&mdt_lock);
+ entries = minidump_table.num_regions;
+ if (entries >= MAX_NUM_ENTRIES) {
+ pr_err("Maximum entries reached.\n");
+ spin_unlock(&mdt_lock);
+ return -ENOMEM;
+ }
+
+ mdr = &minidump_table.entry[entries];
+ strlcpy(mdr->name, entry->name, sizeof(mdr->name));
+ mdr->virt_addr = entry->virt_addr;
+ mdr->phys_addr = entry->phys_addr;
+ mdr->size = entry->size;
+ mdr->id = entry->id;
+
+ minidump_table.num_regions = entries + 1;
+
+ if (minidump_table.md_ss_toc &&
+ (minidump_table.md_ss_toc->md_ss_enable_status ==
+ MD_SS_ENABLED))
+ md_update_ss_toc(entry);
+ else
+ pendings++;
+
+ spin_unlock(&mdt_lock);
+
+ return ret;
+}
+EXPORT_SYMBOL(msm_minidump_add_region);
+
+static int msm_minidump_add_header(void)
+{
+ struct md_ss_region *mdreg = &minidump_table.md_regions[0];
+ struct elfhdr *ehdr;
+ struct elf_shdr *shdr;
+ struct elf_phdr *phdr;
+ unsigned int strtbl_off, elfh_size, phdr_off;
+ char *banner;
+
+ /* Header buffer contains:
+ * elf header, MAX_NUM_ENTRIES+4 section and program elf headers,
+ * string table section and linux banner.
+ */
+ elfh_size = sizeof(*ehdr) + MAX_STRTBL_SIZE + (strlen(linux_banner) +
+ 1) + ((sizeof(*shdr) + sizeof(*phdr)) * (MAX_NUM_ENTRIES + 4));
+ elfh_size = ALIGN(elfh_size, 4);
+
+ minidump_elfheader.ehdr = kzalloc(elfh_size, GFP_KERNEL);
+ if (!minidump_elfheader.ehdr)
+ return -ENOMEM;
+
+ strlcpy(mdreg->name, "KELF_HEADER", sizeof(mdreg->name));
+ mdreg->region_base_address = virt_to_phys(minidump_elfheader.ehdr);
+ mdreg->region_size = elfh_size;
+
+ ehdr = minidump_elfheader.ehdr;
+ /* Assign section/program headers offset */
+ minidump_elfheader.shdr = shdr = (struct elf_shdr *)(ehdr + 1);
+ minidump_elfheader.phdr = phdr =
+ (struct elf_phdr *)(shdr + MAX_NUM_ENTRIES);
+ phdr_off = sizeof(*ehdr) + (sizeof(*shdr) * MAX_NUM_ENTRIES);
+
+ memcpy(ehdr->e_ident, ELFMAG, SELFMAG);
+ ehdr->e_ident[EI_CLASS] = ELF_CLASS;
+ ehdr->e_ident[EI_DATA] = ELF_DATA;
+ ehdr->e_ident[EI_VERSION] = EV_CURRENT;
+ ehdr->e_ident[EI_OSABI] = ELF_OSABI;
+ ehdr->e_type = ET_CORE;
+ ehdr->e_machine = ELF_ARCH;
+ ehdr->e_version = EV_CURRENT;
+ ehdr->e_ehsize = sizeof(*ehdr);
+ ehdr->e_phoff = phdr_off;
+ ehdr->e_phentsize = sizeof(*phdr);
+ ehdr->e_shoff = sizeof(*ehdr);
+ ehdr->e_shentsize = sizeof(*shdr);
+ ehdr->e_shstrndx = 1;
+
+ minidump_elfheader.elf_offset = elfh_size;
+
+ /*
+ * First section header should be NULL,
+ * 2nd section is string table.
+ */
+ minidump_elfheader.strtable_idx = 1;
+ strtbl_off = sizeof(*ehdr) +
+ ((sizeof(*phdr) + sizeof(*shdr)) * MAX_NUM_ENTRIES);
+ shdr++;
+ shdr->sh_type = SHT_STRTAB;
+ shdr->sh_offset = (elf_addr_t)strtbl_off;
+ shdr->sh_size = MAX_STRTBL_SIZE;
+ shdr->sh_entsize = 0;
+ shdr->sh_flags = 0;
+ shdr->sh_name = set_section_name("STR_TBL");
+ shdr++;
+
+ /* 3rd section is for minidump_table VA, used by parsers */
+ shdr->sh_type = SHT_PROGBITS;
+ shdr->sh_entsize = 0;
+ shdr->sh_flags = 0;
+ shdr->sh_addr = (elf_addr_t)&minidump_table;
+ shdr->sh_name = set_section_name("minidump_table");
+ shdr++;
+
+ /* 4th section is linux banner */
+ banner = (char *)ehdr + strtbl_off + MAX_STRTBL_SIZE;
+ strlcpy(banner, linux_banner, strlen(linux_banner) + 1);
+
+ shdr->sh_type = SHT_PROGBITS;
+ shdr->sh_offset = (elf_addr_t)(strtbl_off + MAX_STRTBL_SIZE);
+ shdr->sh_size = strlen(linux_banner) + 1;
+ shdr->sh_addr = (elf_addr_t)linux_banner;
+ shdr->sh_entsize = 0;
+ shdr->sh_flags = SHF_WRITE;
+ shdr->sh_name = set_section_name("linux_banner");
+
+ phdr->p_type = PT_LOAD;
+ phdr->p_offset = (elf_addr_t)(strtbl_off + MAX_STRTBL_SIZE);
+ phdr->p_vaddr = (elf_addr_t)linux_banner;
+ phdr->p_paddr = virt_to_phys(linux_banner);
+ phdr->p_filesz = phdr->p_memsz = strlen(linux_banner) + 1;
+ phdr->p_flags = PF_R | PF_W;
+
+ /* Update headers count */
+ ehdr->e_phnum = 1;
+ ehdr->e_shnum = 4;
+
+ mdreg->md_valid = MD_REGION_VALID;
+ return 0;
+}
+
+static int __init msm_minidump_init(void)
+{
+ unsigned int i, size;
+ struct md_region *mdr;
+ struct md_global_toc *md_global_toc;
+ struct md_ss_toc *md_ss_toc;
+
+ /* Get Minidump table */
+ md_global_toc = smem_get_entry(SBL_MINIDUMP_SMEM_ID, &size, 0,
+ SMEM_ANY_HOST_FLAG);
+ if (IS_ERR_OR_NULL(md_global_toc)) {
+ pr_err("SMEM is not initialized.\n");
+ return -ENODEV;
+ }
+
+ /* Check global minidump support initialization */
+ if (!md_global_toc->md_toc_init) {
+ pr_err("System Minidump TOC not initialized\n");
+ return -ENODEV;
+ }
+
+ minidump_table.md_gbl_toc = md_global_toc;
+ minidump_table.revision = md_global_toc->md_revision;
+ md_ss_toc = &md_global_toc->md_ss_toc[MD_SS_HLOS_ID];
+
+ md_ss_toc->encryption_status = MD_SS_ENCR_NONE;
+ md_ss_toc->encryption_required = MD_SS_ENCR_REQ;
+
+ minidump_table.md_ss_toc = md_ss_toc;
+ minidump_table.md_regions = kzalloc((MAX_NUM_ENTRIES *
+ sizeof(struct md_ss_region)), GFP_KERNEL);
+ if (!minidump_table.md_regions)
+ return -ENOMEM;
+
+ md_ss_toc->md_ss_smem_regions_baseptr =
+ (void *)virt_to_phys(minidump_table.md_regions);
+
+ /* First entry would be ELF header */
+ md_ss_toc->ss_region_count = 1;
+ msm_minidump_add_header();
+
+ /* Add pending entries to HLOS TOC */
+ spin_lock(&mdt_lock);
+ md_ss_toc->md_ss_toc_init = 1;
+ md_ss_toc->md_ss_enable_status = MD_SS_ENABLED;
+ for (i = 0; i < pendings; i++) {
+ mdr = &minidump_table.entry[i];
+ md_update_ss_toc(mdr);
+ }
+
+ pendings = 0;
+ spin_unlock(&mdt_lock);
+
+ pr_info("Enabled with max number of regions %d\n",
+ CONFIG_MINIDUMP_MAX_ENTRIES);
+
+ return 0;
+}
+subsys_initcall(msm_minidump_init);
diff --git a/drivers/soc/qcom/msm_smd.c b/drivers/soc/qcom/msm_smd.c
new file mode 100644
index 0000000..1631984
--- /dev/null
+++ b/drivers/soc/qcom/msm_smd.c
@@ -0,0 +1,3254 @@
+/* drivers/soc/qcom/msm_smd.c
+ *
+ * Copyright (C) 2007 Google, Inc.
+ * Copyright (c) 2008-2017, The Linux Foundation. All rights reserved.
+ * Author: Brian Swetland <swetland@google.com>
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#include <linux/platform_device.h>
+#include <linux/module.h>
+#include <linux/fs.h>
+#include <linux/cdev.h>
+#include <linux/device.h>
+#include <linux/wait.h>
+#include <linux/interrupt.h>
+#include <linux/irq.h>
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <linux/delay.h>
+#include <linux/io.h>
+#include <linux/termios.h>
+#include <linux/ctype.h>
+#include <linux/remote_spinlock.h>
+#include <linux/uaccess.h>
+#include <linux/kfifo.h>
+#include <linux/pm.h>
+#include <linux/notifier.h>
+#include <linux/suspend.h>
+#include <linux/of.h>
+#include <linux/of_irq.h>
+#include <linux/ipc_logging.h>
+
+#include <soc/qcom/ramdump.h>
+#include <soc/qcom/smd.h>
+#include <soc/qcom/smem.h>
+#include <soc/qcom/subsystem_notif.h>
+#include <soc/qcom/subsystem_restart.h>
+
+#include "smd_private.h"
+#include "smem_private.h"
+
+#define SMSM_SNAPSHOT_CNT 64
+#define SMSM_SNAPSHOT_SIZE ((SMSM_NUM_ENTRIES + 1) * 4 + sizeof(uint64_t))
+#define RSPIN_INIT_WAIT_MS 1000
+#define SMD_FIFO_FULL_RESERVE 4
+#define SMD_FIFO_ADDR_ALIGN_BYTES 3
+
+uint32_t SMSM_NUM_ENTRIES = 8;
+uint32_t SMSM_NUM_HOSTS = 3;
+
+/* Legacy SMSM interrupt notifications */
+#define LEGACY_MODEM_SMSM_MASK (SMSM_RESET | SMSM_INIT | SMSM_SMDINIT)
+
+struct smsm_shared_info {
+ uint32_t *state;
+ uint32_t *intr_mask;
+ uint32_t *intr_mux;
+};
+
+static struct smsm_shared_info smsm_info;
+static struct kfifo smsm_snapshot_fifo;
+static struct wakeup_source smsm_snapshot_ws;
+static int smsm_snapshot_count;
+static DEFINE_SPINLOCK(smsm_snapshot_count_lock);
+
+struct smsm_size_info_type {
+ uint32_t num_hosts;
+ uint32_t num_entries;
+ uint32_t reserved0;
+ uint32_t reserved1;
+};
+
+struct smsm_state_cb_info {
+ struct list_head cb_list;
+ uint32_t mask;
+ void *data;
+ void (*notify)(void *data, uint32_t old_state, uint32_t new_state);
+};
+
+struct smsm_state_info {
+ struct list_head callbacks;
+ uint32_t last_value;
+ uint32_t intr_mask_set;
+ uint32_t intr_mask_clear;
+};
+
+static irqreturn_t smsm_irq_handler(int irq, void *data);
+
+/*
+ * Interrupt configuration consists of static configuration for the supported
+ * processors that is done here along with interrupt configuration that is
+ * added by the separate initialization modules (device tree, platform data, or
+ * hard coded).
+ */
+static struct interrupt_config private_intr_config[NUM_SMD_SUBSYSTEMS] = {
+ [SMD_MODEM] = {
+ .smd.irq_handler = smd_modem_irq_handler,
+ .smsm.irq_handler = smsm_modem_irq_handler,
+ },
+ [SMD_Q6] = {
+ .smd.irq_handler = smd_dsp_irq_handler,
+ .smsm.irq_handler = smsm_dsp_irq_handler,
+ },
+ [SMD_DSPS] = {
+ .smd.irq_handler = smd_dsps_irq_handler,
+ .smsm.irq_handler = smsm_dsps_irq_handler,
+ },
+ [SMD_WCNSS] = {
+ .smd.irq_handler = smd_wcnss_irq_handler,
+ .smsm.irq_handler = smsm_wcnss_irq_handler,
+ },
+ [SMD_MODEM_Q6_FW] = {
+ .smd.irq_handler = smd_modemfw_irq_handler,
+ .smsm.irq_handler = NULL, /* does not support smsm */
+ },
+ [SMD_RPM] = {
+ .smd.irq_handler = smd_rpm_irq_handler,
+ .smsm.irq_handler = NULL, /* does not support smsm */
+ },
+};
+
+struct interrupt_stat interrupt_stats[NUM_SMD_SUBSYSTEMS];
+
+#define SMSM_STATE_ADDR(entry) (smsm_info.state + entry)
+#define SMSM_INTR_MASK_ADDR(entry, host) (smsm_info.intr_mask + \
+ entry * SMSM_NUM_HOSTS + host)
+#define SMSM_INTR_MUX_ADDR(entry) (smsm_info.intr_mux + entry)
+
+int msm_smd_debug_mask = MSM_SMD_POWER_INFO | MSM_SMD_INFO |
+ MSM_SMSM_POWER_INFO;
+module_param_named(debug_mask, msm_smd_debug_mask, int, 0664);
+void *smd_log_ctx;
+void *smsm_log_ctx;
+#define NUM_LOG_PAGES 4
+
+#define IPC_LOG_SMD(level, x...) do { \
+ if (smd_log_ctx) \
+ ipc_log_string(smd_log_ctx, x); \
+ else \
+ printk(level x); \
+ } while (0)
+
+#define IPC_LOG_SMSM(level, x...) do { \
+ if (smsm_log_ctx) \
+ ipc_log_string(smsm_log_ctx, x); \
+ else \
+ printk(level x); \
+ } while (0)
+
+#if defined(CONFIG_MSM_SMD_DEBUG)
+#define SMD_DBG(x...) do { \
+ if (msm_smd_debug_mask & MSM_SMD_DEBUG) \
+ IPC_LOG_SMD(KERN_DEBUG, x); \
+ } while (0)
+
+#define SMSM_DBG(x...) do { \
+ if (msm_smd_debug_mask & MSM_SMSM_DEBUG) \
+ IPC_LOG_SMSM(KERN_DEBUG, x); \
+ } while (0)
+
+#define SMD_INFO(x...) do { \
+ if (msm_smd_debug_mask & MSM_SMD_INFO) \
+ IPC_LOG_SMD(KERN_INFO, x); \
+ } while (0)
+
+#define SMSM_INFO(x...) do { \
+ if (msm_smd_debug_mask & MSM_SMSM_INFO) \
+ IPC_LOG_SMSM(KERN_INFO, x); \
+ } while (0)
+
+#define SMD_POWER_INFO(x...) do { \
+ if (msm_smd_debug_mask & MSM_SMD_POWER_INFO) \
+ IPC_LOG_SMD(KERN_INFO, x); \
+ } while (0)
+
+#define SMSM_POWER_INFO(x...) do { \
+ if (msm_smd_debug_mask & MSM_SMSM_POWER_INFO) \
+ IPC_LOG_SMSM(KERN_INFO, x); \
+ } while (0)
+#else
+#define SMD_DBG(x...) do { } while (0)
+#define SMSM_DBG(x...) do { } while (0)
+#define SMD_INFO(x...) do { } while (0)
+#define SMSM_INFO(x...) do { } while (0)
+#define SMD_POWER_INFO(x...) do { } while (0)
+#define SMSM_POWER_INFO(x...) do { } while (0)
+#endif
+
+static void smd_fake_irq_handler(unsigned long arg);
+static void smsm_cb_snapshot(uint32_t use_wakeup_source);
+
+static struct workqueue_struct *smsm_cb_wq;
+static void notify_smsm_cb_clients_worker(struct work_struct *work);
+static DECLARE_WORK(smsm_cb_work, notify_smsm_cb_clients_worker);
+static DEFINE_MUTEX(smsm_lock);
+static struct smsm_state_info *smsm_states;
+
+static int smd_stream_write_avail(struct smd_channel *ch);
+static int smd_stream_read_avail(struct smd_channel *ch);
+
+static bool pid_is_on_edge(uint32_t edge_num, unsigned int pid);
+
+static inline void smd_write_intr(unsigned int val, void __iomem *addr)
+{
+ wmb(); /* Make sure memory is visible before doorbell */
+ __raw_writel(val, addr);
+}
+
+/**
+ * smd_memcpy_to_fifo() - copy to SMD channel FIFO
+ * @dest: Destination address
+ * @src: Source address
+ * @num_bytes: Number of bytes to copy
+ *
+ * @return: Address of destination
+ *
+ * This function copies num_bytes from src to dest. This is used as the memcpy
+ * function to copy data to SMD FIFO in case the SMD FIFO is naturally aligned.
+ */
+static void *smd_memcpy_to_fifo(void *dest, const void *src, size_t num_bytes)
+{
+ memcpy_toio(dest, src, num_bytes);
+ return dest;
+}
+
+/**
+ * smd_memcpy_from_fifo() - copy from SMD channel FIFO
+ * @dest: Destination address
+ * @src: Source address
+ * @num_bytes: Number of bytes to copy
+ *
+ * @return: Address of destination
+ *
+ * This function copies num_bytes from src to dest. This is used as the memcpy
+ * function to copy data from SMD FIFO in case the SMD FIFO is naturally
+ * aligned.
+ */
+static void *smd_memcpy_from_fifo(void *dest, const void *src, size_t num_bytes)
+{
+ memcpy_fromio(dest, src, num_bytes);
+ return dest;
+}
+
+/**
+ * smd_memcpy32_to_fifo() - Copy to SMD channel FIFO
+ *
+ * @dest: Destination address
+ * @src: Source address
+ * @num_bytes: Number of bytes to copy
+ *
+ * @return: On success, address of destination
+ *
+ * This function copies num_bytes data from src to dest. This is used as the
+ * memcpy function to copy data to SMD FIFO in case the SMD FIFO is 4 byte
+ * aligned.
+ */
+static void *smd_memcpy32_to_fifo(void *dest, const void *src, size_t num_bytes)
+{
+ uint32_t *dest_local = (uint32_t *)dest;
+ uint32_t *src_local = (uint32_t *)src;
+
+ WARN_ON(num_bytes & SMD_FIFO_ADDR_ALIGN_BYTES);
+ WARN_ON(!dest_local ||
+ ((uintptr_t)dest_local & SMD_FIFO_ADDR_ALIGN_BYTES));
+ WARN_ON(!src_local ||
+ ((uintptr_t)src_local & SMD_FIFO_ADDR_ALIGN_BYTES));
+ num_bytes /= sizeof(uint32_t);
+
+ while (num_bytes--)
+ __raw_writel_no_log(*src_local++, dest_local++);
+
+ return dest;
+}
+
+/**
+ * smd_memcpy32_from_fifo() - Copy from SMD channel FIFO
+ * @dest: Destination address
+ * @src: Source address
+ * @num_bytes: Number of bytes to copy
+ *
+ * @return: On success, destination address
+ *
+ * This function copies num_bytes data from SMD FIFO to dest. This is used as
+ * the memcpy function to copy data from SMD FIFO in case the SMD FIFO is 4 byte
+ * aligned.
+ */
+static void *smd_memcpy32_from_fifo(void *dest, const void *src,
+ size_t num_bytes)
+{
+ uint32_t *dest_local = (uint32_t *)dest;
+ uint32_t *src_local = (uint32_t *)src;
+
+ WARN_ON(num_bytes & SMD_FIFO_ADDR_ALIGN_BYTES);
+ WARN_ON(!dest_local ||
+ ((uintptr_t)dest_local & SMD_FIFO_ADDR_ALIGN_BYTES));
+ WARN_ON(!src_local ||
+ ((uintptr_t)src_local & SMD_FIFO_ADDR_ALIGN_BYTES));
+ num_bytes /= sizeof(uint32_t);
+
+ while (num_bytes--)
+ *dest_local++ = __raw_readl_no_log(src_local++);
+
+ return dest;
+}
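Stripped of the kernel's MMIO accessors, the 4-byte-aligned copy path above reduces to a word-at-a-time loop guarded by the same alignment checks. A minimal userspace sketch (plain pointer loads/stores stand in for `__raw_writel_no_log`/`__raw_readl_no_log`; `copy32` is an illustrative name, and `assert` replaces `WARN_ON`):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define FIFO_ADDR_ALIGN_BYTES 3	/* mask: low two address bits must be clear */

/* Word-at-a-time copy, valid only when the length and both addresses are
 * 32-bit aligned; plain stores stand in for the kernel's MMIO accessors. */
static void *copy32(void *dest, const void *src, size_t num_bytes)
{
	uint32_t *d = dest;
	const uint32_t *s = src;

	assert(!(num_bytes & FIFO_ADDR_ALIGN_BYTES));
	assert(!((uintptr_t)d & FIFO_ADDR_ALIGN_BYTES));
	assert(!((uintptr_t)s & FIFO_ADDR_ALIGN_BYTES));

	num_bytes /= sizeof(uint32_t);
	while (num_bytes--)
		*d++ = *s++;
	return dest;
}
```

The mask of 3 rejects any pointer or length whose low two bits are set, i.e. anything that is not a multiple of 4 bytes.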
+
+static inline void log_notify(uint32_t subsystem, smd_channel_t *ch)
+{
+ const char *subsys = smd_edge_to_subsystem(subsystem);
+
+ (void) subsys;
+
+ if (!ch)
+ SMD_POWER_INFO("Apps->%s\n", subsys);
+ else
+ SMD_POWER_INFO(
+ "Apps->%s ch%d '%s': tx%d/rx%d %dr/%dw : %dr/%dw\n",
+ subsys, ch->n, ch->name,
+ ch->fifo_size -
+ (smd_stream_write_avail(ch) + 1),
+ smd_stream_read_avail(ch),
+ ch->half_ch->get_tail(ch->send),
+ ch->half_ch->get_head(ch->send),
+ ch->half_ch->get_tail(ch->recv),
+ ch->half_ch->get_head(ch->recv)
+ );
+}
+
+static inline void notify_modem_smd(smd_channel_t *ch)
+{
+ static const struct interrupt_config_item *intr
+ = &private_intr_config[SMD_MODEM].smd;
+
+ log_notify(SMD_APPS_MODEM, ch);
+ if (intr->out_base) {
+ ++interrupt_stats[SMD_MODEM].smd_out_count;
+ smd_write_intr(intr->out_bit_pos,
+ intr->out_base + intr->out_offset);
+ }
+}
+
+static inline void notify_dsp_smd(smd_channel_t *ch)
+{
+ static const struct interrupt_config_item *intr
+ = &private_intr_config[SMD_Q6].smd;
+
+ log_notify(SMD_APPS_QDSP, ch);
+ if (intr->out_base) {
+ ++interrupt_stats[SMD_Q6].smd_out_count;
+ smd_write_intr(intr->out_bit_pos,
+ intr->out_base + intr->out_offset);
+ }
+}
+
+static inline void notify_dsps_smd(smd_channel_t *ch)
+{
+ static const struct interrupt_config_item *intr
+ = &private_intr_config[SMD_DSPS].smd;
+
+ log_notify(SMD_APPS_DSPS, ch);
+ if (intr->out_base) {
+ ++interrupt_stats[SMD_DSPS].smd_out_count;
+ smd_write_intr(intr->out_bit_pos,
+ intr->out_base + intr->out_offset);
+ }
+}
+
+static inline void notify_wcnss_smd(struct smd_channel *ch)
+{
+ static const struct interrupt_config_item *intr
+ = &private_intr_config[SMD_WCNSS].smd;
+
+ log_notify(SMD_APPS_WCNSS, ch);
+ if (intr->out_base) {
+ ++interrupt_stats[SMD_WCNSS].smd_out_count;
+ smd_write_intr(intr->out_bit_pos,
+ intr->out_base + intr->out_offset);
+ }
+}
+
+static inline void notify_modemfw_smd(smd_channel_t *ch)
+{
+ static const struct interrupt_config_item *intr
+ = &private_intr_config[SMD_MODEM_Q6_FW].smd;
+
+ log_notify(SMD_APPS_Q6FW, ch);
+ if (intr->out_base) {
+ ++interrupt_stats[SMD_MODEM_Q6_FW].smd_out_count;
+ smd_write_intr(intr->out_bit_pos,
+ intr->out_base + intr->out_offset);
+ }
+}
+
+static inline void notify_rpm_smd(smd_channel_t *ch)
+{
+ static const struct interrupt_config_item *intr
+ = &private_intr_config[SMD_RPM].smd;
+
+ if (intr->out_base) {
+ log_notify(SMD_APPS_RPM, ch);
+ ++interrupt_stats[SMD_RPM].smd_out_count;
+ smd_write_intr(intr->out_bit_pos,
+ intr->out_base + intr->out_offset);
+ }
+}
+
+static inline void notify_modem_smsm(void)
+{
+ static const struct interrupt_config_item *intr
+ = &private_intr_config[SMD_MODEM].smsm;
+
+ SMSM_POWER_INFO("SMSM Apps->%s", "MODEM");
+
+ if (intr->out_base) {
+ ++interrupt_stats[SMD_MODEM].smsm_out_count;
+ smd_write_intr(intr->out_bit_pos,
+ intr->out_base + intr->out_offset);
+ }
+}
+
+static inline void notify_dsp_smsm(void)
+{
+ static const struct interrupt_config_item *intr
+ = &private_intr_config[SMD_Q6].smsm;
+
+ SMSM_POWER_INFO("SMSM Apps->%s", "ADSP");
+
+ if (intr->out_base) {
+ ++interrupt_stats[SMD_Q6].smsm_out_count;
+ smd_write_intr(intr->out_bit_pos,
+ intr->out_base + intr->out_offset);
+ }
+}
+
+static inline void notify_dsps_smsm(void)
+{
+ static const struct interrupt_config_item *intr
+ = &private_intr_config[SMD_DSPS].smsm;
+
+ SMSM_POWER_INFO("SMSM Apps->%s", "DSPS");
+
+ if (intr->out_base) {
+ ++interrupt_stats[SMD_DSPS].smsm_out_count;
+ smd_write_intr(intr->out_bit_pos,
+ intr->out_base + intr->out_offset);
+ }
+}
+
+static inline void notify_wcnss_smsm(void)
+{
+ static const struct interrupt_config_item *intr
+ = &private_intr_config[SMD_WCNSS].smsm;
+
+ SMSM_POWER_INFO("SMSM Apps->%s", "WCNSS");
+
+ if (intr->out_base) {
+ ++interrupt_stats[SMD_WCNSS].smsm_out_count;
+ smd_write_intr(intr->out_bit_pos,
+ intr->out_base + intr->out_offset);
+ }
+}
+
+static void notify_other_smsm(uint32_t smsm_entry, uint32_t notify_mask)
+{
+ if (smsm_info.intr_mask &&
+ (__raw_readl(SMSM_INTR_MASK_ADDR(smsm_entry, SMSM_MODEM))
+ & notify_mask))
+ notify_modem_smsm();
+
+ if (smsm_info.intr_mask &&
+ (__raw_readl(SMSM_INTR_MASK_ADDR(smsm_entry, SMSM_Q6))
+ & notify_mask))
+ notify_dsp_smsm();
+
+ if (smsm_info.intr_mask &&
+ (__raw_readl(SMSM_INTR_MASK_ADDR(smsm_entry, SMSM_WCNSS))
+ & notify_mask)) {
+ notify_wcnss_smsm();
+ }
+
+ if (smsm_info.intr_mask &&
+ (__raw_readl(SMSM_INTR_MASK_ADDR(smsm_entry, SMSM_DSPS))
+ & notify_mask)) {
+ notify_dsps_smsm();
+ }
+
+ if (smsm_info.intr_mask &&
+ (__raw_readl(SMSM_INTR_MASK_ADDR(smsm_entry, SMSM_APPS))
+ & notify_mask)) {
+ smsm_cb_snapshot(1);
+ }
+}
+
+static int smsm_pm_notifier(struct notifier_block *nb,
+ unsigned long event, void *unused)
+{
+ switch (event) {
+ case PM_SUSPEND_PREPARE:
+ smsm_change_state(SMSM_APPS_STATE, SMSM_PROC_AWAKE, 0);
+ break;
+
+ case PM_POST_SUSPEND:
+ smsm_change_state(SMSM_APPS_STATE, 0, SMSM_PROC_AWAKE);
+ break;
+ }
+ return NOTIFY_DONE;
+}
+
+static struct notifier_block smsm_pm_nb = {
+ .notifier_call = smsm_pm_notifier,
+ .priority = 0,
+};
+
+/* the spinlock is used to synchronize between the
+ * irq handler and code that mutates the channel
+ * list or fiddles with channel state
+ */
+static DEFINE_SPINLOCK(smd_lock);
+DEFINE_SPINLOCK(smem_lock);
+
+/* the mutex is used during open() and close()
+ * operations to avoid races while creating or
+ * destroying smd_channel structures
+ */
+static DEFINE_MUTEX(smd_creation_mutex);
+
+struct smd_shared {
+ struct smd_half_channel ch0;
+ struct smd_half_channel ch1;
+};
+
+struct smd_shared_word_access {
+ struct smd_half_channel_word_access ch0;
+ struct smd_half_channel_word_access ch1;
+};
+
+/**
+ * Maps edge type to local and remote processor IDs.
+ */
+static struct edge_to_pid edge_to_pids[] = {
+ [SMD_APPS_MODEM] = {SMD_APPS, SMD_MODEM, "modem"},
+ [SMD_APPS_QDSP] = {SMD_APPS, SMD_Q6, "adsp"},
+ [SMD_MODEM_QDSP] = {SMD_MODEM, SMD_Q6},
+ [SMD_APPS_DSPS] = {SMD_APPS, SMD_DSPS, "dsps"},
+ [SMD_MODEM_DSPS] = {SMD_MODEM, SMD_DSPS},
+ [SMD_QDSP_DSPS] = {SMD_Q6, SMD_DSPS},
+ [SMD_APPS_WCNSS] = {SMD_APPS, SMD_WCNSS, "wcnss"},
+ [SMD_MODEM_WCNSS] = {SMD_MODEM, SMD_WCNSS},
+ [SMD_QDSP_WCNSS] = {SMD_Q6, SMD_WCNSS},
+ [SMD_DSPS_WCNSS] = {SMD_DSPS, SMD_WCNSS},
+ [SMD_APPS_Q6FW] = {SMD_APPS, SMD_MODEM_Q6_FW},
+ [SMD_MODEM_Q6FW] = {SMD_MODEM, SMD_MODEM_Q6_FW},
+ [SMD_QDSP_Q6FW] = {SMD_Q6, SMD_MODEM_Q6_FW},
+ [SMD_DSPS_Q6FW] = {SMD_DSPS, SMD_MODEM_Q6_FW},
+ [SMD_WCNSS_Q6FW] = {SMD_WCNSS, SMD_MODEM_Q6_FW},
+ [SMD_APPS_RPM] = {SMD_APPS, SMD_RPM},
+ [SMD_MODEM_RPM] = {SMD_MODEM, SMD_RPM},
+ [SMD_QDSP_RPM] = {SMD_Q6, SMD_RPM},
+ [SMD_WCNSS_RPM] = {SMD_WCNSS, SMD_RPM},
+ [SMD_TZ_RPM] = {SMD_TZ, SMD_RPM},
+};
+
+struct restart_notifier_block {
+ unsigned int processor;
+ char *name;
+ struct notifier_block nb;
+};
+
+static struct platform_device loopback_tty_pdev = {.name = "LOOPBACK_TTY"};
+
+static LIST_HEAD(smd_ch_closed_list);
+static LIST_HEAD(smd_ch_closing_list);
+static LIST_HEAD(smd_ch_to_close_list);
+
+struct remote_proc_info {
+ unsigned int remote_pid;
+ unsigned int free_space;
+ struct work_struct probe_work;
+ struct list_head ch_list;
+ /* 2 total supported tables of channels */
+ unsigned char ch_allocated[SMEM_NUM_SMD_STREAM_CHANNELS * 2];
+ bool skip_pil;
+};
+
+static struct remote_proc_info remote_info[NUM_SMD_SUBSYSTEMS];
+
+static void finalize_channel_close_fn(struct work_struct *work);
+static DECLARE_WORK(finalize_channel_close_work, finalize_channel_close_fn);
+static struct workqueue_struct *channel_close_wq;
+
+#define PRI_ALLOC_TBL 1
+#define SEC_ALLOC_TBL 2
+static int smd_alloc_channel(struct smd_alloc_elm *alloc_elm, int table_id,
+ struct remote_proc_info *r_info);
+
+static bool smd_edge_inited(int edge)
+{
+ return edge_to_pids[edge].initialized;
+}
+
+/* on smp systems, the probe might get called from multiple cores,
+ * hence use a lock
+ */
+static DEFINE_MUTEX(smd_probe_lock);
+
+/**
+ * scan_alloc_table - Scans a specified SMD channel allocation table in SMEM for
+ * newly created channels that need to be made locally
+ * visible
+ *
+ * @shared: pointer to the table array in SMEM
+ * @smd_ch_allocated: pointer to an array indicating already allocated channels
+ * @table_id: identifier for this channel allocation table
+ * @num_entries: number of entries in this allocation table
+ * @r_info: pointer to the info structure of the remote proc we care about
+ *
+ * The smd_probe_lock must be locked by the calling function. Shared and
+ * smd_ch_allocated are assumed to be valid pointers.
+ */
+static void scan_alloc_table(struct smd_alloc_elm *shared,
+ char *smd_ch_allocated,
+ int table_id,
+ unsigned int num_entries,
+ struct remote_proc_info *r_info)
+{
+ unsigned int n;
+ uint32_t type;
+
+ for (n = 0; n < num_entries; n++) {
+ if (smd_ch_allocated[n])
+ continue;
+
+ /*
+ * channel should be allocated only if APPS processor is
+ * involved
+ */
+ type = SMD_CHANNEL_TYPE(shared[n].type);
+ if (!pid_is_on_edge(type, SMD_APPS) ||
+ !pid_is_on_edge(type, r_info->remote_pid))
+ continue;
+ if (!shared[n].ref_count)
+ continue;
+ if (!shared[n].name[0])
+ continue;
+
+ if (!smd_edge_inited(type)) {
+ SMD_INFO(
+ "Probe skipping proc %d, tbl %d, ch %d, edge not inited\n",
+ r_info->remote_pid, table_id, n);
+ continue;
+ }
+
+ if (!smd_alloc_channel(&shared[n], table_id, r_info))
+ smd_ch_allocated[n] = 1;
+ else
+ SMD_INFO(
+ "Probe skipping proc %d, tbl %d, ch %d, not allocated\n",
+ r_info->remote_pid, table_id, n);
+ }
+}
+
+static void smd_channel_probe_now(struct remote_proc_info *r_info)
+{
+ struct smd_alloc_elm *shared;
+ unsigned int tbl_size;
+
+ shared = smem_get_entry(ID_CH_ALLOC_TBL, &tbl_size,
+ r_info->remote_pid, 0);
+
+ if (!shared) {
+ pr_err("%s: allocation table not initialized\n", __func__);
+ return;
+ }
+
+ mutex_lock(&smd_probe_lock);
+
+ scan_alloc_table(shared, r_info->ch_allocated, PRI_ALLOC_TBL,
+ tbl_size / sizeof(*shared),
+ r_info);
+
+ shared = smem_get_entry(SMEM_CHANNEL_ALLOC_TBL_2, &tbl_size,
+ r_info->remote_pid, 0);
+ if (shared)
+ scan_alloc_table(shared,
+ &(r_info->ch_allocated[SMEM_NUM_SMD_STREAM_CHANNELS]),
+ SEC_ALLOC_TBL,
+ tbl_size / sizeof(*shared),
+ r_info);
+
+ mutex_unlock(&smd_probe_lock);
+}
+
+/**
+ * smd_channel_probe_worker() - Scan for newly created SMD channels and init
+ * local structures so the channels are visible to
+ * local clients
+ *
+ * @work: work_struct corresponding to an instance of this function running on
+ * a workqueue.
+ */
+static void smd_channel_probe_worker(struct work_struct *work)
+{
+ struct remote_proc_info *r_info;
+
+ r_info = container_of(work, struct remote_proc_info, probe_work);
+
+ smd_channel_probe_now(r_info);
+}
+
+/**
+ * get_remote_ch() - gathers remote channel info
+ *
+ * @shared2: Pointer to v2 shared channel structure
+ * @type: Edge type
+ * @pid: Processor ID of processor on edge
+ * @remote_ch: Channel that belongs to processor @pid
+ * @is_word_access_ch: Bool, is this a word aligned access channel
+ *
+ * @returns: 0 on success, error code on failure
+ */
+static int get_remote_ch(void *shared2,
+ uint32_t type, uint32_t pid,
+ void **remote_ch,
+ int is_word_access_ch
+ )
+{
+ if (!remote_ch || !shared2 || !pid_is_on_edge(type, pid) ||
+ !pid_is_on_edge(type, SMD_APPS))
+ return -EINVAL;
+
+ if (is_word_access_ch)
+ *remote_ch =
+ &((struct smd_shared_word_access *)(shared2))->ch1;
+ else
+ *remote_ch = &((struct smd_shared *)(shared2))->ch1;
+
+ return 0;
+}
+
+/**
+ * smd_remote_ss_to_edge() - return edge type from remote ss type
+ * @name: remote subsystem name
+ *
+ * Returns the edge type connected between the local subsystem(APPS)
+ * and remote subsystem @name.
+ */
+int smd_remote_ss_to_edge(const char *name)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(edge_to_pids); ++i) {
+ if (edge_to_pids[i].subsys_name[0] != 0x0) {
+ if (!strcmp(edge_to_pids[i].subsys_name, name))
+ return i;
+ }
+ }
+
+ return -EINVAL;
+}
+EXPORT_SYMBOL(smd_remote_ss_to_edge);
+
+/**
+ * smd_edge_to_pil_str - Returns the PIL string used to load the remote side of
+ * the indicated edge.
+ *
+ * @type - Edge definition
+ * @returns - The PIL string to load the remote side of @type or NULL if the
+ * PIL string does not exist.
+ */
+const char *smd_edge_to_pil_str(uint32_t type)
+{
+ const char *pil_str = NULL;
+
+ if (type < ARRAY_SIZE(edge_to_pids)) {
+ if (!edge_to_pids[type].initialized)
+ return ERR_PTR(-EPROBE_DEFER);
+ if (!remote_info[smd_edge_to_remote_pid(type)].skip_pil) {
+ pil_str = edge_to_pids[type].subsys_name;
+ if (pil_str[0] == 0x0)
+ pil_str = NULL;
+ }
+ }
+ return pil_str;
+}
+EXPORT_SYMBOL(smd_edge_to_pil_str);
+
+/*
+ * Returns a pointer to the subsystem name or NULL if no
+ * subsystem name is available.
+ *
+ * @type - Edge definition
+ */
+const char *smd_edge_to_subsystem(uint32_t type)
+{
+ const char *subsys = NULL;
+
+ if (type < ARRAY_SIZE(edge_to_pids)) {
+ subsys = edge_to_pids[type].subsys_name;
+ if (subsys[0] == 0x0)
+ subsys = NULL;
+ if (!edge_to_pids[type].initialized)
+ subsys = ERR_PTR(-EPROBE_DEFER);
+ }
+ return subsys;
+}
+EXPORT_SYMBOL(smd_edge_to_subsystem);
+
+/*
+ * Returns a pointer to the subsystem name given the
+ * remote processor ID.
+ * The subsystem is not necessarily PIL-loadable.
+ *
+ * @pid Remote processor ID
+ * @returns Pointer to subsystem name or NULL if not found
+ */
+const char *smd_pid_to_subsystem(uint32_t pid)
+{
+ const char *subsys = NULL;
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(edge_to_pids); ++i) {
+ if (pid == edge_to_pids[i].remote_pid) {
+ if (!edge_to_pids[i].initialized) {
+ subsys = ERR_PTR(-EPROBE_DEFER);
+ break;
+ }
+ if (edge_to_pids[i].subsys_name[0] != 0x0) {
+ subsys = edge_to_pids[i].subsys_name;
+ break;
+ } else if (pid == SMD_RPM) {
+ subsys = "rpm";
+ break;
+ }
+ }
+ }
+
+ return subsys;
+}
+EXPORT_SYMBOL(smd_pid_to_subsystem);
+
+static void smd_reset_edge(void *void_ch, unsigned int new_state,
+ int is_word_access_ch)
+{
+ if (is_word_access_ch) {
+ struct smd_half_channel_word_access *ch =
+ (struct smd_half_channel_word_access *)(void_ch);
+ if (ch->state != SMD_SS_CLOSED) {
+ ch->state = new_state;
+ ch->fDSR = 0;
+ ch->fCTS = 0;
+ ch->fCD = 0;
+ ch->fSTATE = 1;
+ }
+ } else {
+ struct smd_half_channel *ch =
+ (struct smd_half_channel *)(void_ch);
+ if (ch->state != SMD_SS_CLOSED) {
+ ch->state = new_state;
+ ch->fDSR = 0;
+ ch->fCTS = 0;
+ ch->fCD = 0;
+ ch->fSTATE = 1;
+ }
+ }
+}
+
+/**
+ * smd_channel_reset_state() - find channels in an allocation table and set them
+ * to the specified state
+ *
+ * @shared: Pointer to the allocation table to scan
+ * @table_id: ID of the table
+ * @new_state: New state that channels should be set to
+ * @pid: Processor ID of the remote processor for the channels
+ * @num_entries: Number of entries in the table
+ *
+ * Scan the indicated table for channels between Apps and @pid. If a valid
+ * channel is found, set the remote side of the channel to @new_state.
+ */
+static void smd_channel_reset_state(struct smd_alloc_elm *shared, int table_id,
+ unsigned int new_state, unsigned int pid,
+ unsigned int num_entries)
+{
+ unsigned int n;
+ void *shared2;
+ uint32_t type;
+ void *remote_ch;
+ int is_word_access;
+ unsigned int base_id;
+
+ switch (table_id) {
+ case PRI_ALLOC_TBL:
+ base_id = SMEM_SMD_BASE_ID;
+ break;
+ case SEC_ALLOC_TBL:
+ base_id = SMEM_SMD_BASE_ID_2;
+ break;
+ default:
+ SMD_INFO("%s: invalid table_id:%d\n", __func__, table_id);
+ return;
+ }
+
+ for (n = 0; n < num_entries; n++) {
+ if (!shared[n].ref_count)
+ continue;
+ if (!shared[n].name[0])
+ continue;
+
+ type = SMD_CHANNEL_TYPE(shared[n].type);
+ is_word_access = is_word_access_ch(type);
+ if (is_word_access)
+ shared2 = smem_find(base_id + n,
+ sizeof(struct smd_shared_word_access), pid,
+ 0);
+ else
+ shared2 = smem_find(base_id + n,
+ sizeof(struct smd_shared), pid, 0);
+ if (!shared2)
+ continue;
+
+ if (!get_remote_ch(shared2, type, pid,
+ &remote_ch, is_word_access))
+ smd_reset_edge(remote_ch, new_state, is_word_access);
+ }
+}
+
+/**
+ * pid_is_on_edge() - checks to see if the processor with id pid is on the
+ * edge specified by edge_num
+ *
+ * @edge_num: the number of the edge which is being tested
+ * @pid: the id of the processor being tested
+ *
+ * @returns: true if on edge, false otherwise
+ */
+static bool pid_is_on_edge(uint32_t edge_num, unsigned int pid)
+{
+ struct edge_to_pid edge;
+
+ if (edge_num >= ARRAY_SIZE(edge_to_pids))
+ return 0;
+
+ edge = edge_to_pids[edge_num];
+ return (edge.local_pid == pid || edge.remote_pid == pid);
+}
+
+void smd_channel_reset(uint32_t restart_pid)
+{
+ struct smd_alloc_elm *shared_pri;
+ struct smd_alloc_elm *shared_sec;
+ unsigned long flags;
+ unsigned int pri_size;
+ unsigned int sec_size;
+
+ SMD_POWER_INFO("%s: starting reset\n", __func__);
+
+ shared_pri = smem_get_entry(ID_CH_ALLOC_TBL, &pri_size, restart_pid, 0);
+ if (!shared_pri) {
+ pr_err("%s: allocation table not initialized\n", __func__);
+ return;
+ }
+ shared_sec = smem_get_entry(SMEM_CHANNEL_ALLOC_TBL_2, &sec_size,
+ restart_pid, 0);
+
+ /* reset SMSM entry */
+ if (smsm_info.state) {
+ writel_relaxed(0, SMSM_STATE_ADDR(restart_pid));
+
+ /* restart SMSM init handshake */
+ if (restart_pid == SMSM_MODEM) {
+ smsm_change_state(SMSM_APPS_STATE,
+ SMSM_INIT | SMSM_SMD_LOOPBACK | SMSM_RESET,
+ 0);
+ }
+
+ /* notify SMSM processors */
+ smsm_irq_handler(0, 0);
+ notify_modem_smsm();
+ notify_dsp_smsm();
+ notify_dsps_smsm();
+ notify_wcnss_smsm();
+ }
+
+ /* change all remote states to CLOSING */
+ mutex_lock(&smd_probe_lock);
+ spin_lock_irqsave(&smd_lock, flags);
+ smd_channel_reset_state(shared_pri, PRI_ALLOC_TBL, SMD_SS_CLOSING,
+ restart_pid, pri_size / sizeof(*shared_pri));
+ if (shared_sec)
+ smd_channel_reset_state(shared_sec, SEC_ALLOC_TBL,
+ SMD_SS_CLOSING, restart_pid,
+ sec_size / sizeof(*shared_sec));
+ spin_unlock_irqrestore(&smd_lock, flags);
+ mutex_unlock(&smd_probe_lock);
+
+ mb(); /* Make sure memory is visible before proceeding */
+ smd_fake_irq_handler(0);
+
+ /* change all remote states to CLOSED */
+ mutex_lock(&smd_probe_lock);
+ spin_lock_irqsave(&smd_lock, flags);
+ smd_channel_reset_state(shared_pri, PRI_ALLOC_TBL, SMD_SS_CLOSED,
+ restart_pid, pri_size / sizeof(*shared_pri));
+ if (shared_sec)
+ smd_channel_reset_state(shared_sec, SEC_ALLOC_TBL,
+ SMD_SS_CLOSED, restart_pid,
+ sec_size / sizeof(*shared_sec));
+ spin_unlock_irqrestore(&smd_lock, flags);
+ mutex_unlock(&smd_probe_lock);
+
+ mb(); /* Make sure memory is visible before proceeding */
+ smd_fake_irq_handler(0);
+
+ SMD_POWER_INFO("%s: finished reset\n", __func__);
+}
+
+/* how many bytes are available for reading */
+static int smd_stream_read_avail(struct smd_channel *ch)
+{
+ unsigned int head = ch->half_ch->get_head(ch->recv);
+ unsigned int tail = ch->half_ch->get_tail(ch->recv);
+ unsigned int fifo_size = ch->fifo_size;
+ unsigned int bytes_avail = head - tail;
+
+ if (head < tail)
+ bytes_avail += fifo_size;
+
+ WARN_ON(bytes_avail >= fifo_size);
+ return bytes_avail;
+}
+
+/* how many bytes we are free to write */
+static int smd_stream_write_avail(struct smd_channel *ch)
+{
+ unsigned int head = ch->half_ch->get_head(ch->send);
+ unsigned int tail = ch->half_ch->get_tail(ch->send);
+ unsigned int fifo_size = ch->fifo_size;
+ unsigned int bytes_avail = tail - head;
+
+ if (tail <= head)
+ bytes_avail += fifo_size;
+ if (bytes_avail < SMD_FIFO_FULL_RESERVE)
+ bytes_avail = 0;
+ else
+ bytes_avail -= SMD_FIFO_FULL_RESERVE;
+
+ WARN_ON(bytes_avail >= fifo_size);
+ return bytes_avail;
+}
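The two availability helpers above are plain circular-index arithmetic: readable bytes are the head-to-tail distance modulo the FIFO size, and writable bytes are the remaining space minus a 4-byte reserve (`SMD_FIFO_FULL_RESERVE`) so the write index can never catch the read index, which would make a full FIFO indistinguishable from an empty one. A userspace sketch of the same math (function names are illustrative):

```c
#include <assert.h>

#define FIFO_FULL_RESERVE 4

/* Bytes queued for reading: head - tail, wrapped modulo the FIFO size. */
static unsigned int stream_read_avail(unsigned int head, unsigned int tail,
				      unsigned int fifo_size)
{
	unsigned int bytes_avail = head - tail;

	if (head < tail)
		bytes_avail += fifo_size;
	return bytes_avail;
}

/* Bytes free for writing: tail - head wrapped, minus a reserve that keeps
 * the write index from ever catching up to the read index. */
static unsigned int stream_write_avail(unsigned int head, unsigned int tail,
				       unsigned int fifo_size)
{
	unsigned int bytes_avail = tail - head;

	if (tail <= head)
		bytes_avail += fifo_size;
	if (bytes_avail < FIFO_FULL_RESERVE)
		return 0;
	return bytes_avail - FIFO_FULL_RESERVE;
}
```

With a 64-byte FIFO holding 8 bytes (head = 10, tail = 2) this yields 8 readable and 52 writable bytes; an empty FIFO (head == tail) tops out at 60 writable bytes because of the reserve.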
+
+static int smd_packet_read_avail(struct smd_channel *ch)
+{
+ if (ch->current_packet) {
+ int n = smd_stream_read_avail(ch);
+
+ if (n > ch->current_packet)
+ n = ch->current_packet;
+ return n;
+ } else {
+ return 0;
+ }
+}
+
+static int smd_packet_write_avail(struct smd_channel *ch)
+{
+ int n = smd_stream_write_avail(ch);
+
+ return n > SMD_HEADER_SIZE ? n - SMD_HEADER_SIZE : 0;
+}
+
+static int ch_is_open(struct smd_channel *ch)
+{
+ return (ch->half_ch->get_state(ch->recv) == SMD_SS_OPENED ||
+ ch->half_ch->get_state(ch->recv) == SMD_SS_FLUSHING)
+ && (ch->half_ch->get_state(ch->send) == SMD_SS_OPENED);
+}
+
+/* provide a pointer and length to readable data in the fifo */
+static unsigned int ch_read_buffer(struct smd_channel *ch, void **ptr)
+{
+ unsigned int head = ch->half_ch->get_head(ch->recv);
+ unsigned int tail = ch->half_ch->get_tail(ch->recv);
+ unsigned int fifo_size = ch->fifo_size;
+
+ WARN_ON(fifo_size >= SZ_1M);
+ WARN_ON(head >= fifo_size);
+ WARN_ON(tail >= fifo_size);
+ WARN_ON(OVERFLOW_ADD_UNSIGNED(uintptr_t, (uintptr_t)ch->recv_data,
+ tail));
+ *ptr = (void *) (ch->recv_data + tail);
+ if (tail <= head)
+ return head - tail;
+ else
+ return fifo_size - tail;
+}
+
+static int read_intr_blocked(struct smd_channel *ch)
+{
+ return ch->half_ch->get_fBLOCKREADINTR(ch->recv);
+}
+
+/* advance the fifo read pointer after data from ch_read_buffer is consumed */
+static void ch_read_done(struct smd_channel *ch, unsigned int count)
+{
+ unsigned int tail = ch->half_ch->get_tail(ch->recv);
+ unsigned int fifo_size = ch->fifo_size;
+
+ WARN_ON(count > smd_stream_read_avail(ch));
+
+ tail += count;
+ if (tail >= fifo_size)
+ tail -= fifo_size;
+ ch->half_ch->set_tail(ch->recv, tail);
+ wmb(); /* Make sure memory is visible before setting signal */
+ ch->half_ch->set_fTAIL(ch->send, 1);
+}
+
+/* basic read interface to ch_read_{buffer,done} used
+ * by smd_*_read() and update_packet_state()
+ * will read-and-discard if the _data pointer is null
+ */
+static int ch_read(struct smd_channel *ch, void *_data, int len)
+{
+ void *ptr;
+ unsigned int n;
+ unsigned char *data = _data;
+ int orig_len = len;
+
+ while (len > 0) {
+ n = ch_read_buffer(ch, &ptr);
+ if (n == 0)
+ break;
+
+ if (n > len)
+ n = len;
+ if (_data)
+ ch->read_from_fifo(data, ptr, n);
+
+ data += n;
+ len -= n;
+ ch_read_done(ch, n);
+ }
+
+ return orig_len - len;
+}
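Because the FIFO is contiguous memory, a read that wraps past the end of the buffer arrives in two segments: `ch_read_buffer()` only ever returns the chunk up to the head or up to the buffer end, and the loop in `ch_read()` calls it again after advancing the tail. The same loop structure in a self-contained userspace sketch (an 8-byte ring with illustrative names, `memcpy` standing in for `read_from_fifo`):

```c
#include <assert.h>
#include <string.h>

#define RING_SIZE 8

struct ring {
	unsigned char buf[RING_SIZE];
	unsigned int head;	/* write index */
	unsigned int tail;	/* read index */
};

/* Yield a pointer to the largest contiguous readable chunk: either up to
 * the head, or up to the end of the buffer when the data wraps. */
static unsigned int ring_read_buffer(struct ring *r, unsigned char **ptr)
{
	*ptr = r->buf + r->tail;
	if (r->tail <= r->head)
		return r->head - r->tail;
	return RING_SIZE - r->tail;
}

/* Consume count bytes, wrapping the read index at the end of the buffer. */
static void ring_read_done(struct ring *r, unsigned int count)
{
	r->tail = (r->tail + count) % RING_SIZE;
}

/* Drain up to len bytes, taking one or two segments across the wrap. */
static int ring_read(struct ring *r, unsigned char *data, int len)
{
	int orig_len = len;
	unsigned char *ptr;
	unsigned int n;

	while (len > 0) {
		n = ring_read_buffer(r, &ptr);
		if (n == 0)
			break;
		if (n > (unsigned int)len)
			n = len;
		memcpy(data, ptr, n);
		data += n;
		len -= n;
		ring_read_done(r, n);
	}
	return orig_len - len;
}
```

For example, with head = 2 and tail = 6 the four queued bytes come out in two segments: two from the buffer tail, then two from the start after the wrap.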
+
+static void update_stream_state(struct smd_channel *ch)
+{
+ /* streams have no special state requiring updating */
+}
+
+static void update_packet_state(struct smd_channel *ch)
+{
+ unsigned int hdr[5];
+ int r;
+ const char *peripheral = NULL;
+
+ /* can't do anything if we're in the middle of a packet */
+ while (ch->current_packet == 0) {
+ /* discard 0 length packets if any */
+
+ /* don't bother unless we can get the full header */
+ if (smd_stream_read_avail(ch) < SMD_HEADER_SIZE)
+ return;
+
+ r = ch_read(ch, hdr, SMD_HEADER_SIZE);
+ WARN_ON(r != SMD_HEADER_SIZE);
+
+ ch->current_packet = hdr[0];
+ if (ch->current_packet > (uint32_t)INT_MAX) {
+ pr_err("%s: Invalid packet size of %u bytes detected. Edge: %d, Channel: %s, RPTR: %d, WPTR: %d\n",
+ __func__, ch->current_packet, ch->type,
+ ch->name, ch->half_ch->get_tail(ch->recv),
+ ch->half_ch->get_head(ch->recv));
+ peripheral = smd_edge_to_pil_str(ch->type);
+ if (peripheral) {
+ if (subsystem_restart(peripheral) < 0)
+ WARN_ON(1);
+ } else {
+ WARN_ON(1);
+ }
+ }
+ }
+}
+
+/**
+ * ch_write_buffer() - Provide a pointer and length for the next segment of
+ * free space in the FIFO.
+ * @ch: channel
+ * @ptr: Address to pointer for the next segment write
+ * @returns: Maximum size that can be written until the FIFO is either full
+ * or the end of the FIFO has been reached.
+ *
+ * The returned pointer and length are passed to memcpy, so the next segment is
+ * defined as either the space available between the read index (tail) and the
+ * write index (head) or the space available to the end of the FIFO.
+ */
+static unsigned int ch_write_buffer(struct smd_channel *ch, void **ptr)
+{
+ unsigned int head = ch->half_ch->get_head(ch->send);
+ unsigned int tail = ch->half_ch->get_tail(ch->send);
+ unsigned int fifo_size = ch->fifo_size;
+
+ WARN_ON(fifo_size >= SZ_1M);
+ WARN_ON(head >= fifo_size);
+ WARN_ON(tail >= fifo_size);
+ WARN_ON(OVERFLOW_ADD_UNSIGNED(uintptr_t, (uintptr_t)ch->send_data,
+ head));
+
+ *ptr = (void *) (ch->send_data + head);
+ if (head < tail)
+ return tail - head - SMD_FIFO_FULL_RESERVE;
+
+ if (tail < SMD_FIFO_FULL_RESERVE)
+ return fifo_size + tail - head
+ - SMD_FIFO_FULL_RESERVE;
+
+ return fifo_size - head;
+
+}
+
+/* advance the fifo write pointer after free space
+ * from ch_write_buffer is filled
+ */
+static void ch_write_done(struct smd_channel *ch, unsigned int count)
+{
+ unsigned int head = ch->half_ch->get_head(ch->send);
+ unsigned int fifo_size = ch->fifo_size;
+
+ WARN_ON(count > smd_stream_write_avail(ch));
+ head += count;
+ if (head >= fifo_size)
+ head -= fifo_size;
+ ch->half_ch->set_head(ch->send, head);
+ wmb(); /* Make sure memory is visible before setting signal */
+ ch->half_ch->set_fHEAD(ch->send, 1);
+}
+
+static void ch_set_state(struct smd_channel *ch, unsigned int n)
+{
+ if (n == SMD_SS_OPENED) {
+ ch->half_ch->set_fDSR(ch->send, 1);
+ ch->half_ch->set_fCTS(ch->send, 1);
+ ch->half_ch->set_fCD(ch->send, 1);
+ } else {
+ ch->half_ch->set_fDSR(ch->send, 0);
+ ch->half_ch->set_fCTS(ch->send, 0);
+ ch->half_ch->set_fCD(ch->send, 0);
+ }
+ ch->half_ch->set_state(ch->send, n);
+ ch->half_ch->set_fSTATE(ch->send, 1);
+ ch->notify_other_cpu(ch);
+}
+
+/**
+ * do_smd_probe() - Look for newly created SMD channels on a specific processor
+ *
+ * @remote_pid: remote processor id of the proc that may have created channels
+ */
+static void do_smd_probe(unsigned int remote_pid)
+{
+ unsigned int free_space;
+
+ free_space = smem_get_free_space(remote_pid);
+ if (free_space != remote_info[remote_pid].free_space) {
+ remote_info[remote_pid].free_space = free_space;
+ schedule_work(&remote_info[remote_pid].probe_work);
+ }
+}
+
+static void remote_processed_close(struct smd_channel *ch)
+{
+ /* The remote side has observed our close, we can allow a reopen */
+ list_move(&ch->ch_list, &smd_ch_to_close_list);
+ queue_work(channel_close_wq, &finalize_channel_close_work);
+}
+
+static void smd_state_change(struct smd_channel *ch,
+ unsigned int last, unsigned int next)
+{
+ ch->last_state = next;
+
+ SMD_INFO("SMD: ch %d %d -> %d\n", ch->n, last, next);
+
+ switch (next) {
+ case SMD_SS_OPENING:
+ if (last == SMD_SS_OPENED &&
+ ch->half_ch->get_state(ch->send) == SMD_SS_CLOSED) {
+ /* We missed the CLOSING and CLOSED states */
+ remote_processed_close(ch);
+ } else if (ch->half_ch->get_state(ch->send) == SMD_SS_CLOSING ||
+ ch->half_ch->get_state(ch->send) == SMD_SS_CLOSED) {
+ ch->half_ch->set_tail(ch->recv, 0);
+ ch->half_ch->set_head(ch->send, 0);
+ ch->half_ch->set_fBLOCKREADINTR(ch->send, 0);
+ ch->current_packet = 0;
+ ch_set_state(ch, SMD_SS_OPENING);
+ }
+ break;
+ case SMD_SS_OPENED:
+ if (ch->half_ch->get_state(ch->send) == SMD_SS_OPENING) {
+ ch_set_state(ch, SMD_SS_OPENED);
+ ch->notify(ch->priv, SMD_EVENT_OPEN);
+ }
+ break;
+ case SMD_SS_FLUSHING:
+ case SMD_SS_RESET:
+ /* we should force them to close? */
+ break;
+ case SMD_SS_CLOSED:
+ if (ch->half_ch->get_state(ch->send) == SMD_SS_OPENED) {
+ ch_set_state(ch, SMD_SS_CLOSING);
+ ch->pending_pkt_sz = 0;
+ ch->notify(ch->priv, SMD_EVENT_CLOSE);
+ }
+ /* We missed the CLOSING state */
+ if (ch->half_ch->get_state(ch->send) == SMD_SS_CLOSED)
+ remote_processed_close(ch);
+ break;
+ case SMD_SS_CLOSING:
+ if (ch->half_ch->get_state(ch->send) == SMD_SS_CLOSED)
+ remote_processed_close(ch);
+ break;
+ }
+}
+
+static void handle_smd_irq_closing_list(void)
+{
+ unsigned long flags;
+ struct smd_channel *ch;
+ struct smd_channel *index;
+ unsigned int tmp;
+
+ spin_lock_irqsave(&smd_lock, flags);
+ list_for_each_entry_safe(ch, index, &smd_ch_closing_list, ch_list) {
+ if (ch->half_ch->get_fSTATE(ch->recv))
+ ch->half_ch->set_fSTATE(ch->recv, 0);
+ tmp = ch->half_ch->get_state(ch->recv);
+ if (tmp != ch->last_state)
+ smd_state_change(ch, ch->last_state, tmp);
+ }
+ spin_unlock_irqrestore(&smd_lock, flags);
+}
+
+static void handle_smd_irq(struct remote_proc_info *r_info,
+ void (*notify)(smd_channel_t *ch))
+{
+ unsigned long flags;
+ struct smd_channel *ch;
+ unsigned int ch_flags;
+ unsigned int tmp;
+ unsigned char state_change;
+ struct list_head *list;
+
+ list = &r_info->ch_list;
+
+ spin_lock_irqsave(&smd_lock, flags);
+ list_for_each_entry(ch, list, ch_list) {
+ state_change = 0;
+ ch_flags = 0;
+ if (ch_is_open(ch)) {
+ if (ch->half_ch->get_fHEAD(ch->recv)) {
+ ch->half_ch->set_fHEAD(ch->recv, 0);
+ ch_flags |= 1;
+ }
+ if (ch->half_ch->get_fTAIL(ch->recv)) {
+ ch->half_ch->set_fTAIL(ch->recv, 0);
+ ch_flags |= 2;
+ }
+ if (ch->half_ch->get_fSTATE(ch->recv)) {
+ ch->half_ch->set_fSTATE(ch->recv, 0);
+ ch_flags |= 4;
+ }
+ }
+ tmp = ch->half_ch->get_state(ch->recv);
+ if (tmp != ch->last_state) {
+ SMD_POWER_INFO("SMD ch%d '%s' State change %d->%d\n",
+ ch->n, ch->name, ch->last_state, tmp);
+ smd_state_change(ch, ch->last_state, tmp);
+ state_change = 1;
+ }
+ if (ch_flags & 0x3) {
+ ch->update_state(ch);
+ SMD_POWER_INFO(
+ "SMD ch%d '%s' Data event 0x%x tx%d/rx%d %dr/%dw : %dr/%dw\n",
+ ch->n, ch->name,
+ ch_flags,
+ ch->fifo_size -
+ (smd_stream_write_avail(ch) + 1),
+ smd_stream_read_avail(ch),
+ ch->half_ch->get_tail(ch->send),
+ ch->half_ch->get_head(ch->send),
+ ch->half_ch->get_tail(ch->recv),
+ ch->half_ch->get_head(ch->recv)
+ );
+ ch->notify(ch->priv, SMD_EVENT_DATA);
+ }
+ if (ch_flags & 0x4 && !state_change) {
+ SMD_POWER_INFO("SMD ch%d '%s' State update\n",
+ ch->n, ch->name);
+ ch->notify(ch->priv, SMD_EVENT_STATUS);
+ }
+ }
+ spin_unlock_irqrestore(&smd_lock, flags);
+ do_smd_probe(r_info->remote_pid);
+}
+
+static inline void log_irq(uint32_t subsystem)
+{
+ const char *subsys = smd_edge_to_subsystem(subsystem);
+
+ (void) subsys;
+
+ SMD_POWER_INFO("SMD Int %s->Apps\n", subsys);
+}
+
+irqreturn_t smd_modem_irq_handler(int irq, void *data)
+{
+ if (unlikely(!edge_to_pids[SMD_APPS_MODEM].initialized))
+ return IRQ_HANDLED;
+ log_irq(SMD_APPS_MODEM);
+ ++interrupt_stats[SMD_MODEM].smd_in_count;
+ handle_smd_irq(&remote_info[SMD_MODEM], notify_modem_smd);
+ handle_smd_irq_closing_list();
+ return IRQ_HANDLED;
+}
+
+irqreturn_t smd_dsp_irq_handler(int irq, void *data)
+{
+ if (unlikely(!edge_to_pids[SMD_APPS_QDSP].initialized))
+ return IRQ_HANDLED;
+ log_irq(SMD_APPS_QDSP);
+ ++interrupt_stats[SMD_Q6].smd_in_count;
+ handle_smd_irq(&remote_info[SMD_Q6], notify_dsp_smd);
+ handle_smd_irq_closing_list();
+ return IRQ_HANDLED;
+}
+
+irqreturn_t smd_dsps_irq_handler(int irq, void *data)
+{
+ if (unlikely(!edge_to_pids[SMD_APPS_DSPS].initialized))
+ return IRQ_HANDLED;
+ log_irq(SMD_APPS_DSPS);
+ ++interrupt_stats[SMD_DSPS].smd_in_count;
+ handle_smd_irq(&remote_info[SMD_DSPS], notify_dsps_smd);
+ handle_smd_irq_closing_list();
+ return IRQ_HANDLED;
+}
+
+irqreturn_t smd_wcnss_irq_handler(int irq, void *data)
+{
+ if (unlikely(!edge_to_pids[SMD_APPS_WCNSS].initialized))
+ return IRQ_HANDLED;
+ log_irq(SMD_APPS_WCNSS);
+ ++interrupt_stats[SMD_WCNSS].smd_in_count;
+ handle_smd_irq(&remote_info[SMD_WCNSS], notify_wcnss_smd);
+ handle_smd_irq_closing_list();
+ return IRQ_HANDLED;
+}
+
+irqreturn_t smd_modemfw_irq_handler(int irq, void *data)
+{
+ if (unlikely(!edge_to_pids[SMD_APPS_Q6FW].initialized))
+ return IRQ_HANDLED;
+ log_irq(SMD_APPS_Q6FW);
+ ++interrupt_stats[SMD_MODEM_Q6_FW].smd_in_count;
+ handle_smd_irq(&remote_info[SMD_MODEM_Q6_FW], notify_modemfw_smd);
+ handle_smd_irq_closing_list();
+ return IRQ_HANDLED;
+}
+
+irqreturn_t smd_rpm_irq_handler(int irq, void *data)
+{
+ if (unlikely(!edge_to_pids[SMD_APPS_RPM].initialized))
+ return IRQ_HANDLED;
+ log_irq(SMD_APPS_RPM);
+ ++interrupt_stats[SMD_RPM].smd_in_count;
+ handle_smd_irq(&remote_info[SMD_RPM], notify_rpm_smd);
+ handle_smd_irq_closing_list();
+ return IRQ_HANDLED;
+}
+
+static void smd_fake_irq_handler(unsigned long arg)
+{
+ handle_smd_irq(&remote_info[SMD_MODEM], notify_modem_smd);
+ handle_smd_irq(&remote_info[SMD_Q6], notify_dsp_smd);
+ handle_smd_irq(&remote_info[SMD_DSPS], notify_dsps_smd);
+ handle_smd_irq(&remote_info[SMD_WCNSS], notify_wcnss_smd);
+ handle_smd_irq(&remote_info[SMD_MODEM_Q6_FW], notify_modemfw_smd);
+ handle_smd_irq(&remote_info[SMD_RPM], notify_rpm_smd);
+ handle_smd_irq_closing_list();
+}
+
+static int smd_is_packet(struct smd_alloc_elm *alloc_elm)
+{
+ if (SMD_XFER_TYPE(alloc_elm->type) == 1)
+ return 0;
+ else if (SMD_XFER_TYPE(alloc_elm->type) == 2)
+ return 1;
+
+ panic("Unsupported SMD xfer type: %d name:%s edge:%d\n",
+ SMD_XFER_TYPE(alloc_elm->type),
+ alloc_elm->name,
+ SMD_CHANNEL_TYPE(alloc_elm->type));
+}
+
+static int smd_stream_write(smd_channel_t *ch, const void *_data, int len,
+ bool intr_ntfy)
+{
+ void *ptr;
+ const unsigned char *buf = _data;
+ unsigned int xfer;
+ int orig_len = len;
+
+ SMD_DBG("smd_stream_write() %d -> ch%d\n", len, ch->n);
+ if (len < 0)
+ return -EINVAL;
+ else if (len == 0)
+ return 0;
+
+ while ((xfer = ch_write_buffer(ch, &ptr)) != 0) {
+ if (!ch_is_open(ch)) {
+ len = orig_len;
+ break;
+ }
+ if (xfer > len)
+ xfer = len;
+
+ ch->write_to_fifo(ptr, buf, xfer);
+ ch_write_done(ch, xfer);
+ len -= xfer;
+ buf += xfer;
+ if (len == 0)
+ break;
+ }
+
+ if (orig_len - len && intr_ntfy)
+ ch->notify_other_cpu(ch);
+
+ return orig_len - len;
+}
+
+static int smd_packet_write(smd_channel_t *ch, const void *_data, int len,
+ bool intr_ntfy)
+{
+ int ret;
+ unsigned int hdr[5];
+
+ SMD_DBG("smd_packet_write() %d -> ch%d\n", len, ch->n);
+ if (len < 0)
+ return -EINVAL;
+ else if (len == 0)
+ return 0;
+
+ if (smd_stream_write_avail(ch) < (len + SMD_HEADER_SIZE))
+ return -ENOMEM;
+
+ hdr[0] = len;
+ hdr[1] = hdr[2] = hdr[3] = hdr[4] = 0;
+
+ ret = smd_stream_write(ch, hdr, sizeof(hdr), false);
+ if (ret < 0 || ret != sizeof(hdr)) {
+ SMD_DBG("%s failed to write pkt header: %d returned\n",
+ __func__, ret);
+ return -EFAULT;
+ }
+
+ ret = smd_stream_write(ch, _data, len, true);
+ if (ret < 0 || ret != len) {
+ SMD_DBG("%s failed to write pkt data: %d returned\n",
+ __func__, ret);
+ return ret;
+ }
+
+ return len;
+}
+
+static int smd_stream_read(smd_channel_t *ch, void *data, int len)
+{
+ int r;
+
+ if (len < 0)
+ return -EINVAL;
+
+ r = ch_read(ch, data, len);
+ if (r > 0)
+ if (!read_intr_blocked(ch))
+ ch->notify_other_cpu(ch);
+
+ return r;
+}
+
+static int smd_packet_read(smd_channel_t *ch, void *data, int len)
+{
+ unsigned long flags;
+ int r;
+
+ if (len < 0)
+ return -EINVAL;
+
+ if (ch->current_packet > (uint32_t)INT_MAX) {
+ pr_err("%s: Invalid packet size for Edge %d and Channel %s\n",
+ __func__, ch->type, ch->name);
+ return -EFAULT;
+ }
+
+ if (len > ch->current_packet)
+ len = ch->current_packet;
+
+ r = ch_read(ch, data, len);
+ if (r > 0)
+ if (!read_intr_blocked(ch))
+ ch->notify_other_cpu(ch);
+
+ spin_lock_irqsave(&smd_lock, flags);
+ ch->current_packet -= r;
+ update_packet_state(ch);
+ spin_unlock_irqrestore(&smd_lock, flags);
+
+ return r;
+}
+
+static int smd_packet_read_from_cb(smd_channel_t *ch, void *data, int len)
+{
+ int r;
+
+ if (len < 0)
+ return -EINVAL;
+
+ if (ch->current_packet > (uint32_t)INT_MAX) {
+ pr_err("%s: Invalid packet size for Edge %d and Channel %s\n",
+ __func__, ch->type, ch->name);
+ return -EFAULT;
+ }
+
+ if (len > ch->current_packet)
+ len = ch->current_packet;
+
+ r = ch_read(ch, data, len);
+ if (r > 0)
+ if (!read_intr_blocked(ch))
+ ch->notify_other_cpu(ch);
+
+ ch->current_packet -= r;
+ update_packet_state(ch);
+
+ return r;
+}
+
+/**
+ * smd_alloc() - Init local channel structure with information stored in SMEM
+ *
+ * @ch: pointer to the local structure for this channel
+ * @table_id: the id of the table this channel resides in. 1 = first table, 2 =
+ * second table, etc
+ * @r_info: pointer to the info structure of the remote proc for this channel
+ * @returns: -EINVAL for failure; 0 for success
+ *
+ * ch must point to an allocated instance of struct smd_channel that is zeroed
+ * out, and has the n and type members already initialized to the correct values
+ */
+static int smd_alloc(struct smd_channel *ch, int table_id,
+ struct remote_proc_info *r_info)
+{
+ void *buffer;
+ unsigned int buffer_sz;
+ unsigned int base_id;
+ unsigned int fifo_id;
+
+ switch (table_id) {
+ case PRI_ALLOC_TBL:
+ base_id = SMEM_SMD_BASE_ID;
+ fifo_id = SMEM_SMD_FIFO_BASE_ID;
+ break;
+ case SEC_ALLOC_TBL:
+ base_id = SMEM_SMD_BASE_ID_2;
+ fifo_id = SMEM_SMD_FIFO_BASE_ID_2;
+ break;
+ default:
+ SMD_INFO("Invalid table_id:%d passed to smd_alloc\n", table_id);
+ return -EINVAL;
+ }
+
+ if (is_word_access_ch(ch->type)) {
+ struct smd_shared_word_access *shared2;
+
+ shared2 = smem_find(base_id + ch->n, sizeof(*shared2),
+ r_info->remote_pid, 0);
+ if (!shared2) {
+ SMD_INFO("smem_find failed ch=%d\n", ch->n);
+ return -EINVAL;
+ }
+ ch->send = &shared2->ch0;
+ ch->recv = &shared2->ch1;
+ } else {
+ struct smd_shared *shared2;
+
+ shared2 = smem_find(base_id + ch->n, sizeof(*shared2),
+ r_info->remote_pid, 0);
+ if (!shared2) {
+ SMD_INFO("smem_find failed ch=%d\n", ch->n);
+ return -EINVAL;
+ }
+ ch->send = &shared2->ch0;
+ ch->recv = &shared2->ch1;
+ }
+ ch->half_ch = get_half_ch_funcs(ch->type);
+
+ buffer = smem_get_entry(fifo_id + ch->n, &buffer_sz,
+ r_info->remote_pid, 0);
+ if (!buffer) {
+ SMD_INFO("smem_get_entry failed\n");
+ return -EINVAL;
+ }
+
+ /* buffer size must be a multiple of 32 */
+ if ((buffer_sz & (SZ_32 - 1)) != 0) {
+ SMD_INFO("Buffer size: %u not multiple of 32\n", buffer_sz);
+ return -EINVAL;
+ }
+ buffer_sz /= 2;
+ ch->send_data = buffer;
+ ch->recv_data = buffer + buffer_sz;
+ ch->fifo_size = buffer_sz;
+
+ return 0;
+}
+
+/**
+ * smd_alloc_channel() - Create and init local structures for a newly allocated
+ * SMD channel
+ *
+ * @alloc_elm: the allocation element stored in SMEM for this channel
+ * @table_id: the id of the table this channel resides in. 1 = first table, 2 =
+ * second table, etc
+ * @r_info: pointer to the info structure of the remote proc for this channel
+ * @returns: error code for failure; 0 for success
+ */
+static int smd_alloc_channel(struct smd_alloc_elm *alloc_elm, int table_id,
+ struct remote_proc_info *r_info)
+{
+ struct smd_channel *ch;
+
+ ch = kzalloc(sizeof(struct smd_channel), GFP_KERNEL);
+ if (!ch) {
+ pr_err("smd_alloc_channel() out of memory\n");
+ return -ENOMEM;
+ }
+ ch->n = alloc_elm->cid;
+ ch->type = SMD_CHANNEL_TYPE(alloc_elm->type);
+
+ if (smd_alloc(ch, table_id, r_info)) {
+ kfree(ch);
+ return -ENODEV;
+ }
+
+ /* probe_worker guarantees ch->type will be a valid type */
+ if (ch->type == SMD_APPS_MODEM)
+ ch->notify_other_cpu = notify_modem_smd;
+ else if (ch->type == SMD_APPS_QDSP)
+ ch->notify_other_cpu = notify_dsp_smd;
+ else if (ch->type == SMD_APPS_DSPS)
+ ch->notify_other_cpu = notify_dsps_smd;
+ else if (ch->type == SMD_APPS_WCNSS)
+ ch->notify_other_cpu = notify_wcnss_smd;
+ else if (ch->type == SMD_APPS_Q6FW)
+ ch->notify_other_cpu = notify_modemfw_smd;
+ else if (ch->type == SMD_APPS_RPM)
+ ch->notify_other_cpu = notify_rpm_smd;
+
+ if (smd_is_packet(alloc_elm)) {
+ ch->read = smd_packet_read;
+ ch->write = smd_packet_write;
+ ch->read_avail = smd_packet_read_avail;
+ ch->write_avail = smd_packet_write_avail;
+ ch->update_state = update_packet_state;
+ ch->read_from_cb = smd_packet_read_from_cb;
+ ch->is_pkt_ch = 1;
+ } else {
+ ch->read = smd_stream_read;
+ ch->write = smd_stream_write;
+ ch->read_avail = smd_stream_read_avail;
+ ch->write_avail = smd_stream_write_avail;
+ ch->update_state = update_stream_state;
+ ch->read_from_cb = smd_stream_read;
+ }
+
+ if (is_word_access_ch(ch->type)) {
+ ch->read_from_fifo = smd_memcpy32_from_fifo;
+ ch->write_to_fifo = smd_memcpy32_to_fifo;
+ } else {
+ ch->read_from_fifo = smd_memcpy_from_fifo;
+ ch->write_to_fifo = smd_memcpy_to_fifo;
+ }
+
+ smd_memcpy_from_fifo(ch->name, alloc_elm->name, SMD_MAX_CH_NAME_LEN);
+ ch->name[SMD_MAX_CH_NAME_LEN-1] = 0;
+
+ ch->pdev.name = ch->name;
+ ch->pdev.id = ch->type;
+
+ SMD_INFO("smd_alloc_channel() '%s' cid=%d\n",
+ ch->name, ch->n);
+
+ mutex_lock(&smd_creation_mutex);
+ list_add(&ch->ch_list, &smd_ch_closed_list);
+ mutex_unlock(&smd_creation_mutex);
+
+ platform_device_register(&ch->pdev);
+ if (!strcmp(ch->name, "LOOPBACK") && ch->type == SMD_APPS_MODEM) {
+ /* create a platform driver to be used by smd_tty driver
+ * so that it can access the loopback port
+ */
+ loopback_tty_pdev.id = ch->type;
+ platform_device_register(&loopback_tty_pdev);
+ }
+ return 0;
+}
+
+static void do_nothing_notify(void *priv, unsigned int flags)
+{
+}
+
+static void finalize_channel_close_fn(struct work_struct *work)
+{
+ unsigned long flags;
+ struct smd_channel *ch;
+ struct smd_channel *index;
+
+ mutex_lock(&smd_creation_mutex);
+ spin_lock_irqsave(&smd_lock, flags);
+ list_for_each_entry_safe(ch, index, &smd_ch_to_close_list, ch_list) {
+ list_move(&ch->ch_list, &smd_ch_closed_list);
+ ch->notify(ch->priv, SMD_EVENT_REOPEN_READY);
+ ch->notify = do_nothing_notify;
+ }
+ spin_unlock_irqrestore(&smd_lock, flags);
+ mutex_unlock(&smd_creation_mutex);
+}
+
+struct smd_channel *smd_get_channel(const char *name, uint32_t type)
+{
+ struct smd_channel *ch;
+
+ mutex_lock(&smd_creation_mutex);
+ list_for_each_entry(ch, &smd_ch_closed_list, ch_list) {
+ if (!strcmp(name, ch->name) &&
+ (type == ch->type)) {
+ list_del(&ch->ch_list);
+ mutex_unlock(&smd_creation_mutex);
+ return ch;
+ }
+ }
+ mutex_unlock(&smd_creation_mutex);
+
+ return NULL;
+}
+
+int smd_named_open_on_edge(const char *name, uint32_t edge,
+ smd_channel_t **_ch,
+ void *priv, void (*notify)(void *, unsigned int))
+{
+ struct smd_channel *ch;
+ unsigned long flags;
+
+ if (edge >= SMD_NUM_TYPE) {
+ pr_err("%s: edge:%d is invalid\n", __func__, edge);
+ return -EINVAL;
+ }
+
+ if (!smd_edge_inited(edge)) {
+ SMD_INFO("smd_open() before smd_init()\n");
+ return -EPROBE_DEFER;
+ }
+
+ SMD_DBG("smd_open('%s', %p, %p)\n", name, priv, notify);
+
+ ch = smd_get_channel(name, edge);
+ if (!ch) {
+ spin_lock_irqsave(&smd_lock, flags);
+ /* check opened list for port */
+ list_for_each_entry(ch,
+ &remote_info[edge_to_pids[edge].remote_pid].ch_list,
+ ch_list) {
+ if (!strcmp(name, ch->name)) {
+ /* channel is already open */
+ spin_unlock_irqrestore(&smd_lock, flags);
+ SMD_DBG("smd_open: channel '%s' already open\n",
+ ch->name);
+ return -EBUSY;
+ }
+ }
+
+ /* check closing list for port */
+ list_for_each_entry(ch, &smd_ch_closing_list, ch_list) {
+ if (!strcmp(name, ch->name) && (edge == ch->type)) {
+ /* channel exists, but is being closed */
+ spin_unlock_irqrestore(&smd_lock, flags);
+ return -EAGAIN;
+ }
+ }
+
+ /* check closing workqueue list for port */
+ list_for_each_entry(ch, &smd_ch_to_close_list, ch_list) {
+ if (!strcmp(name, ch->name) && (edge == ch->type)) {
+ /* channel exists, but is being closed */
+ spin_unlock_irqrestore(&smd_lock, flags);
+ return -EAGAIN;
+ }
+ }
+ spin_unlock_irqrestore(&smd_lock, flags);
+
+ /* one final check to handle closing->closed race condition */
+ ch = smd_get_channel(name, edge);
+ if (!ch)
+ return -ENODEV;
+ }
+
+ if (ch->half_ch->get_fSTATE(ch->send)) {
+ /* remote side hasn't acknowledged our last state transition */
+ SMD_INFO("%s: ch %d valid, waiting for remote to ack state\n",
+ __func__, ch->n);
+ msleep(250);
+ if (ch->half_ch->get_fSTATE(ch->send))
+ SMD_INFO("%s: ch %d - no remote ack, continuing\n",
+ __func__, ch->n);
+ }
+
+ if (!notify)
+ notify = do_nothing_notify;
+
+ ch->notify = notify;
+ ch->current_packet = 0;
+ ch->last_state = SMD_SS_CLOSED;
+ ch->priv = priv;
+
+ *_ch = ch;
+
+ SMD_DBG("smd_open: opening '%s'\n", ch->name);
+
+ spin_lock_irqsave(&smd_lock, flags);
+ list_add(&ch->ch_list,
+ &remote_info[edge_to_pids[ch->type].remote_pid].ch_list);
+
+ SMD_DBG("%s: opening ch %d\n", __func__, ch->n);
+
+ smd_state_change(ch, ch->last_state, SMD_SS_OPENING);
+
+ spin_unlock_irqrestore(&smd_lock, flags);
+
+ return 0;
+}
+EXPORT_SYMBOL(smd_named_open_on_edge);
+
+int smd_close(smd_channel_t *ch)
+{
+ unsigned long flags;
+ bool was_opened;
+
+ if (!ch)
+ return -EINVAL;
+
+ SMD_INFO("smd_close(%s)\n", ch->name);
+
+ spin_lock_irqsave(&smd_lock, flags);
+ list_del(&ch->ch_list);
+
+ was_opened = ch->half_ch->get_state(ch->recv) == SMD_SS_OPENED;
+ ch_set_state(ch, SMD_SS_CLOSED);
+
+ if (was_opened) {
+ list_add(&ch->ch_list, &smd_ch_closing_list);
+ spin_unlock_irqrestore(&smd_lock, flags);
+ } else {
+ spin_unlock_irqrestore(&smd_lock, flags);
+ ch->notify = do_nothing_notify;
+ mutex_lock(&smd_creation_mutex);
+ list_add(&ch->ch_list, &smd_ch_closed_list);
+ mutex_unlock(&smd_creation_mutex);
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(smd_close);
+
+int smd_write_start(smd_channel_t *ch, int len)
+{
+ int ret;
+ unsigned int hdr[5];
+
+ if (!ch) {
+ pr_err("%s: Invalid channel specified\n", __func__);
+ return -ENODEV;
+ }
+ if (!ch->is_pkt_ch) {
+ pr_err("%s: non-packet channel specified\n", __func__);
+ return -EACCES;
+ }
+ if (len < 1) {
+ pr_err("%s: invalid length: %d\n", __func__, len);
+ return -EINVAL;
+ }
+
+ if (ch->pending_pkt_sz) {
+ pr_err("%s: packet of size: %d in progress\n", __func__,
+ ch->pending_pkt_sz);
+ return -EBUSY;
+ }
+ ch->pending_pkt_sz = len;
+
+ if (smd_stream_write_avail(ch) < (SMD_HEADER_SIZE)) {
+ ch->pending_pkt_sz = 0;
+ SMD_DBG("%s: no space to write packet header\n", __func__);
+ return -EAGAIN;
+ }
+
+ hdr[0] = len;
+ hdr[1] = hdr[2] = hdr[3] = hdr[4] = 0;
+
+ ret = smd_stream_write(ch, hdr, sizeof(hdr), true);
+ if (ret < 0 || ret != sizeof(hdr)) {
+ ch->pending_pkt_sz = 0;
+ pr_err("%s: packet header failed to write\n", __func__);
+ return -EPERM;
+ }
+ return 0;
+}
+EXPORT_SYMBOL(smd_write_start);
+
+int smd_write_segment(smd_channel_t *ch, const void *data, int len)
+{
+ int bytes_written;
+
+ if (!ch) {
+ pr_err("%s: Invalid channel specified\n", __func__);
+ return -ENODEV;
+ }
+ if (len < 1) {
+ pr_err("%s: invalid length: %d\n", __func__, len);
+ return -EINVAL;
+ }
+
+ if (!ch->pending_pkt_sz) {
+ pr_err("%s: no transaction in progress\n", __func__);
+ return -ENOEXEC;
+ }
+ if (len > ch->pending_pkt_sz) {
+ pr_err("%s: segment of size: %d will make packet go over length\n",
+ __func__, len);
+ return -EINVAL;
+ }
+
+ bytes_written = smd_stream_write(ch, data, len, true);
+
+ ch->pending_pkt_sz -= bytes_written;
+
+ return bytes_written;
+}
+EXPORT_SYMBOL(smd_write_segment);
+
+int smd_write_end(smd_channel_t *ch)
+{
+ if (!ch) {
+ pr_err("%s: Invalid channel specified\n", __func__);
+ return -ENODEV;
+ }
+ if (ch->pending_pkt_sz) {
+ pr_err("%s: current packet not completely written\n", __func__);
+ return -E2BIG;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(smd_write_end);
+
+int smd_write_segment_avail(smd_channel_t *ch)
+{
+ int n;
+
+ if (!ch) {
+ pr_err("%s: Invalid channel specified\n", __func__);
+ return -ENODEV;
+ }
+ if (!ch->is_pkt_ch) {
+ pr_err("%s: non-packet channel specified\n", __func__);
+ return -ENODEV;
+ }
+
+ n = smd_stream_write_avail(ch);
+
+ /* pkt hdr already written, no need to reserve space for it */
+ if (ch->pending_pkt_sz)
+ return n;
+
+ return n > SMD_HEADER_SIZE ? n - SMD_HEADER_SIZE : 0;
+}
+EXPORT_SYMBOL(smd_write_segment_avail);
+
+int smd_read(smd_channel_t *ch, void *data, int len)
+{
+ if (!ch) {
+ pr_err("%s: Invalid channel specified\n", __func__);
+ return -ENODEV;
+ }
+
+ return ch->read(ch, data, len);
+}
+EXPORT_SYMBOL(smd_read);
+
+int smd_read_from_cb(smd_channel_t *ch, void *data, int len)
+{
+ if (!ch) {
+ pr_err("%s: Invalid channel specified\n", __func__);
+ return -ENODEV;
+ }
+
+ return ch->read_from_cb(ch, data, len);
+}
+EXPORT_SYMBOL(smd_read_from_cb);
+
+int smd_write(smd_channel_t *ch, const void *data, int len)
+{
+ if (!ch) {
+ pr_err("%s: Invalid channel specified\n", __func__);
+ return -ENODEV;
+ }
+
+ return ch->pending_pkt_sz ? -EBUSY : ch->write(ch, data, len, true);
+}
+EXPORT_SYMBOL(smd_write);
+
+int smd_read_avail(smd_channel_t *ch)
+{
+ if (!ch) {
+ pr_err("%s: Invalid channel specified\n", __func__);
+ return -ENODEV;
+ }
+
+ if (ch->current_packet > (uint32_t)INT_MAX) {
+ pr_err("%s: Invalid packet size for Edge %d and Channel %s\n",
+ __func__, ch->type, ch->name);
+ return -EFAULT;
+ }
+ return ch->read_avail(ch);
+}
+EXPORT_SYMBOL(smd_read_avail);
+
+int smd_write_avail(smd_channel_t *ch)
+{
+ if (!ch) {
+ pr_err("%s: Invalid channel specified\n", __func__);
+ return -ENODEV;
+ }
+
+ return ch->write_avail(ch);
+}
+EXPORT_SYMBOL(smd_write_avail);
+
+void smd_enable_read_intr(smd_channel_t *ch)
+{
+ if (ch)
+ ch->half_ch->set_fBLOCKREADINTR(ch->send, 0);
+}
+EXPORT_SYMBOL(smd_enable_read_intr);
+
+void smd_disable_read_intr(smd_channel_t *ch)
+{
+ if (ch)
+ ch->half_ch->set_fBLOCKREADINTR(ch->send, 1);
+}
+EXPORT_SYMBOL(smd_disable_read_intr);
+
+/**
+ * smd_mask_receive_interrupt() - Enable/disable receive interrupts for the
+ * remote processor used by a particular channel
+ * @ch: open channel handle to use for the edge
+ * @mask: 1 = mask interrupts; 0 = unmask interrupts
+ * @cpumask: cpumask for the next cpu scheduled to be woken up
+ * @returns: 0 for success; < 0 for failure
+ *
+ * Note that this enables/disables all interrupts from the remote subsystem for
+ * all channels. As such, it should be used with care and only for specific
+ * use cases such as power-collapse sequencing.
+ */
+int smd_mask_receive_interrupt(smd_channel_t *ch, bool mask,
+ const struct cpumask *cpumask)
+{
+ struct irq_chip *irq_chip;
+ struct irq_data *irq_data;
+ struct interrupt_config_item *int_cfg;
+
+ if (!ch)
+ return -EINVAL;
+
+ if (ch->type >= ARRAY_SIZE(edge_to_pids))
+ return -ENODEV;
+
+ int_cfg = &private_intr_config[edge_to_pids[ch->type].remote_pid].smd;
+
+ if (int_cfg->irq_id < 0)
+ return -ENODEV;
+
+ irq_chip = irq_get_chip(int_cfg->irq_id);
+ if (!irq_chip)
+ return -ENODEV;
+
+ irq_data = irq_get_irq_data(int_cfg->irq_id);
+ if (!irq_data)
+ return -ENODEV;
+
+ if (mask) {
+ SMD_POWER_INFO("SMD Masking interrupts from %s\n",
+ edge_to_pids[ch->type].subsys_name);
+ irq_chip->irq_mask(irq_data);
+ if (cpumask)
+ irq_set_affinity(int_cfg->irq_id, cpumask);
+ } else {
+ SMD_POWER_INFO("SMD Unmasking interrupts from %s\n",
+ edge_to_pids[ch->type].subsys_name);
+ irq_chip->irq_unmask(irq_data);
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(smd_mask_receive_interrupt);
+
+int smd_cur_packet_size(smd_channel_t *ch)
+{
+ if (!ch) {
+ pr_err("%s: Invalid channel specified\n", __func__);
+ return -ENODEV;
+ }
+
+ if (ch->current_packet > (uint32_t)INT_MAX) {
+ pr_err("%s: Invalid packet size for Edge %d and Channel %s\n",
+ __func__, ch->type, ch->name);
+ return -EFAULT;
+ }
+ return ch->current_packet;
+}
+EXPORT_SYMBOL(smd_cur_packet_size);
+
+int smd_tiocmget(smd_channel_t *ch)
+{
+ if (!ch) {
+ pr_err("%s: Invalid channel specified\n", __func__);
+ return -ENODEV;
+ }
+
+ return (ch->half_ch->get_fDSR(ch->recv) ? TIOCM_DSR : 0) |
+ (ch->half_ch->get_fCTS(ch->recv) ? TIOCM_CTS : 0) |
+ (ch->half_ch->get_fCD(ch->recv) ? TIOCM_CD : 0) |
+ (ch->half_ch->get_fRI(ch->recv) ? TIOCM_RI : 0) |
+ (ch->half_ch->get_fCTS(ch->send) ? TIOCM_RTS : 0) |
+ (ch->half_ch->get_fDSR(ch->send) ? TIOCM_DTR : 0);
+}
+EXPORT_SYMBOL(smd_tiocmget);
+
+/* this api will be called while holding smd_lock */
+int
+smd_tiocmset_from_cb(smd_channel_t *ch, unsigned int set, unsigned int clear)
+{
+ if (!ch) {
+ pr_err("%s: Invalid channel specified\n", __func__);
+ return -ENODEV;
+ }
+
+ if (set & TIOCM_DTR)
+ ch->half_ch->set_fDSR(ch->send, 1);
+
+ if (set & TIOCM_RTS)
+ ch->half_ch->set_fCTS(ch->send, 1);
+
+ if (clear & TIOCM_DTR)
+ ch->half_ch->set_fDSR(ch->send, 0);
+
+ if (clear & TIOCM_RTS)
+ ch->half_ch->set_fCTS(ch->send, 0);
+
+ ch->half_ch->set_fSTATE(ch->send, 1);
+ barrier();
+ ch->notify_other_cpu(ch);
+
+ return 0;
+}
+EXPORT_SYMBOL(smd_tiocmset_from_cb);
+
+int smd_tiocmset(smd_channel_t *ch, unsigned int set, unsigned int clear)
+{
+ unsigned long flags;
+
+ if (!ch) {
+ pr_err("%s: Invalid channel specified\n", __func__);
+ return -ENODEV;
+ }
+
+ spin_lock_irqsave(&smd_lock, flags);
+ smd_tiocmset_from_cb(ch, set, clear);
+ spin_unlock_irqrestore(&smd_lock, flags);
+
+ return 0;
+}
+EXPORT_SYMBOL(smd_tiocmset);
+
+int smd_is_pkt_avail(smd_channel_t *ch)
+{
+ unsigned long flags;
+
+ if (!ch || !ch->is_pkt_ch)
+ return -EINVAL;
+
+ if (ch->current_packet)
+ return 1;
+
+ spin_lock_irqsave(&smd_lock, flags);
+ update_packet_state(ch);
+ spin_unlock_irqrestore(&smd_lock, flags);
+
+ return ch->current_packet ? 1 : 0;
+}
+EXPORT_SYMBOL(smd_is_pkt_avail);
+
+static int smsm_cb_init(void)
+{
+ struct smsm_state_info *state_info;
+ int n;
+ int ret = 0;
+
+ smsm_states = kmalloc(sizeof(struct smsm_state_info)*SMSM_NUM_ENTRIES,
+ GFP_KERNEL);
+
+ if (!smsm_states) {
+ pr_err("%s: SMSM init failed\n", __func__);
+ return -ENOMEM;
+ }
+
+ smsm_cb_wq = create_singlethread_workqueue("smsm_cb_wq");
+ if (!smsm_cb_wq) {
+ pr_err("%s: smsm_cb_wq creation failed\n", __func__);
+ kfree(smsm_states);
+ return -EFAULT;
+ }
+
+ mutex_lock(&smsm_lock);
+ for (n = 0; n < SMSM_NUM_ENTRIES; n++) {
+ state_info = &smsm_states[n];
+ state_info->last_value = __raw_readl(SMSM_STATE_ADDR(n));
+ state_info->intr_mask_set = 0x0;
+ state_info->intr_mask_clear = 0x0;
+ INIT_LIST_HEAD(&state_info->callbacks);
+ }
+ mutex_unlock(&smsm_lock);
+
+ return ret;
+}
+
+static int smsm_init(void)
+{
+ int i;
+ struct smsm_size_info_type *smsm_size_info;
+ unsigned long flags;
+ unsigned long j_start;
+ static int first = 1;
+ remote_spinlock_t *remote_spinlock;
+
+ if (!first)
+ return 0;
+ first = 0;
+
+ /* Verify that remote spinlock is not deadlocked */
+ remote_spinlock = smem_get_remote_spinlock();
+ j_start = jiffies;
+ while (!remote_spin_trylock_irqsave(remote_spinlock, flags)) {
+ if (jiffies_to_msecs(jiffies - j_start) > RSPIN_INIT_WAIT_MS) {
+ panic("%s: Remote processor %d will not release spinlock\n",
+ __func__, remote_spin_owner(remote_spinlock));
+ }
+ }
+ remote_spin_unlock_irqrestore(remote_spinlock, flags);
+
+ smsm_size_info = smem_find(SMEM_SMSM_SIZE_INFO,
+ sizeof(struct smsm_size_info_type), 0,
+ SMEM_ANY_HOST_FLAG);
+ if (smsm_size_info) {
+ SMSM_NUM_ENTRIES = smsm_size_info->num_entries;
+ SMSM_NUM_HOSTS = smsm_size_info->num_hosts;
+ }
+
+ i = kfifo_alloc(&smsm_snapshot_fifo,
+ sizeof(uint32_t) * SMSM_NUM_ENTRIES * SMSM_SNAPSHOT_CNT,
+ GFP_KERNEL);
+ if (i) {
+ pr_err("%s: SMSM state fifo alloc failed %d\n", __func__, i);
+ return i;
+ }
+ wakeup_source_init(&smsm_snapshot_ws, "smsm_snapshot");
+
+ if (!smsm_info.state) {
+ smsm_info.state = smem_alloc(ID_SHARED_STATE,
+ SMSM_NUM_ENTRIES *
+ sizeof(uint32_t), 0,
+ SMEM_ANY_HOST_FLAG);
+
+ if (smsm_info.state)
+ __raw_writel(0, SMSM_STATE_ADDR(SMSM_APPS_STATE));
+ }
+
+ if (!smsm_info.intr_mask) {
+ smsm_info.intr_mask = smem_alloc(SMEM_SMSM_CPU_INTR_MASK,
+ SMSM_NUM_ENTRIES *
+ SMSM_NUM_HOSTS *
+ sizeof(uint32_t), 0,
+ SMEM_ANY_HOST_FLAG);
+
+ if (smsm_info.intr_mask) {
+ for (i = 0; i < SMSM_NUM_ENTRIES; i++)
+ __raw_writel(0x0,
+ SMSM_INTR_MASK_ADDR(i, SMSM_APPS));
+
+ /* Configure legacy modem bits */
+ __raw_writel(LEGACY_MODEM_SMSM_MASK,
+ SMSM_INTR_MASK_ADDR(SMSM_MODEM_STATE,
+ SMSM_APPS));
+ }
+ }
+
+ i = smsm_cb_init();
+ if (i)
+ return i;
+
+ wmb(); /* Make sure memory is visible before proceeding */
+
+ smsm_pm_notifier(&smsm_pm_nb, PM_POST_SUSPEND, NULL);
+ i = register_pm_notifier(&smsm_pm_nb);
+ if (i)
+ pr_err("%s: power state notif error %d\n", __func__, i);
+
+ return 0;
+}
+
+static void smsm_cb_snapshot(uint32_t use_wakeup_source)
+{
+ int n;
+ uint32_t new_state;
+ unsigned long flags;
+ int ret;
+ uint64_t timestamp;
+
+ timestamp = sched_clock();
+ ret = kfifo_avail(&smsm_snapshot_fifo);
+ if (ret < SMSM_SNAPSHOT_SIZE) {
+ pr_err("%s: SMSM snapshot full %d\n", __func__, ret);
+ return;
+ }
+
+ /*
+ * To avoid a race condition with notify_smsm_cb_clients_worker, the
+ * following sequence must be followed:
+ * 1) increment snapshot count
+ * 2) insert data into FIFO
+ *
+ * Potentially in parallel, the worker:
+ * a) verifies >= 1 snapshots are in FIFO
+ * b) processes snapshot
+ * c) decrements reference count
+ *
+ * This order ensures that 1 will always occur before abc.
+ */
+ if (use_wakeup_source) {
+ spin_lock_irqsave(&smsm_snapshot_count_lock, flags);
+ if (smsm_snapshot_count == 0) {
+ SMSM_POWER_INFO("SMSM snapshot wake lock\n");
+ __pm_stay_awake(&smsm_snapshot_ws);
+ }
+ ++smsm_snapshot_count;
+ spin_unlock_irqrestore(&smsm_snapshot_count_lock, flags);
+ }
+
+ /* queue state entries */
+ for (n = 0; n < SMSM_NUM_ENTRIES; n++) {
+ new_state = __raw_readl(SMSM_STATE_ADDR(n));
+
+ ret = kfifo_in(&smsm_snapshot_fifo,
+ &new_state, sizeof(new_state));
+ if (ret != sizeof(new_state)) {
+ pr_err("%s: SMSM snapshot failure %d\n", __func__, ret);
+ goto restore_snapshot_count;
+ }
+ }
+
+ ret = kfifo_in(&smsm_snapshot_fifo, &timestamp, sizeof(timestamp));
+ if (ret != sizeof(timestamp)) {
+ pr_err("%s: SMSM snapshot failure %d\n", __func__, ret);
+ goto restore_snapshot_count;
+ }
+
+ /* queue wakelock usage flag */
+ ret = kfifo_in(&smsm_snapshot_fifo,
+ &use_wakeup_source, sizeof(use_wakeup_source));
+ if (ret != sizeof(use_wakeup_source)) {
+ pr_err("%s: SMSM snapshot failure %d\n", __func__, ret);
+ goto restore_snapshot_count;
+ }
+
+ queue_work(smsm_cb_wq, &smsm_cb_work);
+ return;
+
+restore_snapshot_count:
+ if (use_wakeup_source) {
+ spin_lock_irqsave(&smsm_snapshot_count_lock, flags);
+ if (smsm_snapshot_count) {
+ --smsm_snapshot_count;
+ if (smsm_snapshot_count == 0) {
+ SMSM_POWER_INFO("SMSM snapshot wake unlock\n");
+ __pm_relax(&smsm_snapshot_ws);
+ }
+ } else {
+ pr_err("%s: invalid snapshot count\n", __func__);
+ }
+ spin_unlock_irqrestore(&smsm_snapshot_count_lock, flags);
+ }
+}
+
+static irqreturn_t smsm_irq_handler(int irq, void *data)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&smem_lock, flags);
+ if (!smsm_info.state) {
+ SMSM_INFO("<SM NO STATE>\n");
+ } else {
+ unsigned int old_apps, apps;
+ unsigned int modm;
+
+ modm = __raw_readl(SMSM_STATE_ADDR(SMSM_MODEM_STATE));
+ old_apps = apps = __raw_readl(SMSM_STATE_ADDR(SMSM_APPS_STATE));
+
+ SMSM_DBG("<SM %08x %08x>\n", apps, modm);
+ if (modm & SMSM_RESET) {
+ pr_err("SMSM: Modem SMSM state changed to SMSM_RESET.\n");
+ } else if (modm & SMSM_INIT) {
+ if (!(apps & SMSM_INIT))
+ apps |= SMSM_INIT;
+ if (modm & SMSM_SMDINIT)
+ apps |= SMSM_SMDINIT;
+ }
+
+ if (old_apps != apps) {
+ SMSM_DBG("<SM %08x NOTIFY>\n", apps);
+ __raw_writel(apps, SMSM_STATE_ADDR(SMSM_APPS_STATE));
+ notify_other_smsm(SMSM_APPS_STATE, (old_apps ^ apps));
+ }
+
+ smsm_cb_snapshot(1);
+ }
+ spin_unlock_irqrestore(&smem_lock, flags);
+ return IRQ_HANDLED;
+}
+
+irqreturn_t smsm_modem_irq_handler(int irq, void *data)
+{
+ SMSM_POWER_INFO("SMSM Int Modem->Apps\n");
+ ++interrupt_stats[SMD_MODEM].smsm_in_count;
+ return smsm_irq_handler(irq, data);
+}
+
+irqreturn_t smsm_dsp_irq_handler(int irq, void *data)
+{
+ SMSM_POWER_INFO("SMSM Int LPASS->Apps\n");
+ ++interrupt_stats[SMD_Q6].smsm_in_count;
+ return smsm_irq_handler(irq, data);
+}
+
+irqreturn_t smsm_dsps_irq_handler(int irq, void *data)
+{
+ SMSM_POWER_INFO("SMSM Int DSPS->Apps\n");
+ ++interrupt_stats[SMD_DSPS].smsm_in_count;
+ return smsm_irq_handler(irq, data);
+}
+
+irqreturn_t smsm_wcnss_irq_handler(int irq, void *data)
+{
+ SMSM_POWER_INFO("SMSM Int WCNSS->Apps\n");
+ ++interrupt_stats[SMD_WCNSS].smsm_in_count;
+ return smsm_irq_handler(irq, data);
+}
+
+/*
+ * Changes the global interrupt mask. The set and clear masks are re-applied
+ * every time the global interrupt mask is updated for callback registration
+ * and de-registration.
+ *
+ * The clear mask is applied first, so if a bit is set to 1 in both the clear
+ * mask and the set mask, the result will be that the interrupt is set.
+ *
+ * @smsm_entry SMSM entry to change
+ * @clear_mask 1 = clear bit, 0 = no-op
+ * @set_mask 1 = set bit, 0 = no-op
+ *
+ * @returns 0 for success, < 0 for error
+ */
+int smsm_change_intr_mask(uint32_t smsm_entry,
+ uint32_t clear_mask, uint32_t set_mask)
+{
+ uint32_t old_mask, new_mask;
+ unsigned long flags;
+
+ if (smsm_entry >= SMSM_NUM_ENTRIES) {
+ pr_err("smsm_change_intr_mask: Invalid entry %d\n",
+ smsm_entry);
+ return -EINVAL;
+ }
+
+ if (!smsm_info.intr_mask) {
+ pr_err("smsm_change_intr_mask <SM NO STATE>\n");
+ return -EIO;
+ }
+
+ spin_lock_irqsave(&smem_lock, flags);
+ smsm_states[smsm_entry].intr_mask_clear = clear_mask;
+ smsm_states[smsm_entry].intr_mask_set = set_mask;
+
+ old_mask = __raw_readl(SMSM_INTR_MASK_ADDR(smsm_entry, SMSM_APPS));
+ new_mask = (old_mask & ~clear_mask) | set_mask;
+ __raw_writel(new_mask, SMSM_INTR_MASK_ADDR(smsm_entry, SMSM_APPS));
+
+ wmb(); /* Make sure memory is visible before proceeding */
+ spin_unlock_irqrestore(&smem_lock, flags);
+
+ return 0;
+}
+EXPORT_SYMBOL(smsm_change_intr_mask);
+
+int smsm_get_intr_mask(uint32_t smsm_entry, uint32_t *intr_mask)
+{
+ if (smsm_entry >= SMSM_NUM_ENTRIES) {
+ pr_err("smsm_get_intr_mask: Invalid entry %d\n",
+ smsm_entry);
+ return -EINVAL;
+ }
+
+ if (!smsm_info.intr_mask) {
+ pr_err("smsm_get_intr_mask <SM NO STATE>\n");
+ return -EIO;
+ }
+
+ *intr_mask = __raw_readl(SMSM_INTR_MASK_ADDR(smsm_entry, SMSM_APPS));
+ return 0;
+}
+EXPORT_SYMBOL(smsm_get_intr_mask);
+
+int smsm_change_state(uint32_t smsm_entry,
+ uint32_t clear_mask, uint32_t set_mask)
+{
+ unsigned long flags;
+ uint32_t old_state, new_state;
+
+ if (smsm_entry >= SMSM_NUM_ENTRIES) {
+ pr_err("smsm_change_state: Invalid entry %d\n",
+ smsm_entry);
+ return -EINVAL;
+ }
+
+ if (!smsm_info.state) {
+ pr_err("smsm_change_state <SM NO STATE>\n");
+ return -EIO;
+ }
+ spin_lock_irqsave(&smem_lock, flags);
+
+ old_state = __raw_readl(SMSM_STATE_ADDR(smsm_entry));
+ new_state = (old_state & ~clear_mask) | set_mask;
+ __raw_writel(new_state, SMSM_STATE_ADDR(smsm_entry));
+ SMSM_POWER_INFO("%s %d:%08x->%08x", __func__, smsm_entry,
+ old_state, new_state);
+ notify_other_smsm(SMSM_APPS_STATE, (old_state ^ new_state));
+
+ spin_unlock_irqrestore(&smem_lock, flags);
+
+ return 0;
+}
+EXPORT_SYMBOL(smsm_change_state);
+
+uint32_t smsm_get_state(uint32_t smsm_entry)
+{
+ uint32_t rv = 0;
+
+ /* needs interface change to return error code */
+ if (smsm_entry >= SMSM_NUM_ENTRIES) {
+ pr_err("smsm_get_state: Invalid entry %d\n",
+ smsm_entry);
+ return 0;
+ }
+
+ if (!smsm_info.state)
+ pr_err("smsm_get_state <SM NO STATE>\n");
+ else
+ rv = __raw_readl(SMSM_STATE_ADDR(smsm_entry));
+
+ return rv;
+}
+EXPORT_SYMBOL(smsm_get_state);
+
+/**
+ * Performs SMSM callback client notification.
+ */
+void notify_smsm_cb_clients_worker(struct work_struct *work)
+{
+ struct smsm_state_cb_info *cb_info;
+ struct smsm_state_info *state_info;
+ int n;
+ uint32_t new_state;
+ uint32_t state_changes;
+ uint32_t use_wakeup_source;
+ int ret;
+ unsigned long flags;
+ uint64_t t_snapshot;
+ uint64_t t_start;
+ unsigned long nanosec_rem;
+
+ while (kfifo_len(&smsm_snapshot_fifo) >= SMSM_SNAPSHOT_SIZE) {
+ t_start = sched_clock();
+ mutex_lock(&smsm_lock);
+ for (n = 0; n < SMSM_NUM_ENTRIES; n++) {
+ state_info = &smsm_states[n];
+
+ ret = kfifo_out(&smsm_snapshot_fifo, &new_state,
+ sizeof(new_state));
+ if (ret != sizeof(new_state)) {
+ pr_err("%s: snapshot underflow %d\n",
+ __func__, ret);
+ mutex_unlock(&smsm_lock);
+ return;
+ }
+
+ state_changes = state_info->last_value ^ new_state;
+ if (state_changes) {
+ SMSM_POWER_INFO("SMSM Change %d: %08x->%08x\n",
+ n, state_info->last_value,
+ new_state);
+ list_for_each_entry(cb_info,
+ &state_info->callbacks, cb_list) {
+
+ if (cb_info->mask & state_changes)
+ cb_info->notify(cb_info->data,
+ state_info->last_value,
+ new_state);
+ }
+ state_info->last_value = new_state;
+ }
+ }
+
+ ret = kfifo_out(&smsm_snapshot_fifo, &t_snapshot,
+ sizeof(t_snapshot));
+ if (ret != sizeof(t_snapshot)) {
+ pr_err("%s: snapshot underflow %d\n",
+ __func__, ret);
+ mutex_unlock(&smsm_lock);
+ return;
+ }
+
+ /* read wakelock flag */
+ ret = kfifo_out(&smsm_snapshot_fifo, &use_wakeup_source,
+ sizeof(use_wakeup_source));
+ if (ret != sizeof(use_wakeup_source)) {
+ pr_err("%s: snapshot underflow %d\n",
+ __func__, ret);
+ mutex_unlock(&smsm_lock);
+ return;
+ }
+ mutex_unlock(&smsm_lock);
+
+ if (use_wakeup_source) {
+ spin_lock_irqsave(&smsm_snapshot_count_lock, flags);
+ if (smsm_snapshot_count) {
+ --smsm_snapshot_count;
+ if (smsm_snapshot_count == 0) {
+ SMSM_POWER_INFO(
+ "SMSM snapshot wake unlock\n");
+ __pm_relax(&smsm_snapshot_ws);
+ }
+ } else {
+ pr_err("%s: invalid snapshot count\n",
+ __func__);
+ }
+ spin_unlock_irqrestore(&smsm_snapshot_count_lock,
+ flags);
+ }
+
+ t_start = t_start - t_snapshot;
+ nanosec_rem = do_div(t_start, 1000000000U);
+ SMSM_POWER_INFO(
+ "SMSM snapshot queue response time %6u.%09lu s\n",
+ (unsigned int)t_start, nanosec_rem);
+ }
+}
+
+
+/**
+ * Registers callback for SMSM state notifications when the specified
+ * bits change.
+ *
+ * @smsm_entry Processor entry to register for
+ * @mask Bits to monitor for changes
+ * @notify Notification function to register
+ * @data Opaque data passed in to callback
+ *
+ * @returns Status code
+ * <0 error code
+ * 0 inserted new entry
+ * 1 updated mask of existing entry
+ */
+int smsm_state_cb_register(uint32_t smsm_entry, uint32_t mask,
+ void (*notify)(void *, uint32_t, uint32_t), void *data)
+{
+ struct smsm_state_info *state;
+ struct smsm_state_cb_info *cb_info;
+ struct smsm_state_cb_info *cb_found = NULL;
+ uint32_t new_mask = 0;
+ int ret = 0;
+
+ if (smsm_entry >= SMSM_NUM_ENTRIES)
+ return -EINVAL;
+
+ mutex_lock(&smsm_lock);
+
+ if (!smsm_states) {
+ /* smsm not yet initialized */
+ ret = -ENODEV;
+ goto cleanup;
+ }
+
+ state = &smsm_states[smsm_entry];
+ list_for_each_entry(cb_info,
+ &state->callbacks, cb_list) {
+ if (!ret && (cb_info->notify == notify) &&
+ (cb_info->data == data)) {
+ cb_info->mask |= mask;
+ cb_found = cb_info;
+ ret = 1;
+ }
+ new_mask |= cb_info->mask;
+ }
+
+ if (!cb_found) {
+ cb_info = kmalloc(sizeof(struct smsm_state_cb_info),
+ GFP_ATOMIC);
+ if (!cb_info) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
+ cb_info->mask = mask;
+ cb_info->notify = notify;
+ cb_info->data = data;
+ INIT_LIST_HEAD(&cb_info->cb_list);
+ list_add_tail(&cb_info->cb_list,
+ &state->callbacks);
+ new_mask |= mask;
+ }
+
+ /* update interrupt notification mask */
+ if (smsm_entry == SMSM_MODEM_STATE)
+ new_mask |= LEGACY_MODEM_SMSM_MASK;
+
+ if (smsm_info.intr_mask) {
+ unsigned long flags;
+
+ spin_lock_irqsave(&smem_lock, flags);
+ new_mask = (new_mask & ~state->intr_mask_clear)
+ | state->intr_mask_set;
+ __raw_writel(new_mask,
+ SMSM_INTR_MASK_ADDR(smsm_entry, SMSM_APPS));
+ wmb(); /* Make sure memory is visible before proceeding */
+ spin_unlock_irqrestore(&smem_lock, flags);
+ }
+
+cleanup:
+ mutex_unlock(&smsm_lock);
+ return ret;
+}
+EXPORT_SYMBOL(smsm_state_cb_register);
+
+
+/**
+ * Deregisters for SMSM state notifications for the specified bits.
+ *
+ * @smsm_entry Processor entry to deregister
+ * @mask Bits to deregister (if result is 0, callback is removed)
+ * @notify Notification function to deregister
+ * @data Opaque data passed in to callback
+ *
+ * @returns Status code
+ * <0 error code
+ * 0 not found
+ * 1 updated mask
+ * 2 removed callback
+ */
+int smsm_state_cb_deregister(uint32_t smsm_entry, uint32_t mask,
+ void (*notify)(void *, uint32_t, uint32_t), void *data)
+{
+ struct smsm_state_cb_info *cb_info;
+ struct smsm_state_cb_info *cb_tmp;
+ struct smsm_state_info *state;
+ uint32_t new_mask = 0;
+ int ret = 0;
+
+ if (smsm_entry >= SMSM_NUM_ENTRIES)
+ return -EINVAL;
+
+ mutex_lock(&smsm_lock);
+
+ if (!smsm_states) {
+ /* smsm not yet initialized */
+ mutex_unlock(&smsm_lock);
+ return -ENODEV;
+ }
+
+ state = &smsm_states[smsm_entry];
+ list_for_each_entry_safe(cb_info, cb_tmp,
+ &state->callbacks, cb_list) {
+ if (!ret && (cb_info->notify == notify) &&
+ (cb_info->data == data)) {
+ cb_info->mask &= ~mask;
+ ret = 1;
+ if (!cb_info->mask) {
+ /* no mask bits set, remove callback */
+ list_del(&cb_info->cb_list);
+ kfree(cb_info);
+ ret = 2;
+ continue;
+ }
+ }
+ new_mask |= cb_info->mask;
+ }
+
+ /* update interrupt notification mask */
+ if (smsm_entry == SMSM_MODEM_STATE)
+ new_mask |= LEGACY_MODEM_SMSM_MASK;
+
+ if (smsm_info.intr_mask) {
+ unsigned long flags;
+
+ spin_lock_irqsave(&smem_lock, flags);
+ new_mask = (new_mask & ~state->intr_mask_clear)
+ | state->intr_mask_set;
+ __raw_writel(new_mask,
+ SMSM_INTR_MASK_ADDR(smsm_entry, SMSM_APPS));
+ wmb(); /* Make sure memory is visible before proceeding */
+ spin_unlock_irqrestore(&smem_lock, flags);
+ }
+
+ mutex_unlock(&smsm_lock);
+ return ret;
+}
+EXPORT_SYMBOL(smsm_state_cb_deregister);
+
+static int restart_notifier_cb(struct notifier_block *this,
+ unsigned long code,
+ void *data);
+
+static struct restart_notifier_block restart_notifiers[] = {
+ {SMD_MODEM, "modem", .nb.notifier_call = restart_notifier_cb},
+ {SMD_Q6, "lpass", .nb.notifier_call = restart_notifier_cb},
+ {SMD_WCNSS, "wcnss", .nb.notifier_call = restart_notifier_cb},
+ {SMD_DSPS, "dsps", .nb.notifier_call = restart_notifier_cb},
+ {SMD_MODEM, "gss", .nb.notifier_call = restart_notifier_cb},
+ {SMD_Q6, "adsp", .nb.notifier_call = restart_notifier_cb},
+ {SMD_DSPS, "slpi", .nb.notifier_call = restart_notifier_cb},
+};
+
+static int restart_notifier_cb(struct notifier_block *this,
+ unsigned long code,
+ void *data)
+{
+ remote_spinlock_t *remote_spinlock;
+
+ /*
+ * Some SMD or SMSM clients assume SMD/SMSM SSR handling will be
+ * done in the AFTER_SHUTDOWN level. If this ever changes, extra
+ * care should be taken to verify no clients are broken.
+ */
+ if (code == SUBSYS_AFTER_SHUTDOWN) {
+ struct restart_notifier_block *notifier;
+
+ notifier = container_of(this,
+ struct restart_notifier_block, nb);
+ SMD_INFO("%s: ssrestart for processor %d ('%s')\n",
+ __func__, notifier->processor,
+ notifier->name);
+
+ remote_spinlock = smem_get_remote_spinlock();
+ remote_spin_release(remote_spinlock, notifier->processor);
+ remote_spin_release_all(notifier->processor);
+
+ smd_channel_reset(notifier->processor);
+ }
+
+ return NOTIFY_DONE;
+}
+
+/**
+ * smd_post_init() - SMD post initialization
+ * @remote_pid: remote pid that has been initialized. Ignored when is_legacy=1
+ *
+ * This function is used by the device tree initialization to complete the SMD
+ * init sequence.
+ */
+void smd_post_init(unsigned int remote_pid)
+{
+ smd_channel_probe_now(&remote_info[remote_pid]);
+}
+
+/**
+ * smsm_post_init() - SMSM post initialization
+ * @returns: 0 for success, standard Linux error code otherwise
+ *
+ * This function is used by the legacy and device tree initialization
+ * to complete the SMSM init sequence.
+ */
+int smsm_post_init(void)
+{
+ int ret;
+
+ ret = smsm_init();
+ if (ret) {
+ pr_err("smsm_init() failed ret = %d\n", ret);
+ return ret;
+ }
+ smsm_irq_handler(0, NULL);
+
+ return ret;
+}
+
+/**
+ * smd_get_intr_config() - Get interrupt configuration structure
+ * @edge: edge type identifies local and remote processor
+ * @returns: pointer to interrupt configuration
+ *
+ * This function returns the interrupt configuration of remote processor
+ * based on the edge type.
+ */
+struct interrupt_config *smd_get_intr_config(uint32_t edge)
+{
+ if (edge >= ARRAY_SIZE(edge_to_pids))
+ return NULL;
+ return &private_intr_config[edge_to_pids[edge].remote_pid];
+}
+
+/**
+ * smd_get_edge_remote_pid() - Get the remote processor ID
+ * @edge: edge type identifies local and remote processor
+ * @returns: remote processor ID
+ *
+ * This function returns remote processor ID based on edge type.
+ */
+int smd_edge_to_remote_pid(uint32_t edge)
+{
+ if (edge >= ARRAY_SIZE(edge_to_pids))
+ return -EINVAL;
+ return edge_to_pids[edge].remote_pid;
+}
+
+/**
+ * smd_get_edge_local_pid() - Get the local processor ID
+ * @edge: edge type identifies local and remote processor
+ * @returns: local processor ID
+ *
+ * This function returns local processor ID based on edge type.
+ */
+int smd_edge_to_local_pid(uint32_t edge)
+{
+ if (edge >= ARRAY_SIZE(edge_to_pids))
+ return -EINVAL;
+ return edge_to_pids[edge].local_pid;
+}
+
+/**
+ * smd_proc_set_skip_pil() - Mark whether the indicated processor is loaded by PIL
+ * @pid: the processor id to mark
+ * @skip_pil: true if @pid cannot be loaded by PIL
+ */
+void smd_proc_set_skip_pil(unsigned int pid, bool skip_pil)
+{
+ if (pid >= NUM_SMD_SUBSYSTEMS) {
+ pr_err("%s: invalid pid:%d\n", __func__, pid);
+ return;
+ }
+ remote_info[pid].skip_pil = skip_pil;
+}
+
+/**
+ * smd_set_edge_subsys_name() - Set the subsystem name
+ * @edge: edge type identifies local and remote processor
+ * @subsys_name: pointer to subsystem name
+ *
+ * This function is used to set the subsystem name for given edge type.
+ */
+void smd_set_edge_subsys_name(uint32_t edge, const char *subsys_name)
+{
+ if (edge < ARRAY_SIZE(edge_to_pids)) {
+ if (subsys_name)
+ strlcpy(edge_to_pids[edge].subsys_name,
+ subsys_name, SMD_MAX_CH_NAME_LEN);
+ else
+ strlcpy(edge_to_pids[edge].subsys_name,
+ "", SMD_MAX_CH_NAME_LEN);
+ } else {
+ pr_err("%s: Invalid edge type[%d]\n", __func__, edge);
+ }
+}
+
+/**
+ * smd_reset_all_edge_subsys_name() - Reset the subsystem name
+ *
+ * This function is used to reset the subsystem name of all edges in
+ * targets where configuration information is available through
+ * device tree.
+ */
+void smd_reset_all_edge_subsys_name(void)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(edge_to_pids); i++)
+ strlcpy(edge_to_pids[i].subsys_name,
+ "", SMD_MAX_CH_NAME_LEN);
+}
+
+/**
+ * smd_set_edge_initialized() - Set the edge initialized status
+ * @edge: edge type identifies local and remote processor
+ *
+ * This function sets the initialized variable based on edge type.
+ */
+void smd_set_edge_initialized(uint32_t edge)
+{
+ if (edge < ARRAY_SIZE(edge_to_pids))
+ edge_to_pids[edge].initialized = true;
+ else
+ pr_err("%s: Invalid edge type[%d]\n", __func__, edge);
+}
+
+/**
+ * smd_cfg_smd_intr() - Set the SMD interrupt configuration
+ * @proc: remote processor ID
+ * @mask: bit position in IRQ register
+ * @ptr: IRQ register
+ *
+ * This function is called in Legacy init sequence and used to set
+ * the SMD interrupt configurations for particular processor.
+ */
+void smd_cfg_smd_intr(uint32_t proc, uint32_t mask, void *ptr)
+{
+ private_intr_config[proc].smd.out_bit_pos = mask;
+ private_intr_config[proc].smd.out_base = ptr;
+ private_intr_config[proc].smd.out_offset = 0;
+}
+
+/*
+ * smd_cfg_smsm_intr() - Set the SMSM interrupt configuration
+ * @proc: remote processor ID
+ * @mask: bit position in IRQ register
+ * @ptr: IRQ register
+ *
+ * This function is called in Legacy init sequence and used to set
+ * the SMSM interrupt configurations for particular processor.
+ */
+void smd_cfg_smsm_intr(uint32_t proc, uint32_t mask, void *ptr)
+{
+ private_intr_config[proc].smsm.out_bit_pos = mask;
+ private_intr_config[proc].smsm.out_base = ptr;
+ private_intr_config[proc].smsm.out_offset = 0;
+}
+
+static int __init modem_restart_late_init(void)
+{
+ int i;
+ void *handle;
+ struct restart_notifier_block *nb;
+
+ for (i = 0; i < ARRAY_SIZE(restart_notifiers); i++) {
+ nb = &restart_notifiers[i];
+ handle = subsys_notif_register_notifier(nb->name, &nb->nb);
+ SMD_DBG("%s: registering notif for '%s', handle=%p\n",
+ __func__, nb->name, handle);
+ }
+
+ return 0;
+}
+late_initcall(modem_restart_late_init);
+
+int __init msm_smd_init(void)
+{
+ static bool registered;
+ int rc;
+ int i;
+
+ if (registered)
+ return 0;
+
+ smd_log_ctx = ipc_log_context_create(NUM_LOG_PAGES, "smd", 0);
+ if (!smd_log_ctx) {
+ pr_err("%s: unable to create SMD logging context\n", __func__);
+ msm_smd_debug_mask = 0;
+ }
+
+ smsm_log_ctx = ipc_log_context_create(NUM_LOG_PAGES, "smsm", 0);
+ if (!smsm_log_ctx) {
+ pr_err("%s: unable to create SMSM logging context\n", __func__);
+ msm_smd_debug_mask = 0;
+ }
+
+ registered = true;
+
+ for (i = 0; i < NUM_SMD_SUBSYSTEMS; ++i) {
+ remote_info[i].remote_pid = i;
+ remote_info[i].free_space = UINT_MAX;
+ INIT_WORK(&remote_info[i].probe_work, smd_channel_probe_worker);
+ INIT_LIST_HEAD(&remote_info[i].ch_list);
+ }
+
+ channel_close_wq = create_singlethread_workqueue("smd_channel_close");
+ if (!channel_close_wq) {
+ pr_err("%s: create_singlethread_workqueue ENOMEM\n", __func__);
+ return -ENOMEM;
+ }
+
+ rc = msm_smd_driver_register();
+ if (rc) {
+ pr_err("%s: msm_smd_driver register failed %d\n",
+ __func__, rc);
+ return rc;
+ }
+ return 0;
+}
+
+arch_initcall(msm_smd_init);
+
+MODULE_DESCRIPTION("MSM Shared Memory Core");
+MODULE_AUTHOR("Brian Swetland <swetland@google.com>");
+MODULE_LICENSE("GPL");
diff --git a/drivers/soc/qcom/pil-q6v5-mss.c b/drivers/soc/qcom/pil-q6v5-mss.c
index 0477064..2ca0615 100644
--- a/drivers/soc/qcom/pil-q6v5-mss.c
+++ b/drivers/soc/qcom/pil-q6v5-mss.c
@@ -38,6 +38,7 @@
#define PROXY_TIMEOUT_MS 10000
#define MAX_SSR_REASON_LEN 256U
#define STOP_ACK_TIMEOUT_MS 1000
+#define QDSP6SS_NMI_STATUS 0x44
#define subsys_to_drv(d) container_of(d, struct modem_data, subsys_desc)
@@ -74,12 +75,17 @@ static void restart_modem(struct modem_data *drv)
static irqreturn_t modem_err_fatal_intr_handler(int irq, void *dev_id)
{
struct modem_data *drv = subsys_to_drv(dev_id);
+ u32 nmi_status = readl_relaxed(drv->q6->reg_base + QDSP6SS_NMI_STATUS);
/* Ignore if we're the one that set the force stop GPIO */
if (drv->crash_shutdown)
return IRQ_HANDLED;
- pr_err("Fatal error on the modem.\n");
+ if (nmi_status & 0x04)
+ pr_err("%s: Fatal error on the modem due to TZ NMI\n",
+ __func__);
+ else
+ pr_err("%s: Fatal error on the modem\n", __func__);
subsys_set_crash_status(drv->subsys, CRASH_STATUS_ERR_FATAL);
restart_modem(drv);
return IRQ_HANDLED;
diff --git a/drivers/soc/qcom/qdss_bridge.c b/drivers/soc/qcom/qdss_bridge.c
new file mode 100644
index 0000000..8668155
--- /dev/null
+++ b/drivers/soc/qcom/qdss_bridge.c
@@ -0,0 +1,463 @@
+/* Copyright (c) 2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#define KMSG_COMPONENT "QDSS diag bridge"
+#define pr_fmt(fmt) KMSG_COMPONENT ": " fmt
+
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/ratelimit.h>
+#include <linux/workqueue.h>
+#include <linux/platform_device.h>
+#include <linux/moduleparam.h>
+#include <linux/msm_mhi.h>
+#include <linux/usb/usb_qdss.h>
+#include "qdss_bridge.h"
+
+#define MODULE_NAME "qdss_bridge"
+
+#define QDSS_BUF_SIZE (16*1024)
+#define MHI_CLIENT_QDSS_IN 9
+
+/* Max number of objects needed */
+static int poolsize = 32;
+module_param(poolsize, int, 0644);
+
+/* Size of single buffer */
+static int itemsize = QDSS_BUF_SIZE;
+module_param(itemsize, int, 0644);
+
+static int qdss_destroy_buf_tbl(struct qdss_bridge_drvdata *drvdata)
+{
+ struct list_head *start, *temp;
+ struct qdss_buf_tbl_lst *entry = NULL;
+
+ list_for_each_safe(start, temp, &drvdata->buf_tbl) {
+ entry = list_entry(start, struct qdss_buf_tbl_lst, link);
+ list_del(&entry->link);
+ kfree(entry->buf);
+ kfree(entry->usb_req);
+ kfree(entry);
+ }
+
+ return 0;
+}
+
+static int qdss_create_buf_tbl(struct qdss_bridge_drvdata *drvdata)
+{
+ struct qdss_buf_tbl_lst *entry;
+ void *buf;
+ struct qdss_request *usb_req;
+ int i;
+
+ for (i = 0; i < poolsize; i++) {
+ entry = kzalloc(sizeof(*entry), GFP_KERNEL);
+ if (!entry)
+ goto err;
+
+ buf = kzalloc(QDSS_BUF_SIZE, GFP_KERNEL);
+ usb_req = kzalloc(sizeof(*usb_req), GFP_KERNEL);
+
+ entry->buf = buf;
+ entry->usb_req = usb_req;
+ atomic_set(&entry->available, 1);
+ list_add_tail(&entry->link, &drvdata->buf_tbl);
+
+ if (!buf || !usb_req)
+ goto err;
+ }
+
+ return 0;
+err:
+ qdss_destroy_buf_tbl(drvdata);
+ return -ENOMEM;
+}
+
+struct qdss_buf_tbl_lst *qdss_get_buf_tbl_entry(
+ struct qdss_bridge_drvdata *drvdata,
+ void *buf)
+{
+ struct qdss_buf_tbl_lst *entry;
+
+ list_for_each_entry(entry, &drvdata->buf_tbl, link) {
+ if (atomic_read(&entry->available))
+ continue;
+ if (entry->buf == buf)
+ return entry;
+ }
+
+ return NULL;
+}
+
+struct qdss_buf_tbl_lst *qdss_get_entry(struct qdss_bridge_drvdata *drvdata)
+{
+ struct qdss_buf_tbl_lst *item;
+
+ list_for_each_entry(item, &drvdata->buf_tbl, link)
+ if (atomic_cmpxchg(&item->available, 1, 0) == 1)
+ return item;
+
+ return NULL;
+}
+
+static void qdss_buf_tbl_remove(struct qdss_bridge_drvdata *drvdata,
+ void *buf)
+{
+ struct qdss_buf_tbl_lst *entry = NULL;
+
+ list_for_each_entry(entry, &drvdata->buf_tbl, link) {
+ if (entry->buf != buf)
+ continue;
+ atomic_set(&entry->available, 1);
+ return;
+ }
+
+ pr_err_ratelimited("Failed to find buffer for removal\n");
+}
+
+static void mhi_ch_close(struct qdss_bridge_drvdata *drvdata)
+{
+ flush_workqueue(drvdata->mhi_wq);
+ qdss_destroy_buf_tbl(drvdata);
+ mhi_close_channel(drvdata->hdl);
+}
+
+static void mhi_close_work_fn(struct work_struct *work)
+{
+ struct qdss_bridge_drvdata *drvdata =
+ container_of(work,
+ struct qdss_bridge_drvdata,
+ close_work);
+
+ usb_qdss_close(drvdata->usb_ch);
+ mhi_ch_close(drvdata);
+}
+
+static void mhi_read_work_fn(struct work_struct *work)
+{
+ int err = 0;
+ enum MHI_FLAGS mhi_flags = MHI_EOT;
+ struct qdss_buf_tbl_lst *entry;
+
+ struct qdss_bridge_drvdata *drvdata =
+ container_of(work,
+ struct qdss_bridge_drvdata,
+ read_work);
+
+ do {
+ if (!drvdata->opened)
+ break;
+ entry = qdss_get_entry(drvdata);
+ if (!entry)
+ break;
+
+ err = mhi_queue_xfer(drvdata->hdl, entry->buf, QDSS_BUF_SIZE,
+ mhi_flags);
+ if (err) {
+ pr_err_ratelimited("Unable to read from MHI buffer err:%d\n",
+ err);
+ goto fail;
+ }
+ } while (entry);
+
+ return;
+fail:
+ qdss_buf_tbl_remove(drvdata, entry->buf);
+ queue_work(drvdata->mhi_wq, &drvdata->read_work);
+}
+
+static int mhi_queue_read(struct qdss_bridge_drvdata *drvdata)
+{
+ queue_work(drvdata->mhi_wq, &(drvdata->read_work));
+ return 0;
+}
+
+static int usb_write(struct qdss_bridge_drvdata *drvdata,
+ struct mhi_result *result)
+{
+ int ret = 0;
+ struct qdss_buf_tbl_lst *entry;
+
+ entry = qdss_get_buf_tbl_entry(drvdata, result->buf_addr);
+ if (!entry)
+ return -EINVAL;
+
+ entry->usb_req->buf = result->buf_addr;
+ entry->usb_req->length = result->bytes_xferd;
+ ret = usb_qdss_data_write(drvdata->usb_ch, entry->usb_req);
+
+ return ret;
+}
+
+static void mhi_read_done_work_fn(struct work_struct *work)
+{
+ unsigned char *buf = NULL;
+ struct mhi_result result;
+ int err = 0;
+ struct qdss_bridge_drvdata *drvdata =
+ container_of(work,
+ struct qdss_bridge_drvdata,
+ read_done_work);
+
+ do {
+ err = mhi_poll_inbound(drvdata->hdl, &result);
+ if (err) {
+ pr_debug("MHI poll failed err:%d\n", err);
+ break;
+ }
+ buf = result.buf_addr;
+ if (!buf)
+ break;
+ err = usb_write(drvdata, &result);
+ if (err)
+ qdss_buf_tbl_remove(drvdata, buf);
+ } while (1);
+}
+
+static void usb_write_done(struct qdss_bridge_drvdata *drvdata,
+ struct qdss_request *d_req)
+{
+ if (d_req->status) {
+ pr_err_ratelimited("USB write failed err:%d\n", d_req->status);
+ mhi_queue_read(drvdata);
+ return;
+ }
+ qdss_buf_tbl_remove(drvdata, d_req->buf);
+ mhi_queue_read(drvdata);
+}
+
+static void usb_notifier(void *priv, unsigned int event,
+ struct qdss_request *d_req, struct usb_qdss_ch *ch)
+{
+ struct qdss_bridge_drvdata *drvdata = priv;
+
+ if (!drvdata)
+ return;
+
+ switch (event) {
+ case USB_QDSS_CONNECT:
+ usb_qdss_alloc_req(drvdata->usb_ch, poolsize, 0);
+ mhi_queue_read(drvdata);
+ break;
+
+ case USB_QDSS_DISCONNECT:
+ /* Leave MHI/USB open. Only close on MHI disconnect */
+ break;
+
+ case USB_QDSS_DATA_WRITE_DONE:
+ usb_write_done(drvdata, d_req);
+ break;
+
+ default:
+ break;
+ }
+}
+
+static int mhi_ch_open(struct qdss_bridge_drvdata *drvdata)
+{
+ int ret;
+
+ if (drvdata->opened)
+ return 0;
+
+ ret = mhi_open_channel(drvdata->hdl);
+ if (ret) {
+ pr_err("Unable to open MHI channel\n");
+ return ret;
+ }
+
+ ret = mhi_get_free_desc(drvdata->hdl);
+ if (ret <= 0)
+ return -EIO;
+
+ drvdata->opened = 1;
+ return 0;
+}
+
+static void qdss_bridge_open_work_fn(struct work_struct *work)
+{
+ struct qdss_bridge_drvdata *drvdata =
+ container_of(work,
+ struct qdss_bridge_drvdata,
+ open_work);
+ int ret;
+
+ ret = mhi_ch_open(drvdata);
+ if (ret)
+ goto err_open;
+
+ ret = qdss_create_buf_tbl(drvdata);
+ if (ret)
+ goto err;
+
+ drvdata->usb_ch = usb_qdss_open("qdss_mdm", drvdata, usb_notifier);
+ if (IS_ERR_OR_NULL(drvdata->usb_ch)) {
+ ret = drvdata->usb_ch ? PTR_ERR(drvdata->usb_ch) : -ENODEV;
+ goto err;
+ }
+
+ return;
+err:
+ mhi_ch_close(drvdata);
+err_open:
+ pr_err("Open work failed with err:%d\n", ret);
+}
+
+static void mhi_notifier(struct mhi_cb_info *cb_info)
+{
+ struct mhi_result *result;
+ struct qdss_bridge_drvdata *drvdata;
+
+ if (!cb_info)
+ return;
+
+ result = cb_info->result;
+ if (!result) {
+ pr_err_ratelimited("Failed to obtain MHI result\n");
+ return;
+ }
+
+ drvdata = (struct qdss_bridge_drvdata *)cb_info->result->user_data;
+ if (!drvdata) {
+ pr_err_ratelimited("MHI returned invalid drvdata\n");
+ return;
+ }
+
+ switch (cb_info->cb_reason) {
+ case MHI_CB_MHI_ENABLED:
+ queue_work(drvdata->mhi_wq, &drvdata->open_work);
+ break;
+
+ case MHI_CB_XFER:
+ if (!drvdata->opened)
+ break;
+
+ queue_work(drvdata->mhi_wq, &drvdata->read_done_work);
+ break;
+
+ case MHI_CB_MHI_DISABLED:
+ if (!drvdata->opened)
+ break;
+
+ drvdata->opened = 0;
+ queue_work(drvdata->mhi_wq, &drvdata->close_work);
+ break;
+
+ default:
+ pr_err_ratelimited("MHI returned invalid cb reason 0x%x\n",
+ cb_info->cb_reason);
+ break;
+ }
+}
+
+static int qdss_mhi_register_ch(struct qdss_bridge_drvdata *drvdata)
+{
+ struct mhi_client_info_t *client_info;
+ int ret;
+ struct mhi_client_info_t *mhi_info;
+
+ client_info = devm_kzalloc(drvdata->dev, sizeof(*client_info),
+ GFP_KERNEL);
+ if (!client_info)
+ return -ENOMEM;
+
+ client_info->mhi_client_cb = mhi_notifier;
+ drvdata->client_info = client_info;
+
+ mhi_info = client_info;
+ mhi_info->chan = MHI_CLIENT_QDSS_IN;
+ mhi_info->dev = drvdata->dev;
+ mhi_info->node_name = "qcom,mhi";
+ mhi_info->user_data = drvdata;
+
+ ret = mhi_register_channel(&drvdata->hdl, mhi_info);
+ return ret;
+}
+
+int qdss_mhi_init(struct qdss_bridge_drvdata *drvdata)
+{
+ int ret;
+
+ drvdata->mhi_wq = create_singlethread_workqueue(MODULE_NAME);
+ if (!drvdata->mhi_wq)
+ return -ENOMEM;
+
+ INIT_WORK(&(drvdata->read_work), mhi_read_work_fn);
+ INIT_WORK(&(drvdata->read_done_work), mhi_read_done_work_fn);
+ INIT_WORK(&(drvdata->open_work), qdss_bridge_open_work_fn);
+ INIT_WORK(&(drvdata->close_work), mhi_close_work_fn);
+ INIT_LIST_HEAD(&drvdata->buf_tbl);
+ drvdata->opened = 0;
+
+ ret = qdss_mhi_register_ch(drvdata);
+ if (ret) {
+ destroy_workqueue(drvdata->mhi_wq);
+ pr_err("Unable to register MHI read channel err:%d\n", ret);
+ return ret;
+ }
+
+ return 0;
+}
+
+static int qdss_mhi_probe(struct platform_device *pdev)
+{
+ int ret;
+ struct device *dev = &pdev->dev;
+ struct qdss_bridge_drvdata *drvdata;
+
+ drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL);
+ if (!drvdata)
+ return -ENOMEM;
+
+ drvdata->dev = &pdev->dev;
+ platform_set_drvdata(pdev, drvdata);
+
+ ret = qdss_mhi_init(drvdata);
+ if (ret)
+ goto err;
+
+ return 0;
+err:
+ pr_err("Device probe failed err:%d\n", ret);
+ return ret;
+}
+
+static const struct of_device_id qdss_mhi_table[] = {
+ {.compatible = "qcom,qdss-mhi"},
+ {},
+};
+
+static struct platform_driver qdss_mhi_driver = {
+ .probe = qdss_mhi_probe,
+ .driver = {
+ .name = MODULE_NAME,
+ .owner = THIS_MODULE,
+ .of_match_table = qdss_mhi_table,
+ },
+};
+
+static int __init qdss_bridge_init(void)
+{
+ return platform_driver_register(&qdss_mhi_driver);
+}
+
+static void __exit qdss_bridge_exit(void)
+{
+ platform_driver_unregister(&qdss_mhi_driver);
+}
+
+module_init(qdss_bridge_init);
+module_exit(qdss_bridge_exit);
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("QDSS Bridge driver");
diff --git a/drivers/soc/qcom/qdss_bridge.h b/drivers/soc/qcom/qdss_bridge.h
new file mode 100644
index 0000000..97b9c40
--- /dev/null
+++ b/drivers/soc/qcom/qdss_bridge.h
@@ -0,0 +1,37 @@
+/* Copyright (c) 2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _QDSS_BRIDGE_H
+#define _QDSS_BRIDGE_H
+
+struct qdss_buf_tbl_lst {
+ struct list_head link;
+ unsigned char *buf;
+ struct qdss_request *usb_req;
+ atomic_t available;
+};
+
+struct qdss_bridge_drvdata {
+ struct device *dev;
+ bool opened;
+ struct work_struct read_work;
+ struct work_struct read_done_work;
+ struct work_struct open_work;
+ struct work_struct close_work;
+ struct workqueue_struct *mhi_wq;
+ struct mhi_client_handle *hdl;
+ struct mhi_client_info_t *client_info;
+ struct list_head buf_tbl;
+ struct usb_qdss_ch *usb_ch;
+};
+
+#endif
diff --git a/drivers/soc/qcom/ramdump.c b/drivers/soc/qcom/ramdump.c
index e4c1bb8..7758c64 100644
--- a/drivers/soc/qcom/ramdump.c
+++ b/drivers/soc/qcom/ramdump.c
@@ -454,23 +454,6 @@ static int _do_ramdump(void *handle, struct ramdump_segment *segments,
}
-static inline struct elf_shdr *elf_sheader(struct elfhdr *hdr)
-{
- return (struct elf_shdr *)((size_t)hdr + (size_t)hdr->e_shoff);
-}
-
-static inline struct elf_shdr *elf_section(struct elfhdr *hdr, int idx)
-{
- return &elf_sheader(hdr)[idx];
-}
-
-static inline char *elf_str_table(struct elfhdr *hdr)
-{
- if (hdr->e_shstrndx == SHN_UNDEF)
- return NULL;
- return (char *)hdr + elf_section(hdr, hdr->e_shstrndx)->sh_offset;
-}
-
static inline unsigned int set_section_name(const char *name,
struct elfhdr *ehdr)
{
diff --git a/drivers/soc/qcom/rpm-smd-debug.c b/drivers/soc/qcom/rpm-smd-debug.c
new file mode 100644
index 0000000..6ae9f08
--- /dev/null
+++ b/drivers/soc/qcom/rpm-smd-debug.c
@@ -0,0 +1,151 @@
+/* Copyright (c) 2013-2014, 2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#define pr_fmt(fmt) "rpm-smd-debug: %s(): " fmt, __func__
+
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/debugfs.h>
+#include <linux/list.h>
+#include <linux/io.h>
+#include <linux/uaccess.h>
+#include <linux/slab.h>
+#include <soc/qcom/rpm-smd.h>
+
+#define MAX_MSG_BUFFER 350
+#define MAX_KEY_VALUE_PAIRS 20
+
+static struct dentry *rpm_debugfs_dir;
+
+static u32 string_to_uint(const u8 *str)
+{
+ int i, len;
+ u32 output = 0;
+
+ len = strnlen(str, sizeof(u32));
+ for (i = 0; i < len; i++)
+ output |= str[i] << (i * 8);
+
+ return output;
+}
+
+static ssize_t rsc_ops_write(struct file *fp, const char __user *user_buffer,
+ size_t count, loff_t *position)
+{
+ char buf[MAX_MSG_BUFFER], rsc_type_str[6] = {}, rpm_set[8] = {},
+ key_str[6] = {};
+ int i, pos = -1, set = -1, nelems = -1;
+ char *cmp;
+ uint32_t rsc_type = 0, rsc_id = 0, key = 0, data = 0;
+ struct msm_rpm_request *req;
+
+ count = min(count, sizeof(buf) - 1);
+ if (copy_from_user(&buf, user_buffer, count))
+ return -EFAULT;
+ buf[count] = '\0';
+ cmp = strstrip(buf);
+
+ if (sscanf(cmp, "%7s %5s %u %d %n", rpm_set, rsc_type_str,
+ &rsc_id, &nelems, &pos) != 4) {
+ pr_err("Invalid number of arguments passed\n");
+ goto err;
+ }
+
+ if (strlen(rpm_set) > 6 || strlen(rsc_type_str) > 4) {
+ pr_err("Invalid value of set or resource type\n");
+ goto err;
+ }
+
+ if (!strcmp(rpm_set, "active"))
+ set = 0;
+ else if (!strcmp(rpm_set, "sleep"))
+ set = 1;
+
+ rsc_type = string_to_uint(rsc_type_str);
+
+ if (set < 0 || nelems < 0) {
+ pr_err("Invalid value of set or nelems\n");
+ goto err;
+ }
+ if (nelems > MAX_KEY_VALUE_PAIRS) {
+ pr_err("Exceeded maximum number of key-value entries\n");
+ goto err;
+ }
+
+ req = msm_rpm_create_request(set, rsc_type, rsc_id, nelems);
+ if (!req)
+ return -ENOMEM;
+
+ for (i = 0; i < nelems; i++) {
+ cmp += pos;
+ if (sscanf(cmp, "%5s %n", key_str, &pos) != 1) {
+ pr_err("Invalid number of arguments passed\n");
+ goto err_request;
+ }
+
+ if (strlen(key_str) > 4) {
+ pr_err("Key value cannot be more than 4 characters\n");
+ goto err_request;
+ }
+ key = string_to_uint(key_str);
+ if (!key) {
+ pr_err("Key values entered incorrectly\n");
+ goto err_request;
+ }
+
+ cmp += pos;
+ if (sscanf(cmp, "%u %n", &data, &pos) != 1) {
+ pr_err("Invalid number of arguments passed\n");
+ goto err_request;
+ }
+
+ if (msm_rpm_add_kvp_data(req, key,
+ (void *)&data, sizeof(data)))
+ goto err_request;
+ }
+
+ if (msm_rpm_wait_for_ack(msm_rpm_send_request(req)))
+ pr_err("Sending the RPM message failed\n");
+
+err_request:
+ msm_rpm_free_request(req);
+err:
+ return count;
+}
+
+static const struct file_operations rsc_ops = {
+ .write = rsc_ops_write,
+};
+
+static int __init rpm_smd_debugfs_init(void)
+{
+ rpm_debugfs_dir = debugfs_create_dir("rpm_send_msg", NULL);
+ if (!rpm_debugfs_dir)
+ return -ENOMEM;
+
+ if (!debugfs_create_file("message", 0200, rpm_debugfs_dir, NULL,
+ &rsc_ops))
+ return -ENOMEM;
+
+ return 0;
+}
+late_initcall(rpm_smd_debugfs_init);
+
+static void __exit rpm_smd_debugfs_exit(void)
+{
+ debugfs_remove_recursive(rpm_debugfs_dir);
+}
+module_exit(rpm_smd_debugfs_exit);
+
+MODULE_DESCRIPTION("RPM SMD Debug Driver");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/soc/qcom/rpm-smd.c b/drivers/soc/qcom/rpm-smd.c
new file mode 100644
index 0000000..3fc7fbf
--- /dev/null
+++ b/drivers/soc/qcom/rpm-smd.c
@@ -0,0 +1,2168 @@
+/* Copyright (c) 2012-2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#define pr_fmt(fmt) "%s: " fmt, __func__
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/bug.h>
+#include <linux/completion.h>
+#include <linux/delay.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/irq.h>
+#include <linux/list.h>
+#include <linux/mutex.h>
+#include <linux/of_address.h>
+#include <linux/spinlock.h>
+#include <linux/string.h>
+#include <linux/device.h>
+#include <linux/notifier.h>
+#include <linux/slab.h>
+#include <linux/workqueue.h>
+#include <linux/platform_device.h>
+#include <linux/of.h>
+#include <linux/of_platform.h>
+#include <linux/rbtree.h>
+#include <soc/qcom/rpm-notifier.h>
+#include <soc/qcom/rpm-smd.h>
+#include <soc/qcom/smd.h>
+#include <soc/qcom/glink_rpm_xprt.h>
+#include <soc/qcom/glink.h>
+
+#define CREATE_TRACE_POINTS
+#include <trace/events/trace_rpm_smd.h>
+
+/* Debug Definitions */
+enum {
+ MSM_RPM_LOG_REQUEST_PRETTY = BIT(0),
+ MSM_RPM_LOG_REQUEST_RAW = BIT(1),
+ MSM_RPM_LOG_REQUEST_SHOW_MSG_ID = BIT(2),
+};
+
+static int msm_rpm_debug_mask;
+module_param_named(
+ debug_mask, msm_rpm_debug_mask, int, 0644
+);
+
+struct msm_rpm_driver_data {
+ const char *ch_name;
+ uint32_t ch_type;
+ smd_channel_t *ch_info;
+ struct work_struct work;
+ spinlock_t smd_lock_write;
+ spinlock_t smd_lock_read;
+ struct completion smd_open;
+};
+
+struct glink_apps_rpm_data {
+ const char *name;
+ const char *edge;
+ const char *xprt;
+ void *glink_handle;
+ struct glink_link_info *link_info;
+ struct glink_open_config *open_cfg;
+ struct work_struct work;
+};
+
+static bool glink_enabled;
+static struct glink_apps_rpm_data *glink_data;
+
+#define DEFAULT_BUFFER_SIZE 256
+#define DEBUG_PRINT_BUFFER_SIZE 512
+#define MAX_SLEEP_BUFFER 128
+#define GFP_FLAG(noirq) (noirq ? GFP_ATOMIC : GFP_NOIO)
+#define INV_RSC "resource does not exist"
+#define ERR "err\0"
+#define MAX_ERR_BUFFER_SIZE 128
+#define MAX_WAIT_ON_ACK 24
+#define INIT_ERROR 1
+#define V1_PROTOCOL_VERSION 0x31726576 /* rev1 */
+#define V0_PROTOCOL_VERSION 0 /* rev0 */
+#define RPM_MSG_TYPE_OFFSET 16
+#define RPM_MSG_TYPE_SIZE 8
+#define RPM_SET_TYPE_OFFSET 28
+#define RPM_SET_TYPE_SIZE 4
+#define RPM_REQ_LEN_OFFSET 0
+#define RPM_REQ_LEN_SIZE 16
+#define RPM_MSG_VERSION_OFFSET 24
+#define RPM_MSG_VERSION_SIZE 8
+#define RPM_MSG_VERSION 1
+#define RPM_MSG_SET_OFFSET 28
+#define RPM_MSG_SET_SIZE 4
+#define RPM_RSC_ID_OFFSET 16
+#define RPM_RSC_ID_SIZE 12
+#define RPM_DATA_LEN_OFFSET 0
+#define RPM_DATA_LEN_SIZE 16
+#define RPM_HDR_SIZE ((rpm_msg_fmt_ver == RPM_MSG_V0_FMT) ?\
+ sizeof(struct rpm_v0_hdr) : sizeof(struct rpm_v1_hdr))
+#define CLEAR_FIELD(offset, size) (~GENMASK(offset + size - 1, offset))
+
+static ATOMIC_NOTIFIER_HEAD(msm_rpm_sleep_notifier);
+static bool standalone;
+static int probe_status = -EPROBE_DEFER;
+static int msm_rpm_read_smd_data(char *buf);
+static void msm_rpm_process_ack(uint32_t msg_id, int errno);
+
+int msm_rpm_register_notifier(struct notifier_block *nb)
+{
+ return atomic_notifier_chain_register(&msm_rpm_sleep_notifier, nb);
+}
+
+int msm_rpm_unregister_notifier(struct notifier_block *nb)
+{
+ return atomic_notifier_chain_unregister(&msm_rpm_sleep_notifier, nb);
+}
+
+static struct workqueue_struct *msm_rpm_smd_wq;
+
+enum {
+ MSM_RPM_MSG_REQUEST_TYPE = 0,
+ MSM_RPM_MSG_TYPE_NR,
+};
+
+static const uint32_t msm_rpm_request_service_v1[MSM_RPM_MSG_TYPE_NR] = {
+ 0x716572, /* 'req\0' */
+};
+
+enum {
+ RPM_V1_REQUEST_SERVICE,
+ RPM_V1_SYSTEMDB_SERVICE,
+ RPM_V1_COMMAND_SERVICE,
+ RPM_V1_ACK_SERVICE,
+ RPM_V1_NACK_SERVICE,
+} msm_rpm_request_service_v2;
+
+struct rpm_v0_hdr {
+ uint32_t service_type;
+ uint32_t request_len;
+};
+
+struct rpm_v1_hdr {
+ uint32_t request_hdr;
+};
+
+struct rpm_message_header_v0 {
+ struct rpm_v0_hdr hdr;
+ uint32_t msg_id;
+ enum msm_rpm_set set;
+ uint32_t resource_type;
+ uint32_t resource_id;
+ uint32_t data_len;
+};
+
+struct rpm_message_header_v1 {
+ struct rpm_v1_hdr hdr;
+ uint32_t msg_id;
+ uint32_t resource_type;
+ uint32_t request_details;
+};
+
+struct msm_rpm_ack_msg_v0 {
+ uint32_t req;
+ uint32_t req_len;
+ uint32_t rsc_id;
+ uint32_t msg_len;
+ uint32_t id_ack;
+};
+
+struct msm_rpm_ack_msg_v1 {
+ uint32_t request_hdr;
+ uint32_t id_ack;
+};
+
+struct kvp {
+ unsigned int k;
+ unsigned int s;
+};
+
+struct msm_rpm_kvp_data {
+ uint32_t key;
+ uint32_t nbytes; /* number of bytes */
+ uint8_t *value;
+ bool valid;
+};
+
+struct slp_buf {
+ struct rb_node node;
+ char ubuf[MAX_SLEEP_BUFFER];
+ char *buf;
+ bool valid;
+};
+
+enum rpm_msg_fmts {
+ RPM_MSG_V0_FMT,
+ RPM_MSG_V1_FMT
+};
+
+static uint32_t rpm_msg_fmt_ver;
+module_param_named(
+ rpm_msg_fmt_ver, rpm_msg_fmt_ver, uint, 0444
+);
+
+static struct rb_root tr_root = RB_ROOT;
+static int (*msm_rpm_send_buffer)(char *buf, uint32_t size, bool noirq);
+static int msm_rpm_send_smd_buffer(char *buf, uint32_t size, bool noirq);
+static int msm_rpm_glink_send_buffer(char *buf, uint32_t size, bool noirq);
+static uint32_t msm_rpm_get_next_msg_id(void);
+
+static inline uint32_t get_offset_value(uint32_t val, uint32_t offset,
+ uint32_t size)
+{
+ return (((val) & GENMASK(offset + size - 1, offset))
+ >> offset);
+}
+
+static inline void change_offset_value(uint32_t *val, uint32_t offset,
+ uint32_t size, int32_t val1)
+{
+ uint32_t member = *val;
+ uint32_t offset_val = get_offset_value(member, offset, size);
+ uint32_t mask = (1 << size) - 1;
+
+ offset_val += val1;
+ *val &= CLEAR_FIELD(offset, size);
+ *val |= ((offset_val & mask) << offset);
+}
+
+static inline void set_offset_value(uint32_t *val, uint32_t offset,
+ uint32_t size, uint32_t val1)
+{
+ uint32_t mask = (1 << size) - 1;
+
+ *val &= CLEAR_FIELD(offset, size);
+ *val |= ((val1 & mask) << offset);
+}
+static uint32_t get_msg_id(char *buf)
+{
+ if (rpm_msg_fmt_ver == RPM_MSG_V0_FMT)
+ return ((struct rpm_message_header_v0 *)buf)->msg_id;
+
+ return ((struct rpm_message_header_v1 *)buf)->msg_id;
+}
+
+static uint32_t get_ack_msg_id(char *buf)
+{
+ if (rpm_msg_fmt_ver == RPM_MSG_V0_FMT)
+ return ((struct msm_rpm_ack_msg_v0 *)buf)->id_ack;
+
+ return ((struct msm_rpm_ack_msg_v1 *)buf)->id_ack;
+}
+
+static uint32_t get_rsc_type(char *buf)
+{
+ if (rpm_msg_fmt_ver == RPM_MSG_V0_FMT)
+ return ((struct rpm_message_header_v0 *)buf)->resource_type;
+
+ return ((struct rpm_message_header_v1 *)buf)->resource_type;
+}
+
+static uint32_t get_set_type(char *buf)
+{
+ if (rpm_msg_fmt_ver == RPM_MSG_V0_FMT)
+ return ((struct rpm_message_header_v0 *)buf)->set;
+
+ return get_offset_value(((struct rpm_message_header_v1 *)buf)->
+ request_details, RPM_SET_TYPE_OFFSET,
+ RPM_SET_TYPE_SIZE);
+}
+
+static uint32_t get_data_len(char *buf)
+{
+ if (rpm_msg_fmt_ver == RPM_MSG_V0_FMT)
+ return ((struct rpm_message_header_v0 *)buf)->data_len;
+
+ return get_offset_value(((struct rpm_message_header_v1 *)buf)->
+ request_details, RPM_DATA_LEN_OFFSET,
+ RPM_DATA_LEN_SIZE);
+}
+
+static uint32_t get_rsc_id(char *buf)
+{
+ if (rpm_msg_fmt_ver == RPM_MSG_V0_FMT)
+ return ((struct rpm_message_header_v0 *)buf)->resource_id;
+
+ return get_offset_value(((struct rpm_message_header_v1 *)buf)->
+ request_details, RPM_RSC_ID_OFFSET,
+ RPM_RSC_ID_SIZE);
+}
+
+static uint32_t get_ack_req_len(char *buf)
+{
+ if (rpm_msg_fmt_ver == RPM_MSG_V0_FMT)
+ return ((struct msm_rpm_ack_msg_v0 *)buf)->req_len;
+
+ return get_offset_value(((struct msm_rpm_ack_msg_v1 *)buf)->
+ request_hdr, RPM_REQ_LEN_OFFSET,
+ RPM_REQ_LEN_SIZE);
+}
+
+static uint32_t get_ack_msg_type(char *buf)
+{
+ if (rpm_msg_fmt_ver == RPM_MSG_V0_FMT)
+ return ((struct msm_rpm_ack_msg_v0 *)buf)->req;
+
+ return get_offset_value(((struct msm_rpm_ack_msg_v1 *)buf)->
+ request_hdr, RPM_MSG_TYPE_OFFSET,
+ RPM_MSG_TYPE_SIZE);
+}
+
+static uint32_t get_req_len(char *buf)
+{
+ if (rpm_msg_fmt_ver == RPM_MSG_V0_FMT)
+ return ((struct rpm_message_header_v0 *)buf)->hdr.request_len;
+
+ return get_offset_value(((struct rpm_message_header_v1 *)buf)->
+ hdr.request_hdr, RPM_REQ_LEN_OFFSET,
+ RPM_REQ_LEN_SIZE);
+}
+
+static void set_msg_ver(char *buf, uint32_t val)
+{
+ if (rpm_msg_fmt_ver) {
+ set_offset_value(&((struct rpm_message_header_v1 *)buf)->
+ hdr.request_hdr, RPM_MSG_VERSION_OFFSET,
+ RPM_MSG_VERSION_SIZE, val);
+ } else {
+ set_offset_value(&((struct rpm_message_header_v1 *)buf)->
+ hdr.request_hdr, RPM_MSG_VERSION_OFFSET,
+ RPM_MSG_VERSION_SIZE, 0);
+ }
+}
+
+static void set_req_len(char *buf, uint32_t val)
+{
+ if (rpm_msg_fmt_ver == RPM_MSG_V0_FMT) {
+ ((struct rpm_message_header_v0 *)buf)->hdr.request_len = val;
+ } else {
+ set_offset_value(&((struct rpm_message_header_v1 *)buf)->
+ hdr.request_hdr, RPM_REQ_LEN_OFFSET,
+ RPM_REQ_LEN_SIZE, val);
+ }
+}
+
+static void change_req_len(char *buf, int32_t val)
+{
+ if (rpm_msg_fmt_ver == RPM_MSG_V0_FMT) {
+ ((struct rpm_message_header_v0 *)buf)->hdr.request_len += val;
+ } else {
+ change_offset_value(&((struct rpm_message_header_v1 *)buf)->
+ hdr.request_hdr, RPM_REQ_LEN_OFFSET,
+ RPM_REQ_LEN_SIZE, val);
+ }
+}
+
+static void set_msg_type(char *buf, uint32_t val)
+{
+ if (rpm_msg_fmt_ver == RPM_MSG_V0_FMT) {
+ ((struct rpm_message_header_v0 *)buf)->hdr.service_type =
+ msm_rpm_request_service_v1[val];
+ } else {
+ set_offset_value(&((struct rpm_message_header_v1 *)buf)->
+ hdr.request_hdr, RPM_MSG_TYPE_OFFSET,
+ RPM_MSG_TYPE_SIZE, RPM_V1_REQUEST_SERVICE);
+ }
+}
+
+static void set_rsc_id(char *buf, uint32_t val)
+{
+ if (rpm_msg_fmt_ver == RPM_MSG_V0_FMT)
+ ((struct rpm_message_header_v0 *)buf)->resource_id = val;
+ else
+ set_offset_value(&((struct rpm_message_header_v1 *)buf)->
+ request_details, RPM_RSC_ID_OFFSET,
+ RPM_RSC_ID_SIZE, val);
+}
+
+static void set_data_len(char *buf, uint32_t val)
+{
+ if (rpm_msg_fmt_ver == RPM_MSG_V0_FMT)
+ ((struct rpm_message_header_v0 *)buf)->data_len = val;
+ else
+ set_offset_value(&((struct rpm_message_header_v1 *)buf)->
+ request_details, RPM_DATA_LEN_OFFSET,
+ RPM_DATA_LEN_SIZE, val);
+}
+
+static void change_data_len(char *buf, int32_t val)
+{
+ if (rpm_msg_fmt_ver == RPM_MSG_V0_FMT)
+ ((struct rpm_message_header_v0 *)buf)->data_len += val;
+ else
+ change_offset_value(&((struct rpm_message_header_v1 *)buf)->
+ request_details, RPM_DATA_LEN_OFFSET,
+ RPM_DATA_LEN_SIZE, val);
+}
+
+static void set_set_type(char *buf, uint32_t val)
+{
+ if (rpm_msg_fmt_ver == RPM_MSG_V0_FMT)
+ ((struct rpm_message_header_v0 *)buf)->set = val;
+ else
+ set_offset_value(&((struct rpm_message_header_v1 *)buf)->
+ request_details, RPM_SET_TYPE_OFFSET,
+ RPM_SET_TYPE_SIZE, val);
+}
+
+static void set_msg_id(char *buf, uint32_t val)
+{
+ if (rpm_msg_fmt_ver == RPM_MSG_V0_FMT)
+ ((struct rpm_message_header_v0 *)buf)->msg_id = val;
+ else
+ ((struct rpm_message_header_v1 *)buf)->msg_id = val;
+}
+
+static void set_rsc_type(char *buf, uint32_t val)
+{
+ if (rpm_msg_fmt_ver == RPM_MSG_V0_FMT)
+ ((struct rpm_message_header_v0 *)buf)->resource_type = val;
+ else
+ ((struct rpm_message_header_v1 *)buf)->resource_type = val;
+}
+
+static inline int get_buf_len(char *buf)
+{
+ return get_req_len(buf) + RPM_HDR_SIZE;
+}
+
+static inline struct kvp *get_first_kvp(char *buf)
+{
+ if (rpm_msg_fmt_ver == RPM_MSG_V0_FMT)
+ return (struct kvp *)(buf +
+ sizeof(struct rpm_message_header_v0));
+ else
+ return (struct kvp *)(buf +
+ sizeof(struct rpm_message_header_v1));
+}
+
+static inline struct kvp *get_next_kvp(struct kvp *k)
+{
+ return (struct kvp *)((void *)k + sizeof(*k) + k->s);
+}
+
+static inline void *get_data(struct kvp *k)
+{
+ return (void *)k + sizeof(*k);
+}
+
+
+static void delete_kvp(char *buf, struct kvp *d)
+{
+ struct kvp *n;
+ int dec;
+ uint32_t size;
+
+ n = get_next_kvp(d);
+ dec = (void *)n - (void *)d;
+ size = get_data_len(buf) -
+ ((void *)n - (void *)get_first_kvp(buf));
+
+ memcpy((void *)d, (void *)n, size);
+
+ change_data_len(buf, -dec);
+ change_req_len(buf, -dec);
+}
+
+static inline void update_kvp_data(struct kvp *dest, struct kvp *src)
+{
+ memcpy(get_data(dest), get_data(src), src->s);
+}
+
+static void add_kvp(char *buf, struct kvp *n)
+{
+ int32_t inc = sizeof(*n) + n->s;
+
+ if (WARN_ON(get_req_len(buf) + inc > MAX_SLEEP_BUFFER))
+ return;
+
+ memcpy(buf + get_buf_len(buf), n, inc);
+
+ change_data_len(buf, inc);
+ change_req_len(buf, inc);
+}
+
+static struct slp_buf *tr_search(struct rb_root *root, char *slp)
+{
+ unsigned int type = get_rsc_type(slp);
+ unsigned int id = get_rsc_id(slp);
+ struct rb_node *node = root->rb_node;
+
+ while (node) {
+ struct slp_buf *cur = rb_entry(node, struct slp_buf, node);
+ unsigned int ctype = get_rsc_type(cur->buf);
+ unsigned int cid = get_rsc_id(cur->buf);
+
+ if (type < ctype)
+ node = node->rb_left;
+ else if (type > ctype)
+ node = node->rb_right;
+ else if (id < cid)
+ node = node->rb_left;
+ else if (id > cid)
+ node = node->rb_right;
+ else
+ return cur;
+ }
+ return NULL;
+}
+
+static int tr_insert(struct rb_root *root, struct slp_buf *slp)
+{
+ unsigned int type = get_rsc_type(slp->buf);
+ unsigned int id = get_rsc_id(slp->buf);
+ struct rb_node **node = &(root->rb_node), *parent = NULL;
+
+ while (*node) {
+ struct slp_buf *curr = rb_entry(*node, struct slp_buf, node);
+ unsigned int ctype = get_rsc_type(curr->buf);
+ unsigned int cid = get_rsc_id(curr->buf);
+
+ parent = *node;
+
+ if (type < ctype)
+ node = &((*node)->rb_left);
+ else if (type > ctype)
+ node = &((*node)->rb_right);
+ else if (id < cid)
+ node = &((*node)->rb_left);
+ else if (id > cid)
+ node = &((*node)->rb_right);
+ else
+ return -EINVAL;
+ }
+
+ rb_link_node(&slp->node, parent, node);
+ rb_insert_color(&slp->node, root);
+ slp->valid = true;
+ return 0;
+}
+
+#define for_each_kvp(buf, k) \
+ for (k = (struct kvp *)get_first_kvp(buf); \
+ ((void *)k - (void *)get_first_kvp(buf)) < \
+ get_data_len(buf);\
+ k = get_next_kvp(k))
+
+
+static void tr_update(struct slp_buf *s, char *buf)
+{
+ struct kvp *e, *n;
+
+ for_each_kvp(buf, n) {
+ bool found = false;
+
+ for_each_kvp(s->buf, e) {
+ if (n->k == e->k) {
+ found = true;
+ if (n->s == e->s) {
+ void *e_data = get_data(e);
+ void *n_data = get_data(n);
+
+ if (memcmp(e_data, n_data, n->s)) {
+ update_kvp_data(e, n);
+ s->valid = true;
+ }
+ } else {
+ delete_kvp(s->buf, e);
+ add_kvp(s->buf, n);
+ s->valid = true;
+ }
+ break;
+ }
+
+ }
+ if (!found) {
+ add_kvp(s->buf, n);
+ s->valid = true;
+ }
+ }
+}
+
+static atomic_t msm_rpm_msg_id = ATOMIC_INIT(0);
+
+struct msm_rpm_request {
+ uint8_t *client_buf;
+ struct msm_rpm_kvp_data *kvp;
+ uint32_t num_elements;
+ uint32_t write_idx;
+ uint8_t *buf;
+ uint32_t numbytes;
+};
+
+/*
+ * Data related to message acknowledgment
+ */
+
+LIST_HEAD(msm_rpm_wait_list);
+
+struct msm_rpm_wait_data {
+ struct list_head list;
+ uint32_t msg_id;
+ bool ack_recd;
+ int errno;
+ struct completion ack;
+ bool delete_on_ack;
+};
+DEFINE_SPINLOCK(msm_rpm_list_lock);
+
+
+
+LIST_HEAD(msm_rpm_ack_list);
+
+static struct tasklet_struct data_tasklet;
+
+static inline uint32_t msm_rpm_get_msg_id_from_ack(uint8_t *buf)
+{
+ return get_ack_msg_id(buf);
+}
+
+static inline int msm_rpm_get_error_from_ack(uint8_t *buf)
+{
+ uint8_t *tmp;
+ uint32_t req_len = get_ack_req_len(buf);
+ uint32_t msg_type = get_ack_msg_type(buf);
+ int rc = -ENODEV;
+ uint32_t err;
+ uint32_t ack_msg_size = rpm_msg_fmt_ver ?
+ sizeof(struct msm_rpm_ack_msg_v1) :
+ sizeof(struct msm_rpm_ack_msg_v0);
+
+ if (rpm_msg_fmt_ver == RPM_MSG_V0_FMT &&
+ msg_type == RPM_V1_ACK_SERVICE) {
+ return 0;
+ } else if (rpm_msg_fmt_ver && msg_type == RPM_V1_NACK_SERVICE) {
+ err = *(uint32_t *)(buf + sizeof(struct msm_rpm_ack_msg_v1));
+ return err;
+ }
+
+ req_len -= ack_msg_size;
+ req_len += 2 * sizeof(uint32_t);
+ if (!req_len)
+ return 0;
+
+ pr_err("%s:rpm returned error or nack req_len: %d id_ack: %d\n",
+ __func__, req_len, get_ack_msg_id(buf));
+
+ tmp = buf + ack_msg_size;
+
+ if (memcmp(tmp, ERR, sizeof(uint32_t))) {
+ pr_err("%s rpm returned error\n", __func__);
+ WARN_ON(1);
+ }
+
+ tmp += 2 * sizeof(uint32_t);
+
+ if (!(memcmp(tmp, INV_RSC, min_t(uint32_t, req_len,
+ sizeof(INV_RSC))-1))) {
+ pr_err("%s(): RPM NACK Unsupported resource\n", __func__);
+ rc = -EINVAL;
+ } else {
+ pr_err("%s(): RPM NACK Invalid header\n", __func__);
+ }
+
+ return rc;
+}
+
+int msm_rpm_smd_buffer_request(struct msm_rpm_request *cdata,
+ uint32_t size, gfp_t flag)
+{
+ struct slp_buf *slp;
+ static DEFINE_SPINLOCK(slp_buffer_lock);
+ unsigned long flags;
+ char *buf;
+
+ buf = cdata->buf;
+
+ if (size > MAX_SLEEP_BUFFER)
+ return -ENOMEM;
+
+ spin_lock_irqsave(&slp_buffer_lock, flags);
+ slp = tr_search(&tr_root, buf);
+
+ if (!slp) {
+ slp = kzalloc(sizeof(struct slp_buf), GFP_ATOMIC);
+ if (!slp) {
+ spin_unlock_irqrestore(&slp_buffer_lock, flags);
+ return -ENOMEM;
+ }
+ slp->buf = PTR_ALIGN(&slp->ubuf[0], sizeof(u32));
+ memcpy(slp->buf, buf, size);
+ if (tr_insert(&tr_root, slp))
+ pr_err("Error updating sleep request\n");
+ } else {
+ /* handle unsent requests */
+ tr_update(slp, buf);
+ }
+ trace_rpm_smd_sleep_set(get_msg_id(cdata->client_buf),
+ get_rsc_type(cdata->client_buf),
+ get_req_len(cdata->client_buf));
+
+ spin_unlock_irqrestore(&slp_buffer_lock, flags);
+
+ return 0;
+}
+
+static struct msm_rpm_driver_data msm_rpm_data = {
+ .smd_open = COMPLETION_INITIALIZER(msm_rpm_data.smd_open),
+};
+
+static int msm_rpm_glink_rx_poll(void *glink_handle)
+{
+ int ret;
+
+ ret = glink_rpm_rx_poll(glink_handle);
+ if (ret >= 0)
+ /*
+ * Sleep for 50us at a time before checking
+ * for packet availability. The 50us is based
+ * on the time the RPM could take to process
+ * and send an ack for the sleep set request.
+ */
+ udelay(50);
+ else
+ pr_err("Did not receive an ACK from RPM. ret = %d\n", ret);
+
+ return ret;
+}
+
+/*
+ * Returns
+ * = 0 on successful reads
+ * > 0 on successful reads with no further data
+ * standard Linux error codes on failure.
+ */
+static int msm_rpm_read_sleep_ack(void)
+{
+ int ret;
+ char buf[MAX_ERR_BUFFER_SIZE] = {0};
+
+ if (glink_enabled)
+ ret = msm_rpm_glink_rx_poll(glink_data->glink_handle);
+ else {
+ ret = msm_rpm_read_smd_data(buf);
+ if (!ret)
+ ret = smd_is_pkt_avail(msm_rpm_data.ch_info);
+ }
+ return ret;
+}
+
+static int msm_rpm_flush_requests(bool print)
+{
+ struct rb_node *t;
+ int ret;
+ int count = 0;
+
+ for (t = rb_first(&tr_root); t; t = rb_next(t)) {
+
+ struct slp_buf *s = rb_entry(t, struct slp_buf, node);
+ unsigned int type = get_rsc_type(s->buf);
+ unsigned int id = get_rsc_id(s->buf);
+
+ if (!s->valid)
+ continue;
+
+ set_msg_id(s->buf, msm_rpm_get_next_msg_id());
+
+ if (!glink_enabled)
+ ret = msm_rpm_send_smd_buffer(s->buf,
+ get_buf_len(s->buf), true);
+ else
+ ret = msm_rpm_glink_send_buffer(s->buf,
+ get_buf_len(s->buf), true);
+
+ WARN_ON(ret != get_buf_len(s->buf));
+ trace_rpm_smd_send_sleep_set(get_msg_id(s->buf), type, id);
+
+ s->valid = false;
+ count++;
+
+ /*
+ * RPM acks need to be handled here if we have sent 24
+ * messages such that we do not overrun SMD buffer. Since
+ * we expect only sleep sets at this point (RPM PC would be
+ * disallowed if we had pending active requests), we need not
+ * process these sleep set acks.
+ */
+ if (count >= MAX_WAIT_ON_ACK) {
+ ret = msm_rpm_read_sleep_ack();
+
+ if (ret >= 0)
+ count--;
+ else
+ return ret;
+ }
+ }
+ return 0;
+}
+
+static void msm_rpm_notify_sleep_chain(char *buf,
+ struct msm_rpm_kvp_data *kvp)
+{
+ struct msm_rpm_notifier_data notif;
+
+ notif.rsc_type = get_rsc_type(buf);
+ notif.rsc_id = get_rsc_id(buf);
+ notif.key = kvp->key;
+ notif.size = kvp->nbytes;
+ notif.value = kvp->value;
+ atomic_notifier_call_chain(&msm_rpm_sleep_notifier, 0, &notif);
+}
+
+static int msm_rpm_add_kvp_data_common(struct msm_rpm_request *handle,
+ uint32_t key, const uint8_t *data, int size, bool noirq)
+{
+ uint32_t i;
+ uint32_t data_size, msg_size;
+
+ if (probe_status)
+ return probe_status;
+
+ if (!handle || !data) {
+ pr_err("%s(): Invalid handle/data\n", __func__);
+ return -EINVAL;
+ }
+
+ if (size < 0)
+ return -EINVAL;
+
+ data_size = ALIGN(size, SZ_4);
+ msg_size = data_size + 8;
+
+ for (i = 0; i < handle->write_idx; i++) {
+ if (handle->kvp[i].key != key)
+ continue;
+ if (handle->kvp[i].nbytes != data_size) {
+ kfree(handle->kvp[i].value);
+ handle->kvp[i].value = NULL;
+ } else {
+ if (!memcmp(handle->kvp[i].value, data, data_size))
+ return 0;
+ }
+ break;
+ }
+
+ if (i >= handle->num_elements) {
+ pr_err("Number of resources exceeds max allocated\n");
+ return -ENOMEM;
+ }
+
+ if (i == handle->write_idx)
+ handle->write_idx++;
+
+ if (!handle->kvp[i].value) {
+ handle->kvp[i].value = kzalloc(data_size, GFP_FLAG(noirq));
+
+ if (!handle->kvp[i].value)
+ return -ENOMEM;
+ } else {
+ /* We enter the else case, if a key already exists but the
+ * data doesn't match. In which case, we should zero the data
+ * out.
+ */
+ memset(handle->kvp[i].value, 0, data_size);
+ }
+
+ if (!handle->kvp[i].valid)
+ change_data_len(handle->client_buf, msg_size);
+ else
+ change_data_len(handle->client_buf,
+ (data_size - handle->kvp[i].nbytes));
+
+ handle->kvp[i].nbytes = data_size;
+ handle->kvp[i].key = key;
+ memcpy(handle->kvp[i].value, data, size);
+ handle->kvp[i].valid = true;
+
+ return 0;
+
+}
+
+static struct msm_rpm_request *msm_rpm_create_request_common(
+ enum msm_rpm_set set, uint32_t rsc_type, uint32_t rsc_id,
+ int num_elements, bool noirq)
+{
+ struct msm_rpm_request *cdata;
+ uint32_t buf_size;
+
+ if (probe_status)
+ return ERR_PTR(probe_status);
+
+ cdata = kzalloc(sizeof(struct msm_rpm_request),
+ GFP_FLAG(noirq));
+
+ if (!cdata) {
+ pr_err("Cannot allocate memory for client data\n");
+ goto cdata_alloc_fail;
+ }
+
+ if (rpm_msg_fmt_ver == RPM_MSG_V0_FMT)
+ buf_size = sizeof(struct rpm_message_header_v0);
+ else
+ buf_size = sizeof(struct rpm_message_header_v1);
+
+ cdata->client_buf = kzalloc(buf_size, GFP_FLAG(noirq));
+
+ if (!cdata->client_buf)
+ goto client_buf_alloc_fail;
+
+ set_set_type(cdata->client_buf, set);
+ set_rsc_type(cdata->client_buf, rsc_type);
+ set_rsc_id(cdata->client_buf, rsc_id);
+
+ cdata->num_elements = num_elements;
+ cdata->write_idx = 0;
+
+ cdata->kvp = kcalloc(num_elements, sizeof(struct msm_rpm_kvp_data),
+ GFP_FLAG(noirq));
+
+ if (!cdata->kvp) {
+ pr_warn("%s(): Cannot allocate memory for key value data\n",
+ __func__);
+ goto kvp_alloc_fail;
+ }
+
+ cdata->buf = kzalloc(DEFAULT_BUFFER_SIZE, GFP_FLAG(noirq));
+
+ if (!cdata->buf)
+ goto buf_alloc_fail;
+
+ cdata->numbytes = DEFAULT_BUFFER_SIZE;
+ return cdata;
+
+buf_alloc_fail:
+ kfree(cdata->kvp);
+kvp_alloc_fail:
+ kfree(cdata->client_buf);
+client_buf_alloc_fail:
+ kfree(cdata);
+cdata_alloc_fail:
+ return NULL;
+
+}
+
+void msm_rpm_free_request(struct msm_rpm_request *handle)
+{
+ int i;
+
+ if (!handle)
+ return;
+ for (i = 0; i < handle->num_elements; i++)
+ kfree(handle->kvp[i].value);
+ kfree(handle->kvp);
+ kfree(handle->client_buf);
+ kfree(handle->buf);
+ kfree(handle);
+}
+EXPORT_SYMBOL(msm_rpm_free_request);
+
+struct msm_rpm_request *msm_rpm_create_request(
+ enum msm_rpm_set set, uint32_t rsc_type,
+ uint32_t rsc_id, int num_elements)
+{
+ return msm_rpm_create_request_common(set, rsc_type, rsc_id,
+ num_elements, false);
+}
+EXPORT_SYMBOL(msm_rpm_create_request);
+
+struct msm_rpm_request *msm_rpm_create_request_noirq(
+ enum msm_rpm_set set, uint32_t rsc_type,
+ uint32_t rsc_id, int num_elements)
+{
+ return msm_rpm_create_request_common(set, rsc_type, rsc_id,
+ num_elements, true);
+}
+EXPORT_SYMBOL(msm_rpm_create_request_noirq);
+
+int msm_rpm_add_kvp_data(struct msm_rpm_request *handle,
+ uint32_t key, const uint8_t *data, int size)
+{
+ return msm_rpm_add_kvp_data_common(handle, key, data, size, false);
+}
+EXPORT_SYMBOL(msm_rpm_add_kvp_data);
+
+int msm_rpm_add_kvp_data_noirq(struct msm_rpm_request *handle,
+ uint32_t key, const uint8_t *data, int size)
+{
+ return msm_rpm_add_kvp_data_common(handle, key, data, size, true);
+}
+EXPORT_SYMBOL(msm_rpm_add_kvp_data_noirq);
+
+/* Runs in interrupt context */
+static void msm_rpm_notify(void *data, unsigned int event)
+{
+ struct msm_rpm_driver_data *pdata = (struct msm_rpm_driver_data *)data;
+
+ WARN_ON(!pdata);
+
+ if (!(pdata->ch_info))
+ return;
+
+ switch (event) {
+ case SMD_EVENT_DATA:
+ tasklet_schedule(&data_tasklet);
+ trace_rpm_smd_interrupt_notify("interrupt notification");
+ break;
+ case SMD_EVENT_OPEN:
+ complete(&pdata->smd_open);
+ break;
+ case SMD_EVENT_CLOSE:
+ case SMD_EVENT_STATUS:
+ case SMD_EVENT_REOPEN_READY:
+ break;
+ default:
+ pr_info("Unknown SMD event\n");
+ break;
+ }
+}
+
+bool msm_rpm_waiting_for_ack(void)
+{
+ bool ret;
+ unsigned long flags;
+
+ spin_lock_irqsave(&msm_rpm_list_lock, flags);
+ ret = list_empty(&msm_rpm_wait_list);
+ spin_unlock_irqrestore(&msm_rpm_list_lock, flags);
+
+ return !ret;
+}
+
+static struct msm_rpm_wait_data *msm_rpm_get_entry_from_msg_id(uint32_t msg_id)
+{
+ struct list_head *ptr;
+ struct msm_rpm_wait_data *elem = NULL;
+ unsigned long flags;
+
+ spin_lock_irqsave(&msm_rpm_list_lock, flags);
+
+ list_for_each(ptr, &msm_rpm_wait_list) {
+ elem = list_entry(ptr, struct msm_rpm_wait_data, list);
+ if (elem && (elem->msg_id == msg_id))
+ break;
+ elem = NULL;
+ }
+ spin_unlock_irqrestore(&msm_rpm_list_lock, flags);
+ return elem;
+}
+
+static uint32_t msm_rpm_get_next_msg_id(void)
+{
+ uint32_t id;
+
+ /*
+ * A message id of 0 is used by the driver to indicate an error
+ * condition. The RPM driver uses an id of 1 to indicate unsent data
+ * when the data being sent hasn't been modified. This isn't an error
+ * scenario, and wait-for-ack returns success when the message id is 1.
+ */
+
+ do {
+ id = atomic_inc_return(&msm_rpm_msg_id);
+ } while ((id == 0) || (id == 1) || msm_rpm_get_entry_from_msg_id(id));
+
+ return id;
+}
+
+static int msm_rpm_add_wait_list(uint32_t msg_id, bool delete_on_ack)
+{
+ unsigned long flags;
+ struct msm_rpm_wait_data *data =
+ kzalloc(sizeof(struct msm_rpm_wait_data), GFP_ATOMIC);
+
+ if (!data)
+ return -ENOMEM;
+
+ init_completion(&data->ack);
+ data->ack_recd = false;
+ data->msg_id = msg_id;
+ data->errno = INIT_ERROR;
+ data->delete_on_ack = delete_on_ack;
+ spin_lock_irqsave(&msm_rpm_list_lock, flags);
+ if (delete_on_ack)
+ list_add_tail(&data->list, &msm_rpm_wait_list);
+ else
+ list_add(&data->list, &msm_rpm_wait_list);
+ spin_unlock_irqrestore(&msm_rpm_list_lock, flags);
+
+ return 0;
+}
+
+static void msm_rpm_free_list_entry(struct msm_rpm_wait_data *elem)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&msm_rpm_list_lock, flags);
+ list_del(&elem->list);
+ spin_unlock_irqrestore(&msm_rpm_list_lock, flags);
+ kfree(elem);
+}
+
+static void msm_rpm_process_ack(uint32_t msg_id, int errno)
+{
+ struct list_head *ptr, *next;
+ struct msm_rpm_wait_data *elem = NULL;
+ unsigned long flags;
+
+ spin_lock_irqsave(&msm_rpm_list_lock, flags);
+
+ list_for_each_safe(ptr, next, &msm_rpm_wait_list) {
+ elem = list_entry(ptr, struct msm_rpm_wait_data, list);
+ if (elem->msg_id == msg_id) {
+ elem->errno = errno;
+ elem->ack_recd = true;
+ complete(&elem->ack);
+ if (elem->delete_on_ack) {
+ list_del(&elem->list);
+ kfree(elem);
+ }
+ break;
+ }
+ }
+ /*
+ * Special case where the sleep driver doesn't wait for ACKs.
+ * This decreases the latency involved with entering RPM
+ * assisted power collapse.
+ */
+ if (!elem)
+ trace_rpm_smd_ack_recvd(0, msg_id, 0xDEADBEEF);
+
+ spin_unlock_irqrestore(&msm_rpm_list_lock, flags);
+}
+
+struct msm_rpm_kvp_packet {
+ uint32_t id;
+ uint32_t len;
+ uint32_t val;
+};
+
+static int msm_rpm_read_smd_data(char *buf)
+{
+ int pkt_sz;
+ int bytes_read = 0;
+
+ pkt_sz = smd_cur_packet_size(msm_rpm_data.ch_info);
+
+ if (!pkt_sz)
+ return -EAGAIN;
+
+ if (pkt_sz > MAX_ERR_BUFFER_SIZE) {
+ pr_err("rpm_smd pkt_sz is greater than max size\n");
+ goto error;
+ }
+
+ if (pkt_sz != smd_read_avail(msm_rpm_data.ch_info))
+ return -EAGAIN;
+
+ do {
+ int len;
+
+ len = smd_read(msm_rpm_data.ch_info, buf + bytes_read, pkt_sz);
+ pkt_sz -= len;
+ bytes_read += len;
+
+ } while (pkt_sz > 0);
+
+ if (pkt_sz < 0) {
+ pr_err("rpm_smd pkt_sz is less than zero\n");
+ goto error;
+ }
+ return 0;
+error:
+ WARN_ON(1);
+
+ return 0;
+}
+
+static void data_fn_tasklet(unsigned long data)
+{
+ uint32_t msg_id;
+ int errno;
+ char buf[MAX_ERR_BUFFER_SIZE] = {0};
+
+ spin_lock(&msm_rpm_data.smd_lock_read);
+ while (smd_is_pkt_avail(msm_rpm_data.ch_info)) {
+ if (msm_rpm_read_smd_data(buf))
+ break;
+ msg_id = msm_rpm_get_msg_id_from_ack(buf);
+ errno = msm_rpm_get_error_from_ack(buf);
+ trace_rpm_smd_ack_recvd(0, msg_id, errno);
+ msm_rpm_process_ack(msg_id, errno);
+ }
+ spin_unlock(&msm_rpm_data.smd_lock_read);
+}
+
+static void msm_rpm_log_request(struct msm_rpm_request *cdata)
+{
+ char buf[DEBUG_PRINT_BUFFER_SIZE];
+ size_t buflen = DEBUG_PRINT_BUFFER_SIZE;
+ char name[5];
+ u32 value;
+ uint32_t i;
+ int j, prev_valid;
+ int valid_count = 0;
+ int pos = 0;
+ uint32_t res_type, rsc_id;
+
+ name[4] = 0;
+
+ for (i = 0; i < cdata->write_idx; i++)
+ if (cdata->kvp[i].valid)
+ valid_count++;
+
+ pos += scnprintf(buf + pos, buflen - pos, "%sRPM req: ", KERN_INFO);
+ if (msm_rpm_debug_mask & MSM_RPM_LOG_REQUEST_SHOW_MSG_ID)
+ pos += scnprintf(buf + pos, buflen - pos, "msg_id=%u, ",
+ get_msg_id(cdata->client_buf));
+ pos += scnprintf(buf + pos, buflen - pos, "s=%s",
+ (get_set_type(cdata->client_buf) ==
+ MSM_RPM_CTX_ACTIVE_SET ? "act" : "slp"));
+
+ res_type = get_rsc_type(cdata->client_buf);
+ rsc_id = get_rsc_id(cdata->client_buf);
+ if ((msm_rpm_debug_mask & MSM_RPM_LOG_REQUEST_PRETTY)
+ && (msm_rpm_debug_mask & MSM_RPM_LOG_REQUEST_RAW)) {
+ /* Both pretty and raw formatting */
+ memcpy(name, &res_type, sizeof(uint32_t));
+ pos += scnprintf(buf + pos, buflen - pos,
+ ", rsc_type=0x%08X (%s), rsc_id=%u; ",
+ res_type, name, rsc_id);
+
+ for (i = 0, prev_valid = 0; i < cdata->write_idx; i++) {
+ if (!cdata->kvp[i].valid)
+ continue;
+
+ memcpy(name, &cdata->kvp[i].key, sizeof(uint32_t));
+ pos += scnprintf(buf + pos, buflen - pos,
+ "[key=0x%08X (%s), value=%s",
+ cdata->kvp[i].key, name,
+ (cdata->kvp[i].nbytes ? "0x" : "null"));
+
+ for (j = 0; j < cdata->kvp[i].nbytes; j++)
+ pos += scnprintf(buf + pos, buflen - pos,
+ "%02X ",
+ cdata->kvp[i].value[j]);
+
+ if (cdata->kvp[i].nbytes)
+ pos += scnprintf(buf + pos, buflen - pos, "(");
+
+ for (j = 0; j < cdata->kvp[i].nbytes; j += 4) {
+ value = 0;
+ memcpy(&value, &cdata->kvp[i].value[j],
+ min_t(uint32_t, sizeof(uint32_t),
+ cdata->kvp[i].nbytes - j));
+ pos += scnprintf(buf + pos, buflen - pos, "%u",
+ value);
+ if (j + 4 < cdata->kvp[i].nbytes)
+ pos += scnprintf(buf + pos,
+ buflen - pos, " ");
+ }
+ if (cdata->kvp[i].nbytes)
+ pos += scnprintf(buf + pos, buflen - pos, ")");
+ pos += scnprintf(buf + pos, buflen - pos, "]");
+ if (prev_valid + 1 < valid_count)
+ pos += scnprintf(buf + pos, buflen - pos, ", ");
+ prev_valid++;
+ }
+ } else if (msm_rpm_debug_mask & MSM_RPM_LOG_REQUEST_PRETTY) {
+ /* Pretty formatting only */
+ memcpy(name, &res_type, sizeof(uint32_t));
+ pos += scnprintf(buf + pos, buflen - pos, " %s %u; ", name,
+ rsc_id);
+
+ for (i = 0, prev_valid = 0; i < cdata->write_idx; i++) {
+ if (!cdata->kvp[i].valid)
+ continue;
+
+ memcpy(name, &cdata->kvp[i].key, sizeof(uint32_t));
+ pos += scnprintf(buf + pos, buflen - pos, "%s=%s",
+ name, (cdata->kvp[i].nbytes ? "" : "null"));
+
+ for (j = 0; j < cdata->kvp[i].nbytes; j += 4) {
+ value = 0;
+ memcpy(&value, &cdata->kvp[i].value[j],
+ min_t(uint32_t, sizeof(uint32_t),
+ cdata->kvp[i].nbytes - j));
+ pos += scnprintf(buf + pos, buflen - pos, "%u",
+ value);
+
+ if (j + 4 < cdata->kvp[i].nbytes)
+ pos += scnprintf(buf + pos,
+ buflen - pos, " ");
+ }
+ if (prev_valid + 1 < valid_count)
+ pos += scnprintf(buf + pos, buflen - pos, ", ");
+ prev_valid++;
+ }
+ } else {
+ /* Raw formatting only */
+ pos += scnprintf(buf + pos, buflen - pos,
+ ", rsc_type=0x%08X, rsc_id=%u; ", res_type, rsc_id);
+
+ for (i = 0, prev_valid = 0; i < cdata->write_idx; i++) {
+ if (!cdata->kvp[i].valid)
+ continue;
+
+ pos += scnprintf(buf + pos, buflen - pos,
+ "[key=0x%08X, value=%s",
+ cdata->kvp[i].key,
+ (cdata->kvp[i].nbytes ? "0x" : "null"));
+ for (j = 0; j < cdata->kvp[i].nbytes; j++) {
+ pos += scnprintf(buf + pos, buflen - pos,
+ "%02X",
+ cdata->kvp[i].value[j]);
+ if (j + 1 < cdata->kvp[i].nbytes)
+ pos += scnprintf(buf + pos,
+ buflen - pos, " ");
+ }
+ pos += scnprintf(buf + pos, buflen - pos, "]");
+ if (prev_valid + 1 < valid_count)
+ pos += scnprintf(buf + pos, buflen - pos, ", ");
+ prev_valid++;
+ }
+ }
+
+ pos += scnprintf(buf + pos, buflen - pos, "\n");
+ printk("%s", buf);
+}
+
+static int msm_rpm_send_smd_buffer(char *buf, uint32_t size, bool noirq)
+{
+ unsigned long flags;
+ int ret;
+
+ spin_lock_irqsave(&msm_rpm_data.smd_lock_write, flags);
+
+ while ((ret = smd_write_avail(msm_rpm_data.ch_info)) < size) {
+ if (ret < 0)
+ break;
+ if (!noirq) {
+ spin_unlock_irqrestore(
+ &msm_rpm_data.smd_lock_write, flags);
+ cpu_relax();
+ spin_lock_irqsave(
+ &msm_rpm_data.smd_lock_write, flags);
+ } else
+ udelay(5);
+ }
+
+ if (ret < 0) {
+ pr_err("SMD not initialized\n");
+ spin_unlock_irqrestore(
+ &msm_rpm_data.smd_lock_write, flags);
+ return ret;
+ }
+
+ ret = smd_write(msm_rpm_data.ch_info, buf, size);
+ spin_unlock_irqrestore(&msm_rpm_data.smd_lock_write, flags);
+ return ret;
+}
+
+static int msm_rpm_glink_send_buffer(char *buf, uint32_t size, bool noirq)
+{
+ int ret;
+ unsigned long flags;
+ int timeout = 50;
+
+ spin_lock_irqsave(&msm_rpm_data.smd_lock_write, flags);
+ do {
+ ret = glink_tx(glink_data->glink_handle, buf, buf,
+ size, GLINK_TX_SINGLE_THREADED);
+ if (ret == -EBUSY || ret == -ENOSPC) {
+ if (!noirq) {
+ spin_unlock_irqrestore(
+ &msm_rpm_data.smd_lock_write, flags);
+ cpu_relax();
+ spin_lock_irqsave(
+ &msm_rpm_data.smd_lock_write, flags);
+ } else {
+ udelay(5);
+ }
+ timeout--;
+ } else {
+ ret = 0;
+ }
+ } while (ret && timeout);
+ spin_unlock_irqrestore(&msm_rpm_data.smd_lock_write, flags);
+
+ if (!timeout)
+ return 0;
+
+ return size;
+}
+
+static int msm_rpm_send_data(struct msm_rpm_request *cdata,
+ int msg_type, bool noirq, bool noack)
+{
+ uint8_t *tmpbuff;
+ int ret;
+ uint32_t i;
+ uint32_t msg_size;
+ int msg_hdr_sz, req_hdr_sz;
+ uint32_t data_len = get_data_len(cdata->client_buf);
+ uint32_t set = get_set_type(cdata->client_buf);
+ uint32_t msg_id;
+
+ if (probe_status)
+ return probe_status;
+
+ if (!data_len)
+ return 1;
+
+ msg_hdr_sz = rpm_msg_fmt_ver ? sizeof(struct rpm_message_header_v1) :
+ sizeof(struct rpm_message_header_v0);
+
+ req_hdr_sz = RPM_HDR_SIZE;
+ set_msg_type(cdata->client_buf, msg_type);
+
+ set_req_len(cdata->client_buf, data_len + msg_hdr_sz - req_hdr_sz);
+ msg_size = get_req_len(cdata->client_buf) + req_hdr_sz;
+
+ /* populate data_len */
+ if (msg_size > cdata->numbytes) {
+ kfree(cdata->buf);
+ cdata->numbytes = msg_size;
+ cdata->buf = kzalloc(msg_size, GFP_FLAG(noirq));
+ }
+
+ if (!cdata->buf) {
+ pr_err("Failed malloc\n");
+ return 0;
+ }
+
+ tmpbuff = cdata->buf;
+
+ tmpbuff += msg_hdr_sz;
+ for (i = 0; (i < cdata->write_idx); i++) {
+ /* Sanity check */
+ WARN_ON((tmpbuff - cdata->buf) > cdata->numbytes);
+
+ if (!cdata->kvp[i].valid)
+ continue;
+
+ memcpy(tmpbuff, &cdata->kvp[i].key, sizeof(uint32_t));
+ tmpbuff += sizeof(uint32_t);
+
+ memcpy(tmpbuff, &cdata->kvp[i].nbytes, sizeof(uint32_t));
+ tmpbuff += sizeof(uint32_t);
+
+ memcpy(tmpbuff, cdata->kvp[i].value, cdata->kvp[i].nbytes);
+ tmpbuff += cdata->kvp[i].nbytes;
+
+ if (set == MSM_RPM_CTX_SLEEP_SET)
+ msm_rpm_notify_sleep_chain(cdata->client_buf,
+ &cdata->kvp[i]);
+
+ }
+
+ memcpy(cdata->buf, cdata->client_buf, msg_hdr_sz);
+ if ((set == MSM_RPM_CTX_SLEEP_SET) &&
+ !msm_rpm_smd_buffer_request(cdata, msg_size,
+ GFP_FLAG(noirq)))
+ return 1;
+
+ msg_id = msm_rpm_get_next_msg_id();
+ /* Set the version bit for new protocol */
+ set_msg_ver(cdata->buf, rpm_msg_fmt_ver);
+ set_msg_id(cdata->buf, msg_id);
+ set_msg_id(cdata->client_buf, msg_id);
+
+ if (msm_rpm_debug_mask
+ & (MSM_RPM_LOG_REQUEST_PRETTY | MSM_RPM_LOG_REQUEST_RAW))
+ msm_rpm_log_request(cdata);
+
+ if (standalone) {
+ for (i = 0; (i < cdata->write_idx); i++)
+ cdata->kvp[i].valid = false;
+
+ set_data_len(cdata->client_buf, 0);
+ ret = msg_id;
+ return ret;
+ }
+
+ msm_rpm_add_wait_list(msg_id, noack);
+
+ ret = msm_rpm_send_buffer(&cdata->buf[0], msg_size, noirq);
+
+ if (ret == msg_size) {
+ for (i = 0; (i < cdata->write_idx); i++)
+ cdata->kvp[i].valid = false;
+ set_data_len(cdata->client_buf, 0);
+ ret = msg_id;
+ trace_rpm_smd_send_active_set(msg_id,
+ get_rsc_type(cdata->client_buf),
+ get_rsc_id(cdata->client_buf));
+ } else if (ret < msg_size) {
+ struct msm_rpm_wait_data *rc;
+
+ pr_err("Failed to write data msg_size:%d ret:%d msg_id:%d\n",
+ msg_size, ret, msg_id);
+ ret = 0;
+ rc = msm_rpm_get_entry_from_msg_id(msg_id);
+ if (rc)
+ msm_rpm_free_list_entry(rc);
+ }
+ return ret;
+}
+
+static int _msm_rpm_send_request(struct msm_rpm_request *handle, bool noack)
+{
+ int ret;
+ static DEFINE_MUTEX(send_mtx);
+
+ mutex_lock(&send_mtx);
+ ret = msm_rpm_send_data(handle, MSM_RPM_MSG_REQUEST_TYPE, false, noack);
+ mutex_unlock(&send_mtx);
+
+ return ret;
+}
+
+int msm_rpm_send_request(struct msm_rpm_request *handle)
+{
+ return _msm_rpm_send_request(handle, false);
+}
+EXPORT_SYMBOL(msm_rpm_send_request);
+
+int msm_rpm_send_request_noirq(struct msm_rpm_request *handle)
+{
+ return msm_rpm_send_data(handle, MSM_RPM_MSG_REQUEST_TYPE, true, false);
+}
+EXPORT_SYMBOL(msm_rpm_send_request_noirq);
+
+void *msm_rpm_send_request_noack(struct msm_rpm_request *handle)
+{
+ int ret;
+
+ ret = _msm_rpm_send_request(handle, true);
+
+ return ret < 0 ? ERR_PTR(ret) : NULL;
+}
+EXPORT_SYMBOL(msm_rpm_send_request_noack);
+
+int msm_rpm_wait_for_ack(uint32_t msg_id)
+{
+ struct msm_rpm_wait_data *elem;
+ int rc = 0;
+
+ if (!msg_id) {
+ pr_err("Invalid msg id\n");
+ return -ENOMEM;
+ }
+
+ if (msg_id == 1)
+ return rc;
+
+ if (standalone)
+ return rc;
+
+ elem = msm_rpm_get_entry_from_msg_id(msg_id);
+ if (!elem)
+ return rc;
+
+ wait_for_completion(&elem->ack);
+ trace_rpm_smd_ack_recvd(0, msg_id, 0xDEADFEED);
+
+ rc = elem->errno;
+ msm_rpm_free_list_entry(elem);
+
+ return rc;
+}
+EXPORT_SYMBOL(msm_rpm_wait_for_ack);
+
+static void msm_rpm_smd_read_data_noirq(uint32_t msg_id)
+{
+ uint32_t id = 0;
+
+ while (id != msg_id) {
+ if (smd_is_pkt_avail(msm_rpm_data.ch_info)) {
+ int errno;
+ char buf[MAX_ERR_BUFFER_SIZE] = {};
+
+ msm_rpm_read_smd_data(buf);
+ id = msm_rpm_get_msg_id_from_ack(buf);
+ errno = msm_rpm_get_error_from_ack(buf);
+ trace_rpm_smd_ack_recvd(1, msg_id, errno);
+ msm_rpm_process_ack(id, errno);
+ }
+ }
+}
+
+static void msm_rpm_glink_read_data_noirq(struct msm_rpm_wait_data *elem)
+{
+ int ret;
+
+ /* Use rx_poll method to read the message from RPM */
+ while (elem->errno) {
+ ret = glink_rpm_rx_poll(glink_data->glink_handle);
+ if (ret >= 0) {
+ /*
+ * We might have received the notification.
+ * Now check whether the notification received
+ * is the one we are interested in.
+ * Wait a few usec before re-trying the poll
+ * to give the notification time to arrive.
+ */
+ udelay(50);
+ } else {
+ pr_err("rx poll return error = %d\n", ret);
+ }
+ }
+}
+
+int msm_rpm_wait_for_ack_noirq(uint32_t msg_id)
+{
+ struct msm_rpm_wait_data *elem;
+ unsigned long flags;
+ int rc = 0;
+
+ if (!msg_id) {
+ pr_err("Invalid msg id\n");
+ return -ENOMEM;
+ }
+
+ if (msg_id == 1)
+ return 0;
+
+ if (standalone)
+ return 0;
+
+ spin_lock_irqsave(&msm_rpm_data.smd_lock_read, flags);
+
+ elem = msm_rpm_get_entry_from_msg_id(msg_id);
+
+ if (!elem)
+ /*
+ * Should this be a BUG? Is it OK for another
+ * thread to have read the message?
+ */
+ goto wait_ack_cleanup;
+
+ if (elem->errno != INIT_ERROR) {
+ rc = elem->errno;
+ msm_rpm_free_list_entry(elem);
+ goto wait_ack_cleanup;
+ }
+
+ if (!glink_enabled)
+ msm_rpm_smd_read_data_noirq(msg_id);
+ else
+ msm_rpm_glink_read_data_noirq(elem);
+
+ rc = elem->errno;
+
+ msm_rpm_free_list_entry(elem);
+wait_ack_cleanup:
+ spin_unlock_irqrestore(&msm_rpm_data.smd_lock_read, flags);
+
+ if (!glink_enabled)
+ if (smd_is_pkt_avail(msm_rpm_data.ch_info))
+ tasklet_schedule(&data_tasklet);
+ return rc;
+}
+EXPORT_SYMBOL(msm_rpm_wait_for_ack_noirq);
+
+void *msm_rpm_send_message_noack(enum msm_rpm_set set, uint32_t rsc_type,
+ uint32_t rsc_id, struct msm_rpm_kvp *kvp, int nelems)
+{
+ int i, rc;
+ struct msm_rpm_request *req =
+ msm_rpm_create_request_common(set, rsc_type, rsc_id, nelems,
+ false);
+
+ if (IS_ERR(req))
+ return req;
+
+ if (!req)
+ return ERR_PTR(-ENOMEM);
+
+ for (i = 0; i < nelems; i++) {
+ rc = msm_rpm_add_kvp_data(req, kvp[i].key,
+ kvp[i].data, kvp[i].length);
+ if (rc)
+ goto bail;
+ }
+
+ rc = PTR_ERR(msm_rpm_send_request_noack(req));
+bail:
+ msm_rpm_free_request(req);
+ return rc < 0 ? ERR_PTR(rc) : NULL;
+}
+EXPORT_SYMBOL(msm_rpm_send_message_noack);
+
+int msm_rpm_send_message(enum msm_rpm_set set, uint32_t rsc_type,
+ uint32_t rsc_id, struct msm_rpm_kvp *kvp, int nelems)
+{
+ int i, rc;
+ struct msm_rpm_request *req =
+ msm_rpm_create_request(set, rsc_type, rsc_id, nelems);
+
+ if (IS_ERR(req))
+ return PTR_ERR(req);
+
+ if (!req)
+ return -ENOMEM;
+
+ for (i = 0; i < nelems; i++) {
+ rc = msm_rpm_add_kvp_data(req, kvp[i].key,
+ kvp[i].data, kvp[i].length);
+ if (rc)
+ goto bail;
+ }
+
+ rc = msm_rpm_wait_for_ack(msm_rpm_send_request(req));
+bail:
+ msm_rpm_free_request(req);
+ return rc;
+}
+EXPORT_SYMBOL(msm_rpm_send_message);
+
+int msm_rpm_send_message_noirq(enum msm_rpm_set set, uint32_t rsc_type,
+ uint32_t rsc_id, struct msm_rpm_kvp *kvp, int nelems)
+{
+ int i, rc;
+ struct msm_rpm_request *req =
+ msm_rpm_create_request_noirq(set, rsc_type, rsc_id, nelems);
+
+ if (IS_ERR(req))
+ return PTR_ERR(req);
+
+ if (!req)
+ return -ENOMEM;
+
+ for (i = 0; i < nelems; i++) {
+ rc = msm_rpm_add_kvp_data_noirq(req, kvp[i].key,
+ kvp[i].data, kvp[i].length);
+ if (rc)
+ goto bail;
+ }
+
+ rc = msm_rpm_wait_for_ack_noirq(msm_rpm_send_request_noirq(req));
+bail:
+ msm_rpm_free_request(req);
+ return rc;
+}
+EXPORT_SYMBOL(msm_rpm_send_message_noirq);
+
+/*
+ * During power collapse, the RPM driver disables the SMD interrupts to make
+ * sure that the interrupt doesn't wake us from sleep.
+ */
+int msm_rpm_enter_sleep(bool print, const struct cpumask *cpumask)
+{
+ int ret = 0;
+
+ if (standalone)
+ return 0;
+
+ if (!glink_enabled)
+ ret = smd_mask_receive_interrupt(msm_rpm_data.ch_info,
+ true, cpumask);
+ else
+ ret = glink_rpm_mask_rx_interrupt(glink_data->glink_handle,
+ true, (void *)cpumask);
+
+ if (!ret) {
+ ret = msm_rpm_flush_requests(print);
+
+ if (ret) {
+ if (!glink_enabled)
+ smd_mask_receive_interrupt(
+ msm_rpm_data.ch_info, false, NULL);
+ else
+ glink_rpm_mask_rx_interrupt(
+ glink_data->glink_handle, false, NULL);
+ }
+ }
+ return ret;
+}
+EXPORT_SYMBOL(msm_rpm_enter_sleep);
+
+/*
+ * When the system resumes from power collapse, the SMD interrupt disabled by
+ * the enter function has to be re-enabled to continue processing SMD messages.
+ */
+void msm_rpm_exit_sleep(void)
+{
+ int ret;
+
+ if (standalone)
+ return;
+
+ do {
+ ret = msm_rpm_read_sleep_ack();
+ } while (ret > 0);
+
+ if (!glink_enabled)
+ smd_mask_receive_interrupt(msm_rpm_data.ch_info, false, NULL);
+ else
+ glink_rpm_mask_rx_interrupt(glink_data->glink_handle,
+ false, NULL);
+}
+EXPORT_SYMBOL(msm_rpm_exit_sleep);
+
+/*
+ * Whenever there is data from RPM, notify_rx will be called.
+ * This function is invoked in either interrupt or polling context.
+ */
+static void msm_rpm_trans_notify_rx(void *handle, const void *priv,
+ const void *pkt_priv, const void *ptr, size_t size)
+{
+ uint32_t msg_id;
+ int errno;
+ char buf[MAX_ERR_BUFFER_SIZE] = {0};
+ struct msm_rpm_wait_data *elem;
+ static DEFINE_SPINLOCK(rx_notify_lock);
+ unsigned long flags;
+
+ if (!size)
+ return;
+
+ if (WARN_ON(size > MAX_ERR_BUFFER_SIZE))
+ return;
+
+ spin_lock_irqsave(&rx_notify_lock, flags);
+ memcpy(buf, ptr, size);
+ msg_id = msm_rpm_get_msg_id_from_ack(buf);
+ errno = msm_rpm_get_error_from_ack(buf);
+ elem = msm_rpm_get_entry_from_msg_id(msg_id);
+
+ /*
+ * This applies to sleep set requests.
+ * Sleep set requests are not added to the
+ * wait list. Without this check we would
+ * run into a NULL pointer dereference.
+ */
+ if (!elem) {
+ spin_unlock_irqrestore(&rx_notify_lock, flags);
+ glink_rx_done(handle, ptr, 0);
+ return;
+ }
+
+ msm_rpm_process_ack(msg_id, errno);
+ spin_unlock_irqrestore(&rx_notify_lock, flags);
+
+ glink_rx_done(handle, ptr, 0);
+}
+
+static void msm_rpm_trans_notify_state(void *handle, const void *priv,
+ unsigned int event)
+{
+ switch (event) {
+ case GLINK_CONNECTED:
+ glink_data->glink_handle = handle;
+
+ if (IS_ERR_OR_NULL(glink_data->glink_handle)) {
+ pr_err("glink_handle %d\n",
+ (int)PTR_ERR(glink_data->glink_handle));
+ WARN_ON(1);
+ }
+
+ /*
+ * probe_status is cleared only here so that
+ * clients cannot send data to the RPM until
+ * glink is fully open.
+ */
+ probe_status = 0;
+ pr_info("glink config params: transport=%s, edge=%s, name=%s\n",
+ glink_data->xprt,
+ glink_data->edge,
+ glink_data->name);
+ break;
+ default:
+ pr_err("Unrecognized event %d\n", event);
+ break;
+ }
+}
+
+static void msm_rpm_trans_notify_tx_done(void *handle, const void *priv,
+ const void *pkt_priv, const void *ptr)
+{
+}
+
+static void msm_rpm_glink_open_work(struct work_struct *work)
+{
+ pr_debug("Opening glink channel\n");
+ glink_data->glink_handle = glink_open(glink_data->open_cfg);
+
+ if (IS_ERR_OR_NULL(glink_data->glink_handle)) {
+ pr_err("Error: glink_open failed %d\n",
+ (int)PTR_ERR(glink_data->glink_handle));
+ WARN_ON(1);
+ }
+}
+
+static void msm_rpm_glink_notifier_cb(struct glink_link_state_cb_info *cb_info,
+ void *priv)
+{
+ struct glink_open_config *open_config;
+ static bool first = true;
+
+ if (!cb_info) {
+ pr_err("Missing callback data\n");
+ return;
+ }
+
+ switch (cb_info->link_state) {
+ case GLINK_LINK_STATE_UP:
+ if (first)
+ first = false;
+ else
+ break;
+ open_config = kzalloc(sizeof(*open_config), GFP_KERNEL);
+ if (!open_config) {
+ pr_err("Could not allocate memory\n");
+ break;
+ }
+
+ glink_data->open_cfg = open_config;
+ pr_debug("glink link state up cb received\n");
+ INIT_WORK(&glink_data->work, msm_rpm_glink_open_work);
+
+ open_config->priv = glink_data;
+ open_config->name = glink_data->name;
+ open_config->edge = glink_data->edge;
+ open_config->notify_rx = msm_rpm_trans_notify_rx;
+ open_config->notify_tx_done = msm_rpm_trans_notify_tx_done;
+ open_config->notify_state = msm_rpm_trans_notify_state;
+ schedule_work(&glink_data->work);
+ break;
+ default:
+ pr_err("Unrecognised state = %d\n", cb_info->link_state);
+ break;
+ }
+}
+
+static int msm_rpm_glink_dt_parse(struct platform_device *pdev,
+ struct glink_apps_rpm_data *glink_data)
+{
+ char *key = NULL;
+ int ret;
+
+ if (of_device_is_compatible(pdev->dev.of_node, "qcom,rpm-glink")) {
+ glink_enabled = true;
+ } else {
+ pr_warn("qcom,rpm-glink compatible string does not match\n");
+ ret = -EINVAL;
+ return ret;
+ }
+
+ key = "qcom,glink-edge";
+ ret = of_property_read_string(pdev->dev.of_node, key,
+ &glink_data->edge);
+ if (ret) {
+ pr_err("Failed to read node: %s, key=%s\n",
+ pdev->dev.of_node->full_name, key);
+ return ret;
+ }
+
+ key = "rpm-channel-name";
+ ret = of_property_read_string(pdev->dev.of_node, key,
+ &glink_data->name);
+ if (ret)
+ pr_err("%s(): Failed to read node: %s, key=%s\n", __func__,
+ pdev->dev.of_node->full_name, key);
+
+ return ret;
+}
+
+static int msm_rpm_glink_link_setup(struct glink_apps_rpm_data *glink_data,
+ struct platform_device *pdev)
+{
+ struct glink_link_info *link_info;
+ void *link_state_cb_handle;
+ struct device *dev = &pdev->dev;
+ int ret = 0;
+
+ link_info = devm_kzalloc(dev, sizeof(struct glink_link_info),
+ GFP_KERNEL);
+ if (!link_info) {
+ ret = -ENOMEM;
+ return ret;
+ }
+
+ glink_data->link_info = link_info;
+
+ /* Set up link info parameters */
+ link_info->edge = glink_data->edge;
+ link_info->glink_link_state_notif_cb =
+ msm_rpm_glink_notifier_cb;
+ link_state_cb_handle = glink_register_link_state_cb(link_info, NULL);
+ if (IS_ERR_OR_NULL(link_state_cb_handle)) {
+ pr_err("Could not register cb\n");
+ ret = PTR_ERR(link_state_cb_handle);
+ return ret;
+ }
+
+ spin_lock_init(&msm_rpm_data.smd_lock_read);
+ spin_lock_init(&msm_rpm_data.smd_lock_write);
+
+ return ret;
+}
+
+static int msm_rpm_dev_glink_probe(struct platform_device *pdev)
+{
+ int ret = -ENOMEM;
+ struct device *dev = &pdev->dev;
+
+ glink_data = devm_kzalloc(dev, sizeof(*glink_data), GFP_KERNEL);
+ if (!glink_data)
+ return ret;
+
+ ret = msm_rpm_glink_dt_parse(pdev, glink_data);
+ if (ret < 0) {
+ devm_kfree(dev, glink_data);
+ return ret;
+ }
+
+ ret = msm_rpm_glink_link_setup(glink_data, pdev);
+ if (ret < 0) {
+ /*
+ * If the glink setup fails there is no
+ * fallback mechanism to SMD.
+ */
+ pr_err("GLINK setup fail ret = %d\n", ret);
+ WARN_ON(1);
+ }
+
+ return ret;
+}
+
+static int msm_rpm_dev_probe(struct platform_device *pdev)
+{
+ char *key = NULL;
+ int ret = 0;
+ void __iomem *reg_base;
+ uint32_t version = V0_PROTOCOL_VERSION; /* set to default v0 format */
+
+ /*
+ * Check for standalone support
+ */
+ key = "rpm-standalone";
+ standalone = of_property_read_bool(pdev->dev.of_node, key);
+ if (standalone) {
+ probe_status = ret;
+ goto skip_init;
+ }
+
+ reg_base = of_iomap(pdev->dev.of_node, 0);
+
+ if (reg_base) {
+ version = readq_relaxed(reg_base);
+ iounmap(reg_base);
+ }
+
+ if (version == V1_PROTOCOL_VERSION)
+ rpm_msg_fmt_ver = RPM_MSG_V1_FMT;
+
+ pr_debug("RPM-SMD running version %d\n", rpm_msg_fmt_ver);
+
+ ret = msm_rpm_dev_glink_probe(pdev);
+ if (!ret) {
+ pr_info("APSS-RPM communication over GLINK\n");
+ msm_rpm_send_buffer = msm_rpm_glink_send_buffer;
+ of_platform_populate(pdev->dev.of_node, NULL, NULL,
+ &pdev->dev);
+ return ret;
+ }
+ msm_rpm_send_buffer = msm_rpm_send_smd_buffer;
+
+ key = "rpm-channel-name";
+ ret = of_property_read_string(pdev->dev.of_node, key,
+ &msm_rpm_data.ch_name);
+ if (ret) {
+ pr_err("%s(): Failed to read node: %s, key=%s\n", __func__,
+ pdev->dev.of_node->full_name, key);
+ goto fail;
+ }
+
+ key = "rpm-channel-type";
+ ret = of_property_read_u32(pdev->dev.of_node, key,
+ &msm_rpm_data.ch_type);
+ if (ret) {
+ pr_err("%s(): Failed to read node: %s, key=%s\n", __func__,
+ pdev->dev.of_node->full_name, key);
+ goto fail;
+ }
+
+ ret = smd_named_open_on_edge(msm_rpm_data.ch_name,
+ msm_rpm_data.ch_type,
+ &msm_rpm_data.ch_info,
+ &msm_rpm_data,
+ msm_rpm_notify);
+ if (ret) {
+ if (ret != -EPROBE_DEFER) {
+ pr_err("%s: Cannot open RPM channel %s %d\n",
+ __func__, msm_rpm_data.ch_name,
+ msm_rpm_data.ch_type);
+ }
+ goto fail;
+ }
+
+ spin_lock_init(&msm_rpm_data.smd_lock_write);
+ spin_lock_init(&msm_rpm_data.smd_lock_read);
+ tasklet_init(&data_tasklet, data_fn_tasklet, 0);
+
+ wait_for_completion(&msm_rpm_data.smd_open);
+
+ smd_disable_read_intr(msm_rpm_data.ch_info);
+
+ msm_rpm_smd_wq = alloc_workqueue("rpm-smd",
+ WQ_UNBOUND | WQ_MEM_RECLAIM | WQ_HIGHPRI, 1);
+ if (!msm_rpm_smd_wq) {
+ pr_err("%s: Unable to alloc rpm-smd workqueue\n", __func__);
+ ret = -EINVAL;
+ goto fail;
+ }
+ queue_work(msm_rpm_smd_wq, &msm_rpm_data.work);
+
+ probe_status = ret;
+skip_init:
+ of_platform_populate(pdev->dev.of_node, NULL, NULL, &pdev->dev);
+
+ if (standalone)
+ pr_info("RPM running in standalone mode\n");
+fail:
+ return probe_status;
+}
+
+static const struct of_device_id msm_rpm_match_table[] = {
+ {.compatible = "qcom,rpm-smd"},
+ {.compatible = "qcom,rpm-glink"},
+ {},
+};
+
+static struct platform_driver msm_rpm_device_driver = {
+ .probe = msm_rpm_dev_probe,
+ .driver = {
+ .name = "rpm-smd",
+ .owner = THIS_MODULE,
+ .of_match_table = msm_rpm_match_table,
+ },
+};
+
+int __init msm_rpm_driver_init(void)
+{
+ static bool registered;
+
+ if (registered)
+ return 0;
+ registered = true;
+
+ return platform_driver_register(&msm_rpm_device_driver);
+}
+EXPORT_SYMBOL(msm_rpm_driver_init);
+arch_initcall(msm_rpm_driver_init);
diff --git a/drivers/soc/qcom/scm.c b/drivers/soc/qcom/scm.c
index ac5cc54..492b68c 100644
--- a/drivers/soc/qcom/scm.c
+++ b/drivers/soc/qcom/scm.c
@@ -764,7 +764,7 @@ int scm_call2_atomic(u32 fn_id, struct scm_desc *desc)
return scm_remap_error(ret);
return ret;
}
-
+EXPORT_SYMBOL(scm_call2_atomic);
/**
* scm_call() - Send an SCM command
* @svc_id: service identifier
diff --git a/drivers/soc/qcom/secure_buffer.c b/drivers/soc/qcom/secure_buffer.c
index 6553ac0..5289cd0 100644
--- a/drivers/soc/qcom/secure_buffer.c
+++ b/drivers/soc/qcom/secure_buffer.c
@@ -212,6 +212,7 @@ int hyp_assign_table(struct sg_table *table,
kfree(source_vm_copy);
return ret;
}
+EXPORT_SYMBOL(hyp_assign_table);
int hyp_assign_phys(phys_addr_t addr, u64 size, u32 *source_vm_list,
int source_nelems, int *dest_vmids,
diff --git a/drivers/soc/qcom/smd_debug.c b/drivers/soc/qcom/smd_debug.c
new file mode 100644
index 0000000..07c5aeb
--- /dev/null
+++ b/drivers/soc/qcom/smd_debug.c
@@ -0,0 +1,429 @@
+/* drivers/soc/qcom/smd_debug.c
+ *
+ * Copyright (C) 2007 Google, Inc.
+ * Copyright (c) 2009-2014, 2017, The Linux Foundation. All rights reserved.
+ * Author: Brian Swetland <swetland@google.com>
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#include <linux/debugfs.h>
+#include <linux/list.h>
+#include <linux/ctype.h>
+#include <linux/jiffies.h>
+#include <linux/err.h>
+
+#include <soc/qcom/smem.h>
+
+#include "smd_private.h"
+
+#if defined(CONFIG_DEBUG_FS)
+
+static char *chstate(unsigned int n)
+{
+ switch (n) {
+ case SMD_SS_CLOSED:
+ return "CLOSED";
+ case SMD_SS_OPENING:
+ return "OPENING";
+ case SMD_SS_OPENED:
+ return "OPENED";
+ case SMD_SS_FLUSHING:
+ return "FLUSHING";
+ case SMD_SS_CLOSING:
+ return "CLOSING";
+ case SMD_SS_RESET:
+ return "RESET";
+ case SMD_SS_RESET_OPENING:
+ return "ROPENING";
+ default:
+ return "UNKNOWN";
+ }
+}
+
+static void debug_int_stats(struct seq_file *s)
+{
+ int subsys;
+ struct interrupt_stat *stats = interrupt_stats;
+ const char *subsys_name;
+
+ seq_puts(s,
+ " Subsystem | Interrupt ID | In | Out |\n");
+
+ for (subsys = 0; subsys < NUM_SMD_SUBSYSTEMS; ++subsys) {
+ subsys_name = smd_pid_to_subsystem(subsys);
+ if (!IS_ERR_OR_NULL(subsys_name)) {
+ seq_printf(s, "%-10s %4s | %9d | %9u | %9u |\n",
+ smd_pid_to_subsystem(subsys), "smd",
+ stats->smd_interrupt_id,
+ stats->smd_in_count,
+ stats->smd_out_count);
+
+ seq_printf(s, "%-10s %4s | %9d | %9u | %9u |\n",
+ smd_pid_to_subsystem(subsys), "smsm",
+ stats->smsm_interrupt_id,
+ stats->smsm_in_count,
+ stats->smsm_out_count);
+ }
+ ++stats;
+ }
+}
+
+static void debug_int_stats_reset(struct seq_file *s)
+{
+ int subsys;
+ struct interrupt_stat *stats = interrupt_stats;
+
+ seq_puts(s, "Resetting interrupt stats.\n");
+
+ for (subsys = 0; subsys < NUM_SMD_SUBSYSTEMS; ++subsys) {
+ stats->smd_in_count = 0;
+ stats->smd_out_count = 0;
+ stats->smsm_in_count = 0;
+ stats->smsm_out_count = 0;
+ ++stats;
+ }
+}
+
+/* NNV: revisit, it may not be the smd version */
+static void debug_read_smd_version(struct seq_file *s)
+{
+ uint32_t *smd_ver;
+ uint32_t n, version;
+
+ smd_ver = smem_find(SMEM_VERSION_SMD, 32 * sizeof(uint32_t),
+ 0, SMEM_ANY_HOST_FLAG);
+
+ if (smd_ver)
+ for (n = 0; n < 32; n++) {
+ version = smd_ver[n];
+ seq_printf(s, "entry %d: %d.%d\n", n,
+ version >> 16,
+ version & 0xffff);
+ }
+}
+
+/**
+ * pid_to_str - Convert a numeric processor id value into a human readable
+ * string value.
+ *
+ * @pid: the processor id to convert
+ * @returns: a string representation of @pid
+ */
+static char *pid_to_str(int pid)
+{
+ switch (pid) {
+ case SMD_APPS:
+ return "APPS";
+ case SMD_MODEM:
+ return "MDMSW";
+ case SMD_Q6:
+ return "ADSP";
+ case SMD_TZ:
+ return "TZ";
+ case SMD_WCNSS:
+ return "WCNSS";
+ case SMD_MODEM_Q6_FW:
+ return "MDMFW";
+ case SMD_RPM:
+ return "RPM";
+ default:
+ return "???";
+ }
+}
+
+/**
+ * print_half_ch_state - Print the state of half of an SMD channel in a
+ * human-readable format.
+ *
+ * @s: the sequential file to print to
+ * @half_ch: the half of an SMD channel whose state should be printed
+ * @half_ch_funcs: the relevant channel access functions for @half_ch
+ * @size: size of the fifo in bytes associated with @half_ch
+ * @proc: the processor id that owns the part of the SMD channel associated with
+ * @half_ch
+ * @is_restricted: true if memory access is restricted
+ */
+static void print_half_ch_state(struct seq_file *s,
+ void *half_ch,
+ struct smd_half_channel_access *half_ch_funcs,
+ unsigned int size,
+ int proc,
+ bool is_restricted)
+{
+ seq_printf(s, "%-5s|", pid_to_str(proc));
+
+ if (!is_restricted) {
+ seq_printf(s, "%-7s|0x%05X|0x%05X|0x%05X",
+ chstate(half_ch_funcs->get_state(half_ch)),
+ size,
+ half_ch_funcs->get_tail(half_ch),
+ half_ch_funcs->get_head(half_ch));
+ seq_printf(s, "|%c%c%c%c%c%c%c%c|0x%05X",
+ half_ch_funcs->get_fDSR(half_ch) ? 'D' : 'd',
+ half_ch_funcs->get_fCTS(half_ch) ? 'C' : 'c',
+ half_ch_funcs->get_fCD(half_ch) ? 'C' : 'c',
+ half_ch_funcs->get_fRI(half_ch) ? 'I' : 'i',
+ half_ch_funcs->get_fHEAD(half_ch) ? 'W' : 'w',
+ half_ch_funcs->get_fTAIL(half_ch) ? 'R' : 'r',
+ half_ch_funcs->get_fSTATE(half_ch) ? 'S' : 's',
+ half_ch_funcs->get_fBLOCKREADINTR(half_ch) ? 'B' : 'b',
+ (half_ch_funcs->get_head(half_ch) -
+ half_ch_funcs->get_tail(half_ch)) & (size - 1));
+ } else {
+ seq_puts(s, " Access Restricted");
+ }
+}
+
+/**
+ * smd_xfer_type_to_str - Convert a numeric transfer type value into a human
+ * readable string value.
+ *
+ * @xfer_type: the transfer type to convert
+ * @returns: a string representation of @xfer_type
+ */
+static char *smd_xfer_type_to_str(uint32_t xfer_type)
+{
+ if (xfer_type == 1)
+ return "S"; /* streaming type */
+ else if (xfer_type == 2)
+ return "P"; /* packet type */
+ else
+ return "L"; /* legacy type */
+}
+
+/**
+ * print_smd_ch_table - Print the current state of every valid SMD channel in a
+ * specific SMD channel allocation table to a human
+ * readable formatted output.
+ *
+ * @s: the sequential file to print to
+ * @tbl: a valid pointer to the channel allocation table to print from
+ * @num_tbl_entries: total number of entries in the table referenced by @tbl
+ * @ch_base_id: the SMEM item id corresponding to the array of channel
+ * structures for the channels found in @tbl
+ * @fifo_base_id: the SMEM item id corresponding to the array of channel fifos
+ * for the channels found in @tbl
+ * @pid: processor id to use for any SMEM operations
+ * @flags: flags to use for any SMEM operations
+ */
+static void print_smd_ch_table(struct seq_file *s,
+ struct smd_alloc_elm *tbl,
+ unsigned int num_tbl_entries,
+ unsigned int ch_base_id,
+ unsigned int fifo_base_id,
+ unsigned int pid,
+ unsigned int flags)
+{
+ void *half_ch;
+ unsigned int half_ch_size;
+ uint32_t ch_type;
+ void *buffer;
+ unsigned int buffer_size;
+ int n;
+ bool is_restricted;
+
+/*
+ * formatted, human readable channel state output, ie:
+ID|CHANNEL NAME |T|PROC |STATE |FIFO SZ|RDPTR |WRPTR |FLAGS |DATAPEN
+-------------------------------------------------------------------------------
+00|DS |S|APPS |CLOSED |0x02000|0x00000|0x00000|dcCiwrsb|0x00000
+ | | |MDMSW|OPENING|0x02000|0x00000|0x00000|dcCiwrsb|0x00000
+-------------------------------------------------------------------------------
+ */
+
+ seq_printf(s, "%2s|%-19s|%1s|%-5s|%-7s|%-7s|%-7s|%-7s|%-8s|%-7s\n",
+ "ID",
+ "CHANNEL NAME",
+ "T",
+ "PROC",
+ "STATE",
+ "FIFO SZ",
+ "RDPTR",
+ "WRPTR",
+ "FLAGS",
+ "DATAPEN");
+ seq_puts(s,
+ "-------------------------------------------------------------------------------\n");
+ for (n = 0; n < num_tbl_entries; ++n) {
+ if (strlen(tbl[n].name) == 0)
+ continue;
+
+ seq_printf(s, "%2u|%-19s|%s|", tbl[n].cid, tbl[n].name,
+ smd_xfer_type_to_str(SMD_XFER_TYPE(tbl[n].type)));
+ ch_type = SMD_CHANNEL_TYPE(tbl[n].type);
+
+ if (smd_edge_to_remote_pid(ch_type) == SMD_RPM &&
+ smd_edge_to_local_pid(ch_type) != SMD_APPS)
+ is_restricted = true;
+ else
+ is_restricted = false;
+
+ if (is_word_access_ch(ch_type))
+ half_ch_size =
+ sizeof(struct smd_half_channel_word_access);
+ else
+ half_ch_size = sizeof(struct smd_half_channel);
+
+ half_ch = smem_find(ch_base_id + n, 2 * half_ch_size,
+ pid, flags);
+ buffer = smem_get_entry(fifo_base_id + n, &buffer_size,
+ pid, flags);
+ if (half_ch && buffer)
+ print_half_ch_state(s,
+ half_ch,
+ get_half_ch_funcs(ch_type),
+ buffer_size / 2,
+ smd_edge_to_local_pid(ch_type),
+ is_restricted);
+
+ seq_puts(s, "\n");
+ seq_printf(s, "%2s|%-19s|%1s|", "", "", "");
+
+ if (half_ch && buffer)
+ print_half_ch_state(s,
+ half_ch + half_ch_size,
+ get_half_ch_funcs(ch_type),
+ buffer_size / 2,
+ smd_edge_to_remote_pid(ch_type),
+ is_restricted);
+
+ seq_puts(s, "\n");
+ seq_puts(s,
+ "-------------------------------------------------------------------------------\n");
+ }
+}
+
+/**
+ * debug_ch - Print the current state of every valid SMD channel in a human
+ * readable formatted table.
+ *
+ * @s: the sequential file to print to
+ */
+static void debug_ch(struct seq_file *s)
+{
+ struct smd_alloc_elm *tbl;
+ struct smd_alloc_elm *default_pri_tbl;
+ struct smd_alloc_elm *default_sec_tbl;
+ unsigned int tbl_size;
+ int i;
+
+ tbl = smem_get_entry(ID_CH_ALLOC_TBL, &tbl_size, 0, SMEM_ANY_HOST_FLAG);
+ default_pri_tbl = tbl;
+
+ if (IS_ERR(tbl)) {
+ if (PTR_ERR(tbl) == -EPROBE_DEFER)
+ seq_puts(s, "SMEM is not initialized\n");
+ else
+ seq_puts(s, "Channel allocation table not found\n");
+ return;
+ }
+
+ if (!tbl) {
+ seq_puts(s, "Channel allocation table not found\n");
+ return;
+ }
+
+ seq_puts(s, "Primary allocation table:\n");
+ print_smd_ch_table(s, tbl, tbl_size / sizeof(*tbl), ID_SMD_CHANNELS,
+ SMEM_SMD_FIFO_BASE_ID,
+ 0,
+ SMEM_ANY_HOST_FLAG);
+
+ tbl = smem_get_entry(SMEM_CHANNEL_ALLOC_TBL_2, &tbl_size, 0,
+ SMEM_ANY_HOST_FLAG);
+ default_sec_tbl = tbl;
+ if (tbl) {
+ seq_puts(s, "\n\nSecondary allocation table:\n");
+ print_smd_ch_table(s, tbl, tbl_size / sizeof(*tbl),
+ SMEM_SMD_BASE_ID_2,
+ SMEM_SMD_FIFO_BASE_ID_2,
+ 0,
+ SMEM_ANY_HOST_FLAG);
+ }
+
+ for (i = 1; i < NUM_SMD_SUBSYSTEMS; ++i) {
+ tbl = smem_get_entry(ID_CH_ALLOC_TBL, &tbl_size, i, 0);
+ if (tbl && tbl != default_pri_tbl) {
+ seq_puts(s, "\n\n");
+ seq_printf(s, "%s <-> %s Primary allocation table:\n",
+ pid_to_str(SMD_APPS),
+ pid_to_str(i));
+ print_smd_ch_table(s, tbl, tbl_size / sizeof(*tbl),
+ ID_SMD_CHANNELS,
+ SMEM_SMD_FIFO_BASE_ID,
+ i,
+ 0);
+ }
+
+ tbl = smem_get_entry(SMEM_CHANNEL_ALLOC_TBL_2, &tbl_size, i, 0);
+ if (tbl && tbl != default_sec_tbl) {
+ seq_puts(s, "\n\n");
+ seq_printf(s, "%s <-> %s Secondary allocation table:\n",
+ pid_to_str(SMD_APPS),
+ pid_to_str(i));
+ print_smd_ch_table(s, tbl, tbl_size / sizeof(*tbl),
+ SMEM_SMD_BASE_ID_2,
+ SMEM_SMD_FIFO_BASE_ID_2,
+ i,
+ 0);
+ }
+ }
+}
+
+static int debugfs_show(struct seq_file *s, void *data)
+{
+ void (*show)(struct seq_file *) = s->private;
+
+ show(s);
+
+ return 0;
+}
+
+static int debug_open(struct inode *inode, struct file *file)
+{
+ return single_open(file, debugfs_show, inode->i_private);
+}
+
+static const struct file_operations debug_ops = {
+ .open = debug_open,
+ .release = single_release,
+ .read = seq_read,
+ .llseek = seq_lseek,
+};
+
+static void debug_create(const char *name, umode_t mode,
+ struct dentry *dent,
+ void (*show)(struct seq_file *))
+{
+ struct dentry *file;
+
+ file = debugfs_create_file(name, mode, dent, show, &debug_ops);
+ if (!file)
+ pr_err("%s: unable to create file '%s'\n", __func__, name);
+}
+
+static int __init smd_debugfs_init(void)
+{
+ struct dentry *dent;
+
+ dent = debugfs_create_dir("smd", 0);
+ if (IS_ERR(dent))
+ return PTR_ERR(dent);
+
+ debug_create("ch", 0444, dent, debug_ch);
+ debug_create("version", 0444, dent, debug_read_smd_version);
+ debug_create("int_stats", 0444, dent, debug_int_stats);
+ debug_create("int_stats_reset", 0444, dent, debug_int_stats_reset);
+
+ return 0;
+}
+
+late_initcall(smd_debugfs_init);
+#endif
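The DATAPEN column printed by print_half_ch_state() above is computed as `(head - tail) & (size - 1)`, which relies on the SMD FIFO size being a power of two. A minimal, self-contained sketch of that math (fifo_data_pending() is an illustrative name, not a driver symbol):

```c
#include <assert.h>

/*
 * Pending-byte count for a power-of-two ring FIFO, as used for the
 * DATAPEN column in the SMD debugfs channel table. Unsigned
 * subtraction wraps modulo 2^32, so masking with (size - 1) stays
 * correct even after the head index has wrapped past the tail.
 * Illustrative sketch only.
 */
static unsigned int fifo_data_pending(unsigned int head, unsigned int tail,
				      unsigned int size)
{
	return (head - tail) & (size - 1);
}
```

With a 0x2000-byte FIFO, a head of 0x010 and a tail of 0x1FF0 report 0x20 pending bytes despite the wraparound.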
diff --git a/drivers/soc/qcom/smd_init_dt.c b/drivers/soc/qcom/smd_init_dt.c
new file mode 100644
index 0000000..f14461f
--- /dev/null
+++ b/drivers/soc/qcom/smd_init_dt.c
@@ -0,0 +1,343 @@
+/* drivers/soc/qcom/smd_init_dt.c
+ *
+ * Copyright (c) 2013-2015, 2017, The Linux Foundation. All rights reserved.
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+#include <linux/platform_device.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_irq.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/ipc_logging.h>
+
+#include "smd_private.h"
+
+#define MODULE_NAME "msm_smd"
+#define IPC_LOG(level, x...) do { \
+ if (smd_log_ctx) \
+ ipc_log_string(smd_log_ctx, x); \
+ else \
+ printk(level x); \
+ } while (0)
+
+#if defined(CONFIG_MSM_SMD_DEBUG)
+#define SMD_DBG(x...) do { \
+ if (msm_smd_debug_mask & MSM_SMD_DEBUG) \
+ IPC_LOG(KERN_DEBUG, x); \
+ } while (0)
+
+#define SMSM_DBG(x...) do { \
+ if (msm_smd_debug_mask & MSM_SMSM_DEBUG) \
+ IPC_LOG(KERN_DEBUG, x); \
+ } while (0)
+#else
+#define SMD_DBG(x...) do { } while (0)
+#define SMSM_DBG(x...) do { } while (0)
+#endif
+
+static DEFINE_MUTEX(smd_probe_lock);
+static int first_probe_done;
+
+static int msm_smsm_probe(struct platform_device *pdev)
+{
+ uint32_t edge;
+ char *key;
+ int ret;
+ uint32_t irq_offset;
+ uint32_t irq_bitmask;
+ uint32_t irq_line;
+ struct interrupt_config_item *private_irq;
+ struct device_node *node;
+ void *irq_out_base;
+ resource_size_t irq_out_size;
+ struct platform_device *parent_pdev;
+ struct resource *r;
+ struct interrupt_config *private_intr_config;
+ uint32_t remote_pid;
+
+ node = pdev->dev.of_node;
+
+ if (!pdev->dev.parent) {
+ pr_err("%s: missing link to parent device\n", __func__);
+ return -ENODEV;
+ }
+
+ parent_pdev = to_platform_device(pdev->dev.parent);
+
+ key = "irq-reg-base";
+ r = platform_get_resource_byname(parent_pdev, IORESOURCE_MEM, key);
+ if (!r)
+ goto missing_key;
+ irq_out_size = resource_size(r);
+ irq_out_base = ioremap_nocache(r->start, irq_out_size);
+ if (!irq_out_base) {
+ pr_err("%s: ioremap_nocache() of irq_out_base addr:%pa size:%pa\n",
+ __func__, &r->start, &irq_out_size);
+ return -ENOMEM;
+ }
+ SMSM_DBG("%s: %s = %p", __func__, key, irq_out_base);
+
+ key = "qcom,smsm-edge";
+ ret = of_property_read_u32(node, key, &edge);
+ if (ret)
+ goto missing_key;
+ SMSM_DBG("%s: %s = %d", __func__, key, edge);
+
+ key = "qcom,smsm-irq-offset";
+ ret = of_property_read_u32(node, key, &irq_offset);
+ if (ret)
+ goto missing_key;
+ SMSM_DBG("%s: %s = %x", __func__, key, irq_offset);
+
+ key = "qcom,smsm-irq-bitmask";
+ ret = of_property_read_u32(node, key, &irq_bitmask);
+ if (ret)
+ goto missing_key;
+ SMSM_DBG("%s: %s = %x", __func__, key, irq_bitmask);
+
+ key = "interrupts";
+ irq_line = irq_of_parse_and_map(node, 0);
+ if (!irq_line)
+ goto missing_key;
+ SMSM_DBG("%s: %s = %d", __func__, key, irq_line);
+
+ private_intr_config = smd_get_intr_config(edge);
+ if (!private_intr_config) {
+ pr_err("%s: invalid edge\n", __func__);
+ iounmap(irq_out_base);
+ return -ENODEV;
+ }
+ private_irq = &private_intr_config->smsm;
+ private_irq->out_bit_pos = irq_bitmask;
+ private_irq->out_offset = irq_offset;
+ private_irq->out_base = irq_out_base;
+ private_irq->irq_id = irq_line;
+ remote_pid = smd_edge_to_remote_pid(edge);
+ interrupt_stats[remote_pid].smsm_interrupt_id = irq_line;
+
+ ret = request_irq(irq_line,
+ private_irq->irq_handler,
+ IRQF_TRIGGER_RISING | IRQF_NO_SUSPEND,
+ node->name,
+ NULL);
+ if (ret < 0) {
+ pr_err("%s: request_irq() failed on %d\n", __func__, irq_line);
+ iounmap(irq_out_base);
+ return ret;
+ }
+ ret = enable_irq_wake(irq_line);
+ if (ret < 0)
+ pr_err("%s: enable_irq_wake() failed on %d\n", __func__,
+ irq_line);
+
+ ret = smsm_post_init();
+ if (ret) {
+ pr_err("smsm_post_init() failed ret=%d\n", ret);
+ iounmap(irq_out_base);
+ free_irq(irq_line, NULL);
+ return ret;
+ }
+
+ return 0;
+
+missing_key:
+ pr_err("%s: missing key: %s\n", __func__, key);
+ return -ENODEV;
+}
+
+static int msm_smd_probe(struct platform_device *pdev)
+{
+ uint32_t edge;
+ char *key;
+ int ret;
+ uint32_t irq_offset;
+ uint32_t irq_bitmask;
+ uint32_t irq_line;
+ const char *subsys_name;
+ struct interrupt_config_item *private_irq;
+ struct device_node *node;
+ void *irq_out_base;
+ resource_size_t irq_out_size;
+ struct platform_device *parent_pdev;
+ struct resource *r;
+ struct interrupt_config *private_intr_config;
+ uint32_t remote_pid;
+ bool skip_pil;
+
+ node = pdev->dev.of_node;
+
+ if (!pdev->dev.parent) {
+ pr_err("%s: missing link to parent device\n", __func__);
+ return -ENODEV;
+ }
+
+ mutex_lock(&smd_probe_lock);
+ if (!first_probe_done) {
+ smd_reset_all_edge_subsys_name();
+ first_probe_done = 1;
+ }
+ mutex_unlock(&smd_probe_lock);
+
+ parent_pdev = to_platform_device(pdev->dev.parent);
+
+ key = "irq-reg-base";
+ r = platform_get_resource_byname(parent_pdev, IORESOURCE_MEM, key);
+ if (!r)
+ goto missing_key;
+ irq_out_size = resource_size(r);
+ irq_out_base = ioremap_nocache(r->start, irq_out_size);
+ if (!irq_out_base) {
+ pr_err("%s: ioremap_nocache() of irq_out_base addr:%pa size:%pa\n",
+ __func__, &r->start, &irq_out_size);
+ return -ENOMEM;
+ }
+ SMD_DBG("%s: %s = %p", __func__, key, irq_out_base);
+
+ key = "qcom,smd-edge";
+ ret = of_property_read_u32(node, key, &edge);
+ if (ret)
+ goto missing_key;
+ SMD_DBG("%s: %s = %d", __func__, key, edge);
+
+ key = "qcom,smd-irq-offset";
+ ret = of_property_read_u32(node, key, &irq_offset);
+ if (ret)
+ goto missing_key;
+ SMD_DBG("%s: %s = %x", __func__, key, irq_offset);
+
+ key = "qcom,smd-irq-bitmask";
+ ret = of_property_read_u32(node, key, &irq_bitmask);
+ if (ret)
+ goto missing_key;
+ SMD_DBG("%s: %s = %x", __func__, key, irq_bitmask);
+
+ key = "interrupts";
+ irq_line = irq_of_parse_and_map(node, 0);
+ if (!irq_line)
+ goto missing_key;
+ SMD_DBG("%s: %s = %d", __func__, key, irq_line);
+
+ key = "label";
+ subsys_name = of_get_property(node, key, NULL);
+ SMD_DBG("%s: %s = %s", __func__, key, subsys_name);
+ /*
+ * Backwards compatibility. Although label is required, some DTs may
+ * still list the legacy pil-string. Sanely handle pil-string.
+ */
+ if (!subsys_name) {
+ pr_warn("msm_smd: Missing required property - label. Using legacy parsing\n");
+ key = "qcom,pil-string";
+ subsys_name = of_get_property(node, key, NULL);
+ SMD_DBG("%s: %s = %s", __func__, key, subsys_name);
+ if (subsys_name)
+ skip_pil = false;
+ else
+ skip_pil = true;
+ } else {
+ key = "qcom,not-loadable";
+ skip_pil = of_property_read_bool(node, key);
+ SMD_DBG("%s: %s = %d\n", __func__, key, skip_pil);
+ }
+
+ private_intr_config = smd_get_intr_config(edge);
+ if (!private_intr_config) {
+ pr_err("%s: invalid edge\n", __func__);
+ iounmap(irq_out_base);
+ return -ENODEV;
+ }
+ private_irq = &private_intr_config->smd;
+ private_irq->out_bit_pos = irq_bitmask;
+ private_irq->out_offset = irq_offset;
+ private_irq->out_base = irq_out_base;
+ private_irq->irq_id = irq_line;
+ remote_pid = smd_edge_to_remote_pid(edge);
+ interrupt_stats[remote_pid].smd_interrupt_id = irq_line;
+
+ ret = request_irq(irq_line,
+ private_irq->irq_handler,
+ IRQF_TRIGGER_RISING | IRQF_NO_SUSPEND | IRQF_SHARED,
+ node->name,
+ &pdev->dev);
+ if (ret < 0) {
+ pr_err("%s: request_irq() failed on %d\n", __func__, irq_line);
+ iounmap(irq_out_base);
+ return ret;
+ }
+
+ ret = enable_irq_wake(irq_line);
+ if (ret < 0)
+ pr_err("%s: enable_irq_wake() failed on %d\n", __func__,
+ irq_line);
+
+ smd_set_edge_subsys_name(edge, subsys_name);
+ smd_proc_set_skip_pil(smd_edge_to_remote_pid(edge), skip_pil);
+
+ smd_set_edge_initialized(edge);
+ smd_post_init(remote_pid);
+ return 0;
+
+missing_key:
+ pr_err("%s: missing key: %s\n", __func__, key);
+ return -ENODEV;
+}
+
+static const struct of_device_id msm_smd_match_table[] = {
+ { .compatible = "qcom,smd" },
+ {},
+};
+
+static struct platform_driver msm_smd_driver = {
+ .probe = msm_smd_probe,
+ .driver = {
+ .name = MODULE_NAME,
+ .owner = THIS_MODULE,
+ .of_match_table = msm_smd_match_table,
+ },
+};
+
+static const struct of_device_id msm_smsm_match_table[] = {
+ { .compatible = "qcom,smsm" },
+ {},
+};
+
+static struct platform_driver msm_smsm_driver = {
+ .probe = msm_smsm_probe,
+ .driver = {
+ .name = "msm_smsm",
+ .owner = THIS_MODULE,
+ .of_match_table = msm_smsm_match_table,
+ },
+};
+
+int msm_smd_driver_register(void)
+{
+ int rc;
+
+ rc = platform_driver_register(&msm_smd_driver);
+ if (rc) {
+ pr_err("%s: smd_driver register failed %d\n",
+ __func__, rc);
+ return rc;
+ }
+
+ rc = platform_driver_register(&msm_smsm_driver);
+ if (rc) {
+ pr_err("%s: msm_smsm_driver register failed %d\n",
+ __func__, rc);
+ return rc;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(msm_smd_driver_register);
+
+MODULE_DESCRIPTION("MSM SMD Device Tree Init");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/soc/qcom/smd_private.c b/drivers/soc/qcom/smd_private.c
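Both probe functions in smd_init_dt.c use the same error-reporting idiom: `key` is updated before every device-tree lookup, so a single `missing_key` label can name exactly which property was absent. A self-contained sketch of the idiom, with a NULL-terminated string array standing in for the DT node (has_prop() and parse_props() are illustrative names, not driver code):

```c
#include <stddef.h>
#include <string.h>

/* Stand-in for of_property_read_u32(): 0 if the key exists, -1 if not. */
static int has_prop(const char *const *props, const char *key)
{
	for (; *props; ++props)
		if (strcmp(*props, key) == 0)
			return 0;
	return -1;
}

/*
 * Returns NULL on success, or the name of the first missing required
 * property. `key` always tracks the lookup in flight, so one error
 * label suffices -- the same shape as msm_smd_probe()/msm_smsm_probe().
 */
static const char *parse_props(const char *const *props)
{
	const char *key;

	key = "qcom,smd-edge";
	if (has_prop(props, key))
		goto missing_key;

	key = "qcom,smd-irq-offset";
	if (has_prop(props, key))
		goto missing_key;

	key = "qcom,smd-irq-bitmask";
	if (has_prop(props, key))
		goto missing_key;

	return NULL;	/* all required keys present */

missing_key:
	return key;	/* caller logs the missing property */
}
```

The caller only needs one `pr_err("%s: missing key: %s\n", ...)` site regardless of how many properties are parsed.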
new file mode 100644
index 0000000..a554696
--- /dev/null
+++ b/drivers/soc/qcom/smd_private.c
@@ -0,0 +1,336 @@
+/* Copyright (c) 2012, 2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include "smd_private.h"
+
+void set_state(volatile void __iomem *half_channel, unsigned int data)
+{
+ ((struct smd_half_channel __force *)(half_channel))->state = data;
+}
+
+unsigned int get_state(volatile void __iomem *half_channel)
+{
+ return ((struct smd_half_channel __force *)(half_channel))->state;
+}
+
+void set_fDSR(volatile void __iomem *half_channel, unsigned char data)
+{
+ ((struct smd_half_channel __force *)(half_channel))->fDSR = data;
+}
+
+unsigned int get_fDSR(volatile void __iomem *half_channel)
+{
+ return ((struct smd_half_channel __force *)(half_channel))->fDSR;
+}
+
+void set_fCTS(volatile void __iomem *half_channel, unsigned char data)
+{
+ ((struct smd_half_channel __force *)(half_channel))->fCTS = data;
+}
+
+unsigned int get_fCTS(volatile void __iomem *half_channel)
+{
+ return ((struct smd_half_channel __force *)(half_channel))->fCTS;
+}
+
+void set_fCD(volatile void __iomem *half_channel, unsigned char data)
+{
+ ((struct smd_half_channel __force *)(half_channel))->fCD = data;
+}
+
+unsigned int get_fCD(volatile void __iomem *half_channel)
+{
+ return ((struct smd_half_channel __force *)(half_channel))->fCD;
+}
+
+void set_fRI(volatile void __iomem *half_channel, unsigned char data)
+{
+ ((struct smd_half_channel __force *)(half_channel))->fRI = data;
+}
+
+unsigned int get_fRI(volatile void __iomem *half_channel)
+{
+ return ((struct smd_half_channel __force *)(half_channel))->fRI;
+}
+
+void set_fHEAD(volatile void __iomem *half_channel, unsigned char data)
+{
+ ((struct smd_half_channel __force *)(half_channel))->fHEAD = data;
+}
+
+unsigned int get_fHEAD(volatile void __iomem *half_channel)
+{
+ return ((struct smd_half_channel __force *)(half_channel))->fHEAD;
+}
+
+void set_fTAIL(volatile void __iomem *half_channel, unsigned char data)
+{
+ ((struct smd_half_channel __force *)(half_channel))->fTAIL = data;
+}
+
+unsigned int get_fTAIL(volatile void __iomem *half_channel)
+{
+ return ((struct smd_half_channel __force *)(half_channel))->fTAIL;
+}
+
+void set_fSTATE(volatile void __iomem *half_channel, unsigned char data)
+{
+ ((struct smd_half_channel __force *)(half_channel))->fSTATE = data;
+}
+
+unsigned int get_fSTATE(volatile void __iomem *half_channel)
+{
+ return ((struct smd_half_channel __force *)(half_channel))->fSTATE;
+}
+
+void set_fBLOCKREADINTR(volatile void __iomem *half_channel, unsigned char data)
+{
+ ((struct smd_half_channel __force *)
+ (half_channel))->fBLOCKREADINTR = data;
+}
+
+unsigned int get_fBLOCKREADINTR(volatile void __iomem *half_channel)
+{
+ return ((struct smd_half_channel __force *)
+ (half_channel))->fBLOCKREADINTR;
+}
+
+void set_tail(volatile void __iomem *half_channel, unsigned int data)
+{
+ ((struct smd_half_channel __force *)(half_channel))->tail = data;
+}
+
+unsigned int get_tail(volatile void __iomem *half_channel)
+{
+ return ((struct smd_half_channel __force *)(half_channel))->tail;
+}
+
+void set_head(volatile void __iomem *half_channel, unsigned int data)
+{
+ ((struct smd_half_channel __force *)(half_channel))->head = data;
+}
+
+unsigned int get_head(volatile void __iomem *half_channel)
+{
+ return ((struct smd_half_channel __force *)(half_channel))->head;
+}
+
+void set_state_word_access(volatile void __iomem *half_channel,
+ unsigned int data)
+{
+ ((struct smd_half_channel_word_access __force *)
+ (half_channel))->state = data;
+}
+
+unsigned int get_state_word_access(volatile void __iomem *half_channel)
+{
+ return ((struct smd_half_channel_word_access __force *)
+ (half_channel))->state;
+}
+
+void set_fDSR_word_access(volatile void __iomem *half_channel,
+ unsigned char data)
+{
+ ((struct smd_half_channel_word_access __force *)
+ (half_channel))->fDSR = data;
+}
+
+unsigned int get_fDSR_word_access(volatile void __iomem *half_channel)
+{
+ return ((struct smd_half_channel_word_access __force *)
+ (half_channel))->fDSR;
+}
+
+void set_fCTS_word_access(volatile void __iomem *half_channel,
+ unsigned char data)
+{
+ ((struct smd_half_channel_word_access __force *)
+ (half_channel))->fCTS = data;
+}
+
+unsigned int get_fCTS_word_access(volatile void __iomem *half_channel)
+{
+ return ((struct smd_half_channel_word_access __force *)
+ (half_channel))->fCTS;
+}
+
+void set_fCD_word_access(volatile void __iomem *half_channel,
+ unsigned char data)
+{
+ ((struct smd_half_channel_word_access __force *)
+ (half_channel))->fCD = data;
+}
+
+unsigned int get_fCD_word_access(volatile void __iomem *half_channel)
+{
+ return ((struct smd_half_channel_word_access __force *)
+ (half_channel))->fCD;
+}
+
+void set_fRI_word_access(volatile void __iomem *half_channel,
+ unsigned char data)
+{
+ ((struct smd_half_channel_word_access __force *)
+ (half_channel))->fRI = data;
+}
+
+unsigned int get_fRI_word_access(volatile void __iomem *half_channel)
+{
+ return ((struct smd_half_channel_word_access __force *)
+ (half_channel))->fRI;
+}
+
+void set_fHEAD_word_access(volatile void __iomem *half_channel,
+ unsigned char data)
+{
+ ((struct smd_half_channel_word_access __force *)
+ (half_channel))->fHEAD = data;
+}
+
+unsigned int get_fHEAD_word_access(volatile void __iomem *half_channel)
+{
+ return ((struct smd_half_channel_word_access __force *)
+ (half_channel))->fHEAD;
+}
+
+void set_fTAIL_word_access(volatile void __iomem *half_channel,
+ unsigned char data)
+{
+ ((struct smd_half_channel_word_access __force *)
+ (half_channel))->fTAIL = data;
+}
+
+unsigned int get_fTAIL_word_access(volatile void __iomem *half_channel)
+{
+ return ((struct smd_half_channel_word_access __force *)
+ (half_channel))->fTAIL;
+}
+
+void set_fSTATE_word_access(volatile void __iomem *half_channel,
+ unsigned char data)
+{
+ ((struct smd_half_channel_word_access __force *)
+ (half_channel))->fSTATE = data;
+}
+
+unsigned int get_fSTATE_word_access(volatile void __iomem *half_channel)
+{
+ return ((struct smd_half_channel_word_access __force *)
+ (half_channel))->fSTATE;
+}
+
+void set_fBLOCKREADINTR_word_access(volatile void __iomem *half_channel,
+ unsigned char data)
+{
+ ((struct smd_half_channel_word_access __force *)
+ (half_channel))->fBLOCKREADINTR = data;
+}
+
+unsigned int get_fBLOCKREADINTR_word_access(volatile void __iomem *half_channel)
+{
+ return ((struct smd_half_channel_word_access __force *)
+ (half_channel))->fBLOCKREADINTR;
+}
+
+void set_tail_word_access(volatile void __iomem *half_channel,
+ unsigned int data)
+{
+ ((struct smd_half_channel_word_access __force *)
+ (half_channel))->tail = data;
+}
+
+unsigned int get_tail_word_access(volatile void __iomem *half_channel)
+{
+ return ((struct smd_half_channel_word_access __force *)
+ (half_channel))->tail;
+}
+
+void set_head_word_access(volatile void __iomem *half_channel,
+ unsigned int data)
+{
+ ((struct smd_half_channel_word_access __force *)
+ (half_channel))->head = data;
+}
+
+unsigned int get_head_word_access(volatile void __iomem *half_channel)
+{
+ return ((struct smd_half_channel_word_access __force *)
+ (half_channel))->head;
+}
+
+int is_word_access_ch(unsigned int ch_type)
+{
+ if (ch_type == SMD_APPS_RPM || ch_type == SMD_MODEM_RPM ||
+ ch_type == SMD_QDSP_RPM || ch_type == SMD_WCNSS_RPM ||
+ ch_type == SMD_TZ_RPM)
+ return 1;
+ else
+ return 0;
+}
+
+struct smd_half_channel_access *get_half_ch_funcs(unsigned int ch_type)
+{
+ static struct smd_half_channel_access byte_access = {
+ .set_state = set_state,
+ .get_state = get_state,
+ .set_fDSR = set_fDSR,
+ .get_fDSR = get_fDSR,
+ .set_fCTS = set_fCTS,
+ .get_fCTS = get_fCTS,
+ .set_fCD = set_fCD,
+ .get_fCD = get_fCD,
+ .set_fRI = set_fRI,
+ .get_fRI = get_fRI,
+ .set_fHEAD = set_fHEAD,
+ .get_fHEAD = get_fHEAD,
+ .set_fTAIL = set_fTAIL,
+ .get_fTAIL = get_fTAIL,
+ .set_fSTATE = set_fSTATE,
+ .get_fSTATE = get_fSTATE,
+ .set_fBLOCKREADINTR = set_fBLOCKREADINTR,
+ .get_fBLOCKREADINTR = get_fBLOCKREADINTR,
+ .set_tail = set_tail,
+ .get_tail = get_tail,
+ .set_head = set_head,
+ .get_head = get_head,
+ };
+ static struct smd_half_channel_access word_access = {
+ .set_state = set_state_word_access,
+ .get_state = get_state_word_access,
+ .set_fDSR = set_fDSR_word_access,
+ .get_fDSR = get_fDSR_word_access,
+ .set_fCTS = set_fCTS_word_access,
+ .get_fCTS = get_fCTS_word_access,
+ .set_fCD = set_fCD_word_access,
+ .get_fCD = get_fCD_word_access,
+ .set_fRI = set_fRI_word_access,
+ .get_fRI = get_fRI_word_access,
+ .set_fHEAD = set_fHEAD_word_access,
+ .get_fHEAD = get_fHEAD_word_access,
+ .set_fTAIL = set_fTAIL_word_access,
+ .get_fTAIL = get_fTAIL_word_access,
+ .set_fSTATE = set_fSTATE_word_access,
+ .get_fSTATE = get_fSTATE_word_access,
+ .set_fBLOCKREADINTR = set_fBLOCKREADINTR_word_access,
+ .get_fBLOCKREADINTR = get_fBLOCKREADINTR_word_access,
+ .set_tail = set_tail_word_access,
+ .get_tail = get_tail_word_access,
+ .set_head = set_head_word_access,
+ .get_head = get_head_word_access,
+ };
+
+ if (is_word_access_ch(ch_type))
+ return &word_access;
+ else
+ return &byte_access;
+}
+
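The `ch_type` values tested by is_word_access_ch() come from the low byte of the allocation-table `type` word; the transfer type occupies bits 8-11. A minimal restatement of that decode, mirroring the SMD_CHANNEL_TYPE()/SMD_XFER_TYPE() macros in smd_private.h (the EXAMPLE_-prefixed names are local to this sketch):

```c
/*
 * Decode of the smd_alloc_elm 'type' word:
 *   bits 0-7  -> channel (edge) type, fed to is_word_access_ch()
 *   bits 8-11 -> transfer type (1 = streaming, 2 = packet, else legacy)
 * Mirrors SMD_CHANNEL_TYPE()/SMD_XFER_TYPE(); example-local names.
 */
#define EXAMPLE_CHANNEL_TYPE(x)	((x) & 0x000000FF)
#define EXAMPLE_XFER_TYPE(x)	(((x) & 0x00000F00) >> 8)
```

For a table entry with type word 0x0000021A, the channel type is 0x1A and the transfer type is 2 (packet).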
diff --git a/drivers/soc/qcom/smd_private.h b/drivers/soc/qcom/smd_private.h
new file mode 100644
index 0000000..98d0bde
--- /dev/null
+++ b/drivers/soc/qcom/smd_private.h
@@ -0,0 +1,246 @@
+/* drivers/soc/qcom/smd_private.h
+ *
+ * Copyright (C) 2007 Google, Inc.
+ * Copyright (c) 2007-2014, 2017, The Linux Foundation. All rights reserved.
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+#ifndef _ARCH_ARM_MACH_MSM_MSM_SMD_PRIVATE_H_
+#define _ARCH_ARM_MACH_MSM_MSM_SMD_PRIVATE_H_
+
+#include <linux/types.h>
+#include <linux/spinlock.h>
+#include <linux/errno.h>
+#include <linux/remote_spinlock.h>
+#include <linux/platform_device.h>
+#include <linux/interrupt.h>
+
+#include <soc/qcom/smd.h>
+#include <soc/qcom/smsm.h>
+
+#define VERSION_QDSP6 4
+#define VERSION_APPS_SBL 6
+#define VERSION_MODEM_SBL 7
+#define VERSION_APPS 8
+#define VERSION_MODEM 9
+#define VERSION_DSPS 10
+
+#define ID_SMD_CHANNELS SMEM_SMD_BASE_ID
+#define ID_SHARED_STATE SMEM_SMSM_SHARED_STATE
+#define ID_CH_ALLOC_TBL SMEM_CHANNEL_ALLOC_TBL
+
+#define SMD_SS_CLOSED 0x00000000
+#define SMD_SS_OPENING 0x00000001
+#define SMD_SS_OPENED 0x00000002
+#define SMD_SS_FLUSHING 0x00000003
+#define SMD_SS_CLOSING 0x00000004
+#define SMD_SS_RESET 0x00000005
+#define SMD_SS_RESET_OPENING 0x00000006
+
+#define SMD_HEADER_SIZE 20
+
+/* 'type' field of smd_alloc_elm structure
+ * has the following breakup
+ * bits 0-7 -> channel type
+ * bits 8-11 -> xfer type
+ * bits 12-31 -> reserved
+ */
+struct smd_alloc_elm {
+ char name[20];
+ uint32_t cid;
+ uint32_t type;
+ uint32_t ref_count;
+};
+
+#define SMD_CHANNEL_TYPE(x) ((x) & 0x000000FF)
+#define SMD_XFER_TYPE(x) (((x) & 0x00000F00) >> 8)
+
+struct smd_half_channel {
+ unsigned int state;
+ unsigned char fDSR;
+ unsigned char fCTS;
+ unsigned char fCD;
+ unsigned char fRI;
+ unsigned char fHEAD;
+ unsigned char fTAIL;
+ unsigned char fSTATE;
+ unsigned char fBLOCKREADINTR;
+ unsigned int tail;
+ unsigned int head;
+};
+
+struct smd_half_channel_word_access {
+ unsigned int state;
+ unsigned int fDSR;
+ unsigned int fCTS;
+ unsigned int fCD;
+ unsigned int fRI;
+ unsigned int fHEAD;
+ unsigned int fTAIL;
+ unsigned int fSTATE;
+ unsigned int fBLOCKREADINTR;
+ unsigned int tail;
+ unsigned int head;
+};
+
+struct smd_half_channel_access {
+ void (*set_state)(volatile void __iomem *half_channel,
+ unsigned int data);
+ unsigned int (*get_state)(volatile void __iomem *half_channel);
+ void (*set_fDSR)(volatile void __iomem *half_channel,
+ unsigned char data);
+ unsigned int (*get_fDSR)(volatile void __iomem *half_channel);
+ void (*set_fCTS)(volatile void __iomem *half_channel,
+ unsigned char data);
+ unsigned int (*get_fCTS)(volatile void __iomem *half_channel);
+ void (*set_fCD)(volatile void __iomem *half_channel,
+ unsigned char data);
+ unsigned int (*get_fCD)(volatile void __iomem *half_channel);
+ void (*set_fRI)(volatile void __iomem *half_channel,
+ unsigned char data);
+ unsigned int (*get_fRI)(volatile void __iomem *half_channel);
+ void (*set_fHEAD)(volatile void __iomem *half_channel,
+ unsigned char data);
+ unsigned int (*get_fHEAD)(volatile void __iomem *half_channel);
+ void (*set_fTAIL)(volatile void __iomem *half_channel,
+ unsigned char data);
+ unsigned int (*get_fTAIL)(volatile void __iomem *half_channel);
+ void (*set_fSTATE)(volatile void __iomem *half_channel,
+ unsigned char data);
+ unsigned int (*get_fSTATE)(volatile void __iomem *half_channel);
+ void (*set_fBLOCKREADINTR)(volatile void __iomem *half_channel,
+ unsigned char data);
+ unsigned int (*get_fBLOCKREADINTR)(volatile void __iomem *half_channel);
+ void (*set_tail)(volatile void __iomem *half_channel,
+ unsigned int data);
+ unsigned int (*get_tail)(volatile void __iomem *half_channel);
+ void (*set_head)(volatile void __iomem *half_channel,
+ unsigned int data);
+ unsigned int (*get_head)(volatile void __iomem *half_channel);
+};
+
+int is_word_access_ch(unsigned int ch_type);
+
+struct smd_half_channel_access *get_half_ch_funcs(unsigned int ch_type);
+
+struct smd_channel {
+ volatile void __iomem *send; /* some variant of smd_half_channel */
+ volatile void __iomem *recv; /* some variant of smd_half_channel */
+ unsigned char *send_data;
+ unsigned char *recv_data;
+ unsigned int fifo_size;
+ struct list_head ch_list;
+
+ unsigned int current_packet;
+ unsigned int n;
+ void *priv;
+ void (*notify)(void *priv, unsigned int flags);
+
+ int (*read)(smd_channel_t *ch, void *data, int len);
+ int (*write)(smd_channel_t *ch, const void *data, int len,
+ bool int_ntfy);
+ int (*read_avail)(smd_channel_t *ch);
+ int (*write_avail)(smd_channel_t *ch);
+ int (*read_from_cb)(smd_channel_t *ch, void *data, int len);
+
+ void (*update_state)(smd_channel_t *ch);
+ unsigned int last_state;
+ void (*notify_other_cpu)(smd_channel_t *ch);
+ void * (*read_from_fifo)(void *dest, const void *src, size_t num_bytes);
+ void * (*write_to_fifo)(void *dest, const void *src, size_t num_bytes);
+
+ char name[20];
+ struct platform_device pdev;
+ unsigned int type;
+
+ int pending_pkt_sz;
+
+ char is_pkt_ch;
+
+ /*
+ * private internal functions to access *send and *recv.
+ * never to be exported outside of smd
+ */
+ struct smd_half_channel_access *half_ch;
+};
+
+extern spinlock_t smem_lock;
+
+struct interrupt_stat {
+ uint32_t smd_in_count;
+ uint32_t smd_out_count;
+ uint32_t smd_interrupt_id;
+
+ uint32_t smsm_in_count;
+ uint32_t smsm_out_count;
+ uint32_t smsm_interrupt_id;
+};
+extern struct interrupt_stat interrupt_stats[NUM_SMD_SUBSYSTEMS];
+
+struct interrupt_config_item {
+ /* must be initialized */
+ irqreturn_t (*irq_handler)(int req, void *data);
+ /* outgoing interrupt config (set from platform data) */
+ uint32_t out_bit_pos;
+ void __iomem *out_base;
+ uint32_t out_offset;
+ int irq_id;
+};
+
+enum {
+ MSM_SMD_DEBUG = 1U << 0,
+ MSM_SMSM_DEBUG = 1U << 1,
+ MSM_SMD_INFO = 1U << 2,
+ MSM_SMSM_INFO = 1U << 3,
+ MSM_SMD_POWER_INFO = 1U << 4,
+ MSM_SMSM_POWER_INFO = 1U << 5,
+};
+
+struct interrupt_config {
+ struct interrupt_config_item smd;
+ struct interrupt_config_item smsm;
+};
+
+struct edge_to_pid {
+ uint32_t local_pid;
+ uint32_t remote_pid;
+ char subsys_name[SMD_MAX_CH_NAME_LEN];
+ bool initialized;
+};
+
+extern void *smd_log_ctx;
+extern int msm_smd_debug_mask;
+
+extern irqreturn_t smd_modem_irq_handler(int irq, void *data);
+extern irqreturn_t smsm_modem_irq_handler(int irq, void *data);
+extern irqreturn_t smd_dsp_irq_handler(int irq, void *data);
+extern irqreturn_t smsm_dsp_irq_handler(int irq, void *data);
+extern irqreturn_t smd_dsps_irq_handler(int irq, void *data);
+extern irqreturn_t smsm_dsps_irq_handler(int irq, void *data);
+extern irqreturn_t smd_wcnss_irq_handler(int irq, void *data);
+extern irqreturn_t smsm_wcnss_irq_handler(int irq, void *data);
+extern irqreturn_t smd_rpm_irq_handler(int irq, void *data);
+extern irqreturn_t smd_modemfw_irq_handler(int irq, void *data);
+
+extern int msm_smd_driver_register(void);
+extern void smd_post_init(unsigned int remote_pid);
+extern int smsm_post_init(void);
+
+extern struct interrupt_config *smd_get_intr_config(uint32_t edge);
+extern int smd_edge_to_remote_pid(uint32_t edge);
+extern int smd_edge_to_local_pid(uint32_t edge);
+extern void smd_set_edge_subsys_name(uint32_t edge, const char *subsys_name);
+extern void smd_reset_all_edge_subsys_name(void);
+extern void smd_proc_set_skip_pil(unsigned int pid, bool skip_pil);
+extern void smd_set_edge_initialized(uint32_t edge);
+extern void smd_cfg_smd_intr(uint32_t proc, uint32_t mask, void *ptr);
+extern void smd_cfg_smsm_intr(uint32_t proc, uint32_t mask, void *ptr);
+#endif
diff --git a/drivers/soc/qcom/smsm_debug.c b/drivers/soc/qcom/smsm_debug.c
new file mode 100644
index 0000000..b9b97ef
--- /dev/null
+++ b/drivers/soc/qcom/smsm_debug.c
@@ -0,0 +1,330 @@
+/* drivers/soc/qcom/smsm_debug.c
+ *
+ * Copyright (C) 2007 Google, Inc.
+ * Copyright (c) 2009-2014, 2017, The Linux Foundation. All rights reserved.
+ * Author: Brian Swetland <swetland@google.com>
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#include <linux/debugfs.h>
+#include <linux/list.h>
+#include <linux/ctype.h>
+#include <linux/jiffies.h>
+
+#include <soc/qcom/smem.h>
+#include <soc/qcom/smsm.h>
+
+#if defined(CONFIG_DEBUG_FS)
+
+
+static void debug_read_smsm_state(struct seq_file *s)
+{
+ uint32_t *smsm;
+ int n;
+
+ smsm = smem_find(SMEM_SMSM_SHARED_STATE,
+ SMSM_NUM_ENTRIES * sizeof(uint32_t),
+ 0,
+ SMEM_ANY_HOST_FLAG);
+
+ if (smsm)
+ for (n = 0; n < SMSM_NUM_ENTRIES; n++)
+ seq_printf(s, "entry %d: 0x%08x\n", n, smsm[n]);
+}
+
+struct SMSM_CB_DATA {
+ int cb_count;
+ void *data;
+ uint32_t old_state;
+ uint32_t new_state;
+};
+static struct SMSM_CB_DATA smsm_cb_data;
+static struct completion smsm_cb_completion;
+
+static void smsm_state_cb(void *data, uint32_t old_state, uint32_t new_state)
+{
+ smsm_cb_data.cb_count++;
+ smsm_cb_data.old_state = old_state;
+ smsm_cb_data.new_state = new_state;
+ smsm_cb_data.data = data;
+ complete_all(&smsm_cb_completion);
+}
+
+#define UT_EQ_INT(a, b) \
+ { \
+ if ((a) != (b)) { \
+ seq_printf(s, "%s:%d " #a "(%d) != " #b "(%d)\n", \
+ __func__, __LINE__, \
+ a, b); \
+ break; \
+ } \
+ }
+
+#define UT_GT_INT(a, b) \
+ { \
+ if ((a) <= (b)) { \
+ seq_printf(s, "%s:%d " #a "(%d) > " #b "(%d)\n", \
+ __func__, __LINE__, \
+ a, b); \
+ break; \
+ } \
+ }
+
+#define SMSM_CB_TEST_INIT() \
+ do { \
+ smsm_cb_data.cb_count = 0; \
+ smsm_cb_data.old_state = 0; \
+ smsm_cb_data.new_state = 0; \
+ smsm_cb_data.data = 0; \
+ } while (0)
+
+
+static void debug_test_smsm(struct seq_file *s)
+{
+ int test_num = 0;
+ int ret;
+
+ /* Test case 1 - Register new callback for notification */
+ do {
+ test_num++;
+ SMSM_CB_TEST_INIT();
+ ret = smsm_state_cb_register(SMSM_APPS_STATE, SMSM_SMDINIT,
+ smsm_state_cb, (void *)0x1234);
+ UT_EQ_INT(ret, 0);
+
+ /* de-assert SMSM_SMDINIT to trigger state update */
+ UT_EQ_INT(smsm_cb_data.cb_count, 0);
+ reinit_completion(&smsm_cb_completion);
+ smsm_change_state(SMSM_APPS_STATE, SMSM_SMDINIT, 0x0);
+ UT_GT_INT((int)wait_for_completion_timeout(&smsm_cb_completion,
+ msecs_to_jiffies(20)), 0);
+
+ UT_EQ_INT(smsm_cb_data.cb_count, 1);
+ UT_EQ_INT(smsm_cb_data.old_state & SMSM_SMDINIT, SMSM_SMDINIT);
+ UT_EQ_INT(smsm_cb_data.new_state & SMSM_SMDINIT, 0x0);
+ UT_EQ_INT((int)(uintptr_t)smsm_cb_data.data, 0x1234);
+
+ /* re-assert SMSM_SMDINIT to trigger state update */
+ reinit_completion(&smsm_cb_completion);
+ smsm_change_state(SMSM_APPS_STATE, 0x0, SMSM_SMDINIT);
+ UT_GT_INT((int)wait_for_completion_timeout(&smsm_cb_completion,
+ msecs_to_jiffies(20)), 0);
+ UT_EQ_INT(smsm_cb_data.cb_count, 2);
+ UT_EQ_INT(smsm_cb_data.old_state & SMSM_SMDINIT, 0x0);
+ UT_EQ_INT(smsm_cb_data.new_state & SMSM_SMDINIT, SMSM_SMDINIT);
+
+ /* deregister callback */
+ ret = smsm_state_cb_deregister(SMSM_APPS_STATE, SMSM_SMDINIT,
+ smsm_state_cb, (void *)0x1234);
+ UT_EQ_INT(ret, 2);
+
+ /* make sure state change doesn't cause any more callbacks */
+ reinit_completion(&smsm_cb_completion);
+ smsm_change_state(SMSM_APPS_STATE, SMSM_SMDINIT, 0x0);
+ smsm_change_state(SMSM_APPS_STATE, 0x0, SMSM_SMDINIT);
+ UT_EQ_INT((int)wait_for_completion_timeout(&smsm_cb_completion,
+ msecs_to_jiffies(20)), 0);
+ UT_EQ_INT(smsm_cb_data.cb_count, 2);
+
+ seq_printf(s, "Test %d - PASS\n", test_num);
+ } while (0);
+
+ /* Test case 2 - Update already registered callback */
+ do {
+ test_num++;
+ SMSM_CB_TEST_INIT();
+ ret = smsm_state_cb_register(SMSM_APPS_STATE, SMSM_SMDINIT,
+ smsm_state_cb, (void *)0x1234);
+ UT_EQ_INT(ret, 0);
+ ret = smsm_state_cb_register(SMSM_APPS_STATE, SMSM_INIT,
+ smsm_state_cb, (void *)0x1234);
+ UT_EQ_INT(ret, 1);
+
+ /* verify both callback bits work */
+ reinit_completion(&smsm_cb_completion);
+ UT_EQ_INT(smsm_cb_data.cb_count, 0);
+ smsm_change_state(SMSM_APPS_STATE, SMSM_SMDINIT, 0x0);
+ UT_GT_INT((int)wait_for_completion_timeout(&smsm_cb_completion,
+ msecs_to_jiffies(20)), 0);
+ UT_EQ_INT(smsm_cb_data.cb_count, 1);
+ reinit_completion(&smsm_cb_completion);
+ smsm_change_state(SMSM_APPS_STATE, 0x0, SMSM_SMDINIT);
+ UT_GT_INT((int)wait_for_completion_timeout(&smsm_cb_completion,
+ msecs_to_jiffies(20)), 0);
+ UT_EQ_INT(smsm_cb_data.cb_count, 2);
+
+ reinit_completion(&smsm_cb_completion);
+ smsm_change_state(SMSM_APPS_STATE, SMSM_INIT, 0x0);
+ UT_GT_INT((int)wait_for_completion_timeout(&smsm_cb_completion,
+ msecs_to_jiffies(20)), 0);
+ UT_EQ_INT(smsm_cb_data.cb_count, 3);
+ reinit_completion(&smsm_cb_completion);
+ smsm_change_state(SMSM_APPS_STATE, 0x0, SMSM_INIT);
+ UT_GT_INT((int)wait_for_completion_timeout(&smsm_cb_completion,
+ msecs_to_jiffies(20)), 0);
+ UT_EQ_INT(smsm_cb_data.cb_count, 4);
+
+ /* deregister 1st callback */
+ ret = smsm_state_cb_deregister(SMSM_APPS_STATE, SMSM_SMDINIT,
+ smsm_state_cb, (void *)0x1234);
+ UT_EQ_INT(ret, 1);
+ reinit_completion(&smsm_cb_completion);
+ smsm_change_state(SMSM_APPS_STATE, SMSM_SMDINIT, 0x0);
+ smsm_change_state(SMSM_APPS_STATE, 0x0, SMSM_SMDINIT);
+ UT_EQ_INT((int)wait_for_completion_timeout(&smsm_cb_completion,
+ msecs_to_jiffies(20)), 0);
+ UT_EQ_INT(smsm_cb_data.cb_count, 4);
+
+ reinit_completion(&smsm_cb_completion);
+ smsm_change_state(SMSM_APPS_STATE, SMSM_INIT, 0x0);
+ UT_GT_INT((int)wait_for_completion_timeout(&smsm_cb_completion,
+ msecs_to_jiffies(20)), 0);
+ UT_EQ_INT(smsm_cb_data.cb_count, 5);
+ reinit_completion(&smsm_cb_completion);
+ smsm_change_state(SMSM_APPS_STATE, 0x0, SMSM_INIT);
+ UT_GT_INT((int)wait_for_completion_timeout(&smsm_cb_completion,
+ msecs_to_jiffies(20)), 0);
+ UT_EQ_INT(smsm_cb_data.cb_count, 6);
+
+ /* deregister 2nd callback */
+ ret = smsm_state_cb_deregister(SMSM_APPS_STATE, SMSM_INIT,
+ smsm_state_cb, (void *)0x1234);
+ UT_EQ_INT(ret, 2);
+
+ /* make sure state change doesn't cause any more callbacks */
+ reinit_completion(&smsm_cb_completion);
+ smsm_change_state(SMSM_APPS_STATE, SMSM_INIT, 0x0);
+ smsm_change_state(SMSM_APPS_STATE, 0x0, SMSM_INIT);
+ UT_EQ_INT((int)wait_for_completion_timeout(&smsm_cb_completion,
+ msecs_to_jiffies(20)), 0);
+ UT_EQ_INT(smsm_cb_data.cb_count, 6);
+
+ seq_printf(s, "Test %d - PASS\n", test_num);
+ } while (0);
+
+ /* Test case 3 - Two callback registrations with different data */
+ do {
+ test_num++;
+ SMSM_CB_TEST_INIT();
+ ret = smsm_state_cb_register(SMSM_APPS_STATE, SMSM_SMDINIT,
+ smsm_state_cb, (void *)0x1234);
+ UT_EQ_INT(ret, 0);
+ ret = smsm_state_cb_register(SMSM_APPS_STATE, SMSM_INIT,
+ smsm_state_cb, (void *)0x3456);
+ UT_EQ_INT(ret, 0);
+
+ /* verify both callbacks work */
+ reinit_completion(&smsm_cb_completion);
+ UT_EQ_INT(smsm_cb_data.cb_count, 0);
+ smsm_change_state(SMSM_APPS_STATE, SMSM_SMDINIT, 0x0);
+ UT_GT_INT((int)wait_for_completion_timeout(&smsm_cb_completion,
+ msecs_to_jiffies(20)), 0);
+ UT_EQ_INT(smsm_cb_data.cb_count, 1);
+ UT_EQ_INT((int)(uintptr_t)smsm_cb_data.data, 0x1234);
+
+ reinit_completion(&smsm_cb_completion);
+ smsm_change_state(SMSM_APPS_STATE, SMSM_INIT, 0x0);
+ UT_GT_INT((int)wait_for_completion_timeout(&smsm_cb_completion,
+ msecs_to_jiffies(20)), 0);
+ UT_EQ_INT(smsm_cb_data.cb_count, 2);
+ UT_EQ_INT((int)(uintptr_t)smsm_cb_data.data, 0x3456);
+
+ /* cleanup and unregister
+ * deregister in reverse to verify the data field is
+ * being used
+ */
+ smsm_change_state(SMSM_APPS_STATE, 0x0, SMSM_SMDINIT);
+ smsm_change_state(SMSM_APPS_STATE, 0x0, SMSM_INIT);
+ ret = smsm_state_cb_deregister(SMSM_APPS_STATE,
+ SMSM_INIT,
+ smsm_state_cb, (void *)0x3456);
+ UT_EQ_INT(ret, 2);
+ ret = smsm_state_cb_deregister(SMSM_APPS_STATE,
+ SMSM_SMDINIT,
+ smsm_state_cb, (void *)0x1234);
+ UT_EQ_INT(ret, 2);
+
+ seq_printf(s, "Test %d - PASS\n", test_num);
+ } while (0);
+}
+
+static void debug_read_intr_mask(struct seq_file *s)
+{
+ uint32_t *smsm;
+ int m, n;
+
+ smsm = smem_find(SMEM_SMSM_CPU_INTR_MASK,
+ SMSM_NUM_ENTRIES * SMSM_NUM_HOSTS * sizeof(uint32_t),
+ 0,
+ SMEM_ANY_HOST_FLAG);
+
+ if (smsm)
+ for (m = 0; m < SMSM_NUM_ENTRIES; m++) {
+ seq_printf(s, "entry %d:", m);
+ for (n = 0; n < SMSM_NUM_HOSTS; n++)
+ seq_printf(s, " host %d: 0x%08x",
+ n, smsm[m * SMSM_NUM_HOSTS + n]);
+ seq_puts(s, "\n");
+ }
+}
+
+static int debugfs_show(struct seq_file *s, void *data)
+{
+ void (*show)(struct seq_file *) = s->private;
+
+ show(s);
+
+ return 0;
+}
+
+static int debug_open(struct inode *inode, struct file *file)
+{
+ return single_open(file, debugfs_show, inode->i_private);
+}
+
+static const struct file_operations debug_ops = {
+ .open = debug_open,
+ .release = single_release,
+ .read = seq_read,
+ .llseek = seq_lseek,
+};
+
+static void debug_create(const char *name, umode_t mode,
+ struct dentry *dent,
+ void (*show)(struct seq_file *))
+{
+ struct dentry *file;
+
+ file = debugfs_create_file(name, mode, dent, show, &debug_ops);
+ if (!file)
+ pr_err("%s: unable to create file '%s'\n", __func__, name);
+}
+
+static int __init smsm_debugfs_init(void)
+{
+ struct dentry *dent;
+
+ dent = debugfs_create_dir("smsm", 0);
+ if (IS_ERR(dent))
+ return PTR_ERR(dent);
+
+ debug_create("state", 0444, dent, debug_read_smsm_state);
+ debug_create("intr_mask", 0444, dent, debug_read_intr_mask);
+ debug_create("smsm_test", 0444, dent, debug_test_smsm);
+
+ init_completion(&smsm_cb_completion);
+
+ return 0;
+}
+
+late_initcall(smsm_debugfs_init);
+#endif
diff --git a/drivers/soc/qcom/socinfo.c b/drivers/soc/qcom/socinfo.c
index 195aec1..9af39e1 100644
--- a/drivers/soc/qcom/socinfo.c
+++ b/drivers/soc/qcom/socinfo.c
@@ -66,6 +66,7 @@ enum {
HW_PLATFORM_RCM = 21,
HW_PLATFORM_STP = 23,
HW_PLATFORM_SBC = 24,
+ HW_PLATFORM_HDK = 31,
HW_PLATFORM_INVALID
};
@@ -86,6 +87,7 @@ const char *hw_platform[] = {
[HW_PLATFORM_DTV] = "DTV",
[HW_PLATFORM_STP] = "STP",
[HW_PLATFORM_SBC] = "SBC",
+ [HW_PLATFORM_HDK] = "HDK",
};
enum {
diff --git a/drivers/soc/qcom/spcom.c b/drivers/soc/qcom/spcom.c
index 68681f9..876e176 100644
--- a/drivers/soc/qcom/spcom.c
+++ b/drivers/soc/qcom/spcom.c
@@ -506,6 +506,7 @@ static void spcom_notify_state(void *handle, const void *priv,
* We do it here, ASAP, to allow rx data.
*/
+ ch->rx_abort = false; /* cleanup from previous close */
pr_debug("call glink_queue_rx_intent() ch [%s].\n", ch->name);
ret = glink_queue_rx_intent(handle, ch, ch->rx_buf_size);
if (ret) {
@@ -536,14 +537,15 @@ static void spcom_notify_state(void *handle, const void *priv,
*/
pr_err("GLINK_REMOTE_DISCONNECTED, ch [%s].\n", ch->name);
- ch->glink_state = event;
-
/*
* Abort any blocking read() operation.
* The glink notification might be after REMOTE_DISCONNECT.
*/
spcom_notify_rx_abort(NULL, ch, NULL);
+ /* set the state to not-connected after notify-rx-abort */
+ ch->glink_state = event;
+
/*
* after glink_close(),
* expecting notify GLINK_LOCAL_DISCONNECTED
@@ -579,7 +581,10 @@ static bool spcom_notify_rx_intent_req(void *handle, const void *priv,
* spcom_notify_rx_abort() - glink callback on aborting rx pending buffer.
*
* Rx abort may happen if channel is closed by remote side, while rx buffer is
- * pending in the queue.
+ * pending in the queue, like upon SP reset (SSR).
+ *
+ * A more common scenario is when an rx intent is queued (for the next
+ * transfer) and the channel is closed locally.
*/
static void spcom_notify_rx_abort(void *handle, const void *priv,
const void *pkt_priv)
@@ -593,7 +598,8 @@ static void spcom_notify_rx_abort(void *handle, const void *priv,
pr_debug("ch [%s] pending rx aborted.\n", ch->name);
- if (spcom_is_channel_open(ch) && (!ch->rx_abort)) {
+ /* ignore rx-abort after local channel disconnected */
+ if (spcom_is_channel_connected(ch) && (!ch->rx_abort)) {
ch->rx_abort = true;
complete_all(&ch->rx_done);
}
@@ -873,14 +879,16 @@ static int spcom_tx(struct spcom_channel *ch,
for (retry = 0; retry < TX_MAX_RETRY ; retry++) {
ret = glink_tx(ch->glink_handle, pkt_priv, buf, size, tx_flags);
if (ret == -EAGAIN) {
- pr_err("glink_tx() fail, try again.\n");
+ pr_err("glink_tx() fail, try again, ch [%s].\n",
+ ch->name);
/*
* Delay to allow remote side to queue rx buffer.
* This may happen after the first channel connection.
*/
msleep(TX_RETRY_DELAY_MSEC);
} else if (ret < 0) {
- pr_err("glink_tx() error %d.\n", ret);
+ pr_err("glink_tx() error [%d], ch [%s].\n",
+ ret, ch->name);
goto exit_err;
} else {
break; /* no retry needed */
@@ -953,6 +961,7 @@ static int spcom_rx(struct spcom_channel *ch,
return -ETIMEDOUT;
} else if (ch->rx_abort) {
mutex_unlock(&ch->lock);
+ pr_err("rx_abort, probably remote side reset (SSR).\n");
return -ERESTART; /* probably SSR */
} else if (ch->actual_rx_size) {
pr_debug("actual_rx_size is [%zu]\n", ch->actual_rx_size);
@@ -1072,10 +1081,19 @@ static void spcom_rx_abort_pending_server(void)
for (i = 0 ; i < ARRAY_SIZE(spcom_dev->channels); i++) {
struct spcom_channel *ch = &spcom_dev->channels[i];
- if (ch->is_server) {
- pr_debug("rx-abort server on ch [%s].\n", ch->name);
- spcom_notify_rx_abort(NULL, ch, NULL);
- }
+ /* relevant only for servers */
+ if (!ch->is_server)
+ continue;
+
+ /* The server might not be connected to a client.
+ * Don't check if connected, only if open.
+ */
+ if (!spcom_is_channel_open(ch) || (ch->rx_abort))
+ continue;
+
+ pr_debug("rx-abort server ch [%s].\n", ch->name);
+ ch->rx_abort = true;
+ complete_all(&ch->rx_done);
}
}
diff --git a/drivers/soc/qcom/subsys-pil-tz.c b/drivers/soc/qcom/subsys-pil-tz.c
index d65756c..5b600f6 100644
--- a/drivers/soc/qcom/subsys-pil-tz.c
+++ b/drivers/soc/qcom/subsys-pil-tz.c
@@ -42,6 +42,7 @@
#define ERR_READY 0
#define PBL_DONE 1
+#define QDSP6SS_NMI_STATUS 0x44
#define desc_to_data(d) container_of(d, struct pil_tz_data, desc)
#define subsys_to_data(d) container_of(d, struct pil_tz_data, subsys_desc)
@@ -109,6 +110,7 @@ struct pil_tz_data {
void __iomem *irq_mask;
void __iomem *err_status;
void __iomem *err_status_spare;
+ void __iomem *reg_base;
u32 bits_arr[2];
};
@@ -925,8 +927,19 @@ static void subsys_crash_shutdown(const struct subsys_desc *subsys)
static irqreturn_t subsys_err_fatal_intr_handler (int irq, void *dev_id)
{
struct pil_tz_data *d = subsys_to_data(dev_id);
+ u32 nmi_status = 0;
- pr_err("Fatal error on %s!\n", d->subsys_desc.name);
+ if (d->reg_base)
+ nmi_status = readl_relaxed(d->reg_base +
+ QDSP6SS_NMI_STATUS);
+
+ if (nmi_status & 0x04)
+ pr_err("%s: Fatal error on the %s due to TZ NMI\n",
+ __func__, d->subsys_desc.name);
+ else
+ pr_err("%s Fatal error on the %s\n",
+ __func__, d->subsys_desc.name);
+
if (subsys_get_crash_status(d->subsys)) {
pr_err("%s: Ignoring error fatal, restart in progress\n",
d->subsys_desc.name);
@@ -1062,6 +1075,13 @@ static int pil_tz_driver_probe(struct platform_device *pdev)
d->keep_proxy_regs_on = of_property_read_bool(pdev->dev.of_node,
"qcom,keep-proxy-regs-on");
+ res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "base_reg");
+ d->reg_base = devm_ioremap_resource(&pdev->dev, res);
+ if (IS_ERR(d->reg_base)) {
+ dev_err(&pdev->dev, "Failed to ioremap base register\n");
+ d->reg_base = NULL;
+ }
+
rc = of_property_read_string(pdev->dev.of_node, "qcom,firmware-name",
&d->desc.name);
if (rc)
diff --git a/drivers/soc/qcom/subsystem_restart.c b/drivers/soc/qcom/subsystem_restart.c
index 55cb604..110cdf7 100644
--- a/drivers/soc/qcom/subsystem_restart.c
+++ b/drivers/soc/qcom/subsystem_restart.c
@@ -655,13 +655,16 @@ static int subsystem_powerup(struct subsys_device *dev, void *data)
if (ret < 0) {
notify_each_subsys_device(&dev, 1, SUBSYS_POWERUP_FAILURE,
NULL);
- if (!dev->desc->ignore_ssr_failure) {
+ if (system_state == SYSTEM_RESTART
+ || system_state == SYSTEM_POWER_OFF)
+ WARN(1, "SSR aborted: %s, system reboot/shutdown is under way\n",
+ name);
+ else if (!dev->desc->ignore_ssr_failure)
panic("[%s:%d]: Powerup error: %s!",
current->comm, current->pid, name);
- } else {
+ else
pr_err("Powerup failure on %s\n", name);
- return ret;
- }
+ return ret;
}
enable_all_irqs(dev);
@@ -1174,6 +1177,7 @@ void subsys_set_crash_status(struct subsys_device *dev,
{
dev->crashed = crashed;
}
+EXPORT_SYMBOL(subsys_set_crash_status);
enum crash_status subsys_get_crash_status(struct subsys_device *dev)
{
diff --git a/drivers/soc/qcom/watchdog_v2.c b/drivers/soc/qcom/watchdog_v2.c
index 9aea6db..f5e76e0 100644
--- a/drivers/soc/qcom/watchdog_v2.c
+++ b/drivers/soc/qcom/watchdog_v2.c
@@ -29,6 +29,7 @@
#include <linux/wait.h>
#include <soc/qcom/scm.h>
#include <soc/qcom/memory_dump.h>
+#include <soc/qcom/minidump.h>
#include <soc/qcom/watchdog.h>
#include <linux/dma-mapping.h>
@@ -549,6 +550,8 @@ static void configure_bark_dump(struct msm_watchdog_data *wdog_dd)
cpu_data[cpu].addr = virt_to_phys(cpu_buf +
cpu * MAX_CPU_CTX_SIZE);
cpu_data[cpu].len = MAX_CPU_CTX_SIZE;
+ snprintf(cpu_data[cpu].name, sizeof(cpu_data[cpu].name),
+ "KCPU_CTX%d", cpu);
dump_entry.id = MSM_DUMP_DATA_CPU_CTX + cpu;
dump_entry.addr = virt_to_phys(&cpu_data[cpu]);
ret = msm_dump_data_register(MSM_DUMP_TABLE_APPS,
@@ -596,6 +599,8 @@ static void configure_scandump(struct msm_watchdog_data *wdog_dd)
cpu_data->addr = dump_addr;
cpu_data->len = MAX_CPU_SCANDUMP_SIZE;
+ snprintf(cpu_data->name, sizeof(cpu_data->name),
+ "KSCANDUMP%d", cpu);
dump_entry.id = MSM_DUMP_DATA_SCANDUMP_PER_CPU + cpu;
dump_entry.addr = virt_to_phys(cpu_data);
ret = msm_dump_data_register(MSM_DUMP_TABLE_APPS,
@@ -799,6 +804,7 @@ static int msm_watchdog_probe(struct platform_device *pdev)
{
int ret;
struct msm_watchdog_data *wdog_dd;
+ struct md_region md_entry;
if (!pdev->dev.of_node || !enable)
return -ENODEV;
@@ -820,6 +826,15 @@ static int msm_watchdog_probe(struct platform_device *pdev)
goto err;
}
init_watchdog_data(wdog_dd);
+
+ /* Add wdog info to minidump table */
+ strlcpy(md_entry.name, "KWDOGDATA", sizeof(md_entry.name));
+ md_entry.virt_addr = (uintptr_t)wdog_dd;
+ md_entry.phys_addr = virt_to_phys(wdog_dd);
+ md_entry.size = sizeof(*wdog_dd);
+ if (msm_minidump_add_region(&md_entry))
+ pr_info("Failed to add Watchdog data in Minidump\n");
+
return 0;
err:
kzfree(wdog_dd);
diff --git a/drivers/spmi/spmi-pmic-arb.c b/drivers/spmi/spmi-pmic-arb.c
index b29e60d..d6089aa 100644
--- a/drivers/spmi/spmi-pmic-arb.c
+++ b/drivers/spmi/spmi-pmic-arb.c
@@ -944,8 +944,8 @@ static int pmic_arb_read_apid_map_v5(struct spmi_pmic_arb *pa)
* multiple EE's to write to a single PPID in arbiter version 5, there
* is more than one APID mapped to each PPID. The owner field for each
* of these mappings specifies the EE which is allowed to write to the
- * APID. The owner of the last (highest) APID for a given PPID will
- * receive interrupts from the PPID.
+ * APID. The owner of the last (highest) APID which has the IRQ owner
+ * bit set for a given PPID will receive interrupts from the PPID.
*/
for (apid = 0; apid < pa->max_periph; apid++) {
offset = pa->ver_ops->channel_map_offset(apid);
@@ -969,7 +969,10 @@ static int pmic_arb_read_apid_map_v5(struct spmi_pmic_arb *pa)
valid = pa->ppid_to_apid[ppid] & PMIC_ARB_CHAN_VALID;
prev_apid = pa->ppid_to_apid[ppid] & ~PMIC_ARB_CHAN_VALID;
- if (valid && is_irq_owner &&
+ if (!valid || pa->apid_data[apid].write_owner == pa->ee) {
+ /* First PPID mapping or one for this EE */
+ pa->ppid_to_apid[ppid] = apid | PMIC_ARB_CHAN_VALID;
+ } else if (valid && is_irq_owner &&
pa->apid_data[prev_apid].write_owner == pa->ee) {
/*
* Duplicate PPID mapping after the one for this EE;
@@ -977,9 +980,6 @@ static int pmic_arb_read_apid_map_v5(struct spmi_pmic_arb *pa)
*/
pa->apid_data[prev_apid].irq_owner
= pa->apid_data[apid].irq_owner;
- } else if (!valid || is_irq_owner) {
- /* First PPID mapping or duplicate for another EE */
- pa->ppid_to_apid[ppid] = apid | PMIC_ARB_CHAN_VALID;
}
pa->apid_data[apid].ppid = ppid;
diff --git a/drivers/staging/android/ion/ion_cma_heap.c b/drivers/staging/android/ion/ion_cma_heap.c
index 7c58e19..72f2b6a 100644
--- a/drivers/staging/android/ion/ion_cma_heap.c
+++ b/drivers/staging/android/ion/ion_cma_heap.c
@@ -57,6 +57,7 @@ static int ion_cma_get_sgtable(struct device *dev, struct sg_table *sgt,
return ret;
sg_set_page(sgt->sgl, page, PAGE_ALIGN(size), 0);
+ sg_dma_address(sgt->sgl) = sg_phys(sgt->sgl);
return 0;
}
@@ -97,9 +98,9 @@ static int ion_cma_allocate(struct ion_heap *heap, struct ion_buffer *buffer,
&info->handle,
GFP_KERNEL);
else
- info->cpu_addr = dma_alloc_nonconsistent(dev, len,
- &info->handle,
- GFP_KERNEL);
+ info->cpu_addr = dma_alloc_attrs(dev, len, &info->handle,
+ GFP_KERNEL,
+ DMA_ATTR_FORCE_COHERENT);
if (!info->cpu_addr) {
dev_err(dev, "Fail to allocate buffer\n");
@@ -115,6 +116,11 @@ static int ion_cma_allocate(struct ion_heap *heap, struct ion_buffer *buffer,
ion_cma_get_sgtable(dev,
info->table, info->cpu_addr, info->handle, len);
+ /* Ensure memory is dma-ready - refer to ion_buffer_create() */
+ if (info->is_cached)
+ dma_sync_sg_for_device(dev, info->table->sgl,
+ info->table->nents, DMA_BIDIRECTIONAL);
+
/* keep this for memory release */
buffer->priv_virt = info;
dev_dbg(dev, "Allocate buffer %pK\n", buffer);
@@ -129,10 +135,13 @@ static void ion_cma_free(struct ion_buffer *buffer)
{
struct device *dev = buffer->heap->priv;
struct ion_cma_buffer_info *info = buffer->priv_virt;
+ unsigned long attrs = 0;
dev_dbg(dev, "Release buffer %pK\n", buffer);
/* release memory */
- dma_free_coherent(dev, buffer->size, info->cpu_addr, info->handle);
+ if (info->is_cached)
+ attrs |= DMA_ATTR_FORCE_COHERENT;
+ dma_free_attrs(dev, buffer->size, info->cpu_addr, info->handle, attrs);
sg_free_table(info->table);
/* release sg table */
kfree(info->table);
@@ -175,8 +184,9 @@ static int ion_cma_mmap(struct ion_heap *mapper, struct ion_buffer *buffer,
struct ion_cma_buffer_info *info = buffer->priv_virt;
if (info->is_cached)
- return dma_mmap_nonconsistent(dev, vma, info->cpu_addr,
- info->handle, buffer->size);
+ return dma_mmap_attrs(dev, vma, info->cpu_addr,
+ info->handle, buffer->size,
+ DMA_ATTR_FORCE_COHERENT);
else
return dma_mmap_writecombine(dev, vma, info->cpu_addr,
info->handle, buffer->size);
diff --git a/drivers/staging/fsl-mc/bus/fsl-mc-msi.c b/drivers/staging/fsl-mc/bus/fsl-mc-msi.c
index 3d46b1b..7de992c 100644
--- a/drivers/staging/fsl-mc/bus/fsl-mc-msi.c
+++ b/drivers/staging/fsl-mc/bus/fsl-mc-msi.c
@@ -17,6 +17,7 @@
#include <linux/irqdomain.h>
#include <linux/msi.h>
#include "../include/mc-bus.h"
+#include "fsl-mc-private.h"
/*
* Generate a unique ID identifying the interrupt (only used within the MSI
diff --git a/drivers/staging/fsl-mc/bus/irq-gic-v3-its-fsl-mc-msi.c b/drivers/staging/fsl-mc/bus/irq-gic-v3-its-fsl-mc-msi.c
index 7a6ac64..eaeb3c5 100644
--- a/drivers/staging/fsl-mc/bus/irq-gic-v3-its-fsl-mc-msi.c
+++ b/drivers/staging/fsl-mc/bus/irq-gic-v3-its-fsl-mc-msi.c
@@ -17,6 +17,7 @@
#include <linux/of.h>
#include <linux/of_irq.h>
#include "../include/mc-bus.h"
+#include "fsl-mc-private.h"
static struct irq_chip its_msi_irq_chip = {
.name = "fsl-mc-bus-msi",
diff --git a/drivers/staging/greybus/connection.c b/drivers/staging/greybus/connection.c
index 5570751..1bf0ee4 100644
--- a/drivers/staging/greybus/connection.c
+++ b/drivers/staging/greybus/connection.c
@@ -357,6 +357,9 @@ static int gb_connection_hd_cport_quiesce(struct gb_connection *connection)
size_t peer_space;
int ret;
+ if (!hd->driver->cport_quiesce)
+ return 0;
+
peer_space = sizeof(struct gb_operation_msg_hdr) +
sizeof(struct gb_cport_shutdown_request);
@@ -380,6 +383,9 @@ static int gb_connection_hd_cport_clear(struct gb_connection *connection)
struct gb_host_device *hd = connection->hd;
int ret;
+ if (!hd->driver->cport_clear)
+ return 0;
+
ret = hd->driver->cport_clear(hd, connection->hd_cport_id);
if (ret) {
dev_err(&hd->dev, "%s: failed to clear host cport: %d\n",
diff --git a/drivers/staging/greybus/spilib.c b/drivers/staging/greybus/spilib.c
index e97b191..1e7321a 100644
--- a/drivers/staging/greybus/spilib.c
+++ b/drivers/staging/greybus/spilib.c
@@ -544,12 +544,15 @@ int gb_spilib_master_init(struct gb_connection *connection, struct device *dev,
return 0;
-exit_spi_unregister:
- spi_unregister_master(master);
exit_spi_put:
spi_master_put(master);
return ret;
+
+exit_spi_unregister:
+ spi_unregister_master(master);
+
+ return ret;
}
EXPORT_SYMBOL_GPL(gb_spilib_master_init);
@@ -558,7 +561,6 @@ void gb_spilib_master_exit(struct gb_connection *connection)
struct spi_master *master = gb_connection_get_data(connection);
spi_unregister_master(master);
- spi_master_put(master);
}
EXPORT_SYMBOL_GPL(gb_spilib_master_exit);
diff --git a/drivers/staging/iio/trigger/iio-trig-bfin-timer.c b/drivers/staging/iio/trigger/iio-trig-bfin-timer.c
index 38dca69..ce500a5 100644
--- a/drivers/staging/iio/trigger/iio-trig-bfin-timer.c
+++ b/drivers/staging/iio/trigger/iio-trig-bfin-timer.c
@@ -260,7 +260,7 @@ static int iio_bfin_tmr_trigger_probe(struct platform_device *pdev)
out1:
iio_trigger_unregister(st->trig);
out:
- iio_trigger_put(st->trig);
+ iio_trigger_free(st->trig);
return ret;
}
@@ -273,7 +273,7 @@ static int iio_bfin_tmr_trigger_remove(struct platform_device *pdev)
peripheral_free(st->t->pin);
free_irq(st->irq, st);
iio_trigger_unregister(st->trig);
- iio_trigger_put(st->trig);
+ iio_trigger_free(st->trig);
return 0;
}
diff --git a/drivers/staging/lustre/lustre/include/lustre/lustre_user.h b/drivers/staging/lustre/lustre/include/lustre/lustre_user.h
index 6fc9855..e533088 100644
--- a/drivers/staging/lustre/lustre/include/lustre/lustre_user.h
+++ b/drivers/staging/lustre/lustre/include/lustre/lustre_user.h
@@ -1213,23 +1213,21 @@ struct hsm_action_item {
* \retval buffer
*/
static inline char *hai_dump_data_field(struct hsm_action_item *hai,
- char *buffer, int len)
+ char *buffer, size_t len)
{
- int i, sz, data_len;
+ int i, data_len;
char *ptr;
ptr = buffer;
- sz = len;
data_len = hai->hai_len - sizeof(*hai);
- for (i = 0 ; (i < data_len) && (sz > 0) ; i++) {
- int cnt;
-
- cnt = snprintf(ptr, sz, "%.2X",
- (unsigned char)hai->hai_data[i]);
- ptr += cnt;
- sz -= cnt;
+ for (i = 0; (i < data_len) && (len > 2); i++) {
+ snprintf(ptr, 3, "%02X", (unsigned char)hai->hai_data[i]);
+ ptr += 2;
+ len -= 2;
}
+
*ptr = '\0';
+
return buffer;
}
diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_lock.c b/drivers/staging/lustre/lustre/ldlm/ldlm_lock.c
index 3c48b4f..d18ab3f 100644
--- a/drivers/staging/lustre/lustre/ldlm/ldlm_lock.c
+++ b/drivers/staging/lustre/lustre/ldlm/ldlm_lock.c
@@ -546,6 +546,13 @@ struct ldlm_lock *__ldlm_handle2lock(const struct lustre_handle *handle,
if (!lock)
return NULL;
+ if (lock->l_export && lock->l_export->exp_failed) {
+ CDEBUG(D_INFO, "lock export failed: lock %p, exp %p\n",
+ lock, lock->l_export);
+ LDLM_LOCK_PUT(lock);
+ return NULL;
+ }
+
/* It's unlikely but possible that someone marked the lock as
* destroyed after we did handle2object on it
*/
diff --git a/drivers/staging/lustre/lustre/llite/rw26.c b/drivers/staging/lustre/lustre/llite/rw26.c
index 26f3a37..0cb70c3 100644
--- a/drivers/staging/lustre/lustre/llite/rw26.c
+++ b/drivers/staging/lustre/lustre/llite/rw26.c
@@ -354,6 +354,10 @@ static ssize_t ll_direct_IO_26(struct kiocb *iocb, struct iov_iter *iter)
if (!lli->lli_has_smd)
return -EBADF;
+ /* Check EOF by ourselves */
+ if (iov_iter_rw(iter) == READ && file_offset >= i_size_read(inode))
+ return 0;
+
/* FIXME: io smaller than PAGE_SIZE is broken on ia64 ??? */
if ((file_offset & ~PAGE_MASK) || (count & ~PAGE_MASK))
return -EINVAL;
diff --git a/drivers/staging/lustre/lustre/lmv/lmv_obd.c b/drivers/staging/lustre/lustre/lmv/lmv_obd.c
index 7dbb2b9..cd19ce8 100644
--- a/drivers/staging/lustre/lustre/lmv/lmv_obd.c
+++ b/drivers/staging/lustre/lustre/lmv/lmv_obd.c
@@ -744,16 +744,18 @@ static int lmv_hsm_req_count(struct lmv_obd *lmv,
/* count how many requests must be sent to the given target */
for (i = 0; i < hur->hur_request.hr_itemcount; i++) {
curr_tgt = lmv_find_target(lmv, &hur->hur_user_item[i].hui_fid);
+ if (IS_ERR(curr_tgt))
+ return PTR_ERR(curr_tgt);
if (obd_uuid_equals(&curr_tgt->ltd_uuid, &tgt_mds->ltd_uuid))
nr++;
}
return nr;
}
-static void lmv_hsm_req_build(struct lmv_obd *lmv,
- struct hsm_user_request *hur_in,
- const struct lmv_tgt_desc *tgt_mds,
- struct hsm_user_request *hur_out)
+static int lmv_hsm_req_build(struct lmv_obd *lmv,
+ struct hsm_user_request *hur_in,
+ const struct lmv_tgt_desc *tgt_mds,
+ struct hsm_user_request *hur_out)
{
int i, nr_out;
struct lmv_tgt_desc *curr_tgt;
@@ -764,6 +766,8 @@ static void lmv_hsm_req_build(struct lmv_obd *lmv,
for (i = 0; i < hur_in->hur_request.hr_itemcount; i++) {
curr_tgt = lmv_find_target(lmv,
&hur_in->hur_user_item[i].hui_fid);
+ if (IS_ERR(curr_tgt))
+ return PTR_ERR(curr_tgt);
if (obd_uuid_equals(&curr_tgt->ltd_uuid, &tgt_mds->ltd_uuid)) {
hur_out->hur_user_item[nr_out] =
hur_in->hur_user_item[i];
@@ -773,6 +777,8 @@ static void lmv_hsm_req_build(struct lmv_obd *lmv,
hur_out->hur_request.hr_itemcount = nr_out;
memcpy(hur_data(hur_out), hur_data(hur_in),
hur_in->hur_request.hr_data_len);
+
+ return 0;
}
static int lmv_hsm_ct_unregister(struct lmv_obd *lmv, unsigned int cmd, int len,
@@ -1052,15 +1058,17 @@ static int lmv_iocontrol(unsigned int cmd, struct obd_export *exp,
} else {
/* split fid list to their respective MDS */
for (i = 0; i < count; i++) {
- unsigned int nr, reqlen;
- int rc1;
struct hsm_user_request *req;
+ size_t reqlen;
+ int nr, rc1;
tgt = lmv->tgts[i];
if (!tgt || !tgt->ltd_exp)
continue;
nr = lmv_hsm_req_count(lmv, hur, tgt);
+ if (nr < 0)
+ return nr;
if (nr == 0) /* nothing for this MDS */
continue;
@@ -1072,10 +1080,13 @@ static int lmv_iocontrol(unsigned int cmd, struct obd_export *exp,
if (!req)
return -ENOMEM;
- lmv_hsm_req_build(lmv, hur, tgt, req);
+ rc1 = lmv_hsm_req_build(lmv, hur, tgt, req);
+ if (rc1 < 0)
+ goto hsm_req_err;
rc1 = obd_iocontrol(cmd, tgt->ltd_exp, reqlen,
req, uarg);
+hsm_req_err:
if (rc1 != 0 && rc == 0)
rc = rc1;
kvfree(req);
diff --git a/drivers/staging/lustre/lustre/ptlrpc/service.c b/drivers/staging/lustre/lustre/ptlrpc/service.c
index 72f3930..9d34848 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/service.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/service.c
@@ -1264,20 +1264,15 @@ static int ptlrpc_server_hpreq_init(struct ptlrpc_service_part *svcpt,
*/
if (req->rq_ops->hpreq_check) {
rc = req->rq_ops->hpreq_check(req);
- /**
- * XXX: Out of all current
- * ptlrpc_hpreq_ops::hpreq_check(), only
- * ldlm_cancel_hpreq_check() can return an error code;
- * other functions assert in similar places, which seems
- * odd. What also does not seem right is that handlers
- * for those RPCs do not assert on the same checks, but
- * rather handle the error cases. e.g. see
- * ost_rw_hpreq_check(), and ost_brw_read(),
- * ost_brw_write().
+ if (rc == -ESTALE) {
+ req->rq_status = rc;
+ ptlrpc_error(req);
+ }
+ /* hpreq_check() can only return an error code,
+ * 0 for a normal request,
+ * or 1 for a high-priority request
*/
- if (rc < 0)
- return rc;
- LASSERT(rc == 0 || rc == 1);
+ LASSERT(rc <= 1);
}
spin_lock_bh(&req->rq_export->exp_rpc_lock);
diff --git a/drivers/staging/rtl8188eu/include/rtw_debug.h b/drivers/staging/rtl8188eu/include/rtw_debug.h
index 95590a1..9cc4b8c 100644
--- a/drivers/staging/rtl8188eu/include/rtw_debug.h
+++ b/drivers/staging/rtl8188eu/include/rtw_debug.h
@@ -70,7 +70,7 @@ extern u32 GlobalDebugLevel;
#define DBG_88E_LEVEL(_level, fmt, arg...) \
do { \
if (_level <= GlobalDebugLevel) \
- pr_info(DRIVER_PREFIX"ERROR " fmt, ##arg); \
+ pr_info(DRIVER_PREFIX fmt, ##arg); \
} while (0)
#define DBG_88E(...) \
diff --git a/drivers/staging/rtl8712/ieee80211.h b/drivers/staging/rtl8712/ieee80211.h
index 67ab580..68fd65e 100644
--- a/drivers/staging/rtl8712/ieee80211.h
+++ b/drivers/staging/rtl8712/ieee80211.h
@@ -138,51 +138,51 @@ struct ieee_ibss_seq {
};
struct ieee80211_hdr {
- u16 frame_ctl;
- u16 duration_id;
+ __le16 frame_ctl;
+ __le16 duration_id;
u8 addr1[ETH_ALEN];
u8 addr2[ETH_ALEN];
u8 addr3[ETH_ALEN];
- u16 seq_ctl;
+ __le16 seq_ctl;
u8 addr4[ETH_ALEN];
-} __packed;
+} __packed __aligned(2);
struct ieee80211_hdr_3addr {
- u16 frame_ctl;
- u16 duration_id;
+ __le16 frame_ctl;
+ __le16 duration_id;
u8 addr1[ETH_ALEN];
u8 addr2[ETH_ALEN];
u8 addr3[ETH_ALEN];
- u16 seq_ctl;
-} __packed;
+ __le16 seq_ctl;
+} __packed __aligned(2);
struct ieee80211_hdr_qos {
- u16 frame_ctl;
- u16 duration_id;
+ __le16 frame_ctl;
+ __le16 duration_id;
u8 addr1[ETH_ALEN];
u8 addr2[ETH_ALEN];
u8 addr3[ETH_ALEN];
- u16 seq_ctl;
+ __le16 seq_ctl;
u8 addr4[ETH_ALEN];
- u16 qc;
-} __packed;
+ __le16 qc;
+} __packed __aligned(2);
struct ieee80211_hdr_3addr_qos {
- u16 frame_ctl;
- u16 duration_id;
+ __le16 frame_ctl;
+ __le16 duration_id;
u8 addr1[ETH_ALEN];
u8 addr2[ETH_ALEN];
u8 addr3[ETH_ALEN];
- u16 seq_ctl;
- u16 qc;
+ __le16 seq_ctl;
+ __le16 qc;
} __packed;
struct eapol {
u8 snap[6];
- u16 ethertype;
+ __be16 ethertype;
u8 version;
u8 type;
- u16 length;
+ __le16 length;
} __packed;
enum eap_type {
@@ -514,13 +514,13 @@ struct ieee80211_security {
*/
struct ieee80211_header_data {
- u16 frame_ctl;
- u16 duration_id;
+ __le16 frame_ctl;
+ __le16 duration_id;
u8 addr1[6];
u8 addr2[6];
u8 addr3[6];
- u16 seq_ctrl;
-};
+ __le16 seq_ctrl;
+} __packed __aligned(2);
#define BEACON_PROBE_SSID_ID_POSITION 12
@@ -552,18 +552,18 @@ struct ieee80211_info_element {
/*
* These are the data types that can make up management packets
*
- u16 auth_algorithm;
- u16 auth_sequence;
- u16 beacon_interval;
- u16 capability;
+ __le16 auth_algorithm;
+ __le16 auth_sequence;
+ __le16 beacon_interval;
+ __le16 capability;
u8 current_ap[ETH_ALEN];
- u16 listen_interval;
+ __le16 listen_interval;
struct {
u16 association_id:14, reserved:2;
} __packed;
- u32 time_stamp[2];
- u16 reason;
- u16 status;
+ __le32 time_stamp[2];
+ __le16 reason;
+ __le16 status;
*/
#define IEEE80211_DEFAULT_TX_ESSID "Penguin"
@@ -571,16 +571,16 @@ struct ieee80211_info_element {
struct ieee80211_authentication {
struct ieee80211_header_data header;
- u16 algorithm;
- u16 transaction;
- u16 status;
+ __le16 algorithm;
+ __le16 transaction;
+ __le16 status;
} __packed;
struct ieee80211_probe_response {
struct ieee80211_header_data header;
- u32 time_stamp[2];
- u16 beacon_interval;
- u16 capability;
+ __le32 time_stamp[2];
+ __le16 beacon_interval;
+ __le16 capability;
struct ieee80211_info_element info_element;
} __packed;
@@ -590,16 +590,16 @@ struct ieee80211_probe_request {
struct ieee80211_assoc_request_frame {
struct ieee80211_hdr_3addr header;
- u16 capability;
- u16 listen_interval;
+ __le16 capability;
+ __le16 listen_interval;
struct ieee80211_info_element_hdr info_element;
} __packed;
struct ieee80211_assoc_response_frame {
struct ieee80211_hdr_3addr header;
- u16 capability;
- u16 status;
- u16 aid;
+ __le16 capability;
+ __le16 status;
+ __le16 aid;
} __packed;
struct ieee80211_txb {
diff --git a/drivers/staging/rtl8712/rtl871x_ioctl_linux.c b/drivers/staging/rtl8712/rtl871x_ioctl_linux.c
index 475e790..2d26f9a 100644
--- a/drivers/staging/rtl8712/rtl871x_ioctl_linux.c
+++ b/drivers/staging/rtl8712/rtl871x_ioctl_linux.c
@@ -199,7 +199,7 @@ static noinline_for_stack char *translate_scan(struct _adapter *padapter,
iwe.cmd = SIOCGIWMODE;
memcpy((u8 *)&cap, r8712_get_capability_from_ie(pnetwork->network.IEs),
2);
- cap = le16_to_cpu(cap);
+ le16_to_cpus(&cap);
if (cap & (WLAN_CAPABILITY_IBSS | WLAN_CAPABILITY_BSS)) {
if (cap & WLAN_CAPABILITY_BSS)
iwe.u.mode = (u32)IW_MODE_MASTER;
diff --git a/drivers/staging/rtl8712/rtl871x_xmit.c b/drivers/staging/rtl8712/rtl871x_xmit.c
index be38364..c478639 100644
--- a/drivers/staging/rtl8712/rtl871x_xmit.c
+++ b/drivers/staging/rtl8712/rtl871x_xmit.c
@@ -344,7 +344,8 @@ sint r8712_update_attrib(struct _adapter *padapter, _pkt *pkt,
* some settings above.
*/
if (check_fwstate(pmlmepriv, WIFI_MP_STATE))
- pattrib->priority = (txdesc.txdw1 >> QSEL_SHT) & 0x1f;
+ pattrib->priority =
+ (le32_to_cpu(txdesc.txdw1) >> QSEL_SHT) & 0x1f;
return _SUCCESS;
}
@@ -485,7 +486,7 @@ static sint make_wlanhdr(struct _adapter *padapter, u8 *hdr,
struct ieee80211_hdr *pwlanhdr = (struct ieee80211_hdr *)hdr;
struct mlme_priv *pmlmepriv = &padapter->mlmepriv;
struct qos_priv *pqospriv = &pmlmepriv->qospriv;
- u16 *fctrl = &pwlanhdr->frame_ctl;
+ __le16 *fctrl = &pwlanhdr->frame_ctl;
memset(hdr, 0, WLANHDR_OFFSET);
SetFrameSubType(fctrl, pattrib->subtype);
@@ -574,7 +575,7 @@ static sint r8712_put_snap(u8 *data, u16 h_proto)
snap->oui[0] = oui[0];
snap->oui[1] = oui[1];
snap->oui[2] = oui[2];
- *(u16 *)(data + SNAP_SIZE) = htons(h_proto);
+ *(__be16 *)(data + SNAP_SIZE) = htons(h_proto);
return SNAP_SIZE + sizeof(u16);
}
diff --git a/drivers/staging/wilc1000/linux_wlan.c b/drivers/staging/wilc1000/linux_wlan.c
index defffa7..07d6e48 100644
--- a/drivers/staging/wilc1000/linux_wlan.c
+++ b/drivers/staging/wilc1000/linux_wlan.c
@@ -1001,7 +1001,7 @@ int wilc_mac_xmit(struct sk_buff *skb, struct net_device *ndev)
tx_data->skb = skb;
eth_h = (struct ethhdr *)(skb->data);
- if (eth_h->h_proto == 0x8e88)
+ if (eth_h->h_proto == cpu_to_be16(0x8e88))
netdev_dbg(ndev, "EAPOL transmitted\n");
ih = (struct iphdr *)(skb->data + sizeof(struct ethhdr));
diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
index e49fcd5..f3c9d18 100644
--- a/drivers/target/iscsi/iscsi_target.c
+++ b/drivers/target/iscsi/iscsi_target.c
@@ -1940,7 +1940,7 @@ iscsit_handle_task_mgt_cmd(struct iscsi_conn *conn, struct iscsi_cmd *cmd,
struct iscsi_tm *hdr;
int out_of_order_cmdsn = 0, ret;
bool sess_ref = false;
- u8 function;
+ u8 function, tcm_function = TMR_UNKNOWN;
hdr = (struct iscsi_tm *) buf;
hdr->flags &= ~ISCSI_FLAG_CMD_FINAL;
@@ -1986,10 +1986,6 @@ iscsit_handle_task_mgt_cmd(struct iscsi_conn *conn, struct iscsi_cmd *cmd,
* LIO-Target $FABRIC_MOD
*/
if (function != ISCSI_TM_FUNC_TASK_REASSIGN) {
-
- u8 tcm_function;
- int ret;
-
transport_init_se_cmd(&cmd->se_cmd, &iscsi_ops,
conn->sess->se_sess, 0, DMA_NONE,
TCM_SIMPLE_TAG, cmd->sense_buffer + 2);
@@ -2025,15 +2021,14 @@ iscsit_handle_task_mgt_cmd(struct iscsi_conn *conn, struct iscsi_cmd *cmd,
return iscsit_add_reject_cmd(cmd,
ISCSI_REASON_BOOKMARK_NO_RESOURCES, buf);
}
-
- ret = core_tmr_alloc_req(&cmd->se_cmd, cmd->tmr_req,
- tcm_function, GFP_KERNEL);
- if (ret < 0)
- return iscsit_add_reject_cmd(cmd,
+ }
+ ret = core_tmr_alloc_req(&cmd->se_cmd, cmd->tmr_req, tcm_function,
+ GFP_KERNEL);
+ if (ret < 0)
+ return iscsit_add_reject_cmd(cmd,
ISCSI_REASON_BOOKMARK_NO_RESOURCES, buf);
- cmd->tmr_req->se_tmr_req = cmd->se_cmd.se_tmr_req;
- }
+ cmd->tmr_req->se_tmr_req = cmd->se_cmd.se_tmr_req;
cmd->iscsi_opcode = ISCSI_OP_SCSI_TMFUNC;
cmd->i_state = ISTATE_SEND_TASKMGTRSP;
diff --git a/drivers/thermal/qcom/msm_lmh_dcvs.c b/drivers/thermal/qcom/msm_lmh_dcvs.c
index 4e5546e..94c93b5 100644
--- a/drivers/thermal/qcom/msm_lmh_dcvs.c
+++ b/drivers/thermal/qcom/msm_lmh_dcvs.c
@@ -58,8 +58,7 @@
#define LIMITS_CLUSTER_0 0x6370302D
#define LIMITS_CLUSTER_1 0x6370312D
-#define LIMITS_DOMAIN_MAX 0x444D4158
-#define LIMITS_DOMAIN_MIN 0x444D494E
+#define LIMITS_FREQ_CAP 0x46434150
#define LIMITS_TEMP_DEFAULT 75000
#define LIMITS_TEMP_HIGH_THRESH_MAX 120000
@@ -225,31 +224,36 @@ static irqreturn_t lmh_dcvs_handle_isr(int irq, void *data)
}
static int limits_dcvs_write(uint32_t node_id, uint32_t fn,
- uint32_t setting, uint32_t val)
+ uint32_t setting, uint32_t val, uint32_t val1,
+ bool enable_val1)
{
int ret;
struct scm_desc desc_arg;
uint32_t *payload = NULL;
+ uint32_t payload_len;
- payload = kzalloc(sizeof(uint32_t) * 5, GFP_KERNEL);
+ payload_len = ((enable_val1) ? 6 : 5) * sizeof(uint32_t);
+ payload = kzalloc(payload_len, GFP_KERNEL);
if (!payload)
return -ENOMEM;
payload[0] = fn; /* algorithm */
payload[1] = 0; /* unused sub-algorithm */
payload[2] = setting;
- payload[3] = 1; /* number of values */
+ payload[3] = enable_val1 ? 2 : 1; /* number of values */
payload[4] = val;
+ if (enable_val1)
+ payload[5] = val1;
desc_arg.args[0] = SCM_BUFFER_PHYS(payload);
- desc_arg.args[1] = sizeof(uint32_t) * 5;
+ desc_arg.args[1] = payload_len;
desc_arg.args[2] = LIMITS_NODE_DCVS;
desc_arg.args[3] = node_id;
desc_arg.args[4] = 0; /* version */
desc_arg.arginfo = SCM_ARGS(5, SCM_RO, SCM_VAL, SCM_VAL,
SCM_VAL, SCM_VAL);
- dmac_flush_range(payload, (void *)payload + 5 * (sizeof(uint32_t)));
+ dmac_flush_range(payload, (void *)payload + payload_len);
ret = scm_call2(SCM_SIP_FNID(SCM_SVC_LMH, LIMITS_DCVSH), &desc_arg);
kfree(payload);
@@ -288,16 +292,17 @@ static int lmh_set_trips(void *data, int low, int high)
hw->temp_limits[LIMITS_TRIP_ARM] = (uint32_t)low;
ret = limits_dcvs_write(hw->affinity, LIMITS_SUB_FN_THERMAL,
- LIMITS_ARM_THRESHOLD, low);
+ LIMITS_ARM_THRESHOLD, low, 0, 0);
if (ret)
return ret;
ret = limits_dcvs_write(hw->affinity, LIMITS_SUB_FN_THERMAL,
- LIMITS_HI_THRESHOLD, high);
+ LIMITS_HI_THRESHOLD, high, 0, 0);
if (ret)
return ret;
ret = limits_dcvs_write(hw->affinity, LIMITS_SUB_FN_THERMAL,
LIMITS_LOW_THRESHOLD,
- high - LIMITS_LOW_THRESHOLD_OFFSET);
+ high - LIMITS_LOW_THRESHOLD_OFFSET,
+ 0, 0);
if (ret)
return ret;
@@ -365,8 +370,9 @@ static int lmh_set_max_limit(int cpu, u32 freq)
max_freq = hw->cdev_data[idx].max_freq;
idx++;
}
- ret = limits_dcvs_write(hw->affinity, LIMITS_SUB_FN_GENERAL,
- LIMITS_DOMAIN_MAX, max_freq);
+ ret = limits_dcvs_write(hw->affinity, LIMITS_SUB_FN_THERMAL,
+ LIMITS_FREQ_CAP, max_freq,
+ (max_freq == U32_MAX) ? 0 : 1, 1);
mutex_unlock(&hw->access_lock);
lmh_dcvs_notify(hw);
@@ -556,22 +562,22 @@ static int limits_dcvs_probe(struct platform_device *pdev)
/* Enable the thermal algorithm early */
ret = limits_dcvs_write(hw->affinity, LIMITS_SUB_FN_THERMAL,
- LIMITS_ALGO_MODE_ENABLE, 1);
+ LIMITS_ALGO_MODE_ENABLE, 1, 0, 0);
if (ret)
return ret;
/* Enable the LMH outer loop algorithm */
ret = limits_dcvs_write(hw->affinity, LIMITS_SUB_FN_CRNT,
- LIMITS_ALGO_MODE_ENABLE, 1);
+ LIMITS_ALGO_MODE_ENABLE, 1, 0, 0);
if (ret)
return ret;
/* Enable the Reliability algorithm */
ret = limits_dcvs_write(hw->affinity, LIMITS_SUB_FN_REL,
- LIMITS_ALGO_MODE_ENABLE, 1);
+ LIMITS_ALGO_MODE_ENABLE, 1, 0, 0);
if (ret)
return ret;
/* Enable the BCL algorithm */
ret = limits_dcvs_write(hw->affinity, LIMITS_SUB_FN_BCL,
- LIMITS_ALGO_MODE_ENABLE, 1);
+ LIMITS_ALGO_MODE_ENABLE, 1, 0, 0);
if (ret)
return ret;
ret = enable_lmh();
diff --git a/drivers/thermal/thermal_core.c b/drivers/thermal/thermal_core.c
index 7663e3c..80c3f91 100644
--- a/drivers/thermal/thermal_core.c
+++ b/drivers/thermal/thermal_core.c
@@ -1325,9 +1325,11 @@ thermal_cooling_device_cur_state_store(struct device *dev,
if ((long)state < 0)
return -EINVAL;
+ mutex_lock(&cdev->lock);
cdev->sysfs_cur_state_req = state;
cdev->updated = false;
+ mutex_unlock(&cdev->lock);
thermal_cdev_update(cdev);
return count;
@@ -1349,9 +1351,11 @@ thermal_cooling_device_min_state_store(struct device *dev,
if ((long)state < 0)
return -EINVAL;
+ mutex_lock(&cdev->lock);
cdev->sysfs_min_state_req = state;
cdev->updated = false;
+ mutex_unlock(&cdev->lock);
thermal_cdev_update(cdev);
return count;
diff --git a/drivers/tty/serial/8250/8250_fintek.c b/drivers/tty/serial/8250/8250_fintek.c
index 0facc78..f8c3107 100644
--- a/drivers/tty/serial/8250/8250_fintek.c
+++ b/drivers/tty/serial/8250/8250_fintek.c
@@ -54,6 +54,9 @@ static int fintek_8250_enter_key(u16 base_port, u8 key)
if (!request_muxed_region(base_port, 2, "8250_fintek"))
return -EBUSY;
+ /* Force all SuperIO chips on this base_port to deactivate */
+ outb(EXIT_KEY, base_port + ADDR_PORT);
+
outb(key, base_port + ADDR_PORT);
outb(key, base_port + ADDR_PORT);
return 0;
diff --git a/drivers/tty/serial/Kconfig b/drivers/tty/serial/Kconfig
index 626cfdc..0c5e9ca 100644
--- a/drivers/tty/serial/Kconfig
+++ b/drivers/tty/serial/Kconfig
@@ -1690,6 +1690,16 @@
and warnings and which allows logins in single user mode)
Otherwise, say 'N'.
+config SERIAL_MSM_SMD
+ bool "Enable tty device interface for some SMD ports"
+ default n
+ depends on MSM_SMD
+ help
+ This driver provides a tty device interface through which userspace
+ clients can communicate over SMD via device nodes, reading from and
+ writing to some streaming SMD ports on MSM chipsets.
+
endmenu
config SERIAL_MCTRL_GPIO
diff --git a/drivers/tty/serial/Makefile b/drivers/tty/serial/Makefile
index 1bdc7f8..882fff9 100644
--- a/drivers/tty/serial/Makefile
+++ b/drivers/tty/serial/Makefile
@@ -98,3 +98,4 @@
# GPIOLIB helpers for modem control lines
obj-$(CONFIG_SERIAL_MCTRL_GPIO) += serial_mctrl_gpio.o
+obj-$(CONFIG_SERIAL_MSM_SMD) += msm_smd_tty.o
diff --git a/drivers/tty/serial/msm_geni_serial.c b/drivers/tty/serial/msm_geni_serial.c
index b142869..89c1681 100644
--- a/drivers/tty/serial/msm_geni_serial.c
+++ b/drivers/tty/serial/msm_geni_serial.c
@@ -127,7 +127,7 @@
} while (0)
#define DMA_RX_BUF_SIZE (2048)
-#define CONSOLE_YIELD_LEN (8 * 1024)
+#define UART_CONSOLE_RX_WM (2)
struct msm_geni_serial_port {
struct uart_port uport;
char name[20];
@@ -163,7 +163,7 @@ struct msm_geni_serial_port {
unsigned int cur_baud;
int ioctl_count;
int edge_count;
- unsigned int tx_yield_count;
+ bool manual_flow;
};
static const struct uart_ops msm_geni_serial_pops;
@@ -266,16 +266,18 @@ static bool check_transfers_inflight(struct uart_port *uport)
/* Possible stop tx is called multiple times. */
m_cmd_active = geni_status & M_GENI_CMD_ACTIVE;
- if (port->xfer_mode == SE_DMA)
+ if (port->xfer_mode == SE_DMA) {
tx_fifo_status = port->tx_dma ? 1 : 0;
- else
+ rx_fifo_status =
+ geni_read_reg_nolog(uport->membase, SE_DMA_RX_LEN_IN);
+ } else {
tx_fifo_status = geni_read_reg_nolog(uport->membase,
SE_GENI_TX_FIFO_STATUS);
- tx_active = m_cmd_active || tx_fifo_status;
- rx_fifo_status = geni_read_reg_nolog(uport->membase,
+ rx_fifo_status = geni_read_reg_nolog(uport->membase,
SE_GENI_RX_FIFO_STATUS);
- if (rx_fifo_status)
- rx_active = true;
+ }
+ tx_active = m_cmd_active || tx_fifo_status;
+ rx_active = rx_fifo_status ? true : false;
if (rx_active || tx_active || !uart_circ_empty(xmit))
xfer_on = true;
@@ -303,10 +305,12 @@ static void wait_for_transfers_inflight(struct uart_port *uport)
u32 geni_ios = geni_read_reg_nolog(uport->membase, SE_GENI_IOS);
u32 rx_fifo_status = geni_read_reg_nolog(uport->membase,
SE_GENI_RX_FIFO_STATUS);
+ u32 rx_dma =
+ geni_read_reg_nolog(uport->membase, SE_DMA_RX_LEN_IN);
IPC_LOG_MSG(port->ipc_log_misc,
- "%s IOS 0x%x geni status 0x%x rx fifo 0x%x\n",
- __func__, geni_ios, geni_status, rx_fifo_status);
+ "%s IOS 0x%x geni status 0x%x rx: fifo 0x%x dma 0x%x\n",
+ __func__, geni_ios, geni_status, rx_fifo_status, rx_dma);
}
}
@@ -405,13 +409,9 @@ static unsigned int msm_geni_serial_get_mctrl(struct uart_port *uport)
{
u32 geni_ios = 0;
unsigned int mctrl = TIOCM_DSR | TIOCM_CAR;
- struct msm_geni_serial_port *port = GET_DEV_PORT(uport);
- if (device_pending_suspend(uport)) {
- IPC_LOG_MSG(port->ipc_log_misc,
- "%s.Device is suspended.\n", __func__);
+ if (device_pending_suspend(uport))
return TIOCM_DSR | TIOCM_CAR | TIOCM_CTS;
- }
geni_ios = geni_read_reg_nolog(uport->membase, SE_GENI_IOS);
if (!(geni_ios & IO2_DATA_IN))
@@ -436,8 +436,12 @@ static void msm_geni_serial_set_mctrl(struct uart_port *uport,
"%s.Device is suspended.\n", __func__);
return;
}
- if (!(mctrl & TIOCM_RTS))
+ if (!(mctrl & TIOCM_RTS)) {
uart_manual_rfr |= (UART_MANUAL_RFR_EN | UART_RFR_NOT_READY);
+ port->manual_flow = true;
+ } else {
+ port->manual_flow = false;
+ }
geni_write_reg_nolog(uart_manual_rfr, uport->membase,
SE_UART_MANUAL_RFR);
/* Write to flow control must complete before return to client*/
@@ -542,7 +546,7 @@ static int msm_geni_serial_poll_bit(struct uart_port *uport,
* Total polling iterations based on FIFO worth of bytes to be
* sent at current baud .Add a little fluff to the wait.
*/
- total_iter = ((fifo_bits * USEC_PER_SEC) / baud);
+ total_iter = ((fifo_bits * USEC_PER_SEC) / baud) / 10;
total_iter += 50;
}
@@ -920,19 +924,12 @@ static void msm_geni_serial_tx_fsm_rst(struct uart_port *uport)
geni_write_reg_nolog(tx_irq_en, uport->membase, SE_DMA_TX_IRQ_EN_SET);
}
-static void msm_geni_serial_stop_tx(struct uart_port *uport)
+static void stop_tx_sequencer(struct uart_port *uport)
{
unsigned int geni_m_irq_en;
unsigned int geni_status;
struct msm_geni_serial_port *port = GET_DEV_PORT(uport);
- if (!uart_console(uport) && device_pending_suspend(uport)) {
- dev_err(uport->dev, "%s.Device is suspended.\n", __func__);
- IPC_LOG_MSG(port->ipc_log_misc,
- "%s.Device is suspended.\n", __func__);
- return;
- }
-
geni_m_irq_en = geni_read_reg_nolog(uport->membase, SE_GENI_M_IRQ_EN);
geni_m_irq_en &= ~M_CMD_DONE_EN;
if (port->xfer_mode == FIFO_MODE) {
@@ -948,9 +945,7 @@ static void msm_geni_serial_stop_tx(struct uart_port *uport)
}
}
port->xmit_size = 0;
-
geni_write_reg_nolog(geni_m_irq_en, uport->membase, SE_GENI_M_IRQ_EN);
-
geni_status = geni_read_reg_nolog(uport->membase,
SE_GENI_STATUS);
/* Possible stop tx is called multiple times. */
@@ -970,13 +965,9 @@ static void msm_geni_serial_stop_tx(struct uart_port *uport)
IPC_LOG_MSG(port->ipc_log_misc, "%s\n", __func__);
}
-static void msm_geni_serial_start_rx(struct uart_port *uport)
+static void msm_geni_serial_stop_tx(struct uart_port *uport)
{
- unsigned int geni_s_irq_en;
- unsigned int geni_m_irq_en;
- unsigned int geni_status;
struct msm_geni_serial_port *port = GET_DEV_PORT(uport);
- int ret;
if (!uart_console(uport) && device_pending_suspend(uport)) {
dev_err(uport->dev, "%s.Device is suspended.\n", __func__);
@@ -984,6 +975,16 @@ static void msm_geni_serial_start_rx(struct uart_port *uport)
"%s.Device is suspended.\n", __func__);
return;
}
+ stop_tx_sequencer(uport);
+}
+
+static void start_rx_sequencer(struct uart_port *uport)
+{
+ unsigned int geni_s_irq_en;
+ unsigned int geni_m_irq_en;
+ unsigned int geni_status;
+ struct msm_geni_serial_port *port = GET_DEV_PORT(uport);
+ int ret;
geni_status = geni_read_reg_nolog(uport->membase, SE_GENI_STATUS);
if (geni_status & S_GENI_CMD_ACTIVE)
@@ -1011,7 +1012,7 @@ static void msm_geni_serial_start_rx(struct uart_port *uport)
dev_err(uport->dev, "%s: RX Prep dma failed %d\n",
__func__, ret);
msm_geni_serial_stop_rx(uport);
- goto exit_geni_serial_start_rx;
+ goto exit_start_rx_sequencer;
}
}
/*
@@ -1020,10 +1021,24 @@ static void msm_geni_serial_start_rx(struct uart_port *uport)
*/
mb();
geni_status = geni_read_reg_nolog(uport->membase, SE_GENI_STATUS);
-exit_geni_serial_start_rx:
+exit_start_rx_sequencer:
IPC_LOG_MSG(port->ipc_log_misc, "%s 0x%x\n", __func__, geni_status);
}
+static void msm_geni_serial_start_rx(struct uart_port *uport)
+{
+ struct msm_geni_serial_port *port = GET_DEV_PORT(uport);
+
+ if (!uart_console(uport) && device_pending_suspend(uport)) {
+ dev_err(uport->dev, "%s.Device is suspended.\n", __func__);
+ IPC_LOG_MSG(port->ipc_log_misc,
+ "%s.Device is suspended.\n", __func__);
+ return;
+ }
+ start_rx_sequencer(&port->uport);
+}
+
+
static void msm_geni_serial_rx_fsm_rst(struct uart_port *uport)
{
unsigned int rx_irq_en;
@@ -1043,19 +1058,15 @@ static void msm_geni_serial_rx_fsm_rst(struct uart_port *uport)
geni_write_reg_nolog(rx_irq_en, uport->membase, SE_DMA_RX_IRQ_EN_SET);
}
-static void msm_geni_serial_stop_rx(struct uart_port *uport)
+static void stop_rx_sequencer(struct uart_port *uport)
{
unsigned int geni_s_irq_en;
unsigned int geni_m_irq_en;
unsigned int geni_status;
struct msm_geni_serial_port *port = GET_DEV_PORT(uport);
u32 irq_clear = S_CMD_DONE_EN;
+ bool done;
- if (!uart_console(uport) && device_pending_suspend(uport)) {
- IPC_LOG_MSG(port->ipc_log_misc,
- "%s.Device is suspended.\n", __func__);
- return;
- }
IPC_LOG_MSG(port->ipc_log_misc, "%s\n", __func__);
if (port->xfer_mode == FIFO_MODE) {
geni_s_irq_en = geni_read_reg_nolog(uport->membase,
@@ -1069,28 +1080,47 @@ static void msm_geni_serial_stop_rx(struct uart_port *uport)
SE_GENI_S_IRQ_EN);
geni_write_reg_nolog(geni_m_irq_en, uport->membase,
SE_GENI_M_IRQ_EN);
- } else if (port->xfer_mode == SE_DMA && port->rx_dma) {
- msm_geni_serial_rx_fsm_rst(uport);
- geni_se_rx_dma_unprep(port->wrapper_dev, port->rx_dma,
- DMA_RX_BUF_SIZE);
- port->rx_dma = (dma_addr_t)NULL;
}
geni_status = geni_read_reg_nolog(uport->membase, SE_GENI_STATUS);
/* Possible stop rx is called multiple times. */
if (!(geni_status & S_GENI_CMD_ACTIVE))
- return;
+ goto exit_rx_seq;
geni_cancel_s_cmd(uport->membase);
/*
* Ensure that the cancel goes through before polling for the
* cancel control bit.
*/
mb();
- msm_geni_serial_poll_bit(uport, SE_GENI_S_CMD_CTRL_REG,
+ done = msm_geni_serial_poll_bit(uport, SE_GENI_S_CMD_CTRL_REG,
S_GENI_CMD_CANCEL, false);
+ geni_status = geni_read_reg_nolog(uport->membase, SE_GENI_STATUS);
+ if (!done)
+ IPC_LOG_MSG(port->ipc_log_misc, "%s Cancel fail 0x%x\n",
+ __func__, geni_status);
+
geni_write_reg_nolog(irq_clear, uport->membase, SE_GENI_S_IRQ_CLEAR);
if ((geni_status & S_GENI_CMD_ACTIVE))
msm_geni_serial_abort_rx(uport);
+exit_rx_seq:
+ if (port->xfer_mode == SE_DMA && port->rx_dma) {
+ msm_geni_serial_rx_fsm_rst(uport);
+ geni_se_rx_dma_unprep(port->wrapper_dev, port->rx_dma,
+ DMA_RX_BUF_SIZE);
+ port->rx_dma = (dma_addr_t)NULL;
+ }
+}
+
+static void msm_geni_serial_stop_rx(struct uart_port *uport)
+{
+ struct msm_geni_serial_port *port = GET_DEV_PORT(uport);
+
+ if (!uart_console(uport) && device_pending_suspend(uport)) {
+ IPC_LOG_MSG(port->ipc_log_misc,
+ "%s.Device is suspended.\n", __func__);
+ return;
+ }
+ stop_rx_sequencer(uport);
}
static int handle_rx_hs(struct uart_port *uport,
@@ -1165,20 +1195,13 @@ static int msm_geni_serial_handle_tx(struct uart_port *uport)
unsigned int fifo_width_bytes =
(uart_console(uport) ? 1 : (msm_port->tx_fifo_width >> 3));
unsigned int geni_m_irq_en;
+ int temp_tail = 0;
- xmit->tail = (xmit->tail + msm_port->xmit_size) & (UART_XMIT_SIZE - 1);
- msm_port->xmit_size = 0;
- if (uart_console(uport) &&
- (uport->icount.tx - msm_port->tx_yield_count) > CONSOLE_YIELD_LEN) {
- msm_port->tx_yield_count = uport->icount.tx;
- msm_geni_serial_stop_tx(uport);
- uart_write_wakeup(uport);
- goto exit_handle_tx;
- }
-
+ xmit_size = uart_circ_chars_pending(xmit);
tx_fifo_status = geni_read_reg_nolog(uport->membase,
SE_GENI_TX_FIFO_STATUS);
- if (uart_circ_empty(xmit) && !tx_fifo_status) {
+ /* Both FIFO and framework buffer are drained */
+ if ((xmit_size == msm_port->xmit_size) && !tx_fifo_status) {
/*
* This will balance out the power vote put in during start_tx
* allowing the device to suspend.
@@ -1188,9 +1211,12 @@ static int msm_geni_serial_handle_tx(struct uart_port *uport)
"%s.Power Off.\n", __func__);
msm_geni_serial_power_off(uport);
}
+ msm_port->xmit_size = 0;
+ uart_circ_clear(xmit);
msm_geni_serial_stop_tx(uport);
goto exit_handle_tx;
}
+ xmit_size -= msm_port->xmit_size;
if (!uart_console(uport)) {
geni_m_irq_en = geni_read_reg_nolog(uport->membase,
@@ -1204,9 +1230,9 @@ static int msm_geni_serial_handle_tx(struct uart_port *uport)
avail_fifo_bytes = (msm_port->tx_fifo_depth - msm_port->tx_wm) *
fifo_width_bytes;
- xmit_size = uart_circ_chars_pending(xmit);
- if (xmit_size > (UART_XMIT_SIZE - xmit->tail))
- xmit_size = UART_XMIT_SIZE - xmit->tail;
+ temp_tail = (xmit->tail + msm_port->xmit_size) & (UART_XMIT_SIZE - 1);
+ if (xmit_size > (UART_XMIT_SIZE - temp_tail))
+ xmit_size = (UART_XMIT_SIZE - temp_tail);
if (xmit_size > avail_fifo_bytes)
xmit_size = avail_fifo_bytes;
@@ -1216,33 +1242,29 @@ static int msm_geni_serial_handle_tx(struct uart_port *uport)
msm_geni_serial_setup_tx(uport, xmit_size);
bytes_remaining = xmit_size;
- dump_ipc(msm_port->ipc_log_tx, "Tx", (char *)&xmit->buf[xmit->tail], 0,
+ dump_ipc(msm_port->ipc_log_tx, "Tx", (char *)&xmit->buf[temp_tail], 0,
xmit_size);
while (i < xmit_size) {
unsigned int tx_bytes;
unsigned int buf = 0;
- int temp_tail;
int c;
tx_bytes = ((bytes_remaining < fifo_width_bytes) ?
bytes_remaining : fifo_width_bytes);
- temp_tail = (xmit->tail + i) & (UART_XMIT_SIZE - 1);
for (c = 0; c < tx_bytes ; c++)
buf |= (xmit->buf[temp_tail + c] << (c * 8));
geni_write_reg_nolog(buf, uport->membase, SE_GENI_TX_FIFOn);
i += tx_bytes;
+ temp_tail = (temp_tail + tx_bytes) & (UART_XMIT_SIZE - 1);
uport->icount.tx += tx_bytes;
bytes_remaining -= tx_bytes;
/* Ensure FIFO write goes through */
wmb();
}
- if (uart_console(uport)) {
+ if (uart_console(uport))
msm_geni_serial_poll_cancel_tx(uport);
- xmit->tail = (xmit->tail + xmit_size) & (UART_XMIT_SIZE - 1);
- } else {
- msm_port->xmit_size = xmit_size;
- }
+ msm_port->xmit_size += xmit_size;
exit_handle_tx:
uart_write_wakeup(uport);
return ret;
@@ -1337,6 +1359,7 @@ static irqreturn_t msm_geni_serial_isr(int isr, void *dev)
unsigned long flags;
unsigned int m_irq_en;
struct msm_geni_serial_port *msm_port = GET_DEV_PORT(uport);
+ struct tty_port *tport = &uport->state->port;
bool drop_rx = false;
spin_lock_irqsave(&uport->lock, flags);
@@ -1366,7 +1389,8 @@ static irqreturn_t msm_geni_serial_isr(int isr, void *dev)
}
if (s_irq_status & S_RX_FIFO_WR_ERR_EN) {
- uport->icount.buf_overrun++;
+ uport->icount.overrun++;
+ tty_insert_flip_char(tport, 0, TTY_OVERRUN);
IPC_LOG_MSG(msm_port->ipc_log_misc,
"%s.sirq 0x%x buf_overrun:%d\n",
__func__, s_irq_status, uport->icount.buf_overrun);
@@ -1505,7 +1529,10 @@ static void set_rfr_wm(struct msm_geni_serial_port *port)
* TX WM level at 10% TX_FIFO_DEPTH.
*/
port->rx_rfr = port->rx_fifo_depth - 2;
- port->rx_wm = port->rx_fifo_depth >> 1;
+ if (!uart_console(&port->uport))
+ port->rx_wm = port->rx_fifo_depth >> 1;
+ else
+ port->rx_wm = UART_CONSOLE_RX_WM;
port->tx_wm = 2;
}
@@ -1890,7 +1917,7 @@ static unsigned int msm_geni_serial_tx_empty(struct uart_port *uport)
struct msm_geni_serial_port *port = GET_DEV_PORT(uport);
if (!uart_console(uport) && device_pending_suspend(uport))
- return 0;
+ return 1;
if (port->xfer_mode == SE_DMA)
tx_fifo_status = port->tx_dma ? 1 : 0;
@@ -2377,6 +2404,11 @@ static int msm_geni_serial_probe(struct platform_device *pdev)
goto exit_geni_serial_probe;
}
+ /* Optional to use the Rx pin as wakeup irq */
+ dev_port->wakeup_irq = platform_get_irq(pdev, 1);
+ if ((dev_port->wakeup_irq < 0 && !is_console))
+ dev_info(&pdev->dev, "No wakeup IRQ configured\n");
+
dev_port->serial_rsc.geni_pinctrl = devm_pinctrl_get(&pdev->dev);
if (IS_ERR_OR_NULL(dev_port->serial_rsc.geni_pinctrl)) {
dev_err(&pdev->dev, "No pinctrl config specified!\n");
@@ -2391,13 +2423,24 @@ static int msm_geni_serial_probe(struct platform_device *pdev)
ret = PTR_ERR(dev_port->serial_rsc.geni_gpio_active);
goto exit_geni_serial_probe;
}
- dev_port->serial_rsc.geni_gpio_sleep =
- pinctrl_lookup_state(dev_port->serial_rsc.geni_pinctrl,
+
+ /*
+ * For clients who set up an in-band wakeup, leave the GPIO pins
+ * always connected to the core; otherwise move the pins to their
+ * defined "sleep" state.
+ */
+ if (dev_port->wakeup_irq > 0) {
+ dev_port->serial_rsc.geni_gpio_sleep =
+ dev_port->serial_rsc.geni_gpio_active;
+ } else {
+ dev_port->serial_rsc.geni_gpio_sleep =
+ pinctrl_lookup_state(dev_port->serial_rsc.geni_pinctrl,
PINCTRL_SLEEP);
- if (IS_ERR_OR_NULL(dev_port->serial_rsc.geni_gpio_sleep)) {
- dev_err(&pdev->dev, "No sleep config specified!\n");
- ret = PTR_ERR(dev_port->serial_rsc.geni_gpio_sleep);
- goto exit_geni_serial_probe;
+ if (IS_ERR_OR_NULL(dev_port->serial_rsc.geni_gpio_sleep)) {
+ dev_err(&pdev->dev, "No sleep config specified!\n");
+ ret = PTR_ERR(dev_port->serial_rsc.geni_gpio_sleep);
+ goto exit_geni_serial_probe;
+ }
}
wakeup_source_init(&dev_port->geni_wake, dev_name(&pdev->dev));
@@ -2414,11 +2457,6 @@ static int msm_geni_serial_probe(struct platform_device *pdev)
goto exit_geni_serial_probe;
}
- /* Optional to use the Rx pin as wakeup irq */
- dev_port->wakeup_irq = platform_get_irq(pdev, 1);
- if ((dev_port->wakeup_irq < 0 && !is_console))
- dev_info(&pdev->dev, "No wakeup IRQ configured\n");
-
uport->private_data = (void *)drv;
platform_set_drvdata(pdev, dev_port);
if (is_console) {
@@ -2462,25 +2500,42 @@ static int msm_geni_serial_runtime_suspend(struct device *dev)
struct platform_device *pdev = to_platform_device(dev);
struct msm_geni_serial_port *port = platform_get_drvdata(pdev);
int ret = 0;
+ u32 uart_manual_rfr = 0;
+ u32 geni_status = geni_read_reg_nolog(port->uport.membase,
+ SE_GENI_STATUS);
wait_for_transfers_inflight(&port->uport);
+ /*
+ * Disable interrupt.
+ * Manual RFR on.
+ * Stop Rx.
+ * Resources off.
+ */
disable_irq(port->uport.irq);
+ /*
+ * If the client hasn't done a manual flow on/off, go ahead and
+ * set this to manual flow on.
+ */
+ if (!port->manual_flow) {
+ uart_manual_rfr |= (UART_MANUAL_RFR_EN | UART_RFR_READY);
+ geni_write_reg_nolog(uart_manual_rfr, port->uport.membase,
+ SE_UART_MANUAL_RFR);
+ /*
+ * Ensure that the manual flow on writes go through before
+ * doing a stop_rx; otherwise we could end up flowing off the peer.
+ */
+ mb();
+ }
+ stop_rx_sequencer(&port->uport);
+ if (geni_status & M_GENI_CMD_ACTIVE)
+ stop_tx_sequencer(&port->uport);
ret = se_geni_resources_off(&port->serial_rsc);
if (ret) {
dev_err(dev, "%s: Error ret %d\n", __func__, ret);
goto exit_runtime_suspend;
}
if (port->wakeup_irq > 0) {
- struct se_geni_rsc *rsc = &port->serial_rsc;
-
port->edge_count = 0;
- ret = pinctrl_select_state(rsc->geni_pinctrl,
- rsc->geni_gpio_active);
- if (ret) {
- dev_err(dev, "%s: Error %d pinctrl_select_state\n",
- __func__, ret);
- goto exit_runtime_suspend;
- }
enable_irq(port->wakeup_irq);
}
IPC_LOG_MSG(port->ipc_log_pwr, "%s:\n", __func__);
@@ -2503,12 +2558,24 @@ static int msm_geni_serial_runtime_resume(struct device *dev)
__pm_stay_awake(&port->geni_wake);
if (port->wakeup_irq > 0)
disable_irq(port->wakeup_irq);
+ /*
+ * Resources on.
+ * Start Rx.
+ * Auto RFR.
+ * Enable IRQ.
+ */
ret = se_geni_resources_on(&port->serial_rsc);
if (ret) {
dev_err(dev, "%s: Error ret %d\n", __func__, ret);
__pm_relax(&port->geni_wake);
goto exit_runtime_resume;
}
+ start_rx_sequencer(&port->uport);
+ if (!port->manual_flow)
+ geni_write_reg_nolog(0, port->uport.membase,
+ SE_UART_MANUAL_RFR);
+ /* Ensure that the Rx is running before enabling interrupts */
+ mb();
enable_irq(port->uport.irq);
IPC_LOG_MSG(port->ipc_log_pwr, "%s:\n", __func__);
exit_runtime_resume:
diff --git a/drivers/tty/serial/msm_smd_tty.c b/drivers/tty/serial/msm_smd_tty.c
new file mode 100644
index 0000000..84ee1dd
--- /dev/null
+++ b/drivers/tty/serial/msm_smd_tty.c
@@ -0,0 +1,1049 @@
+/* Copyright (C) 2007 Google, Inc.
+ * Copyright (c) 2009-2015, 2017, The Linux Foundation. All rights reserved.
+ * Author: Brian Swetland <swetland@google.com>
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/module.h>
+#include <linux/fs.h>
+#include <linux/cdev.h>
+#include <linux/device.h>
+#include <linux/interrupt.h>
+#include <linux/delay.h>
+#include <linux/pm.h>
+#include <linux/platform_device.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/ipc_logging.h>
+#include <linux/of.h>
+#include <linux/suspend.h>
+
+#include <linux/tty.h>
+#include <linux/tty_driver.h>
+#include <linux/tty_flip.h>
+
+#include <soc/qcom/smd.h>
+#include <soc/qcom/smsm.h>
+#include <soc/qcom/subsystem_restart.h>
+
+#define MODULE_NAME "msm_smdtty"
+#define MAX_SMD_TTYS 37
+#define MAX_TTY_BUF_SIZE 2048
+#define TTY_PUSH_WS_DELAY 500
+#define TTY_PUSH_WS_POST_SUSPEND_DELAY 100
+#define MAX_RA_WAKE_LOCK_NAME_LEN 32
+#define SMD_TTY_LOG_PAGES 2
+
+#define SMD_TTY_INFO(buf...) \
+do { \
+ if (smd_tty_log_ctx) { \
+ ipc_log_string(smd_tty_log_ctx, buf); \
+ } \
+} while (0)
+
+#define SMD_TTY_ERR(buf...) \
+do { \
+ if (smd_tty_log_ctx) \
+ ipc_log_string(smd_tty_log_ctx, buf); \
+ pr_err(buf); \
+} while (0)
+
+static void *smd_tty_log_ctx;
+static bool smd_tty_in_suspend;
+static bool smd_tty_read_in_suspend;
+static struct wakeup_source read_in_suspend_ws;
+
+/**
+ * struct smd_tty_info - context for an individual SMD TTY device
+ *
+ * @ch: SMD channel handle
+ * @port: TTY port context structure
+ * @device_ptr: TTY device pointer
+ * @pending_ws: pending-data wakeup source
+ * @tty_tsklt: read tasklet
+ * @buf_req_timer: RX buffer retry timer
+ * @ch_allocated: completion set when SMD channel is allocated
+ * @pil: Peripheral Image Loader handle
+ * @edge: SMD edge associated with port
+ * @ch_name: SMD channel name associated with port
+ * @dev_name: SMD platform device name associated with port
+ *
+ * @open_lock_lha1: open/close lock - used to serialize open/close operations
+ * @open_wait: Timeout in seconds to wait for SMD port to be created / opened
+ *
+ * @reset_lock_lha2: lock for reset and open state
+ * @in_reset: True if SMD channel is closed / in SSR
+ * @in_reset_updated: reset state changed
+ * @is_open: True if SMD port is open
+ * @ch_opened_wait_queue: SMD port open/close wait queue
+ *
+ * @ra_lock_lha3: Read-available lock - used to synchronize reads from SMD
+ * @ra_wakeup_source_name: Name of the read-available wakeup source
+ * @ra_wakeup_source: Read-available wakeup source
+ */
+struct smd_tty_info {
+ smd_channel_t *ch;
+ struct tty_port port;
+ struct device *device_ptr;
+ struct wakeup_source pending_ws;
+ struct tasklet_struct tty_tsklt;
+ struct timer_list buf_req_timer;
+ struct completion ch_allocated;
+ void *pil;
+ uint32_t edge;
+ char ch_name[SMD_MAX_CH_NAME_LEN];
+ char dev_name[SMD_MAX_CH_NAME_LEN];
+
+ struct mutex open_lock_lha1;
+ unsigned int open_wait;
+
+ spinlock_t reset_lock_lha2;
+ int in_reset;
+ int in_reset_updated;
+ int is_open;
+ wait_queue_head_t ch_opened_wait_queue;
+
+ spinlock_t ra_lock_lha3;
+ char ra_wakeup_source_name[MAX_RA_WAKE_LOCK_NAME_LEN];
+ struct wakeup_source ra_wakeup_source;
+};
+
+/**
+ * struct smd_tty_pfdriver - SMD tty channel platform driver structure
+ *
+ * @list: Adds this structure into smd_tty_pfdriver_list::list.
+ * @ref_cnt: reference count for this structure.
+ * @driver: SMD channel platform driver context structure
+ */
+struct smd_tty_pfdriver {
+ struct list_head list;
+ int ref_cnt;
+ struct platform_driver driver;
+};
+
+#define LOOPBACK_IDX 36
+
+static struct delayed_work loopback_work;
+static struct smd_tty_info smd_tty[MAX_SMD_TTYS];
+
+static DEFINE_MUTEX(smd_tty_pfdriver_lock_lha1);
+static LIST_HEAD(smd_tty_pfdriver_list);
+
+static int is_in_reset(struct smd_tty_info *info)
+{
+ return info->in_reset;
+}
+
+static void buf_req_retry(unsigned long param)
+{
+ struct smd_tty_info *info = (struct smd_tty_info *)param;
+ unsigned long flags;
+
+ spin_lock_irqsave(&info->reset_lock_lha2, flags);
+ if (info->is_open) {
+ spin_unlock_irqrestore(&info->reset_lock_lha2, flags);
+ tasklet_hi_schedule(&info->tty_tsklt);
+ return;
+ }
+ spin_unlock_irqrestore(&info->reset_lock_lha2, flags);
+}
+
+static ssize_t open_timeout_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t n)
+{
+ unsigned int num_dev;
+ unsigned long wait;
+
+ if (dev == NULL) {
+ SMD_TTY_INFO("%s: Invalid Device passed", __func__);
+ return -EINVAL;
+ }
+ for (num_dev = 0; num_dev < MAX_SMD_TTYS; num_dev++) {
+ if (dev == smd_tty[num_dev].device_ptr)
+ break;
+ }
+ if (num_dev >= MAX_SMD_TTYS) {
+ SMD_TTY_ERR("[%s]: Device Not found", __func__);
+ return -EINVAL;
+ }
+ if (!kstrtoul(buf, 10, &wait)) {
+ mutex_lock(&smd_tty[num_dev].open_lock_lha1);
+ smd_tty[num_dev].open_wait = wait;
+ mutex_unlock(&smd_tty[num_dev].open_lock_lha1);
+ return n;
+ }
+
+ SMD_TTY_INFO("[%s]: Unable to convert %s to an int",
+ __func__, buf);
+ return -EINVAL;
+
+}
+
+static ssize_t open_timeout_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ unsigned int num_dev;
+ unsigned int open_wait;
+
+ if (dev == NULL) {
+ SMD_TTY_INFO("%s: Invalid Device passed", __func__);
+ return -EINVAL;
+ }
+ for (num_dev = 0; num_dev < MAX_SMD_TTYS; num_dev++) {
+ if (dev == smd_tty[num_dev].device_ptr)
+ break;
+ }
+ if (num_dev >= MAX_SMD_TTYS) {
+ SMD_TTY_ERR("[%s]: Device Not Found", __func__);
+ return -EINVAL;
+ }
+
+ mutex_lock(&smd_tty[num_dev].open_lock_lha1);
+ open_wait = smd_tty[num_dev].open_wait;
+ mutex_unlock(&smd_tty[num_dev].open_lock_lha1);
+
+ return snprintf(buf, PAGE_SIZE, "%d\n", open_wait);
+}
+
+static DEVICE_ATTR
+ (open_timeout, 0664, open_timeout_show, open_timeout_store);
+
+static void smd_tty_read(unsigned long param)
+{
+ unsigned char *ptr;
+ int avail;
+ struct smd_tty_info *info = (struct smd_tty_info *)param;
+ struct tty_struct *tty = tty_port_tty_get(&info->port);
+ unsigned long flags;
+
+ if (!tty)
+ return;
+
+ for (;;) {
+ if (is_in_reset(info)) {
+ /* signal TTY clients using TTY_BREAK */
+ tty_insert_flip_char(tty->port, 0x00, TTY_BREAK);
+ tty_flip_buffer_push(tty->port);
+ break;
+ }
+
+ if (test_bit(TTY_THROTTLED, &tty->flags))
+ break;
+ spin_lock_irqsave(&info->ra_lock_lha3, flags);
+ avail = smd_read_avail(info->ch);
+ if (avail == 0) {
+ __pm_relax(&info->ra_wakeup_source);
+ spin_unlock_irqrestore(&info->ra_lock_lha3, flags);
+ break;
+ }
+ spin_unlock_irqrestore(&info->ra_lock_lha3, flags);
+
+ if (avail > MAX_TTY_BUF_SIZE)
+ avail = MAX_TTY_BUF_SIZE;
+
+ avail = tty_prepare_flip_string(tty->port, &ptr, avail);
+ if (avail <= 0) {
+ mod_timer(&info->buf_req_timer,
+ jiffies + msecs_to_jiffies(30));
+ tty_kref_put(tty);
+ return;
+ }
+
+ if (smd_read(info->ch, ptr, avail) != avail) {
+ /* shouldn't be possible since we're in interrupt
+ * context here and nobody else could 'steal' our
+ * characters.
+ */
+ SMD_TTY_ERR(
+ "%s - Possible smd_tty_buffer mismatch for %s",
+ __func__, info->ch_name);
+ }
+
+ /*
+ * Keep system awake long enough to allow the TTY
+ * framework to pass the flip buffer to any waiting
+ * userspace clients.
+ */
+ __pm_wakeup_event(&info->pending_ws, TTY_PUSH_WS_DELAY);
+
+ if (smd_tty_in_suspend)
+ smd_tty_read_in_suspend = true;
+
+ tty_flip_buffer_push(tty->port);
+ }
+
+ /* XXX only when writable and necessary */
+ tty_wakeup(tty);
+ tty_kref_put(tty);
+}
+
+static void smd_tty_notify(void *priv, unsigned int event)
+{
+ struct smd_tty_info *info = priv;
+ struct tty_struct *tty;
+ unsigned long flags;
+
+ switch (event) {
+ case SMD_EVENT_DATA:
+ spin_lock_irqsave(&info->reset_lock_lha2, flags);
+ if (!info->is_open) {
+ spin_unlock_irqrestore(&info->reset_lock_lha2, flags);
+ break;
+ }
+ spin_unlock_irqrestore(&info->reset_lock_lha2, flags);
+ /* There may be clients (tty framework) that are blocked
+ * waiting for space to write data, so if a possible read
+ * interrupt came in, wake anyone waiting and disable the
+ * interrupts.
+ */
+ if (smd_write_avail(info->ch)) {
+ smd_disable_read_intr(info->ch);
+ tty = tty_port_tty_get(&info->port);
+ if (tty)
+ wake_up_interruptible(&tty->write_wait);
+ tty_kref_put(tty);
+ }
+ spin_lock_irqsave(&info->ra_lock_lha3, flags);
+ if (smd_read_avail(info->ch)) {
+ __pm_stay_awake(&info->ra_wakeup_source);
+ tasklet_hi_schedule(&info->tty_tsklt);
+ }
+ spin_unlock_irqrestore(&info->ra_lock_lha3, flags);
+ break;
+
+ case SMD_EVENT_OPEN:
+ tty = tty_port_tty_get(&info->port);
+ spin_lock_irqsave(&info->reset_lock_lha2, flags);
+ if (tty)
+ clear_bit(TTY_OTHER_CLOSED, &tty->flags);
+ info->in_reset = 0;
+ info->in_reset_updated = 1;
+ info->is_open = 1;
+ wake_up_interruptible(&info->ch_opened_wait_queue);
+ spin_unlock_irqrestore(&info->reset_lock_lha2, flags);
+ tty_kref_put(tty);
+ break;
+
+ case SMD_EVENT_CLOSE:
+ spin_lock_irqsave(&info->reset_lock_lha2, flags);
+ info->in_reset = 1;
+ info->in_reset_updated = 1;
+ info->is_open = 0;
+ wake_up_interruptible(&info->ch_opened_wait_queue);
+ spin_unlock_irqrestore(&info->reset_lock_lha2, flags);
+
+ tty = tty_port_tty_get(&info->port);
+ if (tty) {
+ /* send TTY_BREAK through read tasklet */
+ set_bit(TTY_OTHER_CLOSED, &tty->flags);
+ tasklet_hi_schedule(&info->tty_tsklt);
+
+ if (tty->index == LOOPBACK_IDX)
+ schedule_delayed_work(&loopback_work,
+ msecs_to_jiffies(1000));
+ }
+ tty_kref_put(tty);
+ break;
+ }
+}
+
+static uint32_t is_modem_smsm_inited(void)
+{
+ uint32_t modem_state;
+ uint32_t ready_state = (SMSM_INIT | SMSM_SMDINIT);
+
+ modem_state = smsm_get_state(SMSM_MODEM_STATE);
+ return (modem_state & ready_state) == ready_state;
+}
+
+static int smd_tty_dummy_probe(struct platform_device *pdev)
+{
+ int n;
+
+ for (n = 0; n < MAX_SMD_TTYS; ++n) {
+ if (!smd_tty[n].dev_name[0])
+ continue;
+
+ if (pdev->id == smd_tty[n].edge &&
+ !strcmp(pdev->name, smd_tty[n].dev_name)) {
+ complete_all(&smd_tty[n].ch_allocated);
+ return 0;
+ }
+ }
+ SMD_TTY_ERR("[ERR]%s: unknown device '%s'\n", __func__, pdev->name);
+
+ return -ENODEV;
+}
+
+/**
+ * smd_tty_add_driver() - Add platform drivers for smd tty device
+ *
+ * @info: context for an individual SMD TTY device
+ *
+ * @returns: 0 for success, standard Linux error code otherwise
+ *
+ * This function registers the platform driver once for all smd tty
+ * devices that share the same name, and increments the reference
+ * count for the 2nd through nth devices.
+ */
+static int smd_tty_add_driver(struct smd_tty_info *info)
+{
+ int r = 0;
+ struct smd_tty_pfdriver *smd_tty_pfdriverp;
+ struct smd_tty_pfdriver *item;
+
+ if (!info) {
+ pr_err("%s on a NULL device structure\n", __func__);
+ return -EINVAL;
+ }
+
+ SMD_TTY_INFO("Begin %s on smd_tty[%s]\n", __func__,
+ info->ch_name);
+
+ mutex_lock(&smd_tty_pfdriver_lock_lha1);
+ list_for_each_entry(item, &smd_tty_pfdriver_list, list) {
+ if (!strcmp(item->driver.driver.name, info->dev_name)) {
+ SMD_TTY_INFO("%s:%s Driver Already reg. cnt:%d\n",
+ __func__, info->ch_name, item->ref_cnt);
+ ++item->ref_cnt;
+ goto exit;
+ }
+ }
+
+ smd_tty_pfdriverp = kzalloc(sizeof(*smd_tty_pfdriverp), GFP_KERNEL);
+ if (!smd_tty_pfdriverp) {
+ pr_err("%s: kzalloc() failed for smd_tty_pfdriver[%s]\n",
+ __func__, info->ch_name);
+ r = -ENOMEM;
+ goto exit;
+ }
+
+ smd_tty_pfdriverp->driver.probe = smd_tty_dummy_probe;
+ smd_tty_pfdriverp->driver.driver.name = info->dev_name;
+ smd_tty_pfdriverp->driver.driver.owner = THIS_MODULE;
+ r = platform_driver_register(&smd_tty_pfdriverp->driver);
+ if (r) {
+ pr_err("%s: %s Platform driver reg. failed\n",
+ __func__, info->ch_name);
+ kfree(smd_tty_pfdriverp);
+ goto exit;
+ }
+ ++smd_tty_pfdriverp->ref_cnt;
+ list_add(&smd_tty_pfdriverp->list, &smd_tty_pfdriver_list);
+
+exit:
+ SMD_TTY_INFO("End %s on smd_tty_ch[%s]\n", __func__, info->ch_name);
+ mutex_unlock(&smd_tty_pfdriver_lock_lha1);
+ return r;
+}
+
+/**
+ * smd_tty_remove_driver() - Remove the platform drivers for smd tty device
+ *
+ * @info: context for an individual SMD TTY device
+ *
+ * This function decrements the reference count on the platform
+ * driver for smd tty devices and removes the driver when the
+ * reference count becomes zero.
+ */
+static void smd_tty_remove_driver(struct smd_tty_info *info)
+{
+ struct smd_tty_pfdriver *smd_tty_pfdriverp;
+ bool found_item = false;
+
+ if (!info) {
+ pr_err("%s on a NULL device\n", __func__);
+ return;
+ }
+
+ SMD_TTY_INFO("Begin %s on smd_tty_ch[%s]\n", __func__,
+ info->ch_name);
+ mutex_lock(&smd_tty_pfdriver_lock_lha1);
+ list_for_each_entry(smd_tty_pfdriverp, &smd_tty_pfdriver_list, list) {
+ if (!strcmp(smd_tty_pfdriverp->driver.driver.name,
+ info->dev_name)) {
+ found_item = true;
+ SMD_TTY_INFO("%s:%s Platform driver cnt:%d\n",
+ __func__, info->ch_name,
+ smd_tty_pfdriverp->ref_cnt);
+ if (smd_tty_pfdriverp->ref_cnt > 0)
+ --smd_tty_pfdriverp->ref_cnt;
+ else
+ pr_warn("%s reference count <= 0\n", __func__);
+ break;
+ }
+ }
+ if (!found_item)
+ SMD_TTY_ERR("%s:%s No item found in list.\n",
+ __func__, info->ch_name);
+
+ if (found_item && smd_tty_pfdriverp->ref_cnt == 0) {
+ platform_driver_unregister(&smd_tty_pfdriverp->driver);
+ smd_tty_pfdriverp->driver.probe = NULL;
+ list_del(&smd_tty_pfdriverp->list);
+ kfree(smd_tty_pfdriverp);
+ }
+ mutex_unlock(&smd_tty_pfdriver_lock_lha1);
+ SMD_TTY_INFO("End %s on smd_tty_ch[%s]\n", __func__, info->ch_name);
+}
+
+static int smd_tty_port_activate(struct tty_port *tport,
+ struct tty_struct *tty)
+{
+ int res = 0;
+ unsigned int n = tty->index;
+ struct smd_tty_info *info;
+ const char *peripheral = NULL;
+
+ if (n >= MAX_SMD_TTYS || !smd_tty[n].ch_name[0])
+ return -ENODEV;
+
+ info = smd_tty + n;
+
+ mutex_lock(&info->open_lock_lha1);
+ tty->driver_data = info;
+
+ res = smd_tty_add_driver(info);
+ if (res) {
+ SMD_TTY_ERR("%s:%d Idx smd_tty_driver register failed %d\n",
+ __func__, n, res);
+ goto out;
+ }
+
+ peripheral = smd_edge_to_pil_str(smd_tty[n].edge);
+ if (!IS_ERR_OR_NULL(peripheral)) {
+ info->pil = subsystem_get(peripheral);
+ if (IS_ERR(info->pil)) {
+ SMD_TTY_INFO(
+ "%s failed on smd_tty device %s: subsystem_get failed for %s",
+ __func__, info->ch_name,
+ peripheral);
+
+ /*
+ * Sleep in order to reduce the frequency of
+ * retries by user-space modules and to avoid
+ * a possible watchdog bite.
+ */
+ msleep(smd_tty[n].open_wait * 1000);
+ res = PTR_ERR(info->pil);
+ goto platform_unregister;
+ }
+ }
+
+ /* Wait for the modem SMSM to be initialized so the SMD
+ * loopback channel can be allocated at the modem. Since
+ * the wait needs to be done at most once, using msleep
+ * doesn't degrade the performance.
+ */
+ if (n == LOOPBACK_IDX) {
+ if (!is_modem_smsm_inited())
+ msleep(5000);
+ smsm_change_state(SMSM_APPS_STATE,
+ 0, SMSM_SMD_LOOPBACK);
+ msleep(100);
+ }
+
+ /*
+ * Wait for a channel to be allocated so we know
+ * the modem is ready enough.
+ */
+ if (smd_tty[n].open_wait) {
+ res = wait_for_completion_interruptible_timeout(
+ &info->ch_allocated,
+ msecs_to_jiffies(smd_tty[n].open_wait *
+ 1000));
+
+ if (res == 0) {
+ SMD_TTY_INFO(
+ "Timed out waiting for SMD channel %s",
+ info->ch_name);
+ res = -ETIMEDOUT;
+ goto release_pil;
+ } else if (res < 0) {
+ SMD_TTY_INFO(
+ "Error waiting for SMD channel %s : %d\n",
+ info->ch_name, res);
+ goto release_pil;
+ }
+ }
+
+ tasklet_init(&info->tty_tsklt, smd_tty_read, (unsigned long)info);
+ wakeup_source_init(&info->pending_ws, info->ch_name);
+ scnprintf(info->ra_wakeup_source_name, MAX_RA_WAKE_LOCK_NAME_LEN,
+ "SMD_TTY_%s_RA", info->ch_name);
+ wakeup_source_init(&info->ra_wakeup_source,
+ info->ra_wakeup_source_name);
+
+ res = smd_named_open_on_edge(info->ch_name,
+ smd_tty[n].edge, &info->ch, info,
+ smd_tty_notify);
+ if (res < 0) {
+ SMD_TTY_INFO("%s: %s open failed %d\n",
+ __func__, info->ch_name, res);
+ goto release_wl_tl;
+ }
+
+ res = wait_event_interruptible_timeout(info->ch_opened_wait_queue,
+ info->is_open, (2 * HZ));
+ if (res == 0)
+ res = -ETIMEDOUT;
+ if (res < 0) {
+ SMD_TTY_INFO("%s: wait for %s smd_open failed %d\n",
+ __func__, info->ch_name, res);
+ goto close_ch;
+ }
+ SMD_TTY_INFO("%s with PID %u opened port %s",
+ current->comm, current->pid, info->ch_name);
+ smd_disable_read_intr(info->ch);
+ mutex_unlock(&info->open_lock_lha1);
+ return 0;
+
+close_ch:
+ smd_close(info->ch);
+ info->ch = NULL;
+
+release_wl_tl:
+ tasklet_kill(&info->tty_tsklt);
+ wakeup_source_trash(&info->pending_ws);
+ wakeup_source_trash(&info->ra_wakeup_source);
+
+release_pil:
+ subsystem_put(info->pil);
+
+platform_unregister:
+ smd_tty_remove_driver(info);
+
+out:
+ mutex_unlock(&info->open_lock_lha1);
+
+ return res;
+}
+
+static void smd_tty_port_shutdown(struct tty_port *tport)
+{
+ struct smd_tty_info *info;
+ struct tty_struct *tty = tty_port_tty_get(tport);
+ unsigned long flags;
+
+ if (!tty)
+ return;
+
+ info = tty->driver_data;
+ if (!info) {
+ tty_kref_put(tty);
+ return;
+ }
+
+ mutex_lock(&info->open_lock_lha1);
+
+ spin_lock_irqsave(&info->reset_lock_lha2, flags);
+ info->is_open = 0;
+ spin_unlock_irqrestore(&info->reset_lock_lha2, flags);
+
+ tasklet_kill(&info->tty_tsklt);
+ wakeup_source_trash(&info->pending_ws);
+ wakeup_source_trash(&info->ra_wakeup_source);
+
+ SMD_TTY_INFO("%s with PID %u closed port %s",
+ current->comm, current->pid,
+ info->ch_name);
+ tty->driver_data = NULL;
+ del_timer(&info->buf_req_timer);
+
+ smd_close(info->ch);
+ info->ch = NULL;
+ subsystem_put(info->pil);
+ smd_tty_remove_driver(info);
+
+ mutex_unlock(&info->open_lock_lha1);
+ tty_kref_put(tty);
+}
+
+static int smd_tty_open(struct tty_struct *tty, struct file *f)
+{
+ struct smd_tty_info *info = smd_tty + tty->index;
+
+ return tty_port_open(&info->port, tty, f);
+}
+
+static void smd_tty_close(struct tty_struct *tty, struct file *f)
+{
+ struct smd_tty_info *info = smd_tty + tty->index;
+
+ tty_port_close(&info->port, tty, f);
+}
+
+static int smd_tty_write(struct tty_struct *tty, const unsigned char *buf,
+ int len)
+{
+ struct smd_tty_info *info = tty->driver_data;
+ int avail;
+
+ /* if we're writing to a packet channel we will
+ * never be able to write more data than there
+ * is currently space for
+ */
+ if (is_in_reset(info))
+ return -ENETRESET;
+
+ avail = smd_write_avail(info->ch);
+ /* if no space, we'll have to set up a notification later to wake up the
+ * tty framework when space becomes available
+ */
+ if (!avail) {
+ smd_enable_read_intr(info->ch);
+ return 0;
+ }
+ if (len > avail)
+ len = avail;
+ SMD_TTY_INFO("[WRITE]: PID %u -> port %s %x bytes",
+ current->pid, info->ch_name, len);
+
+ return smd_write(info->ch, buf, len);
+}
+
+static int smd_tty_write_room(struct tty_struct *tty)
+{
+ struct smd_tty_info *info = tty->driver_data;
+
+ return smd_write_avail(info->ch);
+}
+
+static int smd_tty_chars_in_buffer(struct tty_struct *tty)
+{
+ struct smd_tty_info *info = tty->driver_data;
+
+ return smd_read_avail(info->ch);
+}
+
+static void smd_tty_unthrottle(struct tty_struct *tty)
+{
+ struct smd_tty_info *info = tty->driver_data;
+ unsigned long flags;
+
+ spin_lock_irqsave(&info->reset_lock_lha2, flags);
+ if (info->is_open) {
+ spin_unlock_irqrestore(&info->reset_lock_lha2, flags);
+ tasklet_hi_schedule(&info->tty_tsklt);
+ return;
+ }
+ spin_unlock_irqrestore(&info->reset_lock_lha2, flags);
+}
+
+/*
+ * Returns the current TIOCM status bits including:
+ * SMD Signals (DTR/DSR, CTS/RTS, CD, RI)
+ * TIOCM_OUT1 - reset state (1=in reset)
+ * TIOCM_OUT2 - reset state updated (1=updated)
+ */
+static int smd_tty_tiocmget(struct tty_struct *tty)
+{
+ struct smd_tty_info *info = tty->driver_data;
+ unsigned long flags;
+ int tiocm;
+
+ tiocm = smd_tiocmget(info->ch);
+
+ spin_lock_irqsave(&info->reset_lock_lha2, flags);
+ tiocm |= (info->in_reset ? TIOCM_OUT1 : 0);
+ if (info->in_reset_updated) {
+ tiocm |= TIOCM_OUT2;
+ info->in_reset_updated = 0;
+ }
+ SMD_TTY_INFO("PID %u --> %s TIOCM is %x ",
+ current->pid, __func__, tiocm);
+ spin_unlock_irqrestore(&info->reset_lock_lha2, flags);
+
+ return tiocm;
+}
+
+static int smd_tty_tiocmset(struct tty_struct *tty,
+ unsigned int set, unsigned int clear)
+{
+ struct smd_tty_info *info = tty->driver_data;
+
+ if (info->in_reset)
+ return -ENETRESET;
+
+ SMD_TTY_INFO("PID %u --> %s Set: %x Clear: %x",
+ current->pid, __func__, set, clear);
+ return smd_tiocmset(info->ch, set, clear);
+}
+
+static void loopback_probe_worker(struct work_struct *work)
+{
+ /* wait for modem to restart before requesting loopback server */
+ if (!is_modem_smsm_inited())
+ schedule_delayed_work(&loopback_work, msecs_to_jiffies(1000));
+ else
+ smsm_change_state(SMSM_APPS_STATE,
+ 0, SMSM_SMD_LOOPBACK);
+}
+
+static const struct tty_port_operations smd_tty_port_ops = {
+ .shutdown = smd_tty_port_shutdown,
+ .activate = smd_tty_port_activate,
+};
+
+static const struct tty_operations smd_tty_ops = {
+ .open = smd_tty_open,
+ .close = smd_tty_close,
+ .write = smd_tty_write,
+ .write_room = smd_tty_write_room,
+ .chars_in_buffer = smd_tty_chars_in_buffer,
+ .unthrottle = smd_tty_unthrottle,
+ .tiocmget = smd_tty_tiocmget,
+ .tiocmset = smd_tty_tiocmset,
+};
+
+static int smd_tty_pm_notifier(struct notifier_block *nb,
+ unsigned long event, void *unused)
+{
+ switch (event) {
+ case PM_SUSPEND_PREPARE:
+ smd_tty_read_in_suspend = false;
+ smd_tty_in_suspend = true;
+ break;
+
+ case PM_POST_SUSPEND:
+ smd_tty_in_suspend = false;
+ if (smd_tty_read_in_suspend) {
+ smd_tty_read_in_suspend = false;
+ __pm_wakeup_event(&read_in_suspend_ws,
+ TTY_PUSH_WS_POST_SUSPEND_DELAY);
+ }
+ break;
+ }
+ return NOTIFY_DONE;
+}
+
+static struct notifier_block smd_tty_pm_nb = {
+ .notifier_call = smd_tty_pm_notifier,
+ .priority = 0,
+};
+
+/**
+ * smd_tty_log_init() - Init function for IPC logging
+ *
+ * Initialize the buffer that is used to provide the log information
+ * pertaining to the smd_tty module.
+ */
+static void smd_tty_log_init(void)
+{
+ smd_tty_log_ctx = ipc_log_context_create(SMD_TTY_LOG_PAGES,
+ "smd_tty", 0);
+ if (!smd_tty_log_ctx)
+ pr_err("%s: Unable to create IPC log", __func__);
+}
+
+static struct tty_driver *smd_tty_driver;
+
+static int smd_tty_register_driver(void)
+{
+ int ret;
+
+ smd_tty_driver = alloc_tty_driver(MAX_SMD_TTYS);
+ if (!smd_tty_driver) {
+ SMD_TTY_ERR("%s - Driver allocation failed", __func__);
+ return -ENOMEM;
+ }
+
+ smd_tty_driver->owner = THIS_MODULE;
+ smd_tty_driver->driver_name = "smd_tty_driver";
+ smd_tty_driver->name = "smd";
+ smd_tty_driver->major = 0;
+ smd_tty_driver->minor_start = 0;
+ smd_tty_driver->type = TTY_DRIVER_TYPE_SERIAL;
+ smd_tty_driver->subtype = SERIAL_TYPE_NORMAL;
+ smd_tty_driver->init_termios = tty_std_termios;
+ smd_tty_driver->init_termios.c_iflag = 0;
+ smd_tty_driver->init_termios.c_oflag = 0;
+ smd_tty_driver->init_termios.c_cflag = B38400 | CS8 | CREAD;
+ smd_tty_driver->init_termios.c_lflag = 0;
+ smd_tty_driver->flags = TTY_DRIVER_RESET_TERMIOS |
+ TTY_DRIVER_REAL_RAW | TTY_DRIVER_DYNAMIC_DEV;
+ tty_set_operations(smd_tty_driver, &smd_tty_ops);
+
+ ret = tty_register_driver(smd_tty_driver);
+ if (ret) {
+ put_tty_driver(smd_tty_driver);
+ SMD_TTY_ERR("%s: driver registration failed %d", __func__, ret);
+ }
+
+ return ret;
+}
+
+static void smd_tty_device_init(int idx)
+{
+ struct tty_port *port;
+
+ port = &smd_tty[idx].port;
+ tty_port_init(port);
+ port->ops = &smd_tty_port_ops;
+ smd_tty[idx].device_ptr = tty_port_register_device(port, smd_tty_driver,
+ idx, NULL);
+ if (IS_ERR_OR_NULL(smd_tty[idx].device_ptr)) {
+ SMD_TTY_ERR("%s: Unable to register tty port %s reason %d\n",
+ __func__,
+ smd_tty[idx].ch_name,
+ PTR_ERR_OR_ZERO(smd_tty[idx].device_ptr));
+ return;
+ }
+ init_completion(&smd_tty[idx].ch_allocated);
+ mutex_init(&smd_tty[idx].open_lock_lha1);
+ spin_lock_init(&smd_tty[idx].reset_lock_lha2);
+ spin_lock_init(&smd_tty[idx].ra_lock_lha3);
+ smd_tty[idx].is_open = 0;
+ setup_timer(&smd_tty[idx].buf_req_timer, buf_req_retry,
+ (unsigned long)&smd_tty[idx]);
+ init_waitqueue_head(&smd_tty[idx].ch_opened_wait_queue);
+
+ if (device_create_file(smd_tty[idx].device_ptr, &dev_attr_open_timeout))
+ SMD_TTY_ERR("%s: Unable to create device attributes for %s",
+ __func__, smd_tty[idx].ch_name);
+}
+
+static int smd_tty_devicetree_init(struct platform_device *pdev)
+{
+ int ret;
+ int idx;
+ int edge;
+ char *key = NULL;
+ const char *ch_name;
+ const char *dev_name;
+ const char *remote_ss;
+ struct device_node *node;
+
+ ret = smd_tty_register_driver();
+ if (ret) {
+ SMD_TTY_ERR("%s: driver registration failed %d\n",
+ __func__, ret);
+ return ret;
+ }
+
+ for_each_child_of_node(pdev->dev.of_node, node) {
+
+ ret = of_alias_get_id(node, "smd");
+ SMD_TTY_INFO("%s:adding smd%d\n", __func__, ret);
+
+ if (ret < 0 || ret >= MAX_SMD_TTYS)
+ goto error;
+ idx = ret;
+
+ key = "qcom,smdtty-remote";
+ remote_ss = of_get_property(node, key, NULL);
+ if (!remote_ss)
+ goto error;
+
+ edge = smd_remote_ss_to_edge(remote_ss);
+ if (edge < 0)
+ goto error;
+ smd_tty[idx].edge = edge;
+
+ key = "qcom,smdtty-port-name";
+ ch_name = of_get_property(node, key, NULL);
+ if (!ch_name)
+ goto error;
+ strlcpy(smd_tty[idx].ch_name, ch_name,
+ SMD_MAX_CH_NAME_LEN);
+
+ key = "qcom,smdtty-dev-name";
+ dev_name = of_get_property(node, key, NULL);
+ if (!dev_name) {
+ strlcpy(smd_tty[idx].dev_name, smd_tty[idx].ch_name,
+ SMD_MAX_CH_NAME_LEN);
+ } else {
+ strlcpy(smd_tty[idx].dev_name, dev_name,
+ SMD_MAX_CH_NAME_LEN);
+ }
+
+ smd_tty_device_init(idx);
+ }
+ INIT_DELAYED_WORK(&loopback_work, loopback_probe_worker);
+
+ ret = register_pm_notifier(&smd_tty_pm_nb);
+ if (ret)
+ pr_err("%s: power state notif error %d\n", __func__, ret);
+
+ return 0;
+
+error:
+ SMD_TTY_ERR("%s:Initialization error, key[%s]\n", __func__, key);
+ /* Unregister tty platform devices */
+ for_each_child_of_node(pdev->dev.of_node, node) {
+
+ ret = of_alias_get_id(node, "smd");
+ SMD_TTY_INFO("%s:Removing smd%d\n", __func__, ret);
+
+ if (ret < 0 || ret >= MAX_SMD_TTYS)
+ goto out;
+ idx = ret;
+
+ if (smd_tty[idx].device_ptr) {
+ device_remove_file(smd_tty[idx].device_ptr,
+ &dev_attr_open_timeout);
+ tty_unregister_device(smd_tty_driver, idx);
+ }
+ }
+out:
+ tty_unregister_driver(smd_tty_driver);
+ put_tty_driver(smd_tty_driver);
+ return ret;
+}
+
+static int msm_smd_tty_probe(struct platform_device *pdev)
+{
+ int ret;
+
+ if (pdev) {
+ if (pdev->dev.of_node) {
+ ret = smd_tty_devicetree_init(pdev);
+ if (ret) {
+ SMD_TTY_ERR("%s: device tree init failed\n",
+ __func__);
+ return ret;
+ }
+ }
+ }
+
+ return 0;
+}
+
+static const struct of_device_id msm_smd_tty_match_table[] = {
+ { .compatible = "qcom,smdtty" },
+ {},
+};
+
+static struct platform_driver msm_smd_tty_driver = {
+ .probe = msm_smd_tty_probe,
+ .driver = {
+ .name = MODULE_NAME,
+ .owner = THIS_MODULE,
+ .of_match_table = msm_smd_tty_match_table,
+ },
+};
+
+
+static int __init smd_tty_init(void)
+{
+ int rc;
+
+ smd_tty_log_init();
+ rc = platform_driver_register(&msm_smd_tty_driver);
+ if (rc) {
+ SMD_TTY_ERR("%s: msm_smd_tty_driver register failed %d\n",
+ __func__, rc);
+ return rc;
+ }
+
+ wakeup_source_init(&read_in_suspend_ws, "SMDTTY_READ_IN_SUSPEND");
+ return 0;
+}
+
+module_init(smd_tty_init);
diff --git a/drivers/tty/serial/omap-serial.c b/drivers/tty/serial/omap-serial.c
index 44e5b5b..472ba3c 100644
--- a/drivers/tty/serial/omap-serial.c
+++ b/drivers/tty/serial/omap-serial.c
@@ -693,7 +693,7 @@ static void serial_omap_set_mctrl(struct uart_port *port, unsigned int mctrl)
if ((mctrl & TIOCM_RTS) && (port->status & UPSTAT_AUTORTS))
up->efr |= UART_EFR_RTS;
else
- up->efr &= UART_EFR_RTS;
+ up->efr &= ~UART_EFR_RTS;
serial_out(up, UART_EFR, up->efr);
serial_out(up, UART_LCR, lcr);
diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
index 7e97a1c..15eaea5 100644
--- a/drivers/tty/serial/sh-sci.c
+++ b/drivers/tty/serial/sh-sci.c
@@ -193,18 +193,17 @@ static const struct plat_sci_reg sci_regmap[SCIx_NR_REGTYPES][SCIx_NR_REGS] = {
},
/*
- * Common definitions for legacy IrDA ports, dependent on
- * regshift value.
+ * Common definitions for legacy IrDA ports.
*/
[SCIx_IRDA_REGTYPE] = {
[SCSMR] = { 0x00, 8 },
- [SCBRR] = { 0x01, 8 },
- [SCSCR] = { 0x02, 8 },
- [SCxTDR] = { 0x03, 8 },
- [SCxSR] = { 0x04, 8 },
- [SCxRDR] = { 0x05, 8 },
- [SCFCR] = { 0x06, 8 },
- [SCFDR] = { 0x07, 16 },
+ [SCBRR] = { 0x02, 8 },
+ [SCSCR] = { 0x04, 8 },
+ [SCxTDR] = { 0x06, 8 },
+ [SCxSR] = { 0x08, 16 },
+ [SCxRDR] = { 0x0a, 8 },
+ [SCFCR] = { 0x0c, 8 },
+ [SCFDR] = { 0x0e, 16 },
[SCTFDR] = sci_reg_invalid,
[SCRFDR] = sci_reg_invalid,
[SCSPTR] = sci_reg_invalid,
diff --git a/drivers/usb/core/devio.c b/drivers/usb/core/devio.c
index c8075eb..fa61935 100644
--- a/drivers/usb/core/devio.c
+++ b/drivers/usb/core/devio.c
@@ -1838,6 +1838,18 @@ static int proc_unlinkurb(struct usb_dev_state *ps, void __user *arg)
return 0;
}
+static void compute_isochronous_actual_length(struct urb *urb)
+{
+ unsigned int i;
+
+ if (urb->number_of_packets > 0) {
+ urb->actual_length = 0;
+ for (i = 0; i < urb->number_of_packets; i++)
+ urb->actual_length +=
+ urb->iso_frame_desc[i].actual_length;
+ }
+}
+
static int processcompl(struct async *as, void __user * __user *arg)
{
struct urb *urb = as->urb;
@@ -1845,6 +1857,7 @@ static int processcompl(struct async *as, void __user * __user *arg)
void __user *addr = as->userurb;
unsigned int i;
+ compute_isochronous_actual_length(urb);
if (as->userbuffer && urb->actual_length) {
if (copy_urb_data_to_user(as->userbuffer, urb))
goto err_out;
@@ -2019,6 +2032,7 @@ static int processcompl_compat(struct async *as, void __user * __user *arg)
void __user *addr = as->userurb;
unsigned int i;
+ compute_isochronous_actual_length(urb);
if (as->userbuffer && urb->actual_length) {
if (copy_urb_data_to_user(as->userbuffer, urb))
return -EFAULT;
diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c
index 7b8ca7d..e4b39a7 100644
--- a/drivers/usb/core/hcd.c
+++ b/drivers/usb/core/hcd.c
@@ -2290,6 +2290,14 @@ int usb_hcd_get_controller_id(struct usb_device *udev)
return hcd->driver->get_core_id(hcd);
}
+int usb_hcd_stop_endpoint(struct usb_device *udev,
+ struct usb_host_endpoint *ep)
+{
+ struct usb_hcd *hcd = bus_to_hcd(udev->bus);
+
+ return hcd->driver->stop_endpoint(hcd, udev, ep);
+}
+
#ifdef CONFIG_PM
int hcd_bus_suspend(struct usb_device *rhdev, pm_message_t msg)
@@ -3093,6 +3101,7 @@ void usb_remove_hcd(struct usb_hcd *hcd)
}
usb_put_invalidate_rhdev(hcd);
+ hcd->flags = 0;
}
EXPORT_SYMBOL_GPL(usb_remove_hcd);
diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
index 70c90e4..50a6f2f 100644
--- a/drivers/usb/core/hub.c
+++ b/drivers/usb/core/hub.c
@@ -4744,7 +4744,7 @@ hub_power_remaining(struct usb_hub *hub)
static void hub_port_connect(struct usb_hub *hub, int port1, u16 portstatus,
u16 portchange)
{
- int status = -ENODEV;
+ int ret, status = -ENODEV;
int i;
unsigned unit_load;
struct usb_device *hdev = hub->hdev;
@@ -4752,6 +4752,7 @@ static void hub_port_connect(struct usb_hub *hub, int port1, u16 portstatus,
struct usb_port *port_dev = hub->ports[port1 - 1];
struct usb_device *udev = port_dev->child;
static int unreliable_port = -1;
+ enum usb_device_speed dev_speed = USB_SPEED_UNKNOWN;
/* Disconnect any existing devices under this port */
if (udev) {
@@ -4806,6 +4807,7 @@ static void hub_port_connect(struct usb_hub *hub, int port1, u16 portstatus,
else
unit_load = 100;
+retry_enum:
status = 0;
for (i = 0; i < SET_CONFIG_TRIES; i++) {
@@ -4843,6 +4845,13 @@ static void hub_port_connect(struct usb_hub *hub, int port1, u16 portstatus,
if (status < 0)
goto loop;
+ dev_speed = udev->speed;
+		if (udev->speed > USB_SPEED_UNKNOWN &&
+		    udev->speed <= USB_SPEED_HIGH && hcd->usb_phy &&
+		    hcd->usb_phy->disable_chirp)

+ hcd->usb_phy->disable_chirp(hcd->usb_phy,
+ false);
+
if (udev->quirks & USB_QUIRK_DELAY_INIT)
msleep(2000);
@@ -4945,6 +4954,19 @@ static void hub_port_connect(struct usb_hub *hub, int port1, u16 portstatus,
if (status != -ENOTCONN && status != -ENODEV)
dev_err(&port_dev->dev,
"unable to enumerate USB device\n");
+ if (!hub->hdev->parent && dev_speed == USB_SPEED_UNKNOWN
+ && hcd->usb_phy && hcd->usb_phy->disable_chirp) {
+ ret = hcd->usb_phy->disable_chirp(hcd->usb_phy, true);
+ if (!ret) {
+ dev_dbg(&port_dev->dev,
+ "chirp disabled re-try enum\n");
+ goto retry_enum;
+ } else {
+ /* bail out and re-enable chirping */
+ hcd->usb_phy->disable_chirp(hcd->usb_phy,
+ false);
+ }
+ }
}
done:
diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
index a6aaf2f..37c418e 100644
--- a/drivers/usb/core/quirks.c
+++ b/drivers/usb/core/quirks.c
@@ -221,6 +221,9 @@ static const struct usb_device_id usb_quirk_list[] = {
/* Corsair Strafe RGB */
{ USB_DEVICE(0x1b1c, 0x1b20), .driver_info = USB_QUIRK_DELAY_INIT },
+ /* Corsair K70 LUX */
+ { USB_DEVICE(0x1b1c, 0x1b36), .driver_info = USB_QUIRK_DELAY_INIT },
+
/* MIDI keyboard WORLDE MINI */
{ USB_DEVICE(0x1c75, 0x0204), .driver_info =
USB_QUIRK_CONFIG_INTF_STRINGS },
diff --git a/drivers/usb/core/usb.c b/drivers/usb/core/usb.c
index d745733..bb2a4fe 100644
--- a/drivers/usb/core/usb.c
+++ b/drivers/usb/core/usb.c
@@ -734,6 +734,12 @@ int usb_get_controller_id(struct usb_device *dev)
}
EXPORT_SYMBOL(usb_get_controller_id);
+int usb_stop_endpoint(struct usb_device *dev, struct usb_host_endpoint *ep)
+{
+ return usb_hcd_stop_endpoint(dev, ep);
+}
+EXPORT_SYMBOL(usb_stop_endpoint);
+
/*-------------------------------------------------------------------*/
/*
* __usb_get_extra_descriptor() finds a descriptor of specific type in the
diff --git a/drivers/usb/dwc3/core.h b/drivers/usb/dwc3/core.h
index f511055..68a40f9 100644
--- a/drivers/usb/dwc3/core.h
+++ b/drivers/usb/dwc3/core.h
@@ -480,6 +480,8 @@
#define DWC3_DEPCMD_SETTRANSFRESOURCE (0x02 << 0)
#define DWC3_DEPCMD_SETEPCONFIG (0x01 << 0)
+#define DWC3_DEPCMD_CMD(x) ((x) & 0xf)
+
/* The EP number goes 0..31 so ep0 is always out and ep1 is always in */
#define DWC3_DALEPENA_EP(n) (1 << n)
@@ -613,6 +615,7 @@ struct dwc3_ep {
#define DWC3_EP_BUSY (1 << 4)
#define DWC3_EP_PENDING_REQUEST (1 << 5)
#define DWC3_EP_MISSED_ISOC (1 << 6)
+#define DWC3_EP_TRANSFER_STARTED (1 << 8)
/* This last one is specific to EP0 */
#define DWC3_EP0_DIR_IN (1 << 31)
diff --git a/drivers/usb/dwc3/dwc3-msm.c b/drivers/usb/dwc3/dwc3-msm.c
index b6ad39b..874499d 100644
--- a/drivers/usb/dwc3/dwc3-msm.c
+++ b/drivers/usb/dwc3/dwc3-msm.c
@@ -245,6 +245,10 @@ struct dwc3_msm {
struct notifier_block dwc3_cpu_notifier;
struct notifier_block usbdev_nb;
bool hc_died;
+	/* USB connector type: either type-C or microAB */
+ bool type_c;
+ /* whether to vote for VBUS reg in host mode */
+ bool no_vbus_vote_type_c;
struct extcon_dev *extcon_vbus;
struct extcon_dev *extcon_id;
@@ -2898,7 +2902,7 @@ static int dwc3_msm_eud_notifier(struct notifier_block *nb,
return NOTIFY_DONE;
}
-static int dwc3_msm_extcon_register(struct dwc3_msm *mdwc)
+static int dwc3_msm_extcon_register(struct dwc3_msm *mdwc, int start_idx)
{
struct device_node *node = mdwc->dev->of_node;
struct extcon_dev *edev;
@@ -2907,8 +2911,11 @@ static int dwc3_msm_extcon_register(struct dwc3_msm *mdwc)
if (!of_property_read_bool(node, "extcon"))
return 0;
- /* Use first phandle (mandatory) for USB vbus status notification */
- edev = extcon_get_edev_by_phandle(mdwc->dev, 0);
+ /*
+ * Use mandatory phandle (index 0 for type-C; index 3 for microUSB)
+ * for USB vbus status notification
+ */
+ edev = extcon_get_edev_by_phandle(mdwc->dev, start_idx);
if (IS_ERR(edev) && PTR_ERR(edev) != -ENODEV)
return PTR_ERR(edev);
@@ -2923,9 +2930,12 @@ static int dwc3_msm_extcon_register(struct dwc3_msm *mdwc)
}
}
- /* Use second phandle (optional) for USB ID status notification */
- if (of_count_phandle_with_args(node, "extcon", NULL) > 1) {
- edev = extcon_get_edev_by_phandle(mdwc->dev, 1);
+ /*
+ * Use optional phandle (index 1 for type-C; index 4 for microUSB)
+ * for USB ID status notification
+ */
+ if (of_count_phandle_with_args(node, "extcon", NULL) > start_idx + 1) {
+ edev = extcon_get_edev_by_phandle(mdwc->dev, start_idx + 1);
if (IS_ERR(edev) && PTR_ERR(edev) != -ENODEV) {
ret = PTR_ERR(edev);
goto err;
@@ -2953,12 +2963,12 @@ static int dwc3_msm_extcon_register(struct dwc3_msm *mdwc)
}
edev = NULL;
- /* Use third phandle (optional) for EUD based detach/attach events */
+ /* Use optional phandle (index 2) for EUD based detach/attach events */
if (of_count_phandle_with_args(node, "extcon", NULL) > 2) {
edev = extcon_get_edev_by_phandle(mdwc->dev, 2);
if (IS_ERR(edev) && PTR_ERR(edev) != -ENODEV) {
ret = PTR_ERR(edev);
- goto err1;
+ goto err2;
}
}
@@ -3408,10 +3418,6 @@ static int dwc3_msm_probe(struct platform_device *pdev)
if (of_property_read_bool(node, "qcom,disable-dev-mode-pm"))
pm_runtime_get_noresume(mdwc->dev);
- ret = dwc3_msm_extcon_register(mdwc);
- if (ret)
- goto put_dwc3;
-
ret = of_property_read_u32(node, "qcom,pm-qos-latency",
&mdwc->pm_qos_latency);
if (ret) {
@@ -3419,15 +3425,34 @@ static int dwc3_msm_probe(struct platform_device *pdev)
mdwc->pm_qos_latency = 0;
}
+ mdwc->no_vbus_vote_type_c = of_property_read_bool(node,
+ "qcom,no-vbus-vote-with-type-C");
+
+ /* Mark type-C as true by default */
+ mdwc->type_c = true;
+
mdwc->usb_psy = power_supply_get_by_name("usb");
if (!mdwc->usb_psy) {
dev_warn(mdwc->dev, "Could not get usb power_supply\n");
pval.intval = -EINVAL;
} else {
power_supply_get_property(mdwc->usb_psy,
+ POWER_SUPPLY_PROP_CONNECTOR_TYPE, &pval);
+ if (pval.intval == POWER_SUPPLY_CONNECTOR_MICRO_USB)
+ mdwc->type_c = false;
+ power_supply_get_property(mdwc->usb_psy,
POWER_SUPPLY_PROP_PRESENT, &pval);
}
+ /*
+	 * Starting indices of the extcon phandles in DT:
+ * type-C : 0
+ * microUSB : 3
+ */
+ ret = dwc3_msm_extcon_register(mdwc, mdwc->type_c ? 0 : 3);
+ if (ret)
+ goto put_psy;
+
mutex_init(&mdwc->suspend_resume_mutex);
/* Update initial VBUS/ID state from extcon */
if (mdwc->extcon_vbus && extcon_get_state(mdwc->extcon_vbus,
@@ -3456,6 +3481,12 @@ static int dwc3_msm_probe(struct platform_device *pdev)
return 0;
+put_psy:
+ if (mdwc->usb_psy)
+ power_supply_put(mdwc->usb_psy);
+
+ if (cpu_to_affin)
+ unregister_cpu_notifier(&mdwc->dwc3_cpu_notifier);
put_dwc3:
if (mdwc->bus_perf_client)
msm_bus_scale_unregister_client(mdwc->bus_perf_client);
@@ -3478,6 +3509,8 @@ static int dwc3_msm_remove(struct platform_device *pdev)
int ret_pm;
device_remove_file(&pdev->dev, &dev_attr_mode);
+ if (mdwc->usb_psy)
+ power_supply_put(mdwc->usb_psy);
if (cpu_to_affin)
unregister_cpu_notifier(&mdwc->dwc3_cpu_notifier);
@@ -3680,7 +3713,8 @@ static int dwc3_otg_start_host(struct dwc3_msm *mdwc, int on)
* IS_ERR: regulator could not be obtained, so skip using it
* Valid pointer otherwise
*/
- if (!mdwc->vbus_reg) {
+ if (!mdwc->vbus_reg && (!mdwc->type_c ||
+ (mdwc->type_c && !mdwc->no_vbus_vote_type_c))) {
mdwc->vbus_reg = devm_regulator_get_optional(mdwc->dev,
"vbus_dwc3");
if (IS_ERR(mdwc->vbus_reg) &&
@@ -3705,7 +3739,7 @@ static int dwc3_otg_start_host(struct dwc3_msm *mdwc, int on)
pm_runtime_get_sync(mdwc->dev);
dbg_event(0xFF, "StrtHost gync",
atomic_read(&mdwc->dev->power.usage_count));
- if (!IS_ERR(mdwc->vbus_reg))
+ if (!IS_ERR_OR_NULL(mdwc->vbus_reg))
ret = regulator_enable(mdwc->vbus_reg);
if (ret) {
dev_err(mdwc->dev, "unable to enable vbus_reg\n");
@@ -3729,7 +3763,7 @@ static int dwc3_otg_start_host(struct dwc3_msm *mdwc, int on)
dev_err(mdwc->dev,
"%s: failed to add XHCI pdev ret=%d\n",
__func__, ret);
- if (!IS_ERR(mdwc->vbus_reg))
+ if (!IS_ERR_OR_NULL(mdwc->vbus_reg))
regulator_disable(mdwc->vbus_reg);
mdwc->hs_phy->flags &= ~PHY_HOST_MODE;
mdwc->ss_phy->flags &= ~PHY_HOST_MODE;
@@ -3770,7 +3804,7 @@ static int dwc3_otg_start_host(struct dwc3_msm *mdwc, int on)
dev_dbg(mdwc->dev, "%s: turn off host\n", __func__);
usb_unregister_atomic_notify(&mdwc->usbdev_nb);
- if (!IS_ERR(mdwc->vbus_reg))
+ if (!IS_ERR_OR_NULL(mdwc->vbus_reg))
ret = regulator_disable(mdwc->vbus_reg);
if (ret) {
dev_err(mdwc->dev, "unable to disable vbus_reg\n");
diff --git a/drivers/usb/dwc3/ep0.c b/drivers/usb/dwc3/ep0.c
index 1c33051..4e7de00 100644
--- a/drivers/usb/dwc3/ep0.c
+++ b/drivers/usb/dwc3/ep0.c
@@ -599,22 +599,30 @@ static int dwc3_ep0_set_config(struct dwc3 *dwc, struct usb_ctrlrequest *ctrl)
return -EINVAL;
case USB_STATE_ADDRESS:
- /* Read ep0IN related TXFIFO size */
- dwc->last_fifo_depth = (dwc3_readl(dwc->regs,
- DWC3_GTXFIFOSIZ(0)) & 0xFFFF);
- /* Clear existing allocated TXFIFO for all IN eps except ep0 */
- for (num = 0; num < dwc->num_in_eps; num++) {
- dep = dwc->eps[(num << 1) | 1];
- if (num) {
- dwc3_writel(dwc->regs, DWC3_GTXFIFOSIZ(num), 0);
- dep->fifo_depth = 0;
- } else {
- dep->fifo_depth = dwc->last_fifo_depth;
- }
+ /*
+	 * If the tx-fifo-resize flag is not set for the controller,
+	 * do not clear the existing TXFIFO allocation, since it is not
+	 * allocated again in dwc3_gadget_resize_tx_fifos
+ */
+ if (dwc->needs_fifo_resize) {
+ /* Read ep0IN related TXFIFO size */
+ dwc->last_fifo_depth = (dwc3_readl(dwc->regs,
+ DWC3_GTXFIFOSIZ(0)) & 0xFFFF);
+ /* Clear existing TXFIFO for all IN eps except ep0 */
+ for (num = 0; num < dwc->num_in_eps; num++) {
+ dep = dwc->eps[(num << 1) | 1];
+ if (num) {
+ dwc3_writel(dwc->regs,
+ DWC3_GTXFIFOSIZ(num), 0);
+ dep->fifo_depth = 0;
+ } else {
+ dep->fifo_depth = dwc->last_fifo_depth;
+ }
- dev_dbg(dwc->dev, "%s(): %s dep->fifo_depth:%x\n",
+ dev_dbg(dwc->dev, "%s(): %s fifo_depth:%x\n",
__func__, dep->name, dep->fifo_depth);
- dbg_event(0xFF, "fifo_reset", dep->number);
+ dbg_event(0xFF, "fifo_reset", dep->number);
+ }
}
ret = dwc3_ep0_delegate_req(dwc, ctrl);
diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
index 8c44f86..5571374 100644
--- a/drivers/usb/dwc3/gadget.c
+++ b/drivers/usb/dwc3/gadget.c
@@ -355,7 +355,7 @@ int dwc3_send_gadget_ep_cmd(struct dwc3_ep *dep, unsigned cmd,
}
}
- if (cmd == DWC3_DEPCMD_STARTTRANSFER) {
+ if (DWC3_DEPCMD_CMD(cmd) == DWC3_DEPCMD_STARTTRANSFER) {
int needs_wakeup;
needs_wakeup = (dwc->link_state == DWC3_LINK_STATE_U1 ||
@@ -423,6 +423,20 @@ int dwc3_send_gadget_ep_cmd(struct dwc3_ep *dep, unsigned cmd,
trace_dwc3_gadget_ep_cmd(dep, cmd, params, cmd_status);
+ if (ret == 0) {
+ switch (DWC3_DEPCMD_CMD(cmd)) {
+ case DWC3_DEPCMD_STARTTRANSFER:
+ dep->flags |= DWC3_EP_TRANSFER_STARTED;
+ break;
+ case DWC3_DEPCMD_ENDTRANSFER:
+ dep->flags &= ~DWC3_EP_TRANSFER_STARTED;
+ break;
+ default:
+ /* nothing */
+ break;
+ }
+ }
+
if (unlikely(susphy)) {
reg = dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0));
reg |= DWC3_GUSB2PHYCFG_SUSPHY;
@@ -1200,6 +1214,14 @@ static int __dwc3_gadget_kick_transfer(struct dwc3_ep *dep, u16 cmd_param)
return 0;
}
+static int __dwc3_gadget_get_frame(struct dwc3 *dwc)
+{
+ u32 reg;
+
+ reg = dwc3_readl(dwc->regs, DWC3_DSTS);
+ return DWC3_DSTS_SOFFN(reg);
+}
+
static void __dwc3_gadget_start_isoc(struct dwc3 *dwc,
struct dwc3_ep *dep, u32 cur_uf)
{
@@ -1214,8 +1236,11 @@ static void __dwc3_gadget_start_isoc(struct dwc3 *dwc,
return;
}
- /* 4 micro frames in the future */
- uf = cur_uf + dep->interval * 4;
+ /*
+ * Schedule the first trb for one interval in the future or at
+ * least 4 microframes.
+ */
+ uf = cur_uf + max_t(u32, 4, dep->interval);
ret = __dwc3_gadget_kick_transfer(dep, uf);
if (ret < 0)
@@ -1285,12 +1310,28 @@ static int __dwc3_gadget_ep_queue(struct dwc3_ep *dep, struct dwc3_request *req)
* errors which will force us issue EndTransfer command.
*/
if (usb_endpoint_xfer_isoc(dep->endpoint.desc)) {
- if ((dep->flags & DWC3_EP_PENDING_REQUEST) &&
- list_empty(&dep->started_list)) {
- dwc3_stop_active_transfer(dwc, dep->number, true);
- dep->flags = DWC3_EP_ENABLED;
+ if ((dep->flags & DWC3_EP_PENDING_REQUEST)) {
+ if (dep->flags & DWC3_EP_TRANSFER_STARTED) {
+ dwc3_stop_active_transfer(dwc, dep->number, true);
+ dep->flags = DWC3_EP_ENABLED;
+ } else {
+ u32 cur_uf;
+
+ cur_uf = __dwc3_gadget_get_frame(dwc);
+ __dwc3_gadget_start_isoc(dwc, dep, cur_uf);
+ dep->flags &= ~DWC3_EP_PENDING_REQUEST;
+ }
+ return 0;
}
- return 0;
+
+ if ((dep->flags & DWC3_EP_BUSY) &&
+ !(dep->flags & DWC3_EP_MISSED_ISOC)) {
+ WARN_ON_ONCE(!dep->resource_index);
+ ret = __dwc3_gadget_kick_transfer(dep,
+ dep->resource_index);
+ }
+
+ goto out;
}
if (!dwc3_calc_trbs_left(dep))
@@ -1301,6 +1342,7 @@ static int __dwc3_gadget_ep_queue(struct dwc3_ep *dep, struct dwc3_request *req)
dwc3_trace(trace_dwc3_gadget,
"%s: failed to kick transfers",
dep->name);
+out:
if (ret == -EBUSY)
ret = 0;
@@ -1635,10 +1677,8 @@ static const struct usb_ep_ops dwc3_gadget_ep_ops = {
static int dwc3_gadget_get_frame(struct usb_gadget *g)
{
struct dwc3 *dwc = gadget_to_dwc(g);
- u32 reg;
- reg = dwc3_readl(dwc->regs, DWC3_DSTS);
- return DWC3_DSTS_SOFFN(reg);
+ return __dwc3_gadget_get_frame(dwc);
}
static int __dwc3_gadget_wakeup(struct dwc3 *dwc)
@@ -2333,6 +2373,10 @@ static int dwc3_gadget_stop(struct usb_gadget *g)
dwc->gadget_driver = NULL;
spin_unlock_irqrestore(&dwc->lock, flags);
+ dbg_event(0xFF, "fwq_started", 0);
+ flush_workqueue(dwc->dwc_wq);
+ dbg_event(0xFF, "fwq_completed", 0);
+
return 0;
}
@@ -2827,43 +2871,55 @@ static void dwc3_endpoint_interrupt(struct dwc3 *dwc,
static void dwc3_disconnect_gadget(struct dwc3 *dwc)
{
+ struct usb_gadget_driver *gadget_driver;
+
if (dwc->gadget_driver && dwc->gadget_driver->disconnect) {
+ gadget_driver = dwc->gadget_driver;
spin_unlock(&dwc->lock);
dbg_event(0xFF, "DISCONNECT", 0);
- dwc->gadget_driver->disconnect(&dwc->gadget);
+ gadget_driver->disconnect(&dwc->gadget);
spin_lock(&dwc->lock);
}
}
static void dwc3_suspend_gadget(struct dwc3 *dwc)
{
+ struct usb_gadget_driver *gadget_driver;
+
if (dwc->gadget_driver && dwc->gadget_driver->suspend) {
+ gadget_driver = dwc->gadget_driver;
spin_unlock(&dwc->lock);
dbg_event(0xFF, "SUSPEND", 0);
- dwc->gadget_driver->suspend(&dwc->gadget);
+ gadget_driver->suspend(&dwc->gadget);
spin_lock(&dwc->lock);
}
}
static void dwc3_resume_gadget(struct dwc3 *dwc)
{
+ struct usb_gadget_driver *gadget_driver;
+
if (dwc->gadget_driver && dwc->gadget_driver->resume) {
+ gadget_driver = dwc->gadget_driver;
spin_unlock(&dwc->lock);
dbg_event(0xFF, "RESUME", 0);
- dwc->gadget_driver->resume(&dwc->gadget);
+ gadget_driver->resume(&dwc->gadget);
spin_lock(&dwc->lock);
}
}
static void dwc3_reset_gadget(struct dwc3 *dwc)
{
+ struct usb_gadget_driver *gadget_driver;
+
if (!dwc->gadget_driver)
return;
if (dwc->gadget.speed != USB_SPEED_UNKNOWN) {
+ gadget_driver = dwc->gadget_driver;
spin_unlock(&dwc->lock);
dbg_event(0xFF, "UDC RESET", 0);
- usb_gadget_udc_reset(&dwc->gadget, dwc->gadget_driver);
+ usb_gadget_udc_reset(&dwc->gadget, gadget_driver);
spin_lock(&dwc->lock);
}
}
diff --git a/drivers/usb/gadget/composite.c b/drivers/usb/gadget/composite.c
index 98509f2..7037696 100644
--- a/drivers/usb/gadget/composite.c
+++ b/drivers/usb/gadget/composite.c
@@ -35,6 +35,12 @@
(speed == USB_SPEED_SUPER ?\
SSUSB_GADGET_VBUS_DRAW : CONFIG_USB_GADGET_VBUS_DRAW)
+/* disable LPM by default */
+static bool disable_l1_for_hs = true;
+module_param(disable_l1_for_hs, bool, 0644);
+MODULE_PARM_DESC(disable_l1_for_hs,
+ "Disable support for L1 LPM for HS devices");
+
/**
* struct usb_os_string - represents OS String to be reported by a gadget
* @bLength: total length of the entire descritor, always 0x12
@@ -269,7 +275,7 @@ int usb_add_function(struct usb_configuration *config,
{
int value = -EINVAL;
- DBG(config->cdev, "adding '%s'/%p to config '%s'/%p\n",
+ DBG(config->cdev, "adding '%s'/%pK to config '%s'/%pK\n",
function->name, function,
config->label, config);
@@ -312,7 +318,7 @@ int usb_add_function(struct usb_configuration *config,
done:
if (value)
- DBG(config->cdev, "adding '%s'/%p --> %d\n",
+ DBG(config->cdev, "adding '%s'/%pK --> %d\n",
function->name, function, value);
return value;
}
@@ -933,7 +939,7 @@ static int set_config(struct usb_composite_dev *cdev,
result = f->set_alt(f, tmp, 0);
if (result < 0) {
- DBG(cdev, "interface %d (%s/%p) alt 0 --> %d\n",
+ DBG(cdev, "interface %d (%s/%pK) alt 0 --> %d\n",
tmp, f->name, f, result);
reset_config(cdev);
@@ -1006,7 +1012,7 @@ int usb_add_config(struct usb_composite_dev *cdev,
if (!bind)
goto done;
- DBG(cdev, "adding config #%u '%s'/%p\n",
+ DBG(cdev, "adding config #%u '%s'/%pK\n",
config->bConfigurationValue,
config->label, config);
@@ -1023,7 +1029,7 @@ int usb_add_config(struct usb_composite_dev *cdev,
struct usb_function, list);
list_del(&f->list);
if (f->unbind) {
- DBG(cdev, "unbind function '%s'/%p\n",
+ DBG(cdev, "unbind function '%s'/%pK\n",
f->name, f);
f->unbind(config, f);
/* may free memory for "f" */
@@ -1034,7 +1040,7 @@ int usb_add_config(struct usb_composite_dev *cdev,
} else {
unsigned i;
- DBG(cdev, "cfg %d/%p speeds:%s%s%s%s\n",
+ DBG(cdev, "cfg %d/%pK speeds:%s%s%s%s\n",
config->bConfigurationValue, config,
config->superspeed_plus ? " superplus" : "",
config->superspeed ? " super" : "",
@@ -1050,7 +1056,7 @@ int usb_add_config(struct usb_composite_dev *cdev,
if (!f)
continue;
- DBG(cdev, " interface %d = %s/%p\n",
+ DBG(cdev, " interface %d = %s/%pK\n",
i, f->name, f);
}
}
@@ -1076,14 +1082,14 @@ static void remove_config(struct usb_composite_dev *cdev,
struct usb_function, list);
list_del(&f->list);
if (f->unbind) {
- DBG(cdev, "unbind function '%s'/%p\n", f->name, f);
+ DBG(cdev, "unbind function '%s'/%pK\n", f->name, f);
f->unbind(config, f);
/* may free memory for "f" */
}
}
list_del(&config->list);
if (config->unbind) {
- DBG(cdev, "unbind config '%s'/%p\n", config->label, config);
+ DBG(cdev, "unbind config '%s'/%pK\n", config->label, config);
config->unbind(config);
/* may free memory for "c" */
}
@@ -1491,7 +1497,7 @@ static void composite_setup_complete(struct usb_ep *ep, struct usb_request *req)
else if (cdev->os_desc_req == req)
cdev->os_desc_pending = false;
else
- WARN(1, "unknown request %p\n", req);
+ WARN(1, "unknown request %pK\n", req);
}
static int composite_ep0_queue(struct usb_composite_dev *cdev,
@@ -1506,7 +1512,7 @@ static int composite_ep0_queue(struct usb_composite_dev *cdev,
else if (cdev->os_desc_req == req)
cdev->os_desc_pending = true;
else
- WARN(1, "unknown request %p\n", req);
+ WARN(1, "unknown request %pK\n", req);
}
return ret;
@@ -1718,10 +1724,10 @@ composite_setup(struct usb_gadget *gadget, const struct usb_ctrlrequest *ctrl)
if (gadget->speed >= USB_SPEED_SUPER) {
cdev->desc.bcdUSB = cpu_to_le16(0x0310);
cdev->desc.bMaxPacketSize0 = 9;
- } else {
+ } else if (!disable_l1_for_hs) {
cdev->desc.bcdUSB = cpu_to_le16(0x0210);
}
- } else if (gadget->l1_supported) {
+ } else if (!disable_l1_for_hs) {
cdev->desc.bcdUSB = cpu_to_le16(0x0210);
DBG(cdev, "Config HS device with LPM(L1)\n");
}
@@ -1755,7 +1761,7 @@ composite_setup(struct usb_gadget *gadget, const struct usb_ctrlrequest *ctrl)
break;
case USB_DT_BOS:
if (gadget_is_superspeed(gadget) ||
- gadget->l1_supported) {
+ !disable_l1_for_hs) {
value = bos_desc(cdev);
value = min(w_length, (u16) value);
}
@@ -2545,7 +2551,13 @@ void usb_composite_setup_continue(struct usb_composite_dev *cdev)
spin_lock_irqsave(&cdev->lock, flags);
if (cdev->delayed_status == 0) {
+ if (!cdev->config) {
+ spin_unlock_irqrestore(&cdev->lock, flags);
+ return;
+ }
+ spin_unlock_irqrestore(&cdev->lock, flags);
WARN(cdev, "%s: Unexpected call\n", __func__);
+ return;
} else if (--cdev->delayed_status == 0) {
DBG(cdev, "%s: Completing delayed status\n", __func__);
diff --git a/drivers/usb/gadget/configfs.c b/drivers/usb/gadget/configfs.c
index 885ed26..16b6619 100644
--- a/drivers/usb/gadget/configfs.c
+++ b/drivers/usb/gadget/configfs.c
@@ -14,11 +14,16 @@
#include <linux/kdev_t.h>
#include <linux/usb/ch9.h>
+#ifdef CONFIG_USB_F_NCM
+#include <function/u_ncm.h>
+#endif
+
#ifdef CONFIG_USB_CONFIGFS_F_ACC
extern int acc_ctrlrequest(struct usb_composite_dev *cdev,
const struct usb_ctrlrequest *ctrl);
void acc_disconnect(void);
#endif
+
static struct class *android_class;
static struct device *android_device;
static int index;
@@ -84,6 +89,7 @@ struct gadget_info {
struct usb_composite_driver composite;
struct usb_composite_dev cdev;
bool use_os_desc;
+ bool unbinding;
char b_vendor_code;
char qw_sign[OS_STRING_QW_SIGN_LEN];
#ifdef CONFIG_USB_CONFIGFS_UEVENT
@@ -157,7 +163,7 @@ static int usb_string_copy(const char *s, char **s_copy)
if (!str)
return -ENOMEM;
}
- strncpy(str, s, MAX_USB_STRING_WITH_NULL_LEN);
+ strlcpy(str, s, MAX_USB_STRING_WITH_NULL_LEN);
if (str[ret - 1] == '\n')
str[ret - 1] = '\0';
*s_copy = str;
@@ -281,9 +287,12 @@ static int unregister_gadget(struct gadget_info *gi)
if (!gi->composite.gadget_driver.udc_name)
return -ENODEV;
+ gi->unbinding = true;
ret = usb_gadget_unregister_driver(&gi->composite.gadget_driver);
if (ret)
return ret;
+
+ gi->unbinding = false;
kfree(gi->composite.gadget_driver.udc_name);
gi->composite.gadget_driver.udc_name = NULL;
return 0;
@@ -1252,7 +1261,7 @@ static void purge_configs_funcs(struct gadget_info *gi)
list_move_tail(&f->list, &cfg->func_list);
if (f->unbind) {
dev_dbg(&gi->cdev.gadget->dev,
- "unbind function '%s'/%p\n",
+ "unbind function '%s'/%pK\n",
f->name, f);
f->unbind(c, f);
}
@@ -1455,7 +1464,7 @@ static void android_work(struct work_struct *data)
}
if (!uevent_sent) {
- pr_info("%s: did not send uevent (%d %d %p)\n", __func__,
+ pr_info("%s: did not send uevent (%d %d %pK)\n", __func__,
gi->connected, gi->sw_connected, cdev->config);
}
}
@@ -1504,6 +1513,18 @@ static int android_setup(struct usb_gadget *gadget,
}
}
+#ifdef CONFIG_USB_F_NCM
+ if (value < 0)
+ value = ncm_ctrlrequest(cdev, c);
+
+ /*
+	 * For the mirror link command case: if the request has already
+	 * been handled, do not pass it on to composite_setup
+ */
+ if (value == 0)
+ return value;
+#endif
+
#ifdef CONFIG_USB_CONFIGFS_F_ACC
if (value < 0)
value = acc_ctrlrequest(cdev, c);
@@ -1555,7 +1576,8 @@ static void android_disconnect(struct usb_gadget *gadget)
acc_disconnect();
#endif
gi->connected = 0;
- schedule_work(&gi->work);
+ if (!gi->unbinding)
+ schedule_work(&gi->work);
composite_disconnect(gadget);
}
#endif
diff --git a/drivers/usb/gadget/function/f_accessory.c b/drivers/usb/gadget/function/f_accessory.c
index 9240956..899cbf1 100644
--- a/drivers/usb/gadget/function/f_accessory.c
+++ b/drivers/usb/gadget/function/f_accessory.c
@@ -564,7 +564,7 @@ static int create_bulk_endpoints(struct acc_dev *dev,
struct usb_ep *ep;
int i;
- DBG(cdev, "create_bulk_endpoints dev: %p\n", dev);
+ DBG(cdev, "create_bulk_endpoints dev: %pK\n", dev);
ep = usb_ep_autoconfig(cdev->gadget, in_desc);
if (!ep) {
@@ -655,7 +655,7 @@ static ssize_t acc_read(struct file *fp, char __user *buf,
r = -EIO;
goto done;
} else {
- pr_debug("rx %p queue\n", req);
+ pr_debug("rx %pK queue\n", req);
}
/* wait for a request to complete */
@@ -678,7 +678,7 @@ static ssize_t acc_read(struct file *fp, char __user *buf,
if (req->actual == 0)
goto requeue_req;
- pr_debug("rx %p %u\n", req, req->actual);
+ pr_debug("rx %pK %u\n", req, req->actual);
xfer = (req->actual < count) ? req->actual : count;
r = xfer;
if (copy_to_user(buf, req->buf, xfer))
@@ -993,7 +993,7 @@ __acc_function_bind(struct usb_configuration *c,
int id;
int ret;
- DBG(cdev, "acc_function_bind dev: %p\n", dev);
+ DBG(cdev, "acc_function_bind dev: %pK\n", dev);
if (configfs) {
if (acc_string_defs[INTERFACE_STRING_INDEX].id == 0) {
@@ -1180,7 +1180,7 @@ static void acc_hid_work(struct work_struct *data)
list_for_each_safe(entry, temp, &new_list) {
hid = list_entry(entry, struct acc_hid_dev, list);
if (acc_hid_init(hid)) {
- pr_err("can't add HID device %p\n", hid);
+ pr_err("can't add HID device %pK\n", hid);
acc_hid_delete(hid);
} else {
spin_lock_irqsave(&dev->lock, flags);
diff --git a/drivers/usb/gadget/function/f_acm.c b/drivers/usb/gadget/function/f_acm.c
index 5e3828d..4f2b847 100644
--- a/drivers/usb/gadget/function/f_acm.c
+++ b/drivers/usb/gadget/function/f_acm.c
@@ -704,7 +704,7 @@ acm_bind(struct usb_configuration *c, struct usb_function *f)
if (acm->notify_req)
gs_free_req(acm->notify, acm->notify_req);
- ERROR(cdev, "%s/%p: can't bind, err %d\n", f->name, f, status);
+ ERROR(cdev, "%s/%pK: can't bind, err %d\n", f->name, f, status);
return status;
}
diff --git a/drivers/usb/gadget/function/f_cdev.c b/drivers/usb/gadget/function/f_cdev.c
index 5804840..4cb2817 100644
--- a/drivers/usb/gadget/function/f_cdev.c
+++ b/drivers/usb/gadget/function/f_cdev.c
@@ -851,7 +851,7 @@ static int usb_cser_alloc_requests(struct usb_ep *ep, struct list_head *head,
int i;
struct usb_request *req;
- pr_debug("ep:%p head:%p num:%d size:%d cb:%p",
+ pr_debug("ep:%pK head:%p num:%d size:%d cb:%p",
ep, head, num, size, cb);
for (i = 0; i < num; i++) {
@@ -901,7 +901,7 @@ static void usb_cser_start_rx(struct f_cdev *port)
ret = usb_ep_queue(ep, req, GFP_KERNEL);
spin_lock_irqsave(&port->port_lock, flags);
if (ret) {
- pr_err("port(%d):%p usb ep(%s) queue failed\n",
+ pr_err("port(%d):%pK usb ep(%s) queue failed\n",
port->port_num, port, ep->name);
list_add(&req->list, pool);
break;
@@ -916,7 +916,7 @@ static void usb_cser_read_complete(struct usb_ep *ep, struct usb_request *req)
struct f_cdev *port = ep->driver_data;
unsigned long flags;
- pr_debug("ep:(%p)(%s) port:%p req_status:%d req->actual:%u\n",
+ pr_debug("ep:(%pK)(%s) port:%p req_status:%d req->actual:%u\n",
ep, ep->name, port, req->status, req->actual);
if (!port) {
pr_err("port is null\n");
@@ -942,7 +942,7 @@ static void usb_cser_write_complete(struct usb_ep *ep, struct usb_request *req)
unsigned long flags;
struct f_cdev *port = ep->driver_data;
- pr_debug("ep:(%p)(%s) port:%p req_stats:%d\n",
+ pr_debug("ep:(%pK)(%s) port:%p req_stats:%d\n",
ep, ep->name, port, req->status);
if (!port) {
@@ -975,7 +975,7 @@ static void usb_cser_start_io(struct f_cdev *port)
int ret = -ENODEV;
unsigned long flags;
- pr_debug("port: %p\n", port);
+ pr_debug("port: %pK\n", port);
spin_lock_irqsave(&port->port_lock, flags);
if (!port->is_connected)
@@ -1018,7 +1018,7 @@ static void usb_cser_stop_io(struct f_cdev *port)
struct usb_ep *out;
unsigned long flags;
- pr_debug("port:%p\n", port);
+ pr_debug("port:%pK\n", port);
in = port->port_usb.in;
out = port->port_usb.out;
@@ -1061,7 +1061,7 @@ int f_cdev_open(struct inode *inode, struct file *file)
}
file->private_data = port;
- pr_debug("opening port(%s)(%p)\n", port->name, port);
+ pr_debug("opening port(%s)(%pK)\n", port->name, port);
ret = wait_event_interruptible(port->open_wq,
port->is_connected);
if (ret) {
@@ -1074,7 +1074,7 @@ int f_cdev_open(struct inode *inode, struct file *file)
spin_unlock_irqrestore(&port->port_lock, flags);
usb_cser_start_rx(port);
- pr_debug("port(%s)(%p) open is success\n", port->name, port);
+ pr_debug("port(%s)(%pK) open is success\n", port->name, port);
return 0;
}
@@ -1094,7 +1094,7 @@ int f_cdev_release(struct inode *inode, struct file *file)
port->port_open = false;
port->cbits_updated = false;
spin_unlock_irqrestore(&port->port_lock, flags);
- pr_debug("port(%s)(%p) is closed.\n", port->name, port);
+ pr_debug("port(%s)(%pK) is closed.\n", port->name, port);
return 0;
}
@@ -1118,7 +1118,7 @@ ssize_t f_cdev_read(struct file *file,
return -EINVAL;
}
- pr_debug("read on port(%s)(%p) count:%zu\n", port->name, port, count);
+ pr_debug("read on port(%s)(%pK) count:%zu\n", port->name, port, count);
spin_lock_irqsave(&port->port_lock, flags);
current_rx_req = port->current_rx_req;
pending_rx_bytes = port->pending_rx_bytes;
@@ -1219,7 +1219,7 @@ ssize_t f_cdev_write(struct file *file,
}
spin_lock_irqsave(&port->port_lock, flags);
- pr_debug("write on port(%s)(%p)\n", port->name, port);
+ pr_debug("write on port(%s)(%pK)\n", port->name, port);
if (!port->is_connected) {
spin_unlock_irqrestore(&port->port_lock, flags);
@@ -1388,7 +1388,7 @@ static long f_cdev_ioctl(struct file *fp, unsigned int cmd,
case TIOCMBIC:
case TIOCMBIS:
case TIOCMSET:
- pr_debug("TIOCMSET on port(%s)%p\n", port->name, port);
+ pr_debug("TIOCMSET on port(%s)%pK\n", port->name, port);
i = get_user(val, (uint32_t *)arg);
if (i) {
pr_err("Error getting TIOCMSET value\n");
@@ -1397,7 +1397,7 @@ static long f_cdev_ioctl(struct file *fp, unsigned int cmd,
ret = f_cdev_tiocmset(port, val, ~val);
break;
case TIOCMGET:
- pr_debug("TIOCMGET on port(%s)%p\n", port->name, port);
+ pr_debug("TIOCMGET on port(%s)%pK\n", port->name, port);
ret = f_cdev_tiocmget(port);
if (ret >= 0) {
ret = put_user(ret, (uint32_t *)arg);
@@ -1447,14 +1447,14 @@ int usb_cser_connect(struct f_cdev *port)
return -ENODEV;
}
- pr_debug("port(%s) (%p)\n", port->name, port);
+ pr_debug("port(%s) (%pK)\n", port->name, port);
cser = &port->port_usb;
cser->notify_modem = usb_cser_notify_modem;
ret = usb_ep_enable(cser->in);
if (ret) {
- pr_err("usb_ep_enable failed eptype:IN ep:%p, err:%d",
+ pr_err("usb_ep_enable failed eptype:IN ep:%pK, err:%d",
cser->in, ret);
return ret;
}
@@ -1462,7 +1462,7 @@ int usb_cser_connect(struct f_cdev *port)
ret = usb_ep_enable(cser->out);
if (ret) {
- pr_err("usb_ep_enable failed eptype:OUT ep:%p, err: %d",
+ pr_err("usb_ep_enable failed eptype:OUT ep:%pK, err: %d",
cser->out, ret);
cser->in->driver_data = 0;
return ret;
@@ -1574,7 +1574,7 @@ static struct f_cdev *f_cdev_alloc(char *func_name, int portno)
goto err_create_dev;
}
- pr_info("port_name:%s (%p) portno:(%d)\n",
+ pr_info("port_name:%s (%pK) portno:(%d)\n",
port->name, port, port->port_num);
return port;
diff --git a/drivers/usb/gadget/function/f_diag.c b/drivers/usb/gadget/function/f_diag.c
index be22de0..f08e443 100644
--- a/drivers/usb/gadget/function/f_diag.c
+++ b/drivers/usb/gadget/function/f_diag.c
@@ -291,7 +291,7 @@ static void diag_update_pid_and_serial_num(struct diag_context *ctxt)
}
update_dload:
- pr_debug("%s: dload:%p pid:%x serial_num:%s\n",
+ pr_debug("%s: dload:%pK pid:%x serial_num:%s\n",
__func__, diag_dload, local_diag_dload.pid,
local_diag_dload.serial_number);
diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
index c4d4781..f0042ec 100644
--- a/drivers/usb/gadget/function/f_fs.c
+++ b/drivers/usb/gadget/function/f_fs.c
@@ -766,7 +766,7 @@ static void ffs_epfile_io_complete(struct usb_ep *_ep, struct usb_request *req)
if (ep && ep->req && likely(req->context)) {
struct ffs_ep *ep = _ep->driver_data;
ep->status = req->status ? req->status : req->actual;
- ffs_log("ep status %d for req %p", ep->status, req);
+ ffs_log("ep status %d for req %pK", ep->status, req);
/* Set is_busy false to indicate completion of last request */
ep->is_busy = false;
complete(req->context);
@@ -1912,7 +1912,7 @@ static void ffs_data_clear(struct ffs_data *ffs)
ffs_log("enter: state %d setup_state %d flag %lu", ffs->state,
ffs->setup_state, ffs->flags);
- pr_debug("%s: ffs->gadget= %p, ffs->flags= %lu\n",
+ pr_debug("%s: ffs->gadget= %pK, ffs->flags= %lu\n",
__func__, ffs->gadget, ffs->flags);
ffs_closed(ffs);
@@ -2003,7 +2003,7 @@ static int functionfs_bind(struct ffs_data *ffs, struct usb_composite_dev *cdev)
ffs->gadget = cdev->gadget;
- ffs_log("exit: state %d setup_state %d flag %lu gadget %p\n",
+ ffs_log("exit: state %d setup_state %d flag %lu gadget %pK\n",
ffs->state, ffs->setup_state, ffs->flags, ffs->gadget);
ffs_data_get(ffs);
@@ -2019,7 +2019,7 @@ static void functionfs_unbind(struct ffs_data *ffs)
ffs->ep0req = NULL;
ffs->gadget = NULL;
clear_bit(FFS_FL_BOUND, &ffs->flags);
- ffs_log("state %d setup_state %d flag %lu gadget %p\n",
+ ffs_log("state %d setup_state %d flag %lu gadget %pK\n",
ffs->state, ffs->setup_state, ffs->flags, ffs->gadget);
ffs_data_put(ffs);
}
@@ -4175,6 +4175,7 @@ static void ffs_closed(struct ffs_data *ffs)
}
ffs_obj->desc_ready = false;
+ ffs_obj->ffs_data = NULL;
if (test_and_clear_bit(FFS_FL_CALL_CLOSED_CALLBACK, &ffs->flags) &&
ffs_obj->ffs_closed_callback)
diff --git a/drivers/usb/gadget/function/f_gsi.c b/drivers/usb/gadget/function/f_gsi.c
index a26d6df..71e84fd 100644
--- a/drivers/usb/gadget/function/f_gsi.c
+++ b/drivers/usb/gadget/function/f_gsi.c
@@ -930,7 +930,7 @@ static int gsi_ctrl_dev_open(struct inode *ip, struct file *fp)
struct gsi_inst_status *inst_cur;
if (!c_port) {
- pr_err_ratelimited("%s: gsi ctrl port %p", __func__, c_port);
+ pr_err_ratelimited("%s: gsi ctrl port %pK", __func__, c_port);
return -ENODEV;
}
@@ -1019,7 +1019,7 @@ gsi_ctrl_dev_read(struct file *fp, char __user *buf, size_t count, loff_t *pos)
gsi = inst_cur->opts->gsi;
c_port = &inst_cur->opts->gsi->c_port;
if (!c_port) {
- log_event_err("%s: gsi ctrl port %p", __func__, c_port);
+ log_event_err("%s: gsi ctrl port %pK", __func__, c_port);
return -ENODEV;
}
@@ -1108,7 +1108,7 @@ static ssize_t gsi_ctrl_dev_write(struct file *fp, const char __user *buf,
req = c_port->notify_req;
if (!c_port || !req || !req->buf) {
- log_event_err("%s: c_port %p req %p req->buf %p",
+ log_event_err("%s: c_port %pK req %pK req->buf %pK",
__func__, c_port, req, req ? req->buf : req);
return -ENODEV;
}
@@ -1186,7 +1186,7 @@ static long gsi_ctrl_dev_ioctl(struct file *fp, unsigned int cmd,
c_port = &gsi->c_port;
if (!c_port) {
- log_event_err("%s: gsi ctrl port %p", __func__, c_port);
+ log_event_err("%s: gsi ctrl port %pK", __func__, c_port);
return -ENODEV;
}
@@ -1325,7 +1325,7 @@ static unsigned int gsi_ctrl_dev_poll(struct file *fp, poll_table *wait)
gsi = inst_cur->opts->gsi;
c_port = &inst_cur->opts->gsi->c_port;
if (!c_port) {
- log_event_err("%s: gsi ctrl port %p", __func__, c_port);
+ log_event_err("%s: gsi ctrl port %pK", __func__, c_port);
return -ENODEV;
}
@@ -1454,7 +1454,7 @@ void gsi_rndis_flow_ctrl_enable(bool enable, struct rndis_params *param)
struct gsi_data_port *d_port;
if (!gsi) {
- pr_err("%s: gsi prot ctx is %p", __func__, gsi);
+ pr_err("%s: gsi prot ctx is %pK", __func__, gsi);
return;
}
@@ -1678,7 +1678,7 @@ gsi_ctrl_set_ntb_cmd_complete(struct usb_ep *ep, struct usb_request *req)
struct f_gsi *gsi = req->context;
struct gsi_ntb_info *ntb = NULL;
- log_event_dbg("dev:%p", gsi);
+ log_event_dbg("dev:%pK", gsi);
req->context = NULL;
if (req->status || req->actual != req->length) {
diff --git a/drivers/usb/gadget/function/f_gsi.h b/drivers/usb/gadget/function/f_gsi.h
index d6bf0f4..bdd0dfa 100644
--- a/drivers/usb/gadget/function/f_gsi.h
+++ b/drivers/usb/gadget/function/f_gsi.h
@@ -586,7 +586,7 @@ static struct usb_endpoint_descriptor rndis_gsi_fs_out_desc = {
};
static struct usb_descriptor_header *gsi_eth_fs_function[] = {
- (struct usb_descriptor_header *) &gsi_eth_fs_function,
+ (struct usb_descriptor_header *) &rndis_gsi_iad_descriptor,
/* control interface matches ACM, not Ethernet */
(struct usb_descriptor_header *) &rndis_gsi_control_intf,
(struct usb_descriptor_header *) &rndis_gsi_header_desc,
diff --git a/drivers/usb/gadget/function/f_mtp.c b/drivers/usb/gadget/function/f_mtp.c
index ca8ed69..aea32e4 100644
--- a/drivers/usb/gadget/function/f_mtp.c
+++ b/drivers/usb/gadget/function/f_mtp.c
@@ -499,7 +499,7 @@ static int mtp_create_bulk_endpoints(struct mtp_dev *dev,
struct usb_ep *ep;
int i;
- DBG(cdev, "create_bulk_endpoints dev: %p\n", dev);
+ DBG(cdev, "create_bulk_endpoints dev: %pK\n", dev);
ep = usb_ep_autoconfig(cdev->gadget, in_desc);
if (!ep) {
@@ -644,7 +644,7 @@ static ssize_t mtp_read(struct file *fp, char __user *buf,
r = -EIO;
goto done;
} else {
- DBG(cdev, "rx %p queue\n", req);
+ DBG(cdev, "rx %pK queue\n", req);
}
/* wait for a request to complete */
@@ -670,7 +670,7 @@ static ssize_t mtp_read(struct file *fp, char __user *buf,
if (req->actual == 0)
goto requeue_req;
- DBG(cdev, "rx %p %d\n", req, req->actual);
+ DBG(cdev, "rx %pK %d\n", req, req->actual);
xfer = (req->actual < count) ? req->actual : count;
r = xfer;
if (copy_to_user(buf, req->buf, xfer))
@@ -955,7 +955,7 @@ static void receive_file_work(struct work_struct *data)
}
if (write_req) {
- DBG(cdev, "rx %p %d\n", write_req, write_req->actual);
+ DBG(cdev, "rx %pK %d\n", write_req, write_req->actual);
start_time = ktime_get();
mutex_lock(&dev->read_mutex);
if (dev->state == STATE_OFFLINE) {
@@ -1410,7 +1410,7 @@ mtp_function_bind(struct usb_configuration *c, struct usb_function *f)
struct mtp_instance *fi_mtp;
dev->cdev = cdev;
- DBG(cdev, "mtp_function_bind dev: %p\n", dev);
+ DBG(cdev, "mtp_function_bind dev: %pK\n", dev);
/* allocate interface ID(s) */
id = usb_interface_id(c, f);
diff --git a/drivers/usb/gadget/function/f_ncm.c b/drivers/usb/gadget/function/f_ncm.c
index d2fbed7..98e353d 100644
--- a/drivers/usb/gadget/function/f_ncm.c
+++ b/drivers/usb/gadget/function/f_ncm.c
@@ -1605,10 +1605,57 @@ static struct config_item_type ncm_func_type = {
.ct_owner = THIS_MODULE,
};
+#ifdef CONFIG_USB_CONFIGFS_UEVENT
+
+struct ncm_setup_desc {
+ struct work_struct work;
+ struct device *device;
+ uint8_t major; // Mirror Link major version
+ uint8_t minor; // Mirror Link minor version
+};
+
+static struct ncm_setup_desc *_ncm_setup_desc;
+
+#define MIRROR_LINK_STRING_LENGTH_MAX 32
+static void ncm_setup_work(struct work_struct *data)
+{
+ char mirror_link_string[MIRROR_LINK_STRING_LENGTH_MAX];
+ char *envp[2] = { mirror_link_string, NULL };
+
+ snprintf(mirror_link_string, MIRROR_LINK_STRING_LENGTH_MAX,
+ "MirrorLink=V%d.%d",
+ _ncm_setup_desc->major, _ncm_setup_desc->minor);
+ kobject_uevent_env(&_ncm_setup_desc->device->kobj, KOBJ_CHANGE, envp);
+}
+
+int ncm_ctrlrequest(struct usb_composite_dev *cdev,
+ const struct usb_ctrlrequest *ctrl)
+{
+ int value = -EOPNOTSUPP;
+
+ if (ctrl->bRequestType == 0x40 && ctrl->bRequest == 0xF0) {
+ _ncm_setup_desc->minor = (uint8_t)(ctrl->wValue >> 8);
+ _ncm_setup_desc->major = (uint8_t)(ctrl->wValue & 0xFF);
+ schedule_work(&_ncm_setup_desc->work);
+ value = 0;
+ }
+
+ return value;
+}
+#endif
+
static void ncm_free_inst(struct usb_function_instance *f)
{
struct f_ncm_opts *opts;
+#ifdef CONFIG_USB_CONFIGFS_UEVENT
+ /* release _ncm_setup_desc related resource */
+ device_destroy(_ncm_setup_desc->device->class,
+ _ncm_setup_desc->device->devt);
+ cancel_work(&_ncm_setup_desc->work);
+ kfree(_ncm_setup_desc);
+#endif
+
opts = container_of(f, struct f_ncm_opts, func_inst);
if (opts->bound)
gether_cleanup(netdev_priv(opts->net));
@@ -1627,6 +1674,14 @@ static struct usb_function_instance *ncm_alloc_inst(void)
config_group_init_type_name(&opts->func_inst.group, "", &ncm_func_type);
+#ifdef CONFIG_USB_CONFIGFS_UEVENT
+ _ncm_setup_desc = kzalloc(sizeof(*_ncm_setup_desc), GFP_KERNEL);
+ if (!_ncm_setup_desc)
+ return ERR_PTR(-ENOMEM);
+ INIT_WORK(&_ncm_setup_desc->work, ncm_setup_work);
+ _ncm_setup_desc->device = create_function_device("f_ncm");
+#endif
+
return &opts->func_inst;
}
diff --git a/drivers/usb/gadget/function/f_obex.c b/drivers/usb/gadget/function/f_obex.c
index d43e86c..649ff4d 100644
--- a/drivers/usb/gadget/function/f_obex.c
+++ b/drivers/usb/gadget/function/f_obex.c
@@ -377,7 +377,7 @@ static int obex_bind(struct usb_configuration *c, struct usb_function *f)
return 0;
fail:
- ERROR(cdev, "%s/%p: can't bind, err %d\n", f->name, f, status);
+ ERROR(cdev, "%s/%pK: can't bind, err %d\n", f->name, f, status);
return status;
}
diff --git a/drivers/usb/gadget/function/f_uac2.c b/drivers/usb/gadget/function/f_uac2.c
index 969cfe7..c330814 100644
--- a/drivers/usb/gadget/function/f_uac2.c
+++ b/drivers/usb/gadget/function/f_uac2.c
@@ -23,7 +23,7 @@
#include "u_uac2.h"
/* Keep everyone on toes */
-#define USB_XFERS 2
+#define USB_XFERS 8
/*
* The driver implements a simple UAC_2 topology.
@@ -54,6 +54,10 @@
#define UNFLW_CTRL 8
#define OVFLW_CTRL 10
+static bool enable_capture;
+module_param(enable_capture, bool, 0644);
+MODULE_PARM_DESC(enable_capture, "Enable USB Peripheral speaker function");
+
static const char *uac2_name = "snd_uac2";
struct uac2_req {
@@ -126,8 +130,11 @@ struct audio_dev {
struct usb_ep *in_ep, *out_ep;
struct usb_function func;
+ bool enable_capture;
+
/* The ALSA Sound Card it represents on the USB-Client side */
struct snd_uac2_chip uac2;
+ struct device *gdev;
};
static inline
@@ -457,7 +464,7 @@ static int snd_uac2_probe(struct platform_device *pdev)
c_chmask = opts->c_chmask;
/* Choose any slot, with no id */
- err = snd_card_new(&pdev->dev, -1, NULL, THIS_MODULE, 0, &card);
+ err = snd_card_new(audio_dev->gdev, -1, NULL, THIS_MODULE, 0, &card);
if (err < 0)
return err;
@@ -468,7 +475,9 @@ static int snd_uac2_probe(struct platform_device *pdev)
* Create a substream only for non-zero channel streams
*/
err = snd_pcm_new(uac2->card, "UAC2 PCM", 0,
- p_chmask ? 1 : 0, c_chmask ? 1 : 0, &pcm);
+ p_chmask ? 1 : 0,
+ (c_chmask && audio_dev->enable_capture) ? 1 : 0,
+ &pcm);
if (err < 0)
goto snd_fail;
@@ -779,6 +788,13 @@ static struct usb_endpoint_descriptor hs_epout_desc = {
.bInterval = 4,
};
+static struct usb_ss_ep_comp_descriptor ss_epout_comp_desc = {
+ .bLength = sizeof(ss_epout_comp_desc),
+ .bDescriptorType = USB_DT_SS_ENDPOINT_COMP,
+
+ .wBytesPerInterval = cpu_to_le16(1024),
+};
+
/* CS AS ISO OUT Endpoint */
static struct uac2_iso_endpoint_descriptor as_iso_out_desc = {
.bLength = sizeof as_iso_out_desc,
@@ -856,6 +872,13 @@ static struct usb_endpoint_descriptor hs_epin_desc = {
.bInterval = 4,
};
+static struct usb_ss_ep_comp_descriptor ss_epin_comp_desc = {
+ .bLength = sizeof(ss_epin_comp_desc),
+ .bDescriptorType = USB_DT_SS_ENDPOINT_COMP,
+
+ .wBytesPerInterval = cpu_to_le16(1024),
+};
+
/* CS AS ISO IN Endpoint */
static struct uac2_iso_endpoint_descriptor as_iso_in_desc = {
.bLength = sizeof as_iso_in_desc,
@@ -898,6 +921,25 @@ static struct usb_descriptor_header *fs_audio_desc[] = {
NULL,
};
+static struct usb_descriptor_header *fs_playback_audio_desc[] = {
+ (struct usb_descriptor_header *)&iad_desc,
+ (struct usb_descriptor_header *)&std_ac_if_desc,
+
+ (struct usb_descriptor_header *)&ac_hdr_desc,
+ (struct usb_descriptor_header *)&in_clk_src_desc,
+ (struct usb_descriptor_header *)&io_in_it_desc,
+ (struct usb_descriptor_header *)&usb_in_ot_desc,
+
+ (struct usb_descriptor_header *)&std_as_in_if0_desc,
+ (struct usb_descriptor_header *)&std_as_in_if1_desc,
+
+ (struct usb_descriptor_header *)&as_in_hdr_desc,
+ (struct usb_descriptor_header *)&as_in_fmt1_desc,
+ (struct usb_descriptor_header *)&fs_epin_desc,
+ (struct usb_descriptor_header *)&as_iso_in_desc,
+ NULL,
+};
+
static struct usb_descriptor_header *hs_audio_desc[] = {
(struct usb_descriptor_header *)&iad_desc,
(struct usb_descriptor_header *)&std_ac_if_desc,
@@ -928,6 +970,77 @@ static struct usb_descriptor_header *hs_audio_desc[] = {
NULL,
};
+static struct usb_descriptor_header *hs_playback_audio_desc[] = {
+ (struct usb_descriptor_header *)&iad_desc,
+ (struct usb_descriptor_header *)&std_ac_if_desc,
+
+ (struct usb_descriptor_header *)&ac_hdr_desc,
+ (struct usb_descriptor_header *)&in_clk_src_desc,
+ (struct usb_descriptor_header *)&io_in_it_desc,
+ (struct usb_descriptor_header *)&usb_in_ot_desc,
+
+ (struct usb_descriptor_header *)&std_as_in_if0_desc,
+ (struct usb_descriptor_header *)&std_as_in_if1_desc,
+
+ (struct usb_descriptor_header *)&as_in_hdr_desc,
+ (struct usb_descriptor_header *)&as_in_fmt1_desc,
+ (struct usb_descriptor_header *)&hs_epin_desc,
+ (struct usb_descriptor_header *)&as_iso_in_desc,
+ NULL,
+};
+
+static struct usb_descriptor_header *ss_audio_desc[] = {
+ (struct usb_descriptor_header *)&iad_desc,
+ (struct usb_descriptor_header *)&std_ac_if_desc,
+
+ (struct usb_descriptor_header *)&ac_hdr_desc,
+ (struct usb_descriptor_header *)&in_clk_src_desc,
+ (struct usb_descriptor_header *)&out_clk_src_desc,
+ (struct usb_descriptor_header *)&usb_out_it_desc,
+ (struct usb_descriptor_header *)&io_in_it_desc,
+ (struct usb_descriptor_header *)&usb_in_ot_desc,
+ (struct usb_descriptor_header *)&io_out_ot_desc,
+
+ (struct usb_descriptor_header *)&std_as_out_if0_desc,
+ (struct usb_descriptor_header *)&std_as_out_if1_desc,
+
+ (struct usb_descriptor_header *)&as_out_hdr_desc,
+ (struct usb_descriptor_header *)&as_out_fmt1_desc,
+ (struct usb_descriptor_header *)&hs_epout_desc,
+ (struct usb_descriptor_header *)&ss_epout_comp_desc,
+ (struct usb_descriptor_header *)&as_iso_out_desc,
+
+ (struct usb_descriptor_header *)&std_as_in_if0_desc,
+ (struct usb_descriptor_header *)&std_as_in_if1_desc,
+
+ (struct usb_descriptor_header *)&as_in_hdr_desc,
+ (struct usb_descriptor_header *)&as_in_fmt1_desc,
+ (struct usb_descriptor_header *)&hs_epin_desc,
+ (struct usb_descriptor_header *)&ss_epin_comp_desc,
+ (struct usb_descriptor_header *)&as_iso_in_desc,
+ NULL,
+};
+
+static struct usb_descriptor_header *ss_playback_audio_desc[] = {
+ (struct usb_descriptor_header *)&iad_desc,
+ (struct usb_descriptor_header *)&std_ac_if_desc,
+
+ (struct usb_descriptor_header *)&ac_hdr_desc,
+ (struct usb_descriptor_header *)&in_clk_src_desc,
+ (struct usb_descriptor_header *)&io_in_it_desc,
+ (struct usb_descriptor_header *)&usb_in_ot_desc,
+
+ (struct usb_descriptor_header *)&std_as_in_if0_desc,
+ (struct usb_descriptor_header *)&std_as_in_if1_desc,
+
+ (struct usb_descriptor_header *)&as_in_hdr_desc,
+ (struct usb_descriptor_header *)&as_in_fmt1_desc,
+ (struct usb_descriptor_header *)&hs_epin_desc,
+ (struct usb_descriptor_header *)&ss_epin_comp_desc,
+ (struct usb_descriptor_header *)&as_iso_in_desc,
+ NULL,
+};
+
struct cntrl_cur_lay3 {
__u32 dCUR;
};
@@ -1035,24 +1148,30 @@ afunc_bind(struct usb_configuration *cfg, struct usb_function *fn)
snprintf(clksrc_in, sizeof(clksrc_in), "%uHz", uac2_opts->p_srate);
snprintf(clksrc_out, sizeof(clksrc_out), "%uHz", uac2_opts->c_srate);
+ pr_debug("%s bind with capture enabled(%d)\n", __func__,
+ enable_capture);
+ agdev->enable_capture = enable_capture;
ret = usb_interface_id(cfg, fn);
if (ret < 0) {
dev_err(dev, "%s:%d Error!\n", __func__, __LINE__);
return ret;
}
std_ac_if_desc.bInterfaceNumber = ret;
+ iad_desc.bFirstInterface = ret;
agdev->ac_intf = ret;
agdev->ac_alt = 0;
- ret = usb_interface_id(cfg, fn);
- if (ret < 0) {
- dev_err(dev, "%s:%d Error!\n", __func__, __LINE__);
- return ret;
+ if (agdev->enable_capture) {
+ ret = usb_interface_id(cfg, fn);
+ if (ret < 0) {
+ dev_err(dev, "%s:%d Error!\n", __func__, __LINE__);
+ return ret;
+ }
+ std_as_out_if0_desc.bInterfaceNumber = ret;
+ std_as_out_if1_desc.bInterfaceNumber = ret;
+ agdev->as_out_intf = ret;
+ agdev->as_out_alt = 0;
}
- std_as_out_if0_desc.bInterfaceNumber = ret;
- std_as_out_if1_desc.bInterfaceNumber = ret;
- agdev->as_out_intf = ret;
- agdev->as_out_alt = 0;
ret = usb_interface_id(cfg, fn);
if (ret < 0) {
@@ -1064,10 +1183,12 @@ afunc_bind(struct usb_configuration *cfg, struct usb_function *fn)
agdev->as_in_intf = ret;
agdev->as_in_alt = 0;
- agdev->out_ep = usb_ep_autoconfig(gadget, &fs_epout_desc);
- if (!agdev->out_ep) {
- dev_err(dev, "%s:%d Error!\n", __func__, __LINE__);
- return ret;
+ if (agdev->enable_capture) {
+ agdev->out_ep = usb_ep_autoconfig(gadget, &fs_epout_desc);
+ if (!agdev->out_ep) {
+ dev_err(dev, "%s:%d Error!\n", __func__, __LINE__);
+ return ret;
+ }
}
agdev->in_ep = usb_ep_autoconfig(gadget, &fs_epin_desc);
@@ -1088,17 +1209,25 @@ afunc_bind(struct usb_configuration *cfg, struct usb_function *fn)
hs_epout_desc.bEndpointAddress = fs_epout_desc.bEndpointAddress;
hs_epin_desc.bEndpointAddress = fs_epin_desc.bEndpointAddress;
- ret = usb_assign_descriptors(fn, fs_audio_desc, hs_audio_desc, NULL,
- NULL);
+ if (agdev->enable_capture) {
+ ret = usb_assign_descriptors(fn, fs_audio_desc, hs_audio_desc,
+ ss_audio_desc, NULL);
+ } else {
+ ret = usb_assign_descriptors(fn, fs_playback_audio_desc,
+ hs_playback_audio_desc,
+ ss_playback_audio_desc, NULL);
+ }
if (ret)
return ret;
- prm = &agdev->uac2.c_prm;
- prm->max_psize = hs_epout_desc.wMaxPacketSize;
- prm->rbuf = kzalloc(prm->max_psize * USB_XFERS, GFP_KERNEL);
- if (!prm->rbuf) {
- prm->max_psize = 0;
- goto err_free_descs;
+ if (agdev->enable_capture) {
+ prm = &agdev->uac2.c_prm;
+ prm->max_psize = hs_epout_desc.wMaxPacketSize;
+ prm->rbuf = kzalloc(prm->max_psize * USB_XFERS, GFP_KERNEL);
+ if (!prm->rbuf) {
+ prm->max_psize = 0;
+ goto err_free_descs;
+ }
}
prm = &agdev->uac2.p_prm;
@@ -1109,6 +1238,7 @@ afunc_bind(struct usb_configuration *cfg, struct usb_function *fn)
goto err;
}
+ agdev->gdev = &gadget->dev;
ret = alsa_uac2_init(agdev);
if (ret)
goto err;
@@ -1150,7 +1280,7 @@ afunc_set_alt(struct usb_function *fn, unsigned intf, unsigned alt)
return 0;
}
- if (intf == agdev->as_out_intf) {
+ if (intf == agdev->as_out_intf && agdev->enable_capture) {
ep = agdev->out_ep;
prm = &uac2->c_prm;
config_ep_by_speed(gadget, fn, ep);
@@ -1200,27 +1330,31 @@ afunc_set_alt(struct usb_function *fn, unsigned intf, unsigned alt)
return 0;
}
- prm->ep_enabled = true;
- usb_ep_enable(ep);
+ if (intf == agdev->as_in_intf ||
+ (intf == agdev->as_out_intf && agdev->enable_capture)) {
+ prm->ep_enabled = true;
+ usb_ep_enable(ep);
- for (i = 0; i < USB_XFERS; i++) {
- if (!prm->ureq[i].req) {
- req = usb_ep_alloc_request(ep, GFP_ATOMIC);
- if (req == NULL)
- return -ENOMEM;
+ for (i = 0; i < USB_XFERS; i++) {
+ if (!prm->ureq[i].req) {
+ req = usb_ep_alloc_request(ep, GFP_ATOMIC);
+ if (req == NULL)
+ return -ENOMEM;
- prm->ureq[i].req = req;
- prm->ureq[i].pp = prm;
+ prm->ureq[i].req = req;
+ prm->ureq[i].pp = prm;
- req->zero = 0;
- req->context = &prm->ureq[i];
- req->length = req_len;
- req->complete = agdev_iso_complete;
- req->buf = prm->rbuf + i * prm->max_psize;
+ req->zero = 0;
+ req->context = &prm->ureq[i];
+ req->length = req_len;
+ req->complete = agdev_iso_complete;
+ req->buf = prm->rbuf + i * prm->max_psize;
+ }
+
+ if (usb_ep_queue(ep, prm->ureq[i].req, GFP_ATOMIC))
+ dev_err(dev, "%s:%d Error!\n", __func__,
+ __LINE__);
}
-
- if (usb_ep_queue(ep, prm->ureq[i].req, GFP_ATOMIC))
- dev_err(dev, "%s:%d Error!\n", __func__, __LINE__);
}
return 0;
@@ -1234,7 +1368,7 @@ afunc_get_alt(struct usb_function *fn, unsigned intf)
if (intf == agdev->ac_intf)
return agdev->ac_alt;
- else if (intf == agdev->as_out_intf)
+ else if (intf == agdev->as_out_intf && agdev->enable_capture)
return agdev->as_out_alt;
else if (intf == agdev->as_in_intf)
return agdev->as_in_alt;
@@ -1255,8 +1389,10 @@ afunc_disable(struct usb_function *fn)
free_ep(&uac2->p_prm, agdev->in_ep);
agdev->as_in_alt = 0;
- free_ep(&uac2->c_prm, agdev->out_ep);
- agdev->as_out_alt = 0;
+ if (agdev->enable_capture) {
+ free_ep(&uac2->c_prm, agdev->out_ep);
+ agdev->as_out_alt = 0;
+ }
}
static int
@@ -1558,8 +1694,10 @@ static void afunc_unbind(struct usb_configuration *c, struct usb_function *f)
prm = &agdev->uac2.p_prm;
kfree(prm->rbuf);
- prm = &agdev->uac2.c_prm;
- kfree(prm->rbuf);
+ if (agdev->enable_capture) {
+ prm = &agdev->uac2.c_prm;
+ kfree(prm->rbuf);
+ }
usb_free_all_descriptors(f);
}
@@ -1590,6 +1728,19 @@ static struct usb_function *afunc_alloc(struct usb_function_instance *fi)
}
DECLARE_USB_FUNCTION_INIT(uac2, afunc_alloc_inst, afunc_alloc);
+
+static int __init afunc_init(void)
+{
+ return usb_function_register(&uac2usb_func);
+}
+module_init(afunc_init);
+
+static void __exit afunc_exit(void)
+{
+ usb_function_unregister(&uac2usb_func);
+}
+module_exit(afunc_exit);
+
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Yadwinder Singh");
MODULE_AUTHOR("Jaswinder Singh");
diff --git a/drivers/usb/gadget/function/f_uvc.c b/drivers/usb/gadget/function/f_uvc.c
index c7689d0..c99d547 100644
--- a/drivers/usb/gadget/function/f_uvc.c
+++ b/drivers/usb/gadget/function/f_uvc.c
@@ -84,7 +84,7 @@ static struct usb_interface_descriptor uvc_control_intf = {
.bNumEndpoints = 1,
.bInterfaceClass = USB_CLASS_VIDEO,
.bInterfaceSubClass = UVC_SC_VIDEOCONTROL,
- .bInterfaceProtocol = 0x00,
+ .bInterfaceProtocol = 0x01,
.iInterface = 0,
};
@@ -788,16 +788,18 @@ static struct usb_function_instance *uvc_alloc_inst(void)
cd->bmControls[2] = 0;
pd = &opts->uvc_processing;
- pd->bLength = UVC_DT_PROCESSING_UNIT_SIZE(2);
+ pd->bLength = UVC_DT_PROCESSING_UNIT_SIZE(3);
pd->bDescriptorType = USB_DT_CS_INTERFACE;
pd->bDescriptorSubType = UVC_VC_PROCESSING_UNIT;
pd->bUnitID = 2;
pd->bSourceID = 1;
pd->wMaxMultiplier = cpu_to_le16(16*1024);
- pd->bControlSize = 2;
- pd->bmControls[0] = 1;
- pd->bmControls[1] = 0;
+ pd->bControlSize = 3;
+ pd->bmControls[0] = 64;
+ pd->bmControls[1] = 16;
+ pd->bmControls[2] = 1;
pd->iProcessing = 0;
+ pd->bmVideoStandards = 0;
od = &opts->uvc_output_terminal;
od->bLength = UVC_DT_OUTPUT_TERMINAL_SIZE;
@@ -923,5 +925,18 @@ static struct usb_function *uvc_alloc(struct usb_function_instance *fi)
}
DECLARE_USB_FUNCTION_INIT(uvc, uvc_alloc_inst, uvc_alloc);
+
+static int __init uvc_init(void)
+{
+ return usb_function_register(&uvcusb_func);
+}
+module_init(uvc_init);
+
+static void __exit uvc_exit(void)
+{
+ usb_function_unregister(&uvcusb_func);
+}
+module_exit(uvc_exit);
+
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Laurent Pinchart");
diff --git a/drivers/usb/gadget/function/u_ncm.h b/drivers/usb/gadget/function/u_ncm.h
index ce0f3a7..b4541e2 100644
--- a/drivers/usb/gadget/function/u_ncm.h
+++ b/drivers/usb/gadget/function/u_ncm.h
@@ -33,4 +33,8 @@ struct f_ncm_opts {
int refcnt;
};
+extern struct device *create_function_device(char *name);
+int ncm_ctrlrequest(struct usb_composite_dev *cdev,
+ const struct usb_ctrlrequest *ctrl);
+
#endif /* U_NCM_H */
diff --git a/drivers/usb/gadget/function/u_uac2.h b/drivers/usb/gadget/function/u_uac2.h
index 78dd372..f7d2d44 100644
--- a/drivers/usb/gadget/function/u_uac2.h
+++ b/drivers/usb/gadget/function/u_uac2.h
@@ -22,7 +22,7 @@
#define UAC2_DEF_PSRATE 48000
#define UAC2_DEF_PSSIZE 2
#define UAC2_DEF_CCHMASK 0x3
-#define UAC2_DEF_CSRATE 64000
+#define UAC2_DEF_CSRATE 44100
#define UAC2_DEF_CSSIZE 2
struct f_uac2_opts {
diff --git a/drivers/usb/gadget/function/uvc.h b/drivers/usb/gadget/function/uvc.h
index 7d3bb62..d1649f8 100644
--- a/drivers/usb/gadget/function/uvc.h
+++ b/drivers/usb/gadget/function/uvc.h
@@ -96,7 +96,7 @@ extern unsigned int uvc_gadget_trace_param;
* Driver specific constants
*/
-#define UVC_NUM_REQUESTS 4
+#define UVC_NUM_REQUESTS 16
#define UVC_MAX_REQUEST_SIZE 64
#define UVC_MAX_EVENTS 4
diff --git a/drivers/usb/gadget/function/uvc_configfs.c b/drivers/usb/gadget/function/uvc_configfs.c
index 31125a4..984b1d7 100644
--- a/drivers/usb/gadget/function/uvc_configfs.c
+++ b/drivers/usb/gadget/function/uvc_configfs.c
@@ -144,7 +144,7 @@ static struct config_item *uvcg_control_header_make(struct config_group *group,
h->desc.bLength = UVC_DT_HEADER_SIZE(1);
h->desc.bDescriptorType = USB_DT_CS_INTERFACE;
h->desc.bDescriptorSubType = UVC_VC_HEADER;
- h->desc.bcdUVC = cpu_to_le16(0x0100);
+ h->desc.bcdUVC = cpu_to_le16(0x0150);
h->desc.dwClockFrequency = cpu_to_le32(48000000);
config_item_init_type_name(&h->item, name, &uvcg_control_header_type);
@@ -626,14 +626,21 @@ static struct uvcg_mjpeg_grp {
struct config_group group;
} uvcg_mjpeg_grp;
+/* streaming/h264 */
+static struct uvcg_h264_grp {
+ struct config_group group;
+} uvcg_h264_grp;
+
static struct config_item *fmt_parent[] = {
&uvcg_uncompressed_grp.group.cg_item,
&uvcg_mjpeg_grp.group.cg_item,
+ &uvcg_h264_grp.group.cg_item,
};
enum uvcg_format_type {
UVCG_UNCOMPRESSED = 0,
UVCG_MJPEG,
+ UVCG_H264,
};
struct uvcg_format {
@@ -918,20 +925,11 @@ static struct config_item_type uvcg_streaming_header_grp_type = {
/* streaming/<mode>/<format>/<NAME> */
struct uvcg_frame {
- struct {
- u8 b_length;
- u8 b_descriptor_type;
- u8 b_descriptor_subtype;
- u8 b_frame_index;
- u8 bm_capabilities;
- u16 w_width;
- u16 w_height;
- u32 dw_min_bit_rate;
- u32 dw_max_bit_rate;
- u32 dw_max_video_frame_buffer_size;
- u32 dw_default_frame_interval;
- u8 b_frame_interval_type;
- } __attribute__((packed)) frame;
+ union {
+ struct uvc_frame_uncompressed uf;
+ struct uvc_frame_mjpeg mf;
+ struct uvc_frame_h264 hf;
+ } frame;
u32 *dw_frame_interval;
enum uvcg_format_type fmt_type;
struct config_item item;
@@ -942,8 +940,9 @@ static struct uvcg_frame *to_uvcg_frame(struct config_item *item)
return container_of(item, struct uvcg_frame, item);
}
-#define UVCG_FRAME_ATTR(cname, aname, to_cpu_endian, to_little_endian, bits) \
-static ssize_t uvcg_frame_##cname##_show(struct config_item *item, char *page)\
+#define UVCG_FRAME_ATTR(cname, fname, to_cpu_endian, to_little_endian, bits) \
+static ssize_t uvcg_frame_##fname##_##cname##_show(struct config_item *item, \
+ char *page) \
{ \
struct uvcg_frame *f = to_uvcg_frame(item); \
struct f_uvc_opts *opts; \
@@ -957,14 +956,15 @@ static ssize_t uvcg_frame_##cname##_show(struct config_item *item, char *page)\
opts = to_f_uvc_opts(opts_item); \
\
mutex_lock(&opts->lock); \
- result = sprintf(page, "%d\n", to_cpu_endian(f->frame.cname)); \
+ result = snprintf(page, PAGE_SIZE, "%d\n", \
+ to_cpu_endian(f->frame.fname.cname)); \
mutex_unlock(&opts->lock); \
\
mutex_unlock(su_mutex); \
return result; \
} \
\
-static ssize_t uvcg_frame_##cname##_store(struct config_item *item, \
+static ssize_t uvcg_frame_##fname##_##cname##_store(struct config_item *item, \
const char *page, size_t len)\
{ \
struct uvcg_frame *f = to_uvcg_frame(item); \
@@ -991,7 +991,7 @@ static ssize_t uvcg_frame_##cname##_store(struct config_item *item, \
goto end; \
} \
\
- f->frame.cname = to_little_endian(num); \
+ f->frame.fname.cname = to_little_endian(num); \
ret = len; \
end: \
mutex_unlock(&opts->lock); \
@@ -999,21 +999,46 @@ end: \
return ret; \
} \
\
-UVC_ATTR(uvcg_frame_, cname, aname);
+UVC_ATTR(uvcg_frame_, fname##_##cname, cname);
#define noop_conversion(x) (x)
-UVCG_FRAME_ATTR(bm_capabilities, bmCapabilities, noop_conversion,
+/* Declare configurable frame attributes for uncompressed format */
+UVCG_FRAME_ATTR(bmCapabilities, uf, noop_conversion,
noop_conversion, 8);
-UVCG_FRAME_ATTR(w_width, wWidth, le16_to_cpu, cpu_to_le16, 16);
-UVCG_FRAME_ATTR(w_height, wHeight, le16_to_cpu, cpu_to_le16, 16);
-UVCG_FRAME_ATTR(dw_min_bit_rate, dwMinBitRate, le32_to_cpu, cpu_to_le32, 32);
-UVCG_FRAME_ATTR(dw_max_bit_rate, dwMaxBitRate, le32_to_cpu, cpu_to_le32, 32);
-UVCG_FRAME_ATTR(dw_max_video_frame_buffer_size, dwMaxVideoFrameBufferSize,
+UVCG_FRAME_ATTR(wWidth, uf, le16_to_cpu, cpu_to_le16, 16);
+UVCG_FRAME_ATTR(wHeight, uf, le16_to_cpu, cpu_to_le16, 16);
+UVCG_FRAME_ATTR(dwMinBitRate, uf, le32_to_cpu, cpu_to_le32, 32);
+UVCG_FRAME_ATTR(dwMaxBitRate, uf, le32_to_cpu, cpu_to_le32, 32);
+UVCG_FRAME_ATTR(dwMaxVideoFrameBufferSize, uf,
le32_to_cpu, cpu_to_le32, 32);
-UVCG_FRAME_ATTR(dw_default_frame_interval, dwDefaultFrameInterval,
+UVCG_FRAME_ATTR(dwDefaultFrameInterval, uf,
le32_to_cpu, cpu_to_le32, 32);
+/* Declare configurable frame attributes for mjpeg format */
+UVCG_FRAME_ATTR(bmCapabilities, mf, noop_conversion,
+ noop_conversion, 8);
+UVCG_FRAME_ATTR(wWidth, mf, le16_to_cpu, cpu_to_le16, 16);
+UVCG_FRAME_ATTR(wHeight, mf, le16_to_cpu, cpu_to_le16, 16);
+UVCG_FRAME_ATTR(dwMinBitRate, mf, le32_to_cpu, cpu_to_le32, 32);
+UVCG_FRAME_ATTR(dwMaxBitRate, mf, le32_to_cpu, cpu_to_le32, 32);
+UVCG_FRAME_ATTR(dwMaxVideoFrameBufferSize, mf,
+ le32_to_cpu, cpu_to_le32, 32);
+UVCG_FRAME_ATTR(dwDefaultFrameInterval, mf,
+ le32_to_cpu, cpu_to_le32, 32);
+
+/* Declare configurable frame attributes for h264 format */
+UVCG_FRAME_ATTR(bmCapabilities, hf, noop_conversion,
+ noop_conversion, 8);
+UVCG_FRAME_ATTR(wWidth, hf, le16_to_cpu, cpu_to_le16, 16);
+UVCG_FRAME_ATTR(wHeight, hf, le16_to_cpu, cpu_to_le16, 16);
+UVCG_FRAME_ATTR(dwMinBitRate, hf, le32_to_cpu, cpu_to_le32, 32);
+UVCG_FRAME_ATTR(dwMaxBitRate, hf, le32_to_cpu, cpu_to_le32, 32);
+UVCG_FRAME_ATTR(dwDefaultFrameInterval, hf,
+ le32_to_cpu, cpu_to_le32, 32);
+UVCG_FRAME_ATTR(bLevelIDC, hf, noop_conversion,
+ noop_conversion, 8);
+
#undef noop_conversion
#undef UVCG_FRAME_ATTR
@@ -1025,7 +1050,7 @@ static ssize_t uvcg_frame_dw_frame_interval_show(struct config_item *item,
struct f_uvc_opts *opts;
struct config_item *opts_item;
struct mutex *su_mutex = &frm->item.ci_group->cg_subsys->su_mutex;
- int result, i;
+ int result, i, n;
char *pg = page;
mutex_lock(su_mutex); /* for navigating configfs hierarchy */
@@ -1034,7 +1059,15 @@ static ssize_t uvcg_frame_dw_frame_interval_show(struct config_item *item,
opts = to_f_uvc_opts(opts_item);
mutex_lock(&opts->lock);
- for (result = 0, i = 0; i < frm->frame.b_frame_interval_type; ++i) {
+ n = 0;
+ if (frm->fmt_type == UVCG_UNCOMPRESSED)
+ n = frm->frame.uf.bFrameIntervalType;
+ else if (frm->fmt_type == UVCG_MJPEG)
+ n = frm->frame.mf.bFrameIntervalType;
+ else if (frm->fmt_type == UVCG_H264)
+ n = frm->frame.hf.bNumFrameIntervals;
+
+ for (result = 0, i = 0; i < n; ++i) {
result += sprintf(pg, "%d\n",
le32_to_cpu(frm->dw_frame_interval[i]));
pg = page + result;
@@ -1137,7 +1170,13 @@ static ssize_t uvcg_frame_dw_frame_interval_store(struct config_item *item,
kfree(ch->dw_frame_interval);
ch->dw_frame_interval = frm_intrv;
- ch->frame.b_frame_interval_type = n;
+ if (ch->fmt_type == UVCG_UNCOMPRESSED)
+ ch->frame.uf.bFrameIntervalType = n;
+ else if (ch->fmt_type == UVCG_MJPEG)
+ ch->frame.mf.bFrameIntervalType = n;
+ else if (ch->fmt_type == UVCG_H264)
+ ch->frame.hf.bNumFrameIntervals = n;
+
ret = len;
end:
@@ -1148,20 +1187,54 @@ static ssize_t uvcg_frame_dw_frame_interval_store(struct config_item *item,
UVC_ATTR(uvcg_frame_, dw_frame_interval, dwFrameInterval);
-static struct configfs_attribute *uvcg_frame_attrs[] = {
- &uvcg_frame_attr_bm_capabilities,
- &uvcg_frame_attr_w_width,
- &uvcg_frame_attr_w_height,
- &uvcg_frame_attr_dw_min_bit_rate,
- &uvcg_frame_attr_dw_max_bit_rate,
- &uvcg_frame_attr_dw_max_video_frame_buffer_size,
- &uvcg_frame_attr_dw_default_frame_interval,
+static struct configfs_attribute *uvcg_uncompressed_frame_attrs[] = {
+ &uvcg_frame_attr_uf_bmCapabilities,
+ &uvcg_frame_attr_uf_wWidth,
+ &uvcg_frame_attr_uf_wHeight,
+ &uvcg_frame_attr_uf_dwMinBitRate,
+ &uvcg_frame_attr_uf_dwMaxBitRate,
+ &uvcg_frame_attr_uf_dwMaxVideoFrameBufferSize,
+ &uvcg_frame_attr_uf_dwDefaultFrameInterval,
&uvcg_frame_attr_dw_frame_interval,
NULL,
};
-static struct config_item_type uvcg_frame_type = {
- .ct_attrs = uvcg_frame_attrs,
+static struct configfs_attribute *uvcg_mjpeg_frame_attrs[] = {
+ &uvcg_frame_attr_mf_bmCapabilities,
+ &uvcg_frame_attr_mf_wWidth,
+ &uvcg_frame_attr_mf_wHeight,
+ &uvcg_frame_attr_mf_dwMinBitRate,
+ &uvcg_frame_attr_mf_dwMaxBitRate,
+ &uvcg_frame_attr_mf_dwMaxVideoFrameBufferSize,
+ &uvcg_frame_attr_mf_dwDefaultFrameInterval,
+ &uvcg_frame_attr_dw_frame_interval,
+ NULL,
+};
+
+static struct configfs_attribute *uvcg_h264_frame_attrs[] = {
+ &uvcg_frame_attr_hf_bmCapabilities,
+ &uvcg_frame_attr_hf_wWidth,
+ &uvcg_frame_attr_hf_wHeight,
+ &uvcg_frame_attr_hf_bLevelIDC,
+ &uvcg_frame_attr_hf_dwMinBitRate,
+ &uvcg_frame_attr_hf_dwMaxBitRate,
+ &uvcg_frame_attr_hf_dwDefaultFrameInterval,
+ &uvcg_frame_attr_dw_frame_interval,
+ NULL,
+};
+
+static struct config_item_type uvcg_uncompressed_frame_type = {
+ .ct_attrs = uvcg_uncompressed_frame_attrs,
+ .ct_owner = THIS_MODULE,
+};
+
+static struct config_item_type uvcg_mjpeg_frame_type = {
+ .ct_attrs = uvcg_mjpeg_frame_attrs,
+ .ct_owner = THIS_MODULE,
+};
+
+static struct config_item_type uvcg_h264_frame_type = {
+ .ct_attrs = uvcg_h264_frame_attrs,
.ct_owner = THIS_MODULE,
};
@@ -1172,19 +1245,17 @@ static struct config_item *uvcg_frame_make(struct config_group *group,
struct uvcg_format *fmt;
struct f_uvc_opts *opts;
struct config_item *opts_item;
+ struct config_item_type *uvcg_frame_config_item;
+ struct uvc_frame_uncompressed *uf;
h = kzalloc(sizeof(*h), GFP_KERNEL);
if (!h)
return ERR_PTR(-ENOMEM);
- h->frame.b_descriptor_type = USB_DT_CS_INTERFACE;
- h->frame.b_frame_index = 1;
- h->frame.w_width = cpu_to_le16(640);
- h->frame.w_height = cpu_to_le16(360);
- h->frame.dw_min_bit_rate = cpu_to_le32(18432000);
- h->frame.dw_max_bit_rate = cpu_to_le32(55296000);
- h->frame.dw_max_video_frame_buffer_size = cpu_to_le32(460800);
- h->frame.dw_default_frame_interval = cpu_to_le32(666666);
+ uf = &h->frame.uf;
+
+ uf->bDescriptorType = USB_DT_CS_INTERFACE;
+ uf->bFrameIndex = 1;
opts_item = group->cg_item.ci_parent->ci_parent->ci_parent;
opts = to_f_uvc_opts(opts_item);
@@ -1192,11 +1263,52 @@ static struct config_item *uvcg_frame_make(struct config_group *group,
mutex_lock(&opts->lock);
fmt = to_uvcg_format(&group->cg_item);
if (fmt->type == UVCG_UNCOMPRESSED) {
- h->frame.b_descriptor_subtype = UVC_VS_FRAME_UNCOMPRESSED;
+ uf->bDescriptorSubType = UVC_VS_FRAME_UNCOMPRESSED;
+ uf->wWidth = cpu_to_le16(640);
+ uf->wHeight = cpu_to_le16(360);
+ uf->dwMinBitRate = cpu_to_le32(18432000);
+ uf->dwMaxBitRate = cpu_to_le32(55296000);
+ uf->dwMaxVideoFrameBufferSize = cpu_to_le32(460800);
+ uf->dwDefaultFrameInterval = cpu_to_le32(666666);
+
h->fmt_type = UVCG_UNCOMPRESSED;
+ uvcg_frame_config_item = &uvcg_uncompressed_frame_type;
} else if (fmt->type == UVCG_MJPEG) {
- h->frame.b_descriptor_subtype = UVC_VS_FRAME_MJPEG;
+ struct uvc_frame_mjpeg *mf = &h->frame.mf;
+
+ mf->bDescriptorType = USB_DT_CS_INTERFACE;
+ mf->bFrameIndex = 1;
+ mf->bDescriptorSubType = UVC_VS_FRAME_MJPEG;
+ mf->wWidth = cpu_to_le16(640);
+ mf->wHeight = cpu_to_le16(360);
+ mf->dwMinBitRate = cpu_to_le32(18432000);
+ mf->dwMaxBitRate = cpu_to_le32(55296000);
+ mf->dwMaxVideoFrameBufferSize = cpu_to_le32(460800);
+ mf->dwDefaultFrameInterval = cpu_to_le32(666666);
+
h->fmt_type = UVCG_MJPEG;
+ uvcg_frame_config_item = &uvcg_mjpeg_frame_type;
+ } else if (fmt->type == UVCG_H264) {
+ struct uvc_frame_h264 *hf = &h->frame.hf;
+
+ hf->bDescriptorSubType = UVC_VS_FRAME_H264;
+ hf->wWidth = cpu_to_le16(1920);
+ hf->wHeight = cpu_to_le16(960);
+ hf->dwMinBitRate = cpu_to_le32(29491200);
+ hf->dwMaxBitRate = cpu_to_le32(100000000);
+ hf->dwDefaultFrameInterval = cpu_to_le32(333667);
+ hf->wSARwidth = 1;
+ hf->wSARheight = 1;
+ hf->wProfile = 0x6400;
+ hf->bLevelIDC = 0x33;
+ hf->bmSupportedUsages = 0x70003;
+ hf->wConstrainedToolset = cpu_to_le16(0);
+ hf->bmCapabilities = 0x47;
+ hf->bmSVCCapabilities = 0x4;
+ hf->bmMVCCapabilities = 0;
+
+ h->fmt_type = UVCG_H264;
+ uvcg_frame_config_item = &uvcg_h264_frame_type;
} else {
mutex_unlock(&opts->lock);
kfree(h);
@@ -1205,7 +1317,7 @@ static struct config_item *uvcg_frame_make(struct config_group *group,
++fmt->num_frames;
mutex_unlock(&opts->lock);
- config_item_init_type_name(&h->item, name, &uvcg_frame_type);
+ config_item_init_type_name(&h->item, name, uvcg_frame_config_item);
return &h->item;
}
@@ -1678,6 +1790,219 @@ static struct config_item_type uvcg_mjpeg_grp_type = {
.ct_owner = THIS_MODULE,
};
+/* streaming/h264/<NAME> */
+struct uvcg_h264 {
+ struct uvcg_format fmt;
+ struct uvc_format_h264 desc;
+};
+
+static struct uvcg_h264 *to_uvcg_h264(struct config_item *item)
+{
+ return container_of(
+ container_of(to_config_group(item), struct uvcg_format, group),
+ struct uvcg_h264, fmt);
+}
+
+static struct configfs_group_operations uvcg_h264_group_ops = {
+ .make_item = uvcg_frame_make,
+ .drop_item = uvcg_frame_drop,
+};
+
+#define UVCG_H264_ATTR_RO(cname, aname, conv) \
+static ssize_t uvcg_h264_##cname##_show(struct config_item *item, char *page)\
+{ \
+ struct uvcg_h264 *u = to_uvcg_h264(item); \
+ struct f_uvc_opts *opts; \
+ struct config_item *opts_item; \
+ struct mutex *su_mutex = &u->fmt.group.cg_subsys->su_mutex; \
+ int result; \
+ \
+ mutex_lock(su_mutex); /* for navigating configfs hierarchy */ \
+ \
+ opts_item = u->fmt.group.cg_item.ci_parent->ci_parent->ci_parent;\
+ opts = to_f_uvc_opts(opts_item); \
+ \
+ mutex_lock(&opts->lock); \
+ result = snprintf(page, PAGE_SIZE, "%d\n", \
+ conv(u->desc.aname)); \
+ mutex_unlock(&opts->lock); \
+ \
+ mutex_unlock(su_mutex); \
+ return result; \
+} \
+ \
+UVC_ATTR_RO(uvcg_h264_, cname, aname)
+
+#define UVCG_H264_ATTR(cname, aname, conv) \
+static ssize_t uvcg_h264_##cname##_show(struct config_item *item, char *page)\
+{ \
+ struct uvcg_h264 *u = to_uvcg_h264(item); \
+ struct f_uvc_opts *opts; \
+ struct config_item *opts_item; \
+ struct mutex *su_mutex = &u->fmt.group.cg_subsys->su_mutex; \
+ int result; \
+ \
+ mutex_lock(su_mutex); /* for navigating configfs hierarchy */ \
+ \
+ opts_item = u->fmt.group.cg_item.ci_parent->ci_parent->ci_parent;\
+ opts = to_f_uvc_opts(opts_item); \
+ \
+ mutex_lock(&opts->lock); \
+ result = snprintf(page, PAGE_SIZE, "%d\n", \
+ conv(u->desc.aname)); \
+ mutex_unlock(&opts->lock); \
+ \
+ mutex_unlock(su_mutex); \
+ return result; \
+} \
+ \
+static ssize_t \
+uvcg_h264_##cname##_store(struct config_item *item, \
+ const char *page, size_t len) \
+{ \
+ struct uvcg_h264 *u = to_uvcg_h264(item); \
+ struct f_uvc_opts *opts; \
+ struct config_item *opts_item; \
+ struct mutex *su_mutex = &u->fmt.group.cg_subsys->su_mutex; \
+ int ret; \
+ u8 num; \
+ \
+ mutex_lock(su_mutex); /* for navigating configfs hierarchy */ \
+ \
+ opts_item = u->fmt.group.cg_item.ci_parent->ci_parent->ci_parent;\
+ opts = to_f_uvc_opts(opts_item); \
+ \
+ mutex_lock(&opts->lock); \
+ if (u->fmt.linked || opts->refcnt) { \
+ ret = -EBUSY; \
+ goto end; \
+ } \
+ \
+ ret = kstrtou8(page, 0, &num); \
+ if (ret) \
+ goto end; \
+ \
+ if (num > 255) { \
+ ret = -EINVAL; \
+ goto end; \
+ } \
+ u->desc.aname = num; \
+ ret = len; \
+end: \
+ mutex_unlock(&opts->lock); \
+ mutex_unlock(su_mutex); \
+ return ret; \
+} \
+ \
+UVC_ATTR(uvcg_h264_, cname, aname)
+
+#define identity_conv(x) (x)
+
+UVCG_H264_ATTR(b_default_frame_index, bDefaultFrameIndex,
+ identity_conv);
+
+#undef identity_conv
+
+#undef UVCG_H264_ATTR
+#undef UVCG_H264_ATTR_RO
+
+static inline ssize_t
+uvcg_h264_bma_controls_show(struct config_item *item, char *page)
+{
+ struct uvcg_h264 *u = to_uvcg_h264(item);
+
+ return uvcg_format_bma_controls_show(&u->fmt, page);
+}
+
+static inline ssize_t
+uvcg_h264_bma_controls_store(struct config_item *item,
+ const char *page, size_t len)
+{
+ struct uvcg_h264 *u = to_uvcg_h264(item);
+
+ return uvcg_format_bma_controls_store(&u->fmt, page, len);
+}
+
+UVC_ATTR(uvcg_h264_, bma_controls, bmaControls);
+
+static struct configfs_attribute *uvcg_h264_attrs[] = {
+ &uvcg_h264_attr_b_default_frame_index,
+ &uvcg_h264_attr_bma_controls,
+ NULL,
+};
+
+static struct config_item_type uvcg_h264_type = {
+ .ct_group_ops = &uvcg_h264_group_ops,
+ .ct_attrs = uvcg_h264_attrs,
+ .ct_owner = THIS_MODULE,
+};
+
+static struct config_group *uvcg_h264_make(struct config_group *group,
+ const char *name)
+{
+ struct uvcg_h264 *h;
+
+ h = kzalloc(sizeof(*h), GFP_KERNEL);
+ if (!h)
+ return ERR_PTR(-ENOMEM);
+
+ h->desc.bLength = UVC_DT_FORMAT_H264_SIZE;
+ h->desc.bDescriptorType = USB_DT_CS_INTERFACE;
+ h->desc.bDescriptorSubType = UVC_VS_FORMAT_H264;
+ h->desc.bDefaultFrameIndex = 1;
+ h->desc.bMaxCodecConfigDelay = 0x4;
+ h->desc.bmSupportedSliceModes = 0;
+ h->desc.bmSupportedSyncFrameTypes = 0x76;
+ h->desc.bResolutionScaling = 0;
+ h->desc.Reserved1 = 0;
+ h->desc.bmSupportedRateControlModes = 0x3F;
+ h->desc.wMaxMBperSecOneResNoScalability = cpu_to_le16(972);
+ h->desc.wMaxMBperSecTwoResNoScalability = 0;
+ h->desc.wMaxMBperSecThreeResNoScalability = 0;
+ h->desc.wMaxMBperSecFourResNoScalability = 0;
+ h->desc.wMaxMBperSecOneResTemporalScalability = cpu_to_le16(972);
+ h->desc.wMaxMBperSecTwoResTemporalScalability = 0;
+ h->desc.wMaxMBperSecThreeResTemporalScalability = 0;
+ h->desc.wMaxMBperSecFourResTemporalScalability = 0;
+ h->desc.wMaxMBperSecOneResTemporalQualityScalability =
+ cpu_to_le16(972);
+ h->desc.wMaxMBperSecTwoResTemporalQualityScalability = 0;
+ h->desc.wMaxMBperSecThreeResTemporalQualityScalability = 0;
+ h->desc.wMaxMBperSecFourResTemporalQualityScalability = 0;
+ h->desc.wMaxMBperSecOneResTemporalSpatialScalability = 0;
+ h->desc.wMaxMBperSecTwoResTemporalSpatialScalability = 0;
+ h->desc.wMaxMBperSecThreeResTemporalSpatialScalability = 0;
+ h->desc.wMaxMBperSecFourResTemporalSpatialScalability = 0;
+ h->desc.wMaxMBperSecOneResFullScalability = 0;
+ h->desc.wMaxMBperSecTwoResFullScalability = 0;
+ h->desc.wMaxMBperSecThreeResFullScalability = 0;
+ h->desc.wMaxMBperSecFourResFullScalability = 0;
+
+ h->fmt.type = UVCG_H264;
+ config_group_init_type_name(&h->fmt.group, name,
+ &uvcg_h264_type);
+
+ return &h->fmt.group;
+}
+
+static void uvcg_h264_drop(struct config_group *group,
+ struct config_item *item)
+{
+ struct uvcg_h264 *h = to_uvcg_h264(item);
+
+ kfree(h);
+}
+
+static struct configfs_group_operations uvcg_h264_grp_ops = {
+ .make_group = uvcg_h264_make,
+ .drop_item = uvcg_h264_drop,
+};
+
+static struct config_item_type uvcg_h264_grp_type = {
+ .ct_group_ops = &uvcg_h264_grp_ops,
+ .ct_owner = THIS_MODULE,
+};
+
/* streaming/color_matching/default */
static struct uvcg_default_color_matching {
struct config_group group;
@@ -1825,6 +2150,7 @@ static int __uvcg_iter_strm_cls(struct uvcg_streaming_header *h,
if (ret)
return ret;
grp = &f->fmt->group;
+ j = 0;
list_for_each_entry(item, &grp->cg_children, ci_entry) {
frm = to_uvcg_frame(item);
ret = fun(frm, priv2, priv3, j++, UVCG_FRAME);
@@ -1873,6 +2199,11 @@ static int __uvcg_cnt_strm(void *priv1, void *priv2, void *priv3, int n,
container_of(fmt, struct uvcg_mjpeg, fmt);
*size += sizeof(m->desc);
+ } else if (fmt->type == UVCG_H264) {
+ struct uvcg_h264 *h =
+ container_of(fmt, struct uvcg_h264, fmt);
+
+ *size += sizeof(h->desc);
} else {
return -EINVAL;
}
@@ -1880,10 +2211,23 @@ static int __uvcg_cnt_strm(void *priv1, void *priv2, void *priv3, int n,
break;
case UVCG_FRAME: {
struct uvcg_frame *frm = priv1;
- int sz = sizeof(frm->dw_frame_interval);
- *size += sizeof(frm->frame);
- *size += frm->frame.b_frame_interval_type * sz;
+ if (frm->fmt_type == UVCG_UNCOMPRESSED) {
+ struct uvc_frame_uncompressed uf =
+ frm->frame.uf;
+ *size +=
+ UVC_DT_FRAME_UNCOMPRESSED_SIZE(uf.bFrameIntervalType);
+ } else if (frm->fmt_type == UVCG_MJPEG) {
+ struct uvc_frame_mjpeg mf =
+ frm->frame.mf;
+ *size +=
+				UVC_DT_FRAME_MJPEG_SIZE(mf.bFrameIntervalType);
+ } else if (frm->fmt_type == UVCG_H264) {
+ struct uvc_frame_h264 hf =
+ frm->frame.hf;
+ *size +=
+				UVC_DT_FRAME_H264_SIZE(hf.bNumFrameIntervals);
+ }
}
break;
}
@@ -1949,6 +2293,15 @@ static int __uvcg_fill_strm(void *priv1, void *priv2, void *priv3, int n,
*dest += sizeof(m->desc);
mjp->bNumFrameDescriptors = fmt->num_frames;
mjp->bFormatIndex = n + 1;
+ } else if (fmt->type == UVCG_H264) {
+ struct uvc_format_h264 *hf = *dest;
+ struct uvcg_h264 *h =
+ container_of(fmt, struct uvcg_h264, fmt);
+
+ memcpy(*dest, &h->desc, sizeof(h->desc));
+ *dest += sizeof(h->desc);
+ hf->bNumFrameDescriptors = fmt->num_frames;
+ hf->bFormatIndex = n + 1;
} else {
return -EINVAL;
}
@@ -1956,21 +2309,46 @@ static int __uvcg_fill_strm(void *priv1, void *priv2, void *priv3, int n,
break;
case UVCG_FRAME: {
struct uvcg_frame *frm = priv1;
- struct uvc_descriptor_header *h = *dest;
- sz = sizeof(frm->frame);
- memcpy(*dest, &frm->frame, sz);
- *dest += sz;
- sz = frm->frame.b_frame_interval_type *
- sizeof(*frm->dw_frame_interval);
+ if (frm->fmt_type == UVCG_UNCOMPRESSED) {
+ struct uvc_frame_uncompressed *uf =
+ &frm->frame.uf;
+ uf->bLength = UVC_DT_FRAME_UNCOMPRESSED_SIZE(
+ uf->bFrameIntervalType);
+			uf->bFrameIndex = n + 1;
+ sz = UVC_DT_FRAME_UNCOMPRESSED_SIZE(0);
+ memcpy(*dest, uf, sz);
+ *dest += sz;
+ sz = uf->bFrameIntervalType *
+ sizeof(*frm->dw_frame_interval);
+ } else if (frm->fmt_type == UVCG_MJPEG) {
+ struct uvc_frame_mjpeg *mf =
+ &frm->frame.mf;
+ mf->bLength = UVC_DT_FRAME_MJPEG_SIZE(
+ mf->bFrameIntervalType);
+			mf->bFrameIndex = n + 1;
+ sz = UVC_DT_FRAME_MJPEG_SIZE(0);
+ memcpy(*dest, mf, sz);
+ *dest += sz;
+ sz = mf->bFrameIntervalType *
+ sizeof(*frm->dw_frame_interval);
+ } else if (frm->fmt_type == UVCG_H264) {
+ struct uvc_frame_h264 *hf =
+ &frm->frame.hf;
+ hf->bLength = UVC_DT_FRAME_H264_SIZE(
+ hf->bNumFrameIntervals);
+			hf->bFrameIndex = n + 1;
+ sz = UVC_DT_FRAME_H264_SIZE(0);
+ memcpy(*dest, hf, sz);
+ *dest += sz;
+ sz = hf->bNumFrameIntervals *
+ sizeof(*frm->dw_frame_interval);
+ } else {
+ return -EINVAL;
+ }
+
memcpy(*dest, frm->dw_frame_interval, sz);
*dest += sz;
- if (frm->fmt_type == UVCG_UNCOMPRESSED)
- h->bLength = UVC_DT_FRAME_UNCOMPRESSED_SIZE(
- frm->frame.b_frame_interval_type);
- else if (frm->fmt_type == UVCG_MJPEG)
- h->bLength = UVC_DT_FRAME_MJPEG_SIZE(
- frm->frame.b_frame_interval_type);
}
break;
}
@@ -2183,7 +2561,7 @@ end: \
return ret; \
} \
\
-UVC_ATTR(f_uvc_opts_, cname, aname)
+UVC_ATTR(f_uvc_opts_, cname, cname)
#define identity_conv(x) (x)
@@ -2278,6 +2656,9 @@ int uvcg_attach_configfs(struct f_uvc_opts *opts)
config_group_init_type_name(&uvcg_mjpeg_grp.group,
"mjpeg",
&uvcg_mjpeg_grp_type);
+ config_group_init_type_name(&uvcg_h264_grp.group,
+ "h264",
+ &uvcg_h264_grp_type);
config_group_init_type_name(&uvcg_default_color_matching.group,
"default",
&uvcg_default_color_matching_type);
@@ -2310,6 +2691,8 @@ int uvcg_attach_configfs(struct f_uvc_opts *opts)
&uvcg_streaming_grp.group);
configfs_add_default_group(&uvcg_mjpeg_grp.group,
&uvcg_streaming_grp.group);
+ configfs_add_default_group(&uvcg_h264_grp.group,
+ &uvcg_streaming_grp.group);
configfs_add_default_group(&uvcg_color_matching_grp.group,
&uvcg_streaming_grp.group);
configfs_add_default_group(&uvcg_streaming_class_grp.group,
diff --git a/drivers/usb/gadget/function/uvc_video.c b/drivers/usb/gadget/function/uvc_video.c
index 0f01c04..2ceb3ce 100644
--- a/drivers/usb/gadget/function/uvc_video.c
+++ b/drivers/usb/gadget/function/uvc_video.c
@@ -239,7 +239,11 @@ uvc_video_alloc_requests(struct uvc_video *video)
unsigned int i;
int ret = -ENOMEM;
- BUG_ON(video->req_size);
+ if (video->req_size) {
+ pr_err("%s: close the video node and reopen it\n",
+ __func__);
+ return -EBUSY;
+ }
req_size = video->ep->maxpacket
* max_t(unsigned int, video->ep->maxburst, 1)
diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
index 1332057..ab3633c 100644
--- a/drivers/usb/host/xhci.c
+++ b/drivers/usb/host/xhci.c
@@ -5045,6 +5045,61 @@ int xhci_get_core_id(struct usb_hcd *hcd)
return xhci->core_id;
}
+static int xhci_stop_endpoint(struct usb_hcd *hcd,
+ struct usb_device *udev, struct usb_host_endpoint *ep)
+{
+ struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+ unsigned int ep_index;
+ struct xhci_virt_device *virt_dev;
+ struct xhci_command *cmd;
+ unsigned long flags;
+ int ret = 0;
+
+ cmd = xhci_alloc_command(xhci, false, true, GFP_NOIO);
+ if (!cmd)
+ return -ENOMEM;
+
+ spin_lock_irqsave(&xhci->lock, flags);
+ virt_dev = xhci->devs[udev->slot_id];
+ if (!virt_dev) {
+ ret = -ENODEV;
+ goto err;
+ }
+
+ ep_index = xhci_get_endpoint_index(&ep->desc);
+ if (virt_dev->eps[ep_index].ring &&
+ virt_dev->eps[ep_index].ring->dequeue) {
+ ret = xhci_queue_stop_endpoint(xhci, cmd, udev->slot_id,
+ ep_index, 0);
+ if (ret)
+ goto err;
+
+ xhci_ring_cmd_db(xhci);
+ spin_unlock_irqrestore(&xhci->lock, flags);
+
+ /* Wait for stop endpoint command to finish */
+ wait_for_completion(cmd->completion);
+
+ if (cmd->status == COMP_CMD_ABORT ||
+ cmd->status == COMP_CMD_STOP) {
+ xhci_warn(xhci,
+ "stop endpoint command timeout for ep%d%s\n",
+ usb_endpoint_num(&ep->desc),
+ usb_endpoint_dir_in(&ep->desc) ? "in" : "out");
+ ret = -ETIME;
+ }
+ goto free_cmd;
+ }
+
+err:
+ spin_unlock_irqrestore(&xhci->lock, flags);
+free_cmd:
+ xhci_free_command(xhci, cmd);
+ return ret;
+}
+
+
+
static const struct hc_driver xhci_hc_driver = {
.description = "xhci-hcd",
.product_desc = "xHCI Host Controller",
@@ -5109,6 +5164,7 @@ static const struct hc_driver xhci_hc_driver = {
.get_sec_event_ring_phys_addr = xhci_get_sec_event_ring_phys_addr,
.get_xfer_ring_phys_addr = xhci_get_xfer_ring_phys_addr,
.get_core_id = xhci_get_core_id,
+ .stop_endpoint = xhci_stop_endpoint,
};
void xhci_init_driver(struct hc_driver *drv,
diff --git a/drivers/usb/misc/usbtest.c b/drivers/usb/misc/usbtest.c
index d94927e..e31f72b 100644
--- a/drivers/usb/misc/usbtest.c
+++ b/drivers/usb/misc/usbtest.c
@@ -209,12 +209,13 @@ get_endpoints(struct usbtest_dev *dev, struct usb_interface *intf)
return tmp;
}
- if (in) {
+ if (in)
dev->in_pipe = usb_rcvbulkpipe(udev,
in->desc.bEndpointAddress & USB_ENDPOINT_NUMBER_MASK);
+ if (out)
dev->out_pipe = usb_sndbulkpipe(udev,
out->desc.bEndpointAddress & USB_ENDPOINT_NUMBER_MASK);
- }
+
if (iso_in) {
dev->iso_in = &iso_in->desc;
dev->in_iso_pipe = usb_rcvisocpipe(udev,
diff --git a/drivers/usb/pd/policy_engine.c b/drivers/usb/pd/policy_engine.c
index 32d0e52..5cd7670 100644
--- a/drivers/usb/pd/policy_engine.c
+++ b/drivers/usb/pd/policy_engine.c
@@ -368,6 +368,7 @@ struct usbpd {
enum usbpd_state current_state;
bool hard_reset_recvd;
+ ktime_t hard_reset_recvd_time;
struct list_head rx_q;
spinlock_t rx_lock;
struct rx_msg *rx_ext_msg;
@@ -614,6 +615,9 @@ static int pd_send_msg(struct usbpd *pd, u8 msg_type, const u32 *data,
int ret;
u16 hdr;
+ if (pd->hard_reset_recvd)
+ return -EBUSY;
+
hdr = PD_MSG_HDR(msg_type, pd->current_dr, pd->current_pr,
pd->tx_msgid, num_data, pd->spec_rev);
@@ -805,11 +809,13 @@ static void phy_sig_received(struct usbpd *pd, enum pd_sig_type sig)
return;
}
- usbpd_dbg(&pd->dev, "hard reset received\n");
+ pd->hard_reset_recvd = true;
+ pd->hard_reset_recvd_time = ktime_get();
+
+ usbpd_err(&pd->dev, "hard reset received\n");
/* Force CC logic to source/sink to keep Rp/Rd unchanged */
set_power_role(pd, pd->current_pr);
- pd->hard_reset_recvd = true;
power_supply_set_property(pd->usb_psy,
POWER_SUPPLY_PROP_PD_IN_HARD_RESET, &val);
@@ -888,7 +894,7 @@ static struct rx_msg *pd_ext_msg_received(struct usbpd *pd, u16 header, u8 *buf,
/* allocate new message if first chunk */
rx_msg = kzalloc(sizeof(*rx_msg) +
PD_MSG_EXT_HDR_DATA_SIZE(ext_hdr),
- GFP_KERNEL);
+ GFP_ATOMIC);
if (!rx_msg)
return NULL;
@@ -941,7 +947,7 @@ static struct rx_msg *pd_ext_msg_received(struct usbpd *pd, u16 header, u8 *buf,
pd->rx_ext_msg = rx_msg;
- req = kzalloc(sizeof(*req), GFP_KERNEL);
+ req = kzalloc(sizeof(*req), GFP_ATOMIC);
if (!req)
goto queue_rx; /* return what we have anyway */
@@ -1015,7 +1021,7 @@ static void phy_msg_received(struct usbpd *pd, enum pd_sop_type sop,
PD_MSG_HDR_TYPE(header), PD_MSG_HDR_COUNT(header));
if (!PD_MSG_HDR_IS_EXTENDED(header)) {
- rx_msg = kzalloc(sizeof(*rx_msg) + len, GFP_KERNEL);
+ rx_msg = kzalloc(sizeof(*rx_msg) + len, GFP_ATOMIC);
if (!rx_msg)
return;
@@ -1074,6 +1080,9 @@ static void usbpd_set_state(struct usbpd *pd, enum usbpd_state next_state)
unsigned long flags;
int ret;
+ if (pd->hard_reset_recvd) /* let usbpd_sm handle it */
+ return;
+
usbpd_dbg(&pd->dev, "%s -> %s\n",
usbpd_state_strings[pd->current_state],
usbpd_state_strings[next_state]);
@@ -2044,8 +2053,13 @@ static void usbpd_sm(struct work_struct *w)
if (pd->current_pr == PR_SINK) {
usbpd_set_state(pd, PE_SNK_TRANSITION_TO_DEFAULT);
} else {
+ s64 delta = ktime_ms_delta(ktime_get(),
+ pd->hard_reset_recvd_time);
pd->current_state = PE_SRC_TRANSITION_TO_DEFAULT;
- kick_sm(pd, PS_HARD_RESET_TIME);
+ if (delta >= PS_HARD_RESET_TIME)
+ kick_sm(pd, 0);
+ else
+ kick_sm(pd, PS_HARD_RESET_TIME - (int)delta);
}
goto sm_done;
diff --git a/drivers/usb/pd/qpnp-pdphy.c b/drivers/usb/pd/qpnp-pdphy.c
index 6395ca2..2997976 100644
--- a/drivers/usb/pd/qpnp-pdphy.c
+++ b/drivers/usb/pd/qpnp-pdphy.c
@@ -582,6 +582,10 @@ static irqreturn_t pdphy_msg_tx_irq(int irq, void *data)
{
struct usb_pdphy *pdphy = data;
+ /* TX already aborted by received signal */
+ if (pdphy->tx_status != -EINPROGRESS)
+ return IRQ_HANDLED;
+
if (irq == pdphy->msg_tx_irq) {
pdphy->msg_tx_cnt++;
pdphy->tx_status = 0;
@@ -635,6 +639,10 @@ static irqreturn_t pdphy_sig_rx_irq_thread(int irq, void *data)
if (pdphy->signal_cb)
pdphy->signal_cb(pdphy->usbpd, frame_type);
+ if (pdphy->tx_status == -EINPROGRESS) {
+ pdphy->tx_status = -EBUSY;
+ wake_up(&pdphy->tx_waitq);
+ }
done:
return IRQ_HANDLED;
}
@@ -667,7 +675,7 @@ static int pd_phy_bist_mode(u8 bist_mode)
BIST_MODE_MASK | BIST_ENABLE, bist_mode | BIST_ENABLE);
}
-static irqreturn_t pdphy_msg_rx_irq_thread(int irq, void *data)
+static irqreturn_t pdphy_msg_rx_irq(int irq, void *data)
{
u8 size, rx_status, frame_type;
u8 buf[32];
@@ -808,8 +816,8 @@ static int pdphy_probe(struct platform_device *pdev)
return ret;
ret = pdphy_request_irq(pdphy, pdev->dev.of_node,
- &pdphy->msg_rx_irq, "msg-rx", NULL,
- pdphy_msg_rx_irq_thread, (IRQF_TRIGGER_RISING | IRQF_ONESHOT));
+ &pdphy->msg_rx_irq, "msg-rx", pdphy_msg_rx_irq,
+ NULL, (IRQF_TRIGGER_RISING | IRQF_ONESHOT));
if (ret < 0)
return ret;
diff --git a/drivers/usb/phy/phy-msm-qusb-v2.c b/drivers/usb/phy/phy-msm-qusb-v2.c
index bc27c31..81c39a3 100644
--- a/drivers/usb/phy/phy-msm-qusb-v2.c
+++ b/drivers/usb/phy/phy-msm-qusb-v2.c
@@ -26,6 +26,7 @@
#include <linux/regulator/machine.h>
#include <linux/usb/phy.h>
#include <linux/reset.h>
+#include <linux/debugfs.h>
/* QUSB2PHY_PWR_CTRL1 register related bits */
#define PWR_CTRL1_POWR_DOWN BIT(0)
@@ -65,13 +66,12 @@
#define BIAS_CTRL_2_OVERRIDE_VAL 0x28
+#define SQ_CTRL1_CHIRP_DISABLE 0x20
+#define SQ_CTRL2_CHIRP_DISABLE 0x80
+
/* PERIPH_SS_PHY_REFGEN_NORTH_BG_CTRL register bits */
#define BANDGAP_BYPASS BIT(0)
-unsigned int phy_tune1;
-module_param(phy_tune1, uint, 0644);
-MODULE_PARM_DESC(phy_tune1, "QUSB PHY v2 TUNE1");
-
enum qusb_phy_reg {
PORT_TUNE1,
PLL_COMMON_STATUS_ONE,
@@ -80,6 +80,8 @@ enum qusb_phy_reg {
PLL_CORE_INPUT_OVERRIDE,
TEST1,
BIAS_CTRL_2,
+ SQ_CTRL1,
+ SQ_CTRL2,
USB2_PHY_REG_MAX,
};
@@ -120,6 +122,10 @@ struct qusb_phy {
struct regulator_desc dpdm_rdesc;
struct regulator_dev *dpdm_rdev;
+ u32 sq_ctrl1_default;
+ u32 sq_ctrl2_default;
+ bool chirp_disable;
+
/* emulation targets specific */
void __iomem *emu_phy_base;
bool emulation;
@@ -129,6 +135,10 @@ struct qusb_phy {
int phy_pll_reset_seq_len;
int *emu_dcm_reset_seq;
int emu_dcm_reset_seq_len;
+
+ /* override TUNEX registers value */
+ struct dentry *root;
+ u8 tune[5];
};
static void qusb_phy_enable_clocks(struct qusb_phy *qphy, bool on)
@@ -410,7 +420,7 @@ static void qusb_phy_host_init(struct usb_phy *phy)
static int qusb_phy_init(struct usb_phy *phy)
{
struct qusb_phy *qphy = container_of(phy, struct qusb_phy, phy);
- int ret;
+ int ret, p_index;
u8 reg;
dev_dbg(phy->dev, "%s\n", __func__);
@@ -465,12 +475,12 @@ static int qusb_phy_init(struct usb_phy *phy)
qphy->base + qphy->phy_reg[PORT_TUNE1]);
}
- /* If phy_tune1 modparam set, override tune1 value */
- if (phy_tune1) {
- pr_debug("%s(): (modparam) TUNE1 val:0x%02x\n",
- __func__, phy_tune1);
- writel_relaxed(phy_tune1,
- qphy->base + qphy->phy_reg[PORT_TUNE1]);
+ /* if debugfs based tunex params are set, use that value. */
+ for (p_index = 0; p_index < 5; p_index++) {
+ if (qphy->tune[p_index])
+ writel_relaxed(qphy->tune[p_index],
+ qphy->base + qphy->phy_reg[PORT_TUNE1] +
+ (4 * p_index));
}
if (qphy->refgen_north_bg_reg)
@@ -651,6 +661,52 @@ static int qusb_phy_notify_disconnect(struct usb_phy *phy,
return 0;
}
+static int qusb_phy_disable_chirp(struct usb_phy *phy, bool disable)
+{
+ struct qusb_phy *qphy = container_of(phy, struct qusb_phy, phy);
+ int ret = 0;
+
+ dev_dbg(phy->dev, "%s qphy chirp disable %d disable %d\n", __func__,
+ qphy->chirp_disable, disable);
+
+ mutex_lock(&qphy->lock);
+
+ if (qphy->chirp_disable == disable) {
+ ret = -EALREADY;
+ goto done;
+ }
+
+ qphy->chirp_disable = disable;
+
+ if (disable) {
+ qphy->sq_ctrl1_default =
+ readl_relaxed(qphy->base + qphy->phy_reg[SQ_CTRL1]);
+ qphy->sq_ctrl2_default =
+ readl_relaxed(qphy->base + qphy->phy_reg[SQ_CTRL2]);
+
+ writel_relaxed(SQ_CTRL1_CHIRP_DISABLE,
+ qphy->base + qphy->phy_reg[SQ_CTRL1]);
+ readl_relaxed(qphy->base + qphy->phy_reg[SQ_CTRL1]);
+
+		writel_relaxed(SQ_CTRL2_CHIRP_DISABLE,
+ qphy->base + qphy->phy_reg[SQ_CTRL2]);
+ readl_relaxed(qphy->base + qphy->phy_reg[SQ_CTRL2]);
+
+ goto done;
+ }
+
+ writel_relaxed(qphy->sq_ctrl1_default,
+ qphy->base + qphy->phy_reg[SQ_CTRL1]);
+ readl_relaxed(qphy->base + qphy->phy_reg[SQ_CTRL1]);
+
+ writel_relaxed(qphy->sq_ctrl2_default,
+ qphy->base + qphy->phy_reg[SQ_CTRL2]);
+ readl_relaxed(qphy->base + qphy->phy_reg[SQ_CTRL2]);
+done:
+ mutex_unlock(&qphy->lock);
+ return ret;
+}
+
static int qusb_phy_dpdm_regulator_enable(struct regulator_dev *rdev)
{
int ret = 0;
@@ -736,6 +792,38 @@ static int qusb_phy_regulator_init(struct qusb_phy *qphy)
return 0;
}
+static int qusb_phy_create_debugfs(struct qusb_phy *qphy)
+{
+ struct dentry *file;
+ int ret = 0, i;
+ char name[6];
+
+ qphy->root = debugfs_create_dir(dev_name(qphy->phy.dev), NULL);
+ if (IS_ERR_OR_NULL(qphy->root)) {
+ dev_err(qphy->phy.dev,
+ "can't create debugfs root for %s\n",
+ dev_name(qphy->phy.dev));
+ ret = -ENOMEM;
+ goto create_err;
+ }
+
+ for (i = 0; i < 5; i++) {
+ snprintf(name, sizeof(name), "tune%d", (i + 1));
+ file = debugfs_create_x8(name, 0644, qphy->root,
+ &qphy->tune[i]);
+ if (IS_ERR_OR_NULL(file)) {
+ dev_err(qphy->phy.dev,
+ "can't create debugfs entry for %s\n", name);
+ debugfs_remove_recursive(qphy->root);
+			ret = -ENOMEM;
+ goto create_err;
+ }
+ }
+
+create_err:
+ return ret;
+}
+
static int qusb_phy_probe(struct platform_device *pdev)
{
struct qusb_phy *qphy;
@@ -1004,6 +1092,7 @@ static int qusb_phy_probe(struct platform_device *pdev)
qphy->phy.type = USB_PHY_TYPE_USB2;
qphy->phy.notify_connect = qusb_phy_notify_connect;
qphy->phy.notify_disconnect = qusb_phy_notify_disconnect;
+ qphy->phy.disable_chirp = qusb_phy_disable_chirp;
ret = usb_add_phy_dev(&qphy->phy);
if (ret)
@@ -1013,6 +1102,8 @@ static int qusb_phy_probe(struct platform_device *pdev)
if (ret)
usb_remove_phy(&qphy->phy);
+ qusb_phy_create_debugfs(qphy);
+
return ret;
}
@@ -1023,6 +1114,7 @@ static int qusb_phy_remove(struct platform_device *pdev)
usb_remove_phy(&qphy->phy);
qusb_phy_enable_clocks(qphy, false);
qusb_phy_enable_power(qphy, false);
+ debugfs_remove_recursive(qphy->root);
return 0;
}
diff --git a/drivers/usb/phy/phy-msm-qusb.c b/drivers/usb/phy/phy-msm-qusb.c
index e355e35..f7ff9e8f 100644
--- a/drivers/usb/phy/phy-msm-qusb.c
+++ b/drivers/usb/phy/phy-msm-qusb.c
@@ -187,15 +187,14 @@ static int qusb_phy_config_vdd(struct qusb_phy *qphy, int high)
return ret;
}
-static int qusb_phy_enable_power(struct qusb_phy *qphy, bool on,
- bool toggle_vdd)
+static int qusb_phy_enable_power(struct qusb_phy *qphy, bool on)
{
int ret = 0;
dev_dbg(qphy->phy.dev, "%s turn %s regulators. power_enabled:%d\n",
__func__, on ? "on" : "off", qphy->power_enabled);
- if (toggle_vdd && qphy->power_enabled == on) {
+ if (qphy->power_enabled == on) {
dev_dbg(qphy->phy.dev, "PHYs' regulators are already ON.\n");
return 0;
}
@@ -203,19 +202,17 @@ static int qusb_phy_enable_power(struct qusb_phy *qphy, bool on,
if (!on)
goto disable_vdda33;
- if (toggle_vdd) {
- ret = qusb_phy_config_vdd(qphy, true);
- if (ret) {
- dev_err(qphy->phy.dev, "Unable to config VDD:%d\n",
- ret);
- goto err_vdd;
- }
+ ret = qusb_phy_config_vdd(qphy, true);
+ if (ret) {
+ dev_err(qphy->phy.dev, "Unable to config VDD:%d\n",
+ ret);
+ goto err_vdd;
+ }
- ret = regulator_enable(qphy->vdd);
- if (ret) {
- dev_err(qphy->phy.dev, "Unable to enable VDD\n");
- goto unconfig_vdd;
- }
+ ret = regulator_enable(qphy->vdd);
+ if (ret) {
+ dev_err(qphy->phy.dev, "Unable to enable VDD\n");
+ goto unconfig_vdd;
}
ret = regulator_set_load(qphy->vdda18, QUSB2PHY_1P8_HPM_LOAD);
@@ -258,8 +255,7 @@ static int qusb_phy_enable_power(struct qusb_phy *qphy, bool on,
goto unset_vdd33;
}
- if (toggle_vdd)
- qphy->power_enabled = true;
+ qphy->power_enabled = true;
pr_debug("%s(): QUSB PHY's regulators are turned ON.\n", __func__);
return ret;
@@ -297,21 +293,18 @@ static int qusb_phy_enable_power(struct qusb_phy *qphy, bool on,
dev_err(qphy->phy.dev, "Unable to set LPM of vdda18\n");
disable_vdd:
- if (toggle_vdd) {
- ret = regulator_disable(qphy->vdd);
- if (ret)
- dev_err(qphy->phy.dev, "Unable to disable vdd:%d\n",
+ ret = regulator_disable(qphy->vdd);
+ if (ret)
+ dev_err(qphy->phy.dev, "Unable to disable vdd:%d\n",
ret);
unconfig_vdd:
- ret = qusb_phy_config_vdd(qphy, false);
- if (ret)
- dev_err(qphy->phy.dev, "Unable unconfig VDD:%d\n",
+ ret = qusb_phy_config_vdd(qphy, false);
+ if (ret)
+ dev_err(qphy->phy.dev, "Unable unconfig VDD:%d\n",
ret);
- }
err_vdd:
- if (toggle_vdd)
- qphy->power_enabled = false;
+ qphy->power_enabled = false;
dev_dbg(qphy->phy.dev, "QUSB PHY's regulators are turned OFF.\n");
return ret;
}
@@ -375,7 +368,7 @@ static int qusb_phy_init(struct usb_phy *phy)
dev_dbg(phy->dev, "%s\n", __func__);
- ret = qusb_phy_enable_power(qphy, true, true);
+ ret = qusb_phy_enable_power(qphy, true);
if (ret)
return ret;
@@ -623,7 +616,7 @@ static int qusb_phy_set_suspend(struct usb_phy *phy, int suspend)
qusb_phy_enable_clocks(qphy, false);
- qusb_phy_enable_power(qphy, false, true);
+ qusb_phy_enable_power(qphy, false);
}
qphy->suspended = true;
} else {
@@ -635,7 +628,7 @@ static int qusb_phy_set_suspend(struct usb_phy *phy, int suspend)
writel_relaxed(0x00,
qphy->base + QUSB2PHY_PORT_INTR_CTRL);
} else {
- qusb_phy_enable_power(qphy, true, true);
+ qusb_phy_enable_power(qphy, true);
qusb_phy_enable_clocks(qphy, true);
}
qphy->suspended = false;
@@ -677,7 +670,7 @@ static int qusb_phy_dpdm_regulator_enable(struct regulator_dev *rdev)
__func__, qphy->dpdm_enable);
if (!qphy->dpdm_enable) {
- ret = qusb_phy_enable_power(qphy, true, false);
+ ret = qusb_phy_enable_power(qphy, true);
if (ret < 0) {
dev_dbg(qphy->phy.dev,
"dpdm regulator enable failed:%d\n", ret);
@@ -698,11 +691,15 @@ static int qusb_phy_dpdm_regulator_disable(struct regulator_dev *rdev)
__func__, qphy->dpdm_enable);
if (qphy->dpdm_enable) {
- ret = qusb_phy_enable_power(qphy, false, false);
- if (ret < 0) {
- dev_dbg(qphy->phy.dev,
- "dpdm regulator disable failed:%d\n", ret);
- return ret;
+ if (!qphy->cable_connected) {
+ dev_dbg(qphy->phy.dev, "turn off for HVDCP case\n");
+ ret = qusb_phy_enable_power(qphy, false);
+ if (ret < 0) {
+ dev_dbg(qphy->phy.dev,
+ "dpdm regulator disable failed:%d\n",
+ ret);
+ return ret;
+ }
}
qphy->dpdm_enable = false;
}
@@ -1029,7 +1026,7 @@ static int qusb_phy_remove(struct platform_device *pdev)
qphy->clocks_enabled = false;
}
- qusb_phy_enable_power(qphy, false, true);
+ qusb_phy_enable_power(qphy, false);
return 0;
}
diff --git a/drivers/usb/serial/garmin_gps.c b/drivers/usb/serial/garmin_gps.c
index b2f2e87..91e7e3a 100644
--- a/drivers/usb/serial/garmin_gps.c
+++ b/drivers/usb/serial/garmin_gps.c
@@ -138,6 +138,7 @@ struct garmin_data {
__u8 privpkt[4*6];
spinlock_t lock;
struct list_head pktlist;
+ struct usb_anchor write_urbs;
};
@@ -905,7 +906,7 @@ static int garmin_init_session(struct usb_serial_port *port)
sizeof(GARMIN_START_SESSION_REQ), 0);
if (status < 0)
- break;
+ goto err_kill_urbs;
}
if (status > 0)
@@ -913,6 +914,12 @@ static int garmin_init_session(struct usb_serial_port *port)
}
return status;
+
+err_kill_urbs:
+ usb_kill_anchored_urbs(&garmin_data_p->write_urbs);
+ usb_kill_urb(port->interrupt_in_urb);
+
+ return status;
}
@@ -930,7 +937,6 @@ static int garmin_open(struct tty_struct *tty, struct usb_serial_port *port)
spin_unlock_irqrestore(&garmin_data_p->lock, flags);
/* shutdown any bulk reads that might be going on */
- usb_kill_urb(port->write_urb);
usb_kill_urb(port->read_urb);
if (garmin_data_p->state == STATE_RESET)
@@ -953,7 +959,7 @@ static void garmin_close(struct usb_serial_port *port)
/* shutdown our urbs */
usb_kill_urb(port->read_urb);
- usb_kill_urb(port->write_urb);
+ usb_kill_anchored_urbs(&garmin_data_p->write_urbs);
/* keep reset state so we know that we must start a new session */
if (garmin_data_p->state != STATE_RESET)
@@ -1037,12 +1043,14 @@ static int garmin_write_bulk(struct usb_serial_port *port,
}
/* send it down the pipe */
+ usb_anchor_urb(urb, &garmin_data_p->write_urbs);
status = usb_submit_urb(urb, GFP_ATOMIC);
if (status) {
dev_err(&port->dev,
"%s - usb_submit_urb(write bulk) failed with status = %d\n",
__func__, status);
count = status;
+ usb_unanchor_urb(urb);
kfree(buffer);
}
@@ -1401,9 +1409,16 @@ static int garmin_port_probe(struct usb_serial_port *port)
garmin_data_p->state = 0;
garmin_data_p->flags = 0;
garmin_data_p->count = 0;
+ init_usb_anchor(&garmin_data_p->write_urbs);
usb_set_serial_port_data(port, garmin_data_p);
status = garmin_init_session(port);
+ if (status)
+ goto err_free;
+
+ return 0;
+err_free:
+ kfree(garmin_data_p);
return status;
}
@@ -1413,6 +1428,7 @@ static int garmin_port_remove(struct usb_serial_port *port)
{
struct garmin_data *garmin_data_p = usb_get_serial_port_data(port);
+ usb_kill_anchored_urbs(&garmin_data_p->write_urbs);
usb_kill_urb(port->interrupt_in_urb);
del_timer_sync(&garmin_data_p->timer);
kfree(garmin_data_p);
diff --git a/drivers/usb/serial/qcserial.c b/drivers/usb/serial/qcserial.c
index e1c1e32..4516291 100644
--- a/drivers/usb/serial/qcserial.c
+++ b/drivers/usb/serial/qcserial.c
@@ -148,6 +148,7 @@ static const struct usb_device_id id_table[] = {
{DEVICE_SWI(0x1199, 0x68a2)}, /* Sierra Wireless MC7710 */
{DEVICE_SWI(0x1199, 0x68c0)}, /* Sierra Wireless MC7304/MC7354 */
{DEVICE_SWI(0x1199, 0x901c)}, /* Sierra Wireless EM7700 */
+ {DEVICE_SWI(0x1199, 0x901e)}, /* Sierra Wireless EM7355 QDL */
{DEVICE_SWI(0x1199, 0x901f)}, /* Sierra Wireless EM7355 */
{DEVICE_SWI(0x1199, 0x9040)}, /* Sierra Wireless Modem */
{DEVICE_SWI(0x1199, 0x9041)}, /* Sierra Wireless MC7305/MC7355 */
diff --git a/drivers/video/backlight/adp5520_bl.c b/drivers/video/backlight/adp5520_bl.c
index dd88ba1..35373e2 100644
--- a/drivers/video/backlight/adp5520_bl.c
+++ b/drivers/video/backlight/adp5520_bl.c
@@ -332,10 +332,18 @@ static int adp5520_bl_probe(struct platform_device *pdev)
}
platform_set_drvdata(pdev, bl);
- ret |= adp5520_bl_setup(bl);
+ ret = adp5520_bl_setup(bl);
+ if (ret) {
+ dev_err(&pdev->dev, "failed to setup\n");
+ if (data->pdata->en_ambl_sens)
+ sysfs_remove_group(&bl->dev.kobj,
+ &adp5520_bl_attr_group);
+ return ret;
+ }
+
backlight_update_status(bl);
- return ret;
+ return 0;
}
static int adp5520_bl_remove(struct platform_device *pdev)
diff --git a/drivers/video/backlight/lcd.c b/drivers/video/backlight/lcd.c
index 7de847d..4b40c6a 100644
--- a/drivers/video/backlight/lcd.c
+++ b/drivers/video/backlight/lcd.c
@@ -226,6 +226,8 @@ struct lcd_device *lcd_device_register(const char *name, struct device *parent,
dev_set_name(&new_ld->dev, "%s", name);
dev_set_drvdata(&new_ld->dev, devdata);
+ new_ld->ops = ops;
+
rc = device_register(&new_ld->dev);
if (rc) {
put_device(&new_ld->dev);
@@ -238,8 +240,6 @@ struct lcd_device *lcd_device_register(const char *name, struct device *parent,
return ERR_PTR(rc);
}
- new_ld->ops = ops;
-
return new_ld;
}
EXPORT_SYMBOL(lcd_device_register);
diff --git a/drivers/video/fbdev/pmag-ba-fb.c b/drivers/video/fbdev/pmag-ba-fb.c
index 5872bc4..df02fb4 100644
--- a/drivers/video/fbdev/pmag-ba-fb.c
+++ b/drivers/video/fbdev/pmag-ba-fb.c
@@ -129,7 +129,7 @@ static struct fb_ops pmagbafb_ops = {
/*
* Turn the hardware cursor off.
*/
-static void __init pmagbafb_erase_cursor(struct fb_info *info)
+static void pmagbafb_erase_cursor(struct fb_info *info)
{
struct pmagbafb_par *par = info->par;
diff --git a/drivers/xen/manage.c b/drivers/xen/manage.c
index 26e5e85..9122ba2 100644
--- a/drivers/xen/manage.c
+++ b/drivers/xen/manage.c
@@ -277,8 +277,16 @@ static void sysrq_handler(struct xenbus_watch *watch, const char **vec,
err = xenbus_transaction_start(&xbt);
if (err)
return;
- if (!xenbus_scanf(xbt, "control", "sysrq", "%c", &sysrq_key)) {
- pr_err("Unable to read sysrq code in control/sysrq\n");
+ err = xenbus_scanf(xbt, "control", "sysrq", "%c", &sysrq_key);
+ if (err < 0) {
+ /*
+ * The Xenstore watch fires directly after registering it and
+ * after a suspend/resume cycle. So ENOENT is no error but
+ * might happen in those cases.
+ */
+ if (err != -ENOENT)
+ pr_err("Error %d reading sysrq code in control/sysrq\n",
+ err);
xenbus_transaction_end(xbt, 1);
return;
}
diff --git a/fs/cifs/dir.c b/fs/cifs/dir.c
index dd3e236..d9cbda2 100644
--- a/fs/cifs/dir.c
+++ b/fs/cifs/dir.c
@@ -193,7 +193,8 @@ check_name(struct dentry *direntry, struct cifs_tcon *tcon)
struct cifs_sb_info *cifs_sb = CIFS_SB(direntry->d_sb);
int i;
- if (unlikely(direntry->d_name.len >
+ if (unlikely(tcon->fsAttrInfo.MaxPathNameComponentLength &&
+ direntry->d_name.len >
le32_to_cpu(tcon->fsAttrInfo.MaxPathNameComponentLength)))
return -ENAMETOOLONG;
@@ -509,7 +510,7 @@ cifs_atomic_open(struct inode *inode, struct dentry *direntry,
rc = check_name(direntry, tcon);
if (rc)
- goto out_free_xid;
+ goto out;
server = tcon->ses->server;
diff --git a/fs/coda/upcall.c b/fs/coda/upcall.c
index f6c6c8a..7289f0a 100644
--- a/fs/coda/upcall.c
+++ b/fs/coda/upcall.c
@@ -446,8 +446,7 @@ int venus_fsync(struct super_block *sb, struct CodaFid *fid)
UPARG(CODA_FSYNC);
inp->coda_fsync.VFid = *fid;
- error = coda_upcall(coda_vcp(sb), sizeof(union inputArgs),
- &outsize, inp);
+ error = coda_upcall(coda_vcp(sb), insize, &outsize, inp);
CODA_FREE(inp, insize);
return error;
diff --git a/fs/crypto/Makefile b/fs/crypto/Makefile
index f17684c..facf63c 100644
--- a/fs/crypto/Makefile
+++ b/fs/crypto/Makefile
@@ -1,3 +1,4 @@
obj-$(CONFIG_FS_ENCRYPTION) += fscrypto.o
+ccflags-y += -Ifs/ext4
fscrypto-y := crypto.o fname.o policy.o keyinfo.o
diff --git a/fs/crypto/crypto.c b/fs/crypto/crypto.c
index 61cfcce..5c24071 100644
--- a/fs/crypto/crypto.c
+++ b/fs/crypto/crypto.c
@@ -28,6 +28,7 @@
#include <linux/dcache.h>
#include <linux/namei.h>
#include <linux/fscrypto.h>
+#include "ext4_ice.h"
static unsigned int num_prealloc_crypto_pages = 32;
static unsigned int num_prealloc_crypto_ctxs = 128;
@@ -406,6 +407,9 @@ static void completion_pages(struct work_struct *work)
bio_for_each_segment_all(bv, bio, i) {
struct page *page = bv->bv_page;
+ if (ext4_is_ice_enabled())
+ SetPageUptodate(page);
+ else {
int ret = fscrypt_decrypt_page(page);
if (ret) {
@@ -414,6 +418,7 @@ static void completion_pages(struct work_struct *work)
} else {
SetPageUptodate(page);
}
+ }
unlock_page(page);
}
fscrypt_release_ctx(ctx);
diff --git a/fs/crypto/keyinfo.c b/fs/crypto/keyinfo.c
index a755fa1..106e55c 100644
--- a/fs/crypto/keyinfo.c
+++ b/fs/crypto/keyinfo.c
@@ -11,6 +11,7 @@
#include <keys/user-type.h>
#include <linux/scatterlist.h>
#include <linux/fscrypto.h>
+#include "ext4_ice.h"
static void derive_crypt_complete(struct crypto_async_request *req, int rc)
{
@@ -135,13 +136,17 @@ static int validate_user_key(struct fscrypt_info *crypt_info,
}
static int determine_cipher_type(struct fscrypt_info *ci, struct inode *inode,
- const char **cipher_str_ret, int *keysize_ret)
+ const char **cipher_str_ret, int *keysize_ret, int *fname)
{
if (S_ISREG(inode->i_mode)) {
if (ci->ci_data_mode == FS_ENCRYPTION_MODE_AES_256_XTS) {
*cipher_str_ret = "xts(aes)";
*keysize_ret = FS_AES_256_XTS_KEY_SIZE;
return 0;
+ } else if (ci->ci_data_mode == FS_ENCRYPTION_MODE_PRIVATE) {
+ *cipher_str_ret = "bugon";
+ *keysize_ret = FS_AES_256_XTS_KEY_SIZE;
+ return 0;
}
pr_warn_once("fscrypto: unsupported contents encryption mode "
"%d for inode %lu\n",
@@ -153,6 +158,7 @@ static int determine_cipher_type(struct fscrypt_info *ci, struct inode *inode,
if (ci->ci_filename_mode == FS_ENCRYPTION_MODE_AES_256_CTS) {
*cipher_str_ret = "cts(cbc(aes))";
*keysize_ret = FS_AES_256_CTS_KEY_SIZE;
+ *fname = 1;
return 0;
}
pr_warn_once("fscrypto: unsupported filenames encryption mode "
@@ -172,9 +178,26 @@ static void put_crypt_info(struct fscrypt_info *ci)
return;
crypto_free_skcipher(ci->ci_ctfm);
+ memzero_explicit(ci->ci_raw_key,
+ sizeof(ci->ci_raw_key));
kmem_cache_free(fscrypt_info_cachep, ci);
}
+static int fs_data_encryption_mode(void)
+{
+ return ext4_is_ice_enabled() ? FS_ENCRYPTION_MODE_PRIVATE :
+ FS_ENCRYPTION_MODE_AES_256_XTS;
+}
+
+int fs_using_hardware_encryption(struct inode *inode)
+{
+ struct fscrypt_info *ci = inode->i_crypt_info;
+
+ return S_ISREG(inode->i_mode) && ci &&
+ ci->ci_data_mode == FS_ENCRYPTION_MODE_PRIVATE;
+}
+EXPORT_SYMBOL(fs_using_hardware_encryption);
+
int fscrypt_get_encryption_info(struct inode *inode)
{
struct fscrypt_info *crypt_info;
@@ -182,8 +205,8 @@ int fscrypt_get_encryption_info(struct inode *inode)
struct crypto_skcipher *ctfm;
const char *cipher_str;
int keysize;
- u8 *raw_key = NULL;
int res;
+ int fname = 0;
if (inode->i_crypt_info)
return 0;
@@ -200,7 +223,7 @@ int fscrypt_get_encryption_info(struct inode *inode)
if (!fscrypt_dummy_context_enabled(inode))
return res;
ctx.format = FS_ENCRYPTION_CONTEXT_FORMAT_V1;
- ctx.contents_encryption_mode = FS_ENCRYPTION_MODE_AES_256_XTS;
+ ctx.contents_encryption_mode = fs_data_encryption_mode();
ctx.filenames_encryption_mode = FS_ENCRYPTION_MODE_AES_256_CTS;
ctx.flags = 0;
} else if (res != sizeof(ctx)) {
@@ -224,7 +247,8 @@ int fscrypt_get_encryption_info(struct inode *inode)
memcpy(crypt_info->ci_master_key, ctx.master_key_descriptor,
sizeof(crypt_info->ci_master_key));
- res = determine_cipher_type(crypt_info, inode, &cipher_str, &keysize);
+ res = determine_cipher_type(crypt_info, inode, &cipher_str, &keysize,
+ &fname);
if (res)
goto out;
@@ -233,24 +257,21 @@ int fscrypt_get_encryption_info(struct inode *inode)
* crypto API as part of key derivation.
*/
res = -ENOMEM;
- raw_key = kmalloc(FS_MAX_KEY_SIZE, GFP_NOFS);
- if (!raw_key)
- goto out;
if (fscrypt_dummy_context_enabled(inode)) {
- memset(raw_key, 0x42, FS_AES_256_XTS_KEY_SIZE);
+ memset(crypt_info->ci_raw_key, 0x42, FS_AES_256_XTS_KEY_SIZE);
goto got_key;
}
- res = validate_user_key(crypt_info, &ctx, raw_key,
+ res = validate_user_key(crypt_info, &ctx, crypt_info->ci_raw_key,
FS_KEY_DESC_PREFIX, FS_KEY_DESC_PREFIX_SIZE);
if (res && inode->i_sb->s_cop->key_prefix) {
u8 *prefix = NULL;
int prefix_size, res2;
prefix_size = inode->i_sb->s_cop->key_prefix(inode, &prefix);
- res2 = validate_user_key(crypt_info, &ctx, raw_key,
- prefix, prefix_size);
+ res2 = validate_user_key(crypt_info, &ctx,
+ crypt_info->ci_raw_key, prefix, prefix_size);
if (res2) {
if (res2 == -ENOKEY)
res = -ENOKEY;
@@ -260,28 +281,33 @@ int fscrypt_get_encryption_info(struct inode *inode)
goto out;
}
got_key:
- ctfm = crypto_alloc_skcipher(cipher_str, 0, 0);
- if (!ctfm || IS_ERR(ctfm)) {
- res = ctfm ? PTR_ERR(ctfm) : -ENOMEM;
- printk(KERN_DEBUG
- "%s: error %d (inode %u) allocating crypto tfm\n",
- __func__, res, (unsigned) inode->i_ino);
+ if (crypt_info->ci_data_mode != FS_ENCRYPTION_MODE_PRIVATE || fname) {
+ ctfm = crypto_alloc_skcipher(cipher_str, 0, 0);
+ if (!ctfm || IS_ERR(ctfm)) {
+ res = ctfm ? PTR_ERR(ctfm) : -ENOMEM;
+ pr_err("%s: error %d inode %u allocating crypto tfm\n",
+ __func__, res, (unsigned int) inode->i_ino);
+ goto out;
+ }
+ crypt_info->ci_ctfm = ctfm;
+ crypto_skcipher_clear_flags(ctfm, ~0);
+ crypto_skcipher_set_flags(ctfm, CRYPTO_TFM_REQ_WEAK_KEY);
+ res = crypto_skcipher_setkey(ctfm, crypt_info->ci_raw_key,
+ keysize);
+ if (res)
+ goto out;
+ } else if (!ext4_is_ice_enabled()) {
+ pr_warn("%s: ICE support not available\n",
+ __func__);
+ res = -EINVAL;
goto out;
}
- crypt_info->ci_ctfm = ctfm;
- crypto_skcipher_clear_flags(ctfm, ~0);
- crypto_skcipher_set_flags(ctfm, CRYPTO_TFM_REQ_WEAK_KEY);
- res = crypto_skcipher_setkey(ctfm, raw_key, keysize);
- if (res)
- goto out;
-
if (cmpxchg(&inode->i_crypt_info, NULL, crypt_info) == NULL)
crypt_info = NULL;
out:
if (res == -ENOKEY)
res = 0;
put_crypt_info(crypt_info);
- kzfree(raw_key);
return res;
}
EXPORT_SYMBOL(fscrypt_get_encryption_info);
diff --git a/fs/direct-io.c b/fs/direct-io.c
index c6220a2..bf03a92 100644
--- a/fs/direct-io.c
+++ b/fs/direct-io.c
@@ -411,6 +411,7 @@ static inline void dio_bio_submit(struct dio *dio, struct dio_submit *sdio)
if (dio->is_async && dio->op == REQ_OP_READ && dio->should_dirty)
bio_set_pages_dirty(bio);
+ bio->bi_dio_inode = dio->inode;
dio->bio_bdev = bio->bi_bdev;
if (sdio->submit_io) {
@@ -424,6 +425,18 @@ static inline void dio_bio_submit(struct dio *dio, struct dio_submit *sdio)
sdio->logical_offset_in_bio = 0;
}
+struct inode *dio_bio_get_inode(struct bio *bio)
+{
+ struct inode *inode = NULL;
+
+ if (bio == NULL)
+ return NULL;
+
+ inode = bio->bi_dio_inode;
+
+ return inode;
+}
+EXPORT_SYMBOL(dio_bio_get_inode);
/*
* Release any resources in case of a failure
*/
diff --git a/fs/ext4/Kconfig b/fs/ext4/Kconfig
index e38039f..e9232a0 100644
--- a/fs/ext4/Kconfig
+++ b/fs/ext4/Kconfig
@@ -109,10 +109,16 @@
decrypted pages in the page cache.
config EXT4_FS_ENCRYPTION
- bool
- default y
+ bool "Ext4 FS Encryption"
+ default n
depends on EXT4_ENCRYPTION
+config EXT4_FS_ICE_ENCRYPTION
+ bool "Ext4 Encryption with ICE support"
+ default n
+ depends on EXT4_FS_ENCRYPTION
+ depends on PFK
+
config EXT4_DEBUG
bool "EXT4 debugging support"
depends on EXT4_FS
diff --git a/fs/ext4/Makefile b/fs/ext4/Makefile
index 354103f..d9e563a 100644
--- a/fs/ext4/Makefile
+++ b/fs/ext4/Makefile
@@ -1,6 +1,7 @@
#
# Makefile for the linux ext4-filesystem routines.
#
+ccflags-y += -Ifs/crypto
obj-$(CONFIG_EXT4_FS) += ext4.o
@@ -12,3 +13,4 @@
ext4-$(CONFIG_EXT4_FS_POSIX_ACL) += acl.o
ext4-$(CONFIG_EXT4_FS_SECURITY) += xattr_security.o
+ext4-$(CONFIG_EXT4_FS_ICE_ENCRYPTION) += ext4_ice.o
diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index 20ee0e4..9b67de7 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -2352,6 +2352,7 @@ static inline void ext4_fname_free_filename(struct ext4_filename *fname) { }
#define fscrypt_fname_free_buffer fscrypt_notsupp_fname_free_buffer
#define fscrypt_fname_disk_to_usr fscrypt_notsupp_fname_disk_to_usr
#define fscrypt_fname_usr_to_disk fscrypt_notsupp_fname_usr_to_disk
+#define fs_using_hardware_encryption fs_notsupp_using_hardware_encryption
#endif
/* dir.c */
diff --git a/fs/ext4/ext4_ice.c b/fs/ext4/ext4_ice.c
new file mode 100644
index 0000000..25f79ae
--- /dev/null
+++ b/fs/ext4/ext4_ice.c
@@ -0,0 +1,107 @@
+/* Copyright (c) 2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include "ext4_ice.h"
+
+/*
+ * Retrieves encryption key from the inode
+ */
+char *ext4_get_ice_encryption_key(const struct inode *inode)
+{
+ struct fscrypt_info *ci = NULL;
+
+ if (!inode)
+ return NULL;
+
+ ci = inode->i_crypt_info;
+ if (!ci)
+ return NULL;
+
+ return &(ci->ci_raw_key[0]);
+}
+
+/*
+ * Retrieves encryption salt from the inode
+ */
+char *ext4_get_ice_encryption_salt(const struct inode *inode)
+{
+ struct fscrypt_info *ci = NULL;
+
+ if (!inode)
+ return NULL;
+
+ ci = inode->i_crypt_info;
+ if (!ci)
+ return NULL;
+
+ return &(ci->ci_raw_key[ext4_get_ice_encryption_key_size(inode)]);
+}
+
+/*
+ * returns true if the cipher mode in inode is AES XTS
+ */
+int ext4_is_aes_xts_cipher(const struct inode *inode)
+{
+ struct fscrypt_info *ci = NULL;
+
+ ci = inode->i_crypt_info;
+ if (!ci)
+ return 0;
+
+ return (ci->ci_data_mode == FS_ENCRYPTION_MODE_PRIVATE);
+}
+
+/*
+ * returns true if encryption info in both inodes is equal
+ */
+int ext4_is_ice_encryption_info_equal(const struct inode *inode1,
+ const struct inode *inode2)
+{
+ char *key1 = NULL;
+ char *key2 = NULL;
+ char *salt1 = NULL;
+ char *salt2 = NULL;
+
+ if (!inode1 || !inode2)
+ return 0;
+
+ if (inode1 == inode2)
+ return 1;
+
+ /* both do not belong to ice, so we don't care, they are equal for us */
+ if (!ext4_should_be_processed_by_ice(inode1) &&
+ !ext4_should_be_processed_by_ice(inode2))
+ return 1;
+
+ /* one belongs to ice, the other does not -> not equal */
+ if (ext4_should_be_processed_by_ice(inode1) ^
+ ext4_should_be_processed_by_ice(inode2))
+ return 0;
+
+ key1 = ext4_get_ice_encryption_key(inode1);
+ key2 = ext4_get_ice_encryption_key(inode2);
+ salt1 = ext4_get_ice_encryption_salt(inode1);
+ salt2 = ext4_get_ice_encryption_salt(inode2);
+
+ /* key and salt should not be null by this point */
+ if (!key1 || !key2 || !salt1 || !salt2 ||
+ (ext4_get_ice_encryption_key_size(inode1) !=
+ ext4_get_ice_encryption_key_size(inode2)) ||
+ (ext4_get_ice_encryption_salt_size(inode1) !=
+ ext4_get_ice_encryption_salt_size(inode2)))
+ return 0;
+
+ return ((memcmp(key1, key2,
+ ext4_get_ice_encryption_key_size(inode1)) == 0) &&
+ (memcmp(salt1, salt2,
+ ext4_get_ice_encryption_salt_size(inode1)) == 0));
+}
diff --git a/fs/ext4/ext4_ice.h b/fs/ext4/ext4_ice.h
new file mode 100644
index 0000000..04e09bf
--- /dev/null
+++ b/fs/ext4/ext4_ice.h
@@ -0,0 +1,104 @@
+/* Copyright (c) 2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _EXT4_ICE_H
+#define _EXT4_ICE_H
+
+#include "ext4.h"
+#include <linux/fscrypto.h>
+
+#ifdef CONFIG_EXT4_FS_ICE_ENCRYPTION
+static inline int ext4_should_be_processed_by_ice(const struct inode *inode)
+{
+ if (!ext4_encrypted_inode((struct inode *)inode))
+ return 0;
+
+ return fs_using_hardware_encryption((struct inode *)inode);
+}
+
+static inline int ext4_is_ice_enabled(void)
+{
+ return 1;
+}
+
+int ext4_is_aes_xts_cipher(const struct inode *inode);
+
+char *ext4_get_ice_encryption_key(const struct inode *inode);
+char *ext4_get_ice_encryption_salt(const struct inode *inode);
+
+int ext4_is_ice_encryption_info_equal(const struct inode *inode1,
+ const struct inode *inode2);
+
+static inline size_t ext4_get_ice_encryption_key_size(
+ const struct inode *inode)
+{
+ return FS_AES_256_XTS_KEY_SIZE / 2;
+}
+
+static inline size_t ext4_get_ice_encryption_salt_size(
+ const struct inode *inode)
+{
+ return FS_AES_256_XTS_KEY_SIZE / 2;
+}
+
+#else
+static inline int ext4_should_be_processed_by_ice(const struct inode *inode)
+{
+ return 0;
+}
+static inline int ext4_is_ice_enabled(void)
+{
+ return 0;
+}
+
+static inline char *ext4_get_ice_encryption_key(const struct inode *inode)
+{
+ return NULL;
+}
+
+static inline char *ext4_get_ice_encryption_salt(const struct inode *inode)
+{
+ return NULL;
+}
+
+static inline size_t ext4_get_ice_encryption_key_size(
+ const struct inode *inode)
+{
+ return 0;
+}
+
+static inline size_t ext4_get_ice_encryption_salt_size(
+ const struct inode *inode)
+{
+ return 0;
+}
+
+static inline int ext4_is_xts_cipher(const struct inode *inode)
+{
+ return 0;
+}
+
+static inline int ext4_is_ice_encryption_info_equal(
+ const struct inode *inode1,
+ const struct inode *inode2)
+{
+ return 0;
+}
+
+static inline int ext4_is_aes_xts_cipher(const struct inode *inode)
+{
+ return 0;
+}
+
+#endif
+
+#endif /* _EXT4_ICE_H */
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 496c9b5..dcb9669 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -42,6 +42,7 @@
#include "xattr.h"
#include "acl.h"
#include "truncate.h"
+#include "ext4_ice.h"
#include <trace/events/ext4.h>
#include <trace/events/android_fs.h>
@@ -1152,7 +1153,8 @@ static int ext4_block_write_begin(struct page *page, loff_t pos, unsigned len,
ll_rw_block(REQ_OP_READ, 0, 1, &bh);
*wait_bh++ = bh;
decrypt = ext4_encrypted_inode(inode) &&
- S_ISREG(inode->i_mode);
+ S_ISREG(inode->i_mode) &&
+ !ext4_is_ice_enabled();
}
}
/*
@@ -3509,7 +3511,8 @@ static ssize_t ext4_direct_IO_write(struct kiocb *iocb, struct iov_iter *iter)
get_block_func = ext4_dio_get_block_unwritten_async;
dio_flags = DIO_LOCKING;
}
-#ifdef CONFIG_EXT4_FS_ENCRYPTION
+#if defined(CONFIG_EXT4_FS_ENCRYPTION) && \
+!defined(CONFIG_EXT4_FS_ICE_ENCRYPTION)
BUG_ON(ext4_encrypted_inode(inode) && S_ISREG(inode->i_mode));
#endif
if (IS_DAX(inode)) {
@@ -3623,7 +3626,8 @@ static ssize_t ext4_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
ssize_t ret;
int rw = iov_iter_rw(iter);
-#ifdef CONFIG_EXT4_FS_ENCRYPTION
+#if defined(CONFIG_EXT4_FS_ENCRYPTION) && \
+!defined(CONFIG_EXT4_FS_ICE_ENCRYPTION)
if (ext4_encrypted_inode(inode) && S_ISREG(inode->i_mode))
return 0;
#endif
@@ -3820,7 +3824,8 @@ static int __ext4_block_zero_page_range(handle_t *handle,
if (!buffer_uptodate(bh))
goto unlock;
if (S_ISREG(inode->i_mode) &&
- ext4_encrypted_inode(inode)) {
+ ext4_encrypted_inode(inode) &&
+ !fs_using_hardware_encryption(inode)) {
/* We expect the key to be set. */
BUG_ON(!fscrypt_has_encryption_key(inode));
BUG_ON(blocksize != PAGE_SIZE);
diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
index cec9280..1ddceb6 100644
--- a/fs/ext4/ioctl.c
+++ b/fs/ext4/ioctl.c
@@ -773,10 +773,6 @@ long ext4_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
case EXT4_IOC_SET_ENCRYPTION_POLICY: {
#ifdef CONFIG_EXT4_FS_ENCRYPTION
struct fscrypt_policy policy;
-
- if (!ext4_has_feature_encrypt(sb))
- return -EOPNOTSUPP;
-
if (copy_from_user(&policy,
(struct fscrypt_policy __user *)arg,
sizeof(policy)))
diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
index df8168f..e5e99a7 100644
--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -2136,8 +2136,10 @@ ext4_mb_regular_allocator(struct ext4_allocation_context *ac)
* We search using buddy data only if the order of the request
* is greater than equal to the sbi_s_mb_order2_reqs
* You can tune it via /sys/fs/ext4/<partition>/mb_order2_req
+ * We also support searching for power-of-two requests only for
+ * requests up to the maximum buddy size we have constructed.
*/
- if (i >= sbi->s_mb_order2_reqs) {
+ if (i >= sbi->s_mb_order2_reqs && i <= sb->s_blocksize_bits + 2) {
/*
* This should tell if fe_len is exactly power of 2
*/
@@ -2207,7 +2209,7 @@ ext4_mb_regular_allocator(struct ext4_allocation_context *ac)
}
ac->ac_groups_scanned++;
- if (cr == 0 && ac->ac_2order < sb->s_blocksize_bits+2)
+ if (cr == 0)
ext4_mb_simple_scan_group(ac, &e4b);
else if (cr == 1 && sbi->s_stripe &&
!(ac->ac_g_ex.fe_len % sbi->s_stripe))
diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c
index 0094923..d8a0770 100644
--- a/fs/ext4/page-io.c
+++ b/fs/ext4/page-io.c
@@ -29,6 +29,7 @@
#include "ext4_jbd2.h"
#include "xattr.h"
#include "acl.h"
+#include "ext4_ice.h"
static struct kmem_cache *io_end_cachep;
@@ -470,6 +471,7 @@ int ext4_bio_write_page(struct ext4_io_submit *io,
gfp_t gfp_flags = GFP_NOFS;
retry_encrypt:
+ if (!fs_using_hardware_encryption(inode))
data_page = fscrypt_encrypt_page(inode, page, gfp_flags);
if (IS_ERR(data_page)) {
ret = PTR_ERR(data_page);
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index f72535e..1f58179 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -2628,9 +2628,9 @@ static unsigned long ext4_get_stripe_size(struct ext4_sb_info *sbi)
if (sbi->s_stripe && sbi->s_stripe <= sbi->s_blocks_per_group)
ret = sbi->s_stripe;
- else if (stripe_width <= sbi->s_blocks_per_group)
+ else if (stripe_width && stripe_width <= sbi->s_blocks_per_group)
ret = stripe_width;
- else if (stride <= sbi->s_blocks_per_group)
+ else if (stride && stride <= sbi->s_blocks_per_group)
ret = stride;
else
ret = 0;
diff --git a/fs/fat/fatent.c b/fs/fat/fatent.c
index 1d9a8c4..57b0902 100644
--- a/fs/fat/fatent.c
+++ b/fs/fat/fatent.c
@@ -92,7 +92,8 @@ static int fat12_ent_bread(struct super_block *sb, struct fat_entry *fatent,
err_brelse:
brelse(bhs[0]);
err:
- fat_msg(sb, KERN_ERR, "FAT read failed (blocknr %llu)", (llu)blocknr);
+ fat_msg_ratelimit(sb, KERN_ERR,
+ "FAT read failed (blocknr %llu)", (llu)blocknr);
return -EIO;
}
@@ -105,8 +106,8 @@ static int fat_ent_bread(struct super_block *sb, struct fat_entry *fatent,
fatent->fat_inode = MSDOS_SB(sb)->fat_inode;
fatent->bhs[0] = sb_bread(sb, blocknr);
if (!fatent->bhs[0]) {
- fat_msg(sb, KERN_ERR, "FAT read failed (blocknr %llu)",
- (llu)blocknr);
+ fat_msg_ratelimit(sb, KERN_ERR,
+ "FAT read failed (blocknr %llu)", (llu)blocknr);
return -EIO;
}
fatent->nr_bhs = 1;
diff --git a/fs/fat/inode.c b/fs/fat/inode.c
index a2c05f2..0b6ba8c 100644
--- a/fs/fat/inode.c
+++ b/fs/fat/inode.c
@@ -843,8 +843,9 @@ static int __fat_write_inode(struct inode *inode, int wait)
fat_get_blknr_offset(sbi, i_pos, &blocknr, &offset);
bh = sb_bread(sb, blocknr);
if (!bh) {
- fat_msg(sb, KERN_ERR, "unable to read inode block "
- "for updating (i_pos %lld)", i_pos);
+ fat_msg_ratelimit(sb, KERN_ERR,
+ "unable to read inode block for updating (i_pos %lld)",
+ i_pos);
return -EIO;
}
spin_lock(&sbi->inode_hash_lock);
diff --git a/fs/namei.c b/fs/namei.c
index e10895c..2af3818 100644
--- a/fs/namei.c
+++ b/fs/namei.c
@@ -2903,6 +2903,11 @@ int vfs_create2(struct vfsmount *mnt, struct inode *dir, struct dentry *dentry,
if (error)
return error;
error = dir->i_op->create(dir, dentry, mode, want_excl);
+ if (error)
+ return error;
+ error = security_inode_post_create(dir, dentry, mode);
+ if (error)
+ return error;
if (!error)
fsnotify_create(dir, dentry);
return error;
@@ -3002,10 +3007,16 @@ static inline int open_to_namei_flags(int flag)
static int may_o_create(const struct path *dir, struct dentry *dentry, umode_t mode)
{
+ struct user_namespace *s_user_ns;
int error = security_path_mknod(dir, dentry, mode, 0);
if (error)
return error;
+ s_user_ns = dir->dentry->d_sb->s_user_ns;
+ if (!kuid_has_mapping(s_user_ns, current_fsuid()) ||
+ !kgid_has_mapping(s_user_ns, current_fsgid()))
+ return -EOVERFLOW;
+
error = inode_permission2(dir->mnt, dir->dentry->d_inode, MAY_WRITE | MAY_EXEC);
if (error)
return error;
@@ -3712,6 +3723,13 @@ int vfs_mknod2(struct vfsmount *mnt, struct inode *dir, struct dentry *dentry, u
return error;
error = dir->i_op->mknod(dir, dentry, mode, dev);
+ if (error)
+ return error;
+
+ error = security_inode_post_create(dir, dentry, mode);
+ if (error)
+ return error;
+
if (!error)
fsnotify_create(dir, dentry);
return error;
diff --git a/fs/ocfs2/alloc.c b/fs/ocfs2/alloc.c
index f72712f..06089be 100644
--- a/fs/ocfs2/alloc.c
+++ b/fs/ocfs2/alloc.c
@@ -7310,13 +7310,24 @@ int ocfs2_truncate_inline(struct inode *inode, struct buffer_head *di_bh,
static int ocfs2_trim_extent(struct super_block *sb,
struct ocfs2_group_desc *gd,
- u32 start, u32 count)
+ u64 group, u32 start, u32 count)
{
u64 discard, bcount;
+ struct ocfs2_super *osb = OCFS2_SB(sb);
bcount = ocfs2_clusters_to_blocks(sb, count);
- discard = le64_to_cpu(gd->bg_blkno) +
- ocfs2_clusters_to_blocks(sb, start);
+ discard = ocfs2_clusters_to_blocks(sb, start);
+
+ /*
+ * For the first cluster group, the gd->bg_blkno is not at the start
+ * of the group, but at an offset from the start. If we add it while
+ * calculating discard for first group, we will wrongly start fstrim a
+ * few blocks after the desired start block and the range can cross
+ * over into the next cluster group. So, add it only if this is not
+ * the first cluster group.
+ */
+ if (group != osb->first_cluster_group_blkno)
+ discard += le64_to_cpu(gd->bg_blkno);
trace_ocfs2_trim_extent(sb, (unsigned long long)discard, bcount);
@@ -7324,7 +7335,7 @@ static int ocfs2_trim_extent(struct super_block *sb,
}
static int ocfs2_trim_group(struct super_block *sb,
- struct ocfs2_group_desc *gd,
+ struct ocfs2_group_desc *gd, u64 group,
u32 start, u32 max, u32 minbits)
{
int ret = 0, count = 0, next;
@@ -7343,7 +7354,7 @@ static int ocfs2_trim_group(struct super_block *sb,
next = ocfs2_find_next_bit(bitmap, max, start);
if ((next - start) >= minbits) {
- ret = ocfs2_trim_extent(sb, gd,
+ ret = ocfs2_trim_extent(sb, gd, group,
start, next - start);
if (ret < 0) {
mlog_errno(ret);
@@ -7441,7 +7452,8 @@ int ocfs2_trim_fs(struct super_block *sb, struct fstrim_range *range)
}
gd = (struct ocfs2_group_desc *)gd_bh->b_data;
- cnt = ocfs2_trim_group(sb, gd, first_bit, last_bit, minlen);
+ cnt = ocfs2_trim_group(sb, gd, group,
+ first_bit, last_bit, minlen);
brelse(gd_bh);
gd_bh = NULL;
if (cnt < 0) {
diff --git a/fs/ocfs2/dlm/dlmrecovery.c b/fs/ocfs2/dlm/dlmrecovery.c
index dd5cb8b..eef3248 100644
--- a/fs/ocfs2/dlm/dlmrecovery.c
+++ b/fs/ocfs2/dlm/dlmrecovery.c
@@ -2419,6 +2419,7 @@ static void dlm_do_local_recovery_cleanup(struct dlm_ctxt *dlm, u8 dead_node)
dlm_lockres_put(res);
continue;
}
+ dlm_move_lockres_to_recovery_list(dlm, res);
} else if (res->owner == dlm->node_num) {
dlm_free_dead_locks(dlm, res, dead_node);
__dlm_lockres_calc_usage(dlm, res);
diff --git a/fs/ocfs2/file.c b/fs/ocfs2/file.c
index 0db6f83..05a0fb9 100644
--- a/fs/ocfs2/file.c
+++ b/fs/ocfs2/file.c
@@ -1166,6 +1166,13 @@ int ocfs2_setattr(struct dentry *dentry, struct iattr *attr)
}
size_change = S_ISREG(inode->i_mode) && attr->ia_valid & ATTR_SIZE;
if (size_change) {
+ /*
+ * Here we should wait for dio to finish before taking the inode
+ * lock, to avoid a deadlock between ocfs2_setattr() and
+ * ocfs2_dio_end_io_write()
+ */
+ inode_dio_wait(inode);
+
status = ocfs2_rw_lock(inode, 1);
if (status < 0) {
mlog_errno(status);
@@ -1186,8 +1193,6 @@ int ocfs2_setattr(struct dentry *dentry, struct iattr *attr)
if (status)
goto bail_unlock;
- inode_dio_wait(inode);
-
if (i_size_read(inode) >= attr->ia_size) {
if (ocfs2_should_order_data(inode)) {
status = ocfs2_begin_ordered_truncate(inode,
diff --git a/include/drm/drm_dp_helper.h b/include/drm/drm_dp_helper.h
index 453a63f..50810be 100644
--- a/include/drm/drm_dp_helper.h
+++ b/include/drm/drm_dp_helper.h
@@ -491,7 +491,9 @@
# define DP_TEST_PHY_PATTERN_SYMBOL_ERR_MEASUREMENT_CNT 0x2
# define DP_TEST_PHY_PATTERN_PRBS7 0x3
# define DP_TEST_PHY_PATTERN_80_BIT_CUSTOM_PATTERN 0x4
-# define DP_TEST_PHY_PATTERN_HBR2_CTS_EYE_PATTERN 0x5
+# define DP_TEST_PHY_PATTERN_CP2520_PATTERN_1 0x5
+# define DP_TEST_PHY_PATTERN_CP2520_PATTERN_2 0x6
+# define DP_TEST_PHY_PATTERN_CP2520_PATTERN_3 0x7
#define DP_TEST_RESPONSE 0x260
# define DP_TEST_ACK (1 << 0)
diff --git a/include/dt-bindings/clock/exynos5433.h b/include/dt-bindings/clock/exynos5433.h
index 4fa6bb2..be39d23 100644
--- a/include/dt-bindings/clock/exynos5433.h
+++ b/include/dt-bindings/clock/exynos5433.h
@@ -771,7 +771,10 @@
#define CLK_PCLK_DECON 113
-#define DISP_NR_CLK 114
+#define CLK_PHYCLK_MIPIDPHY0_BITCLKDIV8_PHY 114
+#define CLK_PHYCLK_MIPIDPHY0_RXCLKESC0_PHY 115
+
+#define DISP_NR_CLK 116
/* CMU_AUD */
#define CLK_MOUT_AUD_PLL_USER 1
diff --git a/include/dt-bindings/clock/qcom,cpu-a7.h b/include/dt-bindings/clock/qcom,cpu-a7.h
new file mode 100644
index 0000000..9b89030
--- /dev/null
+++ b/include/dt-bindings/clock/qcom,cpu-a7.h
@@ -0,0 +1,21 @@
+/*
+ * Copyright (c) 2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _DT_BINDINGS_CLK_MSM_CPU_A7_H
+#define _DT_BINDINGS_CLK_MSM_CPU_A7_H
+
+#define SYS_APC0_AUX_CLK 0
+#define APCS_CPU_PLL 1
+#define APCS_CLK 2
+
+#endif
diff --git a/include/dt-bindings/clock/qcom,gcc-sdxpoorwills.h b/include/dt-bindings/clock/qcom,gcc-sdxpoorwills.h
index e773848..950811f 100644
--- a/include/dt-bindings/clock/qcom,gcc-sdxpoorwills.h
+++ b/include/dt-bindings/clock/qcom,gcc-sdxpoorwills.h
@@ -48,59 +48,60 @@
#define GCC_CPUSS_AHB_CLK 30
#define GCC_CPUSS_AHB_CLK_SRC 31
#define GCC_CPUSS_GNOC_CLK 32
-#define GCC_CPUSS_GPLL0_CLK_SRC 33
-#define GCC_CPUSS_RBCPR_CLK 34
-#define GCC_CPUSS_RBCPR_CLK_SRC 35
-#define GCC_GP1_CLK 36
-#define GCC_GP1_CLK_SRC 37
-#define GCC_GP2_CLK 38
-#define GCC_GP2_CLK_SRC 39
-#define GCC_GP3_CLK 40
-#define GCC_GP3_CLK_SRC 41
-#define GCC_MSS_CFG_AHB_CLK 42
-#define GCC_MSS_GPLL0_DIV_CLK_SRC 43
-#define GCC_MSS_SNOC_AXI_CLK 44
-#define GCC_PCIE_AUX_CLK 45
-#define GCC_PCIE_AUX_PHY_CLK_SRC 46
-#define GCC_PCIE_CFG_AHB_CLK 47
-#define GCC_PCIE_0_CLKREF_EN 48
-#define GCC_PCIE_MSTR_AXI_CLK 49
-#define GCC_PCIE_PHY_REFGEN_CLK 50
-#define GCC_PCIE_PHY_REFGEN_CLK_SRC 51
-#define GCC_PCIE_PIPE_CLK 52
-#define GCC_PCIE_SLEEP_CLK 53
-#define GCC_PCIE_SLV_AXI_CLK 54
-#define GCC_PCIE_SLV_Q2A_AXI_CLK 55
-#define GCC_PDM2_CLK 56
-#define GCC_PDM2_CLK_SRC 57
-#define GCC_PDM_AHB_CLK 58
-#define GCC_PDM_XO4_CLK 59
-#define GCC_PRNG_AHB_CLK 60
-#define GCC_SDCC1_AHB_CLK 61
-#define GCC_SDCC1_APPS_CLK 62
-#define GCC_SDCC1_APPS_CLK_SRC 63
-#define GCC_SPMI_FETCHER_AHB_CLK 64
-#define GCC_SPMI_FETCHER_CLK 65
-#define GCC_SPMI_FETCHER_CLK_SRC 66
-#define GCC_SYS_NOC_CPUSS_AHB_CLK 67
-#define GCC_SYS_NOC_USB3_CLK 68
-#define GCC_USB30_MASTER_CLK 69
-#define GCC_USB30_MASTER_CLK_SRC 70
-#define GCC_USB30_MOCK_UTMI_CLK 71
-#define GCC_USB30_MOCK_UTMI_CLK_SRC 72
-#define GCC_USB30_SLEEP_CLK 73
-#define GCC_USB3_PRIM_CLKREF_CLK 74
-#define GCC_USB3_PHY_AUX_CLK 75
-#define GCC_USB3_PHY_AUX_CLK_SRC 76
-#define GCC_USB3_PHY_PIPE_CLK 77
-#define GCC_USB_PHY_CFG_AHB2PHY_CLK 78
-#define GCC_XO_DIV4_CLK 79
-#define GPLL0 80
-#define GPLL0_OUT_EVEN 81
-
-/* GDSCs */
-#define PCIE_GDSC 0
-#define USB30_GDSC 1
+#define GCC_CPUSS_RBCPR_CLK 33
+#define GCC_CPUSS_RBCPR_CLK_SRC 34
+#define GCC_EMAC_CLK_SRC 35
+#define GCC_EMAC_PTP_CLK_SRC 36
+#define GCC_ETH_AXI_CLK 37
+#define GCC_ETH_PTP_CLK 38
+#define GCC_ETH_RGMII_CLK 39
+#define GCC_ETH_SLAVE_AHB_CLK 40
+#define GCC_GP1_CLK 41
+#define GCC_GP1_CLK_SRC 42
+#define GCC_GP2_CLK 43
+#define GCC_GP2_CLK_SRC 44
+#define GCC_GP3_CLK 45
+#define GCC_GP3_CLK_SRC 46
+#define GCC_MSS_CFG_AHB_CLK 47
+#define GCC_MSS_GPLL0_DIV_CLK_SRC 48
+#define GCC_MSS_SNOC_AXI_CLK 49
+#define GCC_PCIE_AUX_CLK 50
+#define GCC_PCIE_AUX_PHY_CLK_SRC 51
+#define GCC_PCIE_CFG_AHB_CLK 52
+#define GCC_PCIE_MSTR_AXI_CLK 53
+#define GCC_PCIE_PHY_REFGEN_CLK 54
+#define GCC_PCIE_PHY_REFGEN_CLK_SRC 55
+#define GCC_PCIE_PIPE_CLK 56
+#define GCC_PCIE_SLEEP_CLK 57
+#define GCC_PCIE_SLV_AXI_CLK 58
+#define GCC_PCIE_SLV_Q2A_AXI_CLK 59
+#define GCC_PDM2_CLK 60
+#define GCC_PDM2_CLK_SRC 61
+#define GCC_PDM_AHB_CLK 62
+#define GCC_PDM_XO4_CLK 63
+#define GCC_PRNG_AHB_CLK 64
+#define GCC_SDCC1_AHB_CLK 65
+#define GCC_SDCC1_APPS_CLK 66
+#define GCC_SDCC1_APPS_CLK_SRC 67
+#define GCC_SPMI_FETCHER_AHB_CLK 68
+#define GCC_SPMI_FETCHER_CLK 69
+#define GCC_SPMI_FETCHER_CLK_SRC 70
+#define GCC_SYS_NOC_CPUSS_AHB_CLK 71
+#define GCC_SYS_NOC_USB3_CLK 72
+#define GCC_USB30_MASTER_CLK 73
+#define GCC_USB30_MASTER_CLK_SRC 74
+#define GCC_USB30_MOCK_UTMI_CLK 75
+#define GCC_USB30_MOCK_UTMI_CLK_SRC 76
+#define GCC_USB30_SLEEP_CLK 77
+#define GCC_USB3_PHY_AUX_CLK 78
+#define GCC_USB3_PHY_AUX_CLK_SRC 79
+#define GCC_USB3_PHY_PIPE_CLK 80
+#define GCC_USB_PHY_CFG_AHB2PHY_CLK 81
+#define GPLL0 82
+#define GPLL0_OUT_EVEN 83
+#define GPLL4 84
+#define GPLL4_OUT_EVEN 85
+#define GCC_USB3_PRIM_CLKREF_CLK 86
/* CPU clocks */
#define CLOCK_A7SS 0
@@ -125,5 +126,6 @@
#define GCC_USB3PHY_PHY_BCR 16
#define GCC_QUSB2PHY_BCR 17
#define GCC_USB_PHY_CFG_AHB2PHY_BCR 18
+#define GCC_EMAC_BCR 19
#endif
diff --git a/include/dt-bindings/pinctrl/omap.h b/include/dt-bindings/pinctrl/omap.h
index effadd0..fbd6f72 100644
--- a/include/dt-bindings/pinctrl/omap.h
+++ b/include/dt-bindings/pinctrl/omap.h
@@ -45,8 +45,8 @@
#define PIN_OFF_NONE 0
#define PIN_OFF_OUTPUT_HIGH (OFF_EN | OFFOUT_EN | OFFOUT_VAL)
#define PIN_OFF_OUTPUT_LOW (OFF_EN | OFFOUT_EN)
-#define PIN_OFF_INPUT_PULLUP (OFF_EN | OFF_PULL_EN | OFF_PULL_UP)
-#define PIN_OFF_INPUT_PULLDOWN (OFF_EN | OFF_PULL_EN)
+#define PIN_OFF_INPUT_PULLUP (OFF_EN | OFFOUT_EN | OFF_PULL_EN | OFF_PULL_UP)
+#define PIN_OFF_INPUT_PULLDOWN (OFF_EN | OFFOUT_EN | OFF_PULL_EN)
#define PIN_OFF_WAKEUPENABLE WAKEUP_EN
/*
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 2b8b6e0..8a7a15c 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -81,6 +81,12 @@ struct bio {
struct bio_set *bi_pool;
/*
+ * When using direct-io (O_DIRECT), we can't get the inode from a bio
+ * by walking bio->bi_io_vec->bv_page->mapping->host
+ * since the page is anon.
+ */
+ struct inode *bi_dio_inode;
+ /*
* We can inline a number of vecs at the end of the bio, to avoid
* double allocations for a small number of bio_vecs. This member
* MUST obviously be kept at the very end of the bio.
diff --git a/include/linux/elf.h b/include/linux/elf.h
index 20fa8d8..611e3ae 100644
--- a/include/linux/elf.h
+++ b/include/linux/elf.h
@@ -42,6 +42,39 @@ extern Elf64_Dyn _DYNAMIC [];
#endif
+/* Generic helpers for ELF use */
+/* Return first section header */
+static inline struct elf_shdr *elf_sheader(struct elfhdr *hdr)
+{
+ return (struct elf_shdr *)((size_t)hdr + (size_t)hdr->e_shoff);
+}
+
+/* Return idx section header */
+static inline struct elf_shdr *elf_section(struct elfhdr *hdr, int idx)
+{
+ return &elf_sheader(hdr)[idx];
+}
+
+/* Return first program header */
+static inline struct elf_phdr *elf_pheader(struct elfhdr *hdr)
+{
+ return (struct elf_phdr *)((size_t)hdr + (size_t)hdr->e_phoff);
+}
+
+/* Return idx program header */
+static inline struct elf_phdr *elf_program(struct elfhdr *hdr, int idx)
+{
+ return &elf_pheader(hdr)[idx];
+}
+
+/* Return section's string table header */
+static inline char *elf_str_table(struct elfhdr *hdr)
+{
+ if (hdr->e_shstrndx == SHN_UNDEF)
+ return NULL;
+ return (char *)hdr + elf_section(hdr, hdr->e_shstrndx)->sh_offset;
+}
+
/* Optional callbacks to write extra ELF notes. */
struct file;
struct coredump_params;
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 18bd249..4f6ec47 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -2925,6 +2925,8 @@ static inline void inode_dio_end(struct inode *inode)
wake_up_bit(&inode->i_state, __I_DIO_WAKEUP);
}
+struct inode *dio_bio_get_inode(struct bio *bio);
+
extern void inode_set_flags(struct inode *inode, unsigned int flags,
unsigned int mask);
diff --git a/include/linux/fscrypto.h b/include/linux/fscrypto.h
index f6dfc29..9b57c19 100644
--- a/include/linux/fscrypto.h
+++ b/include/linux/fscrypto.h
@@ -34,6 +34,7 @@
#define FS_ENCRYPTION_MODE_AES_256_GCM 2
#define FS_ENCRYPTION_MODE_AES_256_CBC 3
#define FS_ENCRYPTION_MODE_AES_256_CTS 4
+#define FS_ENCRYPTION_MODE_PRIVATE 127
/**
* Encryption context for inode
@@ -80,6 +81,7 @@ struct fscrypt_info {
u8 ci_flags;
struct crypto_skcipher *ci_ctfm;
u8 ci_master_key[FS_KEY_DESCRIPTOR_SIZE];
+ u8 ci_raw_key[FS_MAX_KEY_SIZE];
};
#define FS_CTX_REQUIRES_FREE_ENCRYPT_FL 0x00000001
@@ -176,7 +178,8 @@ static inline bool fscrypt_dummy_context_enabled(struct inode *inode)
static inline bool fscrypt_valid_contents_enc_mode(u32 mode)
{
- return (mode == FS_ENCRYPTION_MODE_AES_256_XTS);
+ return (mode == FS_ENCRYPTION_MODE_AES_256_XTS ||
+ mode == FS_ENCRYPTION_MODE_PRIVATE);
}
static inline bool fscrypt_valid_filenames_enc_mode(u32 mode)
@@ -257,6 +260,7 @@ extern int fscrypt_inherit_context(struct inode *, struct inode *,
/* keyinfo.c */
extern int fscrypt_get_encryption_info(struct inode *);
extern void fscrypt_put_encryption_info(struct inode *, struct fscrypt_info *);
+extern int fs_using_hardware_encryption(struct inode *inode);
/* fname.c */
extern int fscrypt_setup_filename(struct inode *, const struct qstr *,
@@ -354,6 +358,11 @@ static inline void fscrypt_notsupp_put_encryption_info(struct inode *i,
return;
}
+static inline int fs_notsupp_using_hardware_encryption(struct inode *inode)
+{
+ return -EOPNOTSUPP;
+}
+
/* fname.c */
static inline int fscrypt_notsupp_setup_filename(struct inode *dir,
const struct qstr *iname,
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 46cd745..16ef407 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -189,7 +189,7 @@ struct vm_area_struct;
#define __GFP_OTHER_NODE ((__force gfp_t)___GFP_OTHER_NODE)
/* Room for N __GFP_FOO bits */
-#define __GFP_BITS_SHIFT 26
+#define __GFP_BITS_SHIFT 27
#define __GFP_BITS_MASK ((__force gfp_t)((1 << __GFP_BITS_SHIFT) - 1))
/*
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 8aeb9be..f25acfc 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -244,10 +244,6 @@ struct iommu_ops {
/* Get the number of windows per domain */
u32 (*domain_get_windows)(struct iommu_domain *domain);
void (*trigger_fault)(struct iommu_domain *domain, unsigned long flags);
- unsigned long (*reg_read)(struct iommu_domain *domain,
- unsigned long offset);
- void (*reg_write)(struct iommu_domain *domain, unsigned long val,
- unsigned long offset);
void (*tlbi_domain)(struct iommu_domain *domain);
int (*enable_config_clocks)(struct iommu_domain *domain);
void (*disable_config_clocks)(struct iommu_domain *domain);
diff --git a/include/linux/ipa.h b/include/linux/ipa.h
index dd6849d..405aed5 100644
--- a/include/linux/ipa.h
+++ b/include/linux/ipa.h
@@ -1175,6 +1175,28 @@ struct ipa_tz_unlock_reg_info {
u64 size;
};
+/**
+ * struct ipa_smmu_in_params - information provided from client
+ * @smmu_client: client requesting the smmu info.
+ */
+
+enum ipa_smmu_client_type {
+ IPA_SMMU_WLAN_CLIENT,
+ IPA_SMMU_CLIENT_MAX
+};
+
+struct ipa_smmu_in_params {
+ enum ipa_smmu_client_type smmu_client;
+};
+
+/**
+ * struct ipa_smmu_out_params - information provided to IPA client
+ * @smmu_enable: IPA S1 SMMU enable/disable status
+ */
+struct ipa_smmu_out_params {
+ bool smmu_enable;
+};
+
#if defined CONFIG_IPA || defined CONFIG_IPA3
/*
@@ -1564,6 +1586,9 @@ int ipa_register_ipa_ready_cb(void (*ipa_ready_cb)(void *user_data),
*/
int ipa_tz_unlock_reg(struct ipa_tz_unlock_reg_info *reg_info, u16 num_regs);
+int ipa_get_smmu_params(struct ipa_smmu_in_params *in,
+ struct ipa_smmu_out_params *out);
+
#else /* (CONFIG_IPA || CONFIG_IPA3) */
/*
@@ -2351,6 +2376,12 @@ static inline int ipa_tz_unlock_reg(struct ipa_tz_unlock_reg_info *reg_info,
return -EPERM;
}
+
+static inline int ipa_get_smmu_params(struct ipa_smmu_in_params *in,
+ struct ipa_smmu_out_params *out)
+{
+ return -EPERM;
+}
#endif /* (CONFIG_IPA || CONFIG_IPA3) */
#endif /* _IPA_H_ */
diff --git a/include/linux/lsm_hooks.h b/include/linux/lsm_hooks.h
index 8f5af30..580cc10 100644
--- a/include/linux/lsm_hooks.h
+++ b/include/linux/lsm_hooks.h
@@ -1419,6 +1419,8 @@ union security_list_options {
size_t *len);
int (*inode_create)(struct inode *dir, struct dentry *dentry,
umode_t mode);
+ int (*inode_post_create)(struct inode *dir, struct dentry *dentry,
+ umode_t mode);
int (*inode_link)(struct dentry *old_dentry, struct inode *dir,
struct dentry *new_dentry);
int (*inode_unlink)(struct inode *dir, struct dentry *dentry);
@@ -1706,6 +1708,7 @@ struct security_hook_heads {
struct list_head inode_free_security;
struct list_head inode_init_security;
struct list_head inode_create;
+ struct list_head inode_post_create;
struct list_head inode_link;
struct list_head inode_unlink;
struct list_head inode_symlink;
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 07e1acb..90900c2 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -686,7 +686,8 @@ typedef struct pglist_data {
* is the first PFN that needs to be initialised.
*/
unsigned long first_deferred_pfn;
- unsigned long static_init_size;
+ /* Number of non-deferred pages */
+ unsigned long static_init_pgcnt;
#endif /* CONFIG_DEFERRED_STRUCT_PAGE_INIT */
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
diff --git a/include/linux/msm_dma_iommu_mapping.h b/include/linux/msm_dma_iommu_mapping.h
index a46eb87..44dabb1 100644
--- a/include/linux/msm_dma_iommu_mapping.h
+++ b/include/linux/msm_dma_iommu_mapping.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2015-2016, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2015-2017, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -28,6 +28,21 @@ int msm_dma_map_sg_attrs(struct device *dev, struct scatterlist *sg, int nents,
enum dma_data_direction dir, struct dma_buf *dma_buf,
unsigned long attrs);
+/*
+ * This function takes an extra reference to the dma_buf.
+ * What this means is that calling msm_dma_unmap_sg will not result in the
+ * buffer's iommu mapping being removed, so subsequent calls to lazy map
+ * will simply re-use the existing iommu mapping.
+ * The iommu unmapping of the buffer will occur when the ION buffer is
+ * destroyed.
+ * Using lazy mapping can provide a performance benefit because subsequent
+ * mappings are faster.
+ *
+ * The limitations of using this API are that all subsequent iommu mappings
+ * must be the same as the original mapping, i.e. they must map the same part of
+ * the buffer with the same dma data direction. Also there can't be multiple
+ * mappings of different parts of the buffer.
+ */
static inline int msm_dma_map_sg_lazy(struct device *dev,
struct scatterlist *sg, int nents,
enum dma_data_direction dir,
diff --git a/include/linux/msm_ep_pcie.h b/include/linux/msm_ep_pcie.h
new file mode 100644
index 0000000..a1d2a17
--- /dev/null
+++ b/include/linux/msm_ep_pcie.h
@@ -0,0 +1,290 @@
+/* Copyright (c) 2015, 2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef __MSM_EP_PCIE_H
+#define __MSM_EP_PCIE_H
+
+#include <linux/types.h>
+
+enum ep_pcie_link_status {
+ EP_PCIE_LINK_DISABLED,
+ EP_PCIE_LINK_UP,
+ EP_PCIE_LINK_ENABLED,
+};
+
+enum ep_pcie_event {
+ EP_PCIE_EVENT_INVALID = 0,
+ EP_PCIE_EVENT_PM_D0 = 0x1,
+ EP_PCIE_EVENT_PM_D3_HOT = 0x2,
+ EP_PCIE_EVENT_PM_D3_COLD = 0x4,
+ EP_PCIE_EVENT_PM_RST_DEAST = 0x8,
+ EP_PCIE_EVENT_LINKDOWN = 0x10,
+ EP_PCIE_EVENT_LINKUP = 0x20,
+ EP_PCIE_EVENT_MHI_A7 = 0x40,
+ EP_PCIE_EVENT_MMIO_WRITE = 0x80,
+};
+
+enum ep_pcie_irq_event {
+ EP_PCIE_INT_EVT_LINK_DOWN = 1,
+ EP_PCIE_INT_EVT_BME,
+ EP_PCIE_INT_EVT_PM_TURNOFF,
+ EP_PCIE_INT_EVT_DEBUG,
+ EP_PCIE_INT_EVT_LTR,
+ EP_PCIE_INT_EVT_MHI_Q6,
+ EP_PCIE_INT_EVT_MHI_A7,
+ EP_PCIE_INT_EVT_DSTATE_CHANGE,
+ EP_PCIE_INT_EVT_L1SUB_TIMEOUT,
+ EP_PCIE_INT_EVT_MMIO_WRITE,
+ EP_PCIE_INT_EVT_CFG_WRITE,
+ EP_PCIE_INT_EVT_BRIDGE_FLUSH_N,
+ EP_PCIE_INT_EVT_LINK_UP,
+ EP_PCIE_INT_EVT_MAX = 13,
+};
+
+enum ep_pcie_trigger {
+ EP_PCIE_TRIGGER_CALLBACK,
+ EP_PCIE_TRIGGER_COMPLETION,
+};
+
+enum ep_pcie_options {
+ EP_PCIE_OPT_NULL = 0,
+ EP_PCIE_OPT_AST_WAKE = 0x1,
+ EP_PCIE_OPT_POWER_ON = 0x2,
+ EP_PCIE_OPT_ENUM = 0x4,
+ EP_PCIE_OPT_ENUM_ASYNC = 0x8,
+ EP_PCIE_OPT_ALL = 0xFFFFFFFF,
+};
+
+struct ep_pcie_notify {
+ enum ep_pcie_event event;
+ void *user;
+ void *data;
+ u32 options;
+};
+
+struct ep_pcie_register_event {
+ u32 events;
+ void *user;
+ enum ep_pcie_trigger mode;
+ void (*callback)(struct ep_pcie_notify *notify);
+ struct ep_pcie_notify notify;
+ struct completion *completion;
+ u32 options;
+};
+
+struct ep_pcie_iatu {
+ u32 start;
+ u32 end;
+ u32 tgt_lower;
+ u32 tgt_upper;
+};
+
+struct ep_pcie_msi_config {
+ u32 lower;
+ u32 upper;
+ u32 data;
+ u32 msg_num;
+};
+
+struct ep_pcie_db_config {
+ u8 base;
+ u8 end;
+ u32 tgt_addr;
+};
+
+struct ep_pcie_hw {
+ struct list_head node;
+ u32 device_id;
+ void **private_data;
+ int (*register_event)(struct ep_pcie_register_event *reg);
+ int (*deregister_event)(void);
+ enum ep_pcie_link_status (*get_linkstatus)(void);
+ int (*config_outbound_iatu)(struct ep_pcie_iatu entries[],
+ u32 num_entries);
+ int (*get_msi_config)(struct ep_pcie_msi_config *cfg);
+ int (*trigger_msi)(u32 idx);
+ int (*wakeup_host)(void);
+ int (*enable_endpoint)(enum ep_pcie_options opt);
+ int (*disable_endpoint)(void);
+ int (*config_db_routing)(struct ep_pcie_db_config chdb_cfg,
+ struct ep_pcie_db_config erdb_cfg);
+ int (*mask_irq_event)(enum ep_pcie_irq_event event,
+ bool enable);
+};
+
+/*
+ * ep_pcie_register_drv - register HW driver.
+ * @phandle: PCIe endpoint HW driver handle
+ *
+ * This function registers PCIe HW driver to PCIe endpoint service
+ * layer.
+ *
+ * Return: 0 on success, negative value on error
+ */
+int ep_pcie_register_drv(struct ep_pcie_hw *phandle);
+
+/*
+ * ep_pcie_deregister_drv - deregister HW driver.
+ * @phandle: PCIe endpoint HW driver handle
+ *
+ * This function deregisters PCIe HW driver from PCIe endpoint service
+ * layer.
+ *
+ * Return: 0 on success, negative value on error
+ */
+int ep_pcie_deregister_drv(struct ep_pcie_hw *phandle);
+
+/*
+ * ep_pcie_get_phandle - get PCIe endpoint HW driver handle.
+ * @id: PCIe endpoint device ID
+ *
+ * This function returns the PCIe endpoint HW driver handle that was
+ * registered with the PCIe endpoint service layer.
+ *
+ * Return: PCIe endpoint HW driver handle
+ */
+struct ep_pcie_hw *ep_pcie_get_phandle(u32 id);
+
+/*
+ * ep_pcie_register_event - register event with PCIe driver.
+ * @phandle: PCIe endpoint HW driver handle
+ * @reg: event structure
+ *
+ * This function gives PCIe client driver an option to register
+ * event with PCIe driver.
+ *
+ * Return: 0 on success, negative value on error
+ */
+int ep_pcie_register_event(struct ep_pcie_hw *phandle,
+ struct ep_pcie_register_event *reg);
+
+/*
+ * ep_pcie_deregister_event - deregister event with PCIe driver.
+ * @phandle: PCIe endpoint HW driver handle
+ *
+ * This function gives PCIe client driver an option to deregister
+ * existing event with PCIe driver.
+ *
+ * Return: 0 on success, negative value on error
+ */
+int ep_pcie_deregister_event(struct ep_pcie_hw *phandle);
+
+/*
+ * ep_pcie_get_linkstatus - indicate the status of PCIe link.
+ * @phandle: PCIe endpoint HW driver handle
+ *
+ * This function tells PCIe client about the status of PCIe link.
+ *
+ * Return: status of PCIe link
+ */
+enum ep_pcie_link_status ep_pcie_get_linkstatus(struct ep_pcie_hw *phandle);
+
+/*
+ * ep_pcie_config_outbound_iatu - configure outbound iATU.
+ * @entries: iatu entries
+ * @num_entries: number of iatu entries
+ *
+ * This function configures the outbound iATU for PCIe
+ * client's access to the regions in the host memory which
+ * are specified by the SW on host side.
+ *
+ * Return: 0 on success, negative value on error
+ */
+int ep_pcie_config_outbound_iatu(struct ep_pcie_hw *phandle,
+ struct ep_pcie_iatu entries[],
+ u32 num_entries);
+
+/*
+ * ep_pcie_get_msi_config - get MSI config info.
+ * @phandle: PCIe endpoint HW driver handle
+ * @cfg: pointer to MSI config
+ *
+ * This function returns MSI config info.
+ *
+ * Return: 0 on success, negative value on error
+ */
+int ep_pcie_get_msi_config(struct ep_pcie_hw *phandle,
+ struct ep_pcie_msi_config *cfg);
+
+/*
+ * ep_pcie_trigger_msi - trigger an MSI.
+ * @phandle: PCIe endpoint HW driver handle
+ * @idx: MSI index number
+ *
+ * This function allows PCIe client to trigger an MSI
+ * on host side.
+ *
+ * Return: 0 on success, negative value on error
+ */
+int ep_pcie_trigger_msi(struct ep_pcie_hw *phandle, u32 idx);
+
+/*
+ * ep_pcie_wakeup_host - wake up the host.
+ * @phandle: PCIe endpoint HW driver handle
+ *
+ * This function asserts WAKE GPIO to wake up the host.
+ *
+ * Return: 0 on success, negative value on error
+ */
+int ep_pcie_wakeup_host(struct ep_pcie_hw *phandle);
+
+/*
+ * ep_pcie_enable_endpoint - enable PCIe endpoint.
+ * @phandle: PCIe endpoint HW driver handle
+ * @opt: endpoint enable options
+ *
+ * This function enables the PCIe endpoint device.
+ *
+ * Return: 0 on success, negative value on error
+ */
+int ep_pcie_enable_endpoint(struct ep_pcie_hw *phandle,
+ enum ep_pcie_options opt);
+
+/*
+ * ep_pcie_disable_endpoint - disable PCIe endpoint.
+ * @phandle: PCIe endpoint HW driver handle
+ *
+ * This function disables the PCIe endpoint device.
+ *
+ * Return: 0 on success, negative value on error
+ */
+int ep_pcie_disable_endpoint(struct ep_pcie_hw *phandle);
+
+/*
+ * ep_pcie_config_db_routing - Configure routing of doorbells to another block.
+ * @phandle: PCIe endpoint HW driver handle
+ * @chdb_cfg: channel doorbell config
+ * @erdb_cfg: event ring doorbell config
+ *
+ * This function allows PCIe core to route the doorbells intended
+ * for another entity via a target address.
+ *
+ * Return: 0 on success, negative value on error
+ */
+int ep_pcie_config_db_routing(struct ep_pcie_hw *phandle,
+ struct ep_pcie_db_config chdb_cfg,
+ struct ep_pcie_db_config erdb_cfg);
+
+/*
+ * ep_pcie_mask_irq_event - enable and disable IRQ event.
+ * @phandle: PCIe endpoint HW driver handle
+ * @event: IRQ event
+ * @enable: true to enable that IRQ event and false to disable
+ *
+ * This function enables or disables the given IRQ event.
+ *
+ * Return: 0 on success, negative value on error
+ */
+int ep_pcie_mask_irq_event(struct ep_pcie_hw *phandle,
+ enum ep_pcie_irq_event event,
+ bool enable);
+#endif
diff --git a/include/linux/msm_ext_display.h b/include/linux/msm_ext_display.h
index 08e0def..e34f468 100644
--- a/include/linux/msm_ext_display.h
+++ b/include/linux/msm_ext_display.h
@@ -117,6 +117,7 @@ struct msm_ext_disp_intf_ops {
* @get_intf_id: id of connected interface
* @teardown_done: audio session teardown done by qdsp
* @acknowledge: acknowledge audio status received by user modules
+ * @ready: notify audio when codec driver is ready.
*/
struct msm_ext_disp_audio_codec_ops {
int (*audio_info_setup)(struct platform_device *pdev,
@@ -127,6 +128,7 @@ struct msm_ext_disp_audio_codec_ops {
int (*get_intf_id)(struct platform_device *pdev);
void (*teardown_done)(struct platform_device *pdev);
int (*acknowledge)(struct platform_device *pdev, u32 ack);
+ int (*ready)(struct platform_device *pdev);
};
/**
diff --git a/include/linux/msm_smd_pkt.h b/include/linux/msm_smd_pkt.h
new file mode 100644
index 0000000..c79933d
--- /dev/null
+++ b/include/linux/msm_smd_pkt.h
@@ -0,0 +1,23 @@
+/* Copyright (c) 2010,2017 The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+#ifndef __LINUX_MSM_SMD_PKT_H
+#define __LINUX_MSM_SMD_PKT_H
+
+#include <linux/ioctl.h>
+
+#define SMD_PKT_IOCTL_MAGIC (0xC2)
+
+#define SMD_PKT_IOCTL_BLOCKING_WRITE \
+ _IOR(SMD_PKT_IOCTL_MAGIC, 0, unsigned int)
+
+#endif /* __LINUX_MSM_SMD_PKT_H */
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index c92ed22..3513226 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -3751,6 +3751,9 @@ struct net_device *alloc_netdev_mqs(int sizeof_priv, const char *name,
unsigned char name_assign_type,
void (*setup)(struct net_device *),
unsigned int txqs, unsigned int rxqs);
+int dev_get_valid_name(struct net *net, struct net_device *dev,
+ const char *name);
+
#define alloc_netdev(sizeof_priv, name, name_assign_type, setup) \
alloc_netdev_mqs(sizeof_priv, name, name_assign_type, setup, 1, 1)
diff --git a/include/linux/netfilter/nf_conntrack_sip.h b/include/linux/netfilter/nf_conntrack_sip.h
index d5af3c2..220380b 100644
--- a/include/linux/netfilter/nf_conntrack_sip.h
+++ b/include/linux/netfilter/nf_conntrack_sip.h
@@ -166,6 +166,11 @@ struct nf_nat_sip_hooks {
};
extern const struct nf_nat_sip_hooks *nf_nat_sip_hooks;
+extern void (*nf_nat_sip_seq_adjust_hook)
+ (struct sk_buff *skb,
+ unsigned int protoff,
+ s16 off);
+
int ct_sip_parse_request(const struct nf_conn *ct, const char *dptr,
unsigned int datalen, unsigned int *matchoff,
unsigned int *matchlen, union nf_inet_addr *addr,
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 6e38683..70936bf 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -497,7 +497,8 @@ struct perf_addr_filters_head {
* enum perf_event_active_state - the states of a event
*/
enum perf_event_active_state {
- PERF_EVENT_STATE_DEAD = -4,
+ PERF_EVENT_STATE_DEAD = -5,
+ PERF_EVENT_STATE_ZOMBIE = -4,
PERF_EVENT_STATE_EXIT = -3,
PERF_EVENT_STATE_ERROR = -2,
PERF_EVENT_STATE_OFF = -1,
@@ -717,6 +718,10 @@ struct perf_event {
#endif
struct list_head sb_list;
+
+ /* Is this event shared with other events */
+ bool shared;
+ struct list_head zombie_entry;
#endif /* CONFIG_PERF_EVENTS */
};
diff --git a/include/linux/pfk.h b/include/linux/pfk.h
new file mode 100644
index 0000000..82ee741
--- /dev/null
+++ b/include/linux/pfk.h
@@ -0,0 +1,57 @@
+/* Copyright (c) 2015-2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef PFK_H_
+#define PFK_H_
+
+#include <linux/bio.h>
+
+struct ice_crypto_setting;
+
+#ifdef CONFIG_PFK
+
+int pfk_load_key_start(const struct bio *bio,
+ struct ice_crypto_setting *ice_setting, bool *is_pfe, bool);
+int pfk_load_key_end(const struct bio *bio, bool *is_pfe);
+int pfk_remove_key(const unsigned char *key, size_t key_size);
+bool pfk_allow_merge_bio(const struct bio *bio1, const struct bio *bio2);
+void pfk_clear_on_reset(void);
+
+#else
+static inline int pfk_load_key_start(const struct bio *bio,
+ struct ice_crypto_setting *ice_setting, bool *is_pfe, bool async)
+{
+ return -ENODEV;
+}
+
+static inline int pfk_load_key_end(const struct bio *bio, bool *is_pfe)
+{
+ return -ENODEV;
+}
+
+static inline int pfk_remove_key(const unsigned char *key, size_t key_size)
+{
+ return -ENODEV;
+}
+
+static inline bool pfk_allow_merge_bio(const struct bio *bio1,
+ const struct bio *bio2)
+{
+ return true;
+}
+
+static inline void pfk_clear_on_reset(void)
+{}
+
+#endif /* CONFIG_PFK */
+
+#endif /* PFK_H */
diff --git a/include/linux/phy.h b/include/linux/phy.h
index 8431c8c..a04d69a 100644
--- a/include/linux/phy.h
+++ b/include/linux/phy.h
@@ -142,11 +142,7 @@ static inline const char *phy_modes(phy_interface_t interface)
/* Used when trying to connect to a specific phy (mii bus id:phy device id) */
#define PHY_ID_FMT "%s:%02x"
-/*
- * Need to be a little smaller than phydev->dev.bus_id to leave room
- * for the ":%02x"
- */
-#define MII_BUS_ID_SIZE (20 - 3)
+#define MII_BUS_ID_SIZE 61
/* Or MII_ADDR_C45 into regnum for read/write on mii_bus to enable the 21 bit
IEEE 802.3ae clause 45 addressing mode used by 10GIGE phy chips. */
@@ -602,7 +598,7 @@ struct phy_driver {
/* A Structure for boards to register fixups with the PHY Lib */
struct phy_fixup {
struct list_head list;
- char bus_id[20];
+ char bus_id[MII_BUS_ID_SIZE + 3];
u32 phy_uid;
u32 phy_uid_mask;
int (*run)(struct phy_device *phydev);
diff --git a/include/linux/power_supply.h b/include/linux/power_supply.h
index b6c8c92..d253ca6 100644
--- a/include/linux/power_supply.h
+++ b/include/linux/power_supply.h
@@ -103,6 +103,9 @@ enum {
POWER_SUPPLY_DP_DM_HVDCP3_SUPPORTED = 10,
POWER_SUPPLY_DP_DM_ICL_DOWN = 11,
POWER_SUPPLY_DP_DM_ICL_UP = 12,
+ POWER_SUPPLY_DP_DM_FORCE_5V = 13,
+ POWER_SUPPLY_DP_DM_FORCE_9V = 14,
+ POWER_SUPPLY_DP_DM_FORCE_12V = 15,
};
enum {
@@ -112,6 +115,11 @@ enum {
POWER_SUPPLY_PL_USBMID_USBMID,
};
+enum {
+ POWER_SUPPLY_CONNECTOR_TYPEC,
+ POWER_SUPPLY_CONNECTOR_MICRO_USB,
+};
+
enum power_supply_property {
/* Properties of type `int' */
POWER_SUPPLY_PROP_STATUS = 0,
@@ -257,6 +265,7 @@ enum power_supply_property {
POWER_SUPPLY_PROP_PD_VOLTAGE_MAX,
POWER_SUPPLY_PROP_PD_VOLTAGE_MIN,
POWER_SUPPLY_PROP_SDP_CURRENT_MAX,
+ POWER_SUPPLY_PROP_CONNECTOR_TYPE,
/* Local extensions of type int64_t */
POWER_SUPPLY_PROP_CHARGE_COUNTER_EXT,
/* Properties of type `const char *' */
diff --git a/include/linux/preempt.h b/include/linux/preempt.h
index 75e4e30..7eeceac 100644
--- a/include/linux/preempt.h
+++ b/include/linux/preempt.h
@@ -65,19 +65,24 @@
/*
* Are we doing bottom half or hardware interrupt processing?
- * Are we in a softirq context? Interrupt context?
- * in_softirq - Are we currently processing softirq or have bh disabled?
- * in_serving_softirq - Are we currently processing softirq?
+ *
+ * in_irq() - We're in (hard) IRQ context
+ * in_softirq() - We have BH disabled, or are processing softirqs
+ * in_interrupt() - We're in NMI,IRQ,SoftIRQ context or have BH disabled
+ * in_serving_softirq() - We're in softirq context
+ * in_nmi() - We're in NMI context
+ * in_task() - We're in task context
+ *
+ * Note: due to the BH disabled confusion: in_softirq(), in_interrupt() really
+ * should not be used in new code.
*/
#define in_irq() (hardirq_count())
#define in_softirq() (softirq_count())
#define in_interrupt() (irq_count())
#define in_serving_softirq() (softirq_count() & SOFTIRQ_OFFSET)
-
-/*
- * Are we in NMI context?
- */
-#define in_nmi() (preempt_count() & NMI_MASK)
+#define in_nmi() (preempt_count() & NMI_MASK)
+#define in_task() (!(preempt_count() & \
+ (NMI_MASK | HARDIRQ_MASK | SOFTIRQ_OFFSET)))
/*
* The preempt_count offset after preempt_disable();
diff --git a/include/linux/qpnp/qpnp-revid.h b/include/linux/qpnp/qpnp-revid.h
index 42fd34c..bbc4625 100644
--- a/include/linux/qpnp/qpnp-revid.h
+++ b/include/linux/qpnp/qpnp-revid.h
@@ -214,6 +214,11 @@
#define PM660L_V1P1_REV3 0x01
#define PM660L_V1P1_REV4 0x01
+#define PM660L_V2P0_REV1 0x00
+#define PM660L_V2P0_REV2 0x00
+#define PM660L_V2P0_REV3 0x00
+#define PM660L_V2P0_REV4 0x02
+
/* PMI8998 FAB_ID */
#define PMI8998_FAB_ID_SMIC 0x11
#define PMI8998_FAB_ID_GF 0x30
diff --git a/include/linux/regulator/qpnp-labibb-regulator.h b/include/linux/regulator/qpnp-labibb-regulator.h
index 2470695..33985af 100644
--- a/include/linux/regulator/qpnp-labibb-regulator.h
+++ b/include/linux/regulator/qpnp-labibb-regulator.h
@@ -15,6 +15,7 @@
enum labibb_notify_event {
LAB_VREG_OK = 1,
+ LAB_VREG_NOT_OK,
};
int qpnp_labibb_notifier_register(struct notifier_block *nb);
diff --git a/include/linux/sched.h b/include/linux/sched.h
index bc530f7..0d4035a 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -184,6 +184,8 @@ extern void sched_get_nr_running_avg(int *avg, int *iowait_avg, int *big_avg,
unsigned int *big_max_nr);
extern unsigned int sched_get_cpu_util(int cpu);
extern u64 sched_get_cpu_last_busy_time(int cpu);
+extern u32 sched_get_wake_up_idle(struct task_struct *p);
+extern int sched_set_wake_up_idle(struct task_struct *p, int wake_up_idle);
#else
static inline void sched_update_nr_prod(int cpu, long delta, bool inc)
{
@@ -201,6 +203,15 @@ static inline u64 sched_get_cpu_last_busy_time(int cpu)
{
return 0;
}
+static inline u32 sched_get_wake_up_idle(struct task_struct *p)
+{
+ return 0;
+}
+static inline int sched_set_wake_up_idle(struct task_struct *p,
+ int wake_up_idle)
+{
+ return 0;
+}
#endif
extern void calc_global_load(unsigned long ticks);
@@ -2724,9 +2735,6 @@ struct sched_load {
unsigned long predicted_load;
};
-extern int sched_set_wake_up_idle(struct task_struct *p, int wake_up_idle);
-extern u32 sched_get_wake_up_idle(struct task_struct *p);
-
struct cpu_cycle_counter_cb {
u64 (*get_cpu_cycle_counter)(int cpu);
};
diff --git a/include/linux/security.h b/include/linux/security.h
index c2125e9..02e05de 100644
--- a/include/linux/security.h
+++ b/include/linux/security.h
@@ -30,6 +30,7 @@
#include <linux/string.h>
#include <linux/mm.h>
#include <linux/fs.h>
+#include <linux/bio.h>
struct linux_binprm;
struct cred;
@@ -256,6 +257,8 @@ int security_old_inode_init_security(struct inode *inode, struct inode *dir,
const struct qstr *qstr, const char **name,
void **value, size_t *len);
int security_inode_create(struct inode *dir, struct dentry *dentry, umode_t mode);
+int security_inode_post_create(struct inode *dir, struct dentry *dentry,
+ umode_t mode);
int security_inode_link(struct dentry *old_dentry, struct inode *dir,
struct dentry *new_dentry);
int security_inode_unlink(struct inode *dir, struct dentry *dentry);
@@ -304,6 +307,7 @@ int security_file_send_sigiotask(struct task_struct *tsk,
struct fown_struct *fown, int sig);
int security_file_receive(struct file *file);
int security_file_open(struct file *file, const struct cred *cred);
+
int security_task_create(unsigned long clone_flags);
void security_task_free(struct task_struct *task);
int security_cred_alloc_blank(struct cred *cred, gfp_t gfp);
@@ -637,6 +641,13 @@ static inline int security_inode_create(struct inode *dir,
return 0;
}
+static inline int security_inode_post_create(struct inode *dir,
+ struct dentry *dentry,
+ umode_t mode)
+{
+ return 0;
+}
+
static inline int security_inode_link(struct dentry *old_dentry,
struct inode *dir,
struct dentry *new_dentry)
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index b1a09c5..c73bef9 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -3588,6 +3588,13 @@ static inline void nf_reset_trace(struct sk_buff *skb)
#endif
}
+static inline void ipvs_reset(struct sk_buff *skb)
+{
+#if IS_ENABLED(CONFIG_IP_VS)
+ skb->ipvs_property = 0;
+#endif
+}
+
/* Note: This doesn't put any conntrack and bridge info in dst. */
static inline void __nf_copy(struct sk_buff *dst, const struct sk_buff *src,
bool copy)
diff --git a/include/linux/usb.h b/include/linux/usb.h
index 232c3e0..81e8469 100644
--- a/include/linux/usb.h
+++ b/include/linux/usb.h
@@ -757,6 +757,9 @@ extern phys_addr_t usb_get_xfer_ring_phys_addr(struct usb_device *dev,
struct usb_host_endpoint *ep, dma_addr_t *dma);
extern int usb_get_controller_id(struct usb_device *dev);
+extern int usb_stop_endpoint(struct usb_device *dev,
+ struct usb_host_endpoint *ep);
+
/* Sets up a group of bulk endpoints to support multiple stream IDs. */
extern int usb_alloc_streams(struct usb_interface *interface,
struct usb_host_endpoint **eps, unsigned int num_eps,
diff --git a/include/linux/usb/cdc_ncm.h b/include/linux/usb/cdc_ncm.h
index 00d2324..b0fad11 100644
--- a/include/linux/usb/cdc_ncm.h
+++ b/include/linux/usb/cdc_ncm.h
@@ -83,6 +83,7 @@
/* Driver flags */
#define CDC_NCM_FLAG_NDP_TO_END 0x02 /* NDP is placed at end of frame */
#define CDC_MBIM_FLAG_AVOID_ALTSETTING_TOGGLE 0x04 /* Avoid altsetting toggle during init */
+#define CDC_NCM_FLAG_RESET_NTB16 0x08 /* set NDP16 one more time after altsetting switch */
#define cdc_ncm_comm_intf_is_mbim(x) ((x)->desc.bInterfaceSubClass == USB_CDC_SUBCLASS_MBIM && \
(x)->desc.bInterfaceProtocol == USB_CDC_PROTO_NONE)
diff --git a/include/linux/usb/gadget.h b/include/linux/usb/gadget.h
index ddd8f4d..4f56e98 100644
--- a/include/linux/usb/gadget.h
+++ b/include/linux/usb/gadget.h
@@ -520,7 +520,6 @@ struct usb_gadget {
unsigned is_selfpowered:1;
unsigned deactivated:1;
unsigned connected:1;
- bool l1_supported;
bool remote_wakeup;
};
#define work_to_gadget(w) (container_of((w), struct usb_gadget, work))
diff --git a/include/linux/usb/hcd.h b/include/linux/usb/hcd.h
index 1699d2b..d070109 100644
--- a/include/linux/usb/hcd.h
+++ b/include/linux/usb/hcd.h
@@ -407,6 +407,8 @@ struct hc_driver {
struct usb_device *udev, struct usb_host_endpoint *ep,
dma_addr_t *dma);
int (*get_core_id)(struct usb_hcd *hcd);
+ int (*stop_endpoint)(struct usb_hcd *hcd, struct usb_device *udev,
+ struct usb_host_endpoint *ep);
};
static inline int hcd_giveback_urb_in_bh(struct usb_hcd *hcd)
@@ -454,6 +456,8 @@ extern phys_addr_t usb_hcd_get_sec_event_ring_phys_addr(
extern phys_addr_t usb_hcd_get_xfer_ring_phys_addr(
struct usb_device *udev, struct usb_host_endpoint *ep, dma_addr_t *dma);
extern int usb_hcd_get_controller_id(struct usb_device *udev);
+extern int usb_hcd_stop_endpoint(struct usb_device *udev,
+ struct usb_host_endpoint *ep);
struct usb_hcd *__usb_create_hcd(const struct hc_driver *driver,
struct device *sysdev, struct device *dev, const char *bus_name,
diff --git a/include/linux/usb/phy.h b/include/linux/usb/phy.h
index ffb6393..092c32e 100644
--- a/include/linux/usb/phy.h
+++ b/include/linux/usb/phy.h
@@ -138,6 +138,7 @@ struct usb_phy {
/* reset the PHY clocks */
int (*reset)(struct usb_phy *x);
+ int (*disable_chirp)(struct usb_phy *x, bool disable);
};
/**
diff --git a/include/net/cfg80211.h b/include/net/cfg80211.h
index 73da337..63ce902 100644
--- a/include/net/cfg80211.h
+++ b/include/net/cfg80211.h
@@ -4231,19 +4231,6 @@ static inline int ieee80211_data_to_8023(struct sk_buff *skb, const u8 *addr,
}
/**
- * ieee80211_data_from_8023 - convert an 802.3 frame to 802.11
- * @skb: the 802.3 frame
- * @addr: the device MAC address
- * @iftype: the virtual interface type
- * @bssid: the network bssid (used only for iftype STATION and ADHOC)
- * @qos: build 802.11 QoS data frame
- * Return: 0 on success, or a negative error code.
- */
-int ieee80211_data_from_8023(struct sk_buff *skb, const u8 *addr,
- enum nl80211_iftype iftype, const u8 *bssid,
- bool qos);
-
-/**
* ieee80211_amsdu_to_8023s - decode an IEEE 802.11n A-MSDU frame
*
* Decode an IEEE 802.11 A-MSDU and convert it to a list of 802.3 frames.
diff --git a/include/net/cnss_utils.h b/include/net/cnss_utils.h
index 6ff0fd0..77d14d1 100644
--- a/include/net/cnss_utils.h
+++ b/include/net/cnss_utils.h
@@ -33,6 +33,9 @@ extern int cnss_utils_get_driver_load_cnt(struct device *dev);
extern void cnss_utils_increment_driver_load_cnt(struct device *dev);
extern int cnss_utils_set_wlan_mac_address(const u8 *in, uint32_t len);
extern u8 *cnss_utils_get_wlan_mac_address(struct device *dev, uint32_t *num);
+extern int cnss_utils_set_wlan_derived_mac_address(const u8 *in, uint32_t len);
+extern u8 *cnss_utils_get_wlan_derived_mac_address(struct device *dev,
+ uint32_t *num);
extern void cnss_utils_set_cc_source(struct device *dev,
enum cnss_utils_cc_src cc_source);
extern enum cnss_utils_cc_src cnss_utils_get_cc_source(struct device *dev);
diff --git a/include/net/inet_sock.h b/include/net/inet_sock.h
index 236a810..0464b20 100644
--- a/include/net/inet_sock.h
+++ b/include/net/inet_sock.h
@@ -96,7 +96,7 @@ struct inet_request_sock {
kmemcheck_bitfield_end(flags);
u32 ir_mark;
union {
- struct ip_options_rcu *opt;
+ struct ip_options_rcu __rcu *ireq_opt;
#if IS_ENABLED(CONFIG_IPV6)
struct {
struct ipv6_txoptions *ipv6_opt;
@@ -132,6 +132,12 @@ static inline int inet_request_bound_dev_if(const struct sock *sk,
return sk->sk_bound_dev_if;
}
+static inline struct ip_options_rcu *ireq_opt_deref(const struct inet_request_sock *ireq)
+{
+ return rcu_dereference_check(ireq->ireq_opt,
+ atomic_read(&ireq->req.rsk_refcnt) > 0);
+}
+
struct inet_cork {
unsigned int flags;
__be32 addr;
diff --git a/include/net/netfilter/nf_conntrack.h b/include/net/netfilter/nf_conntrack.h
index 38a02fd..0406073 100644
--- a/include/net/netfilter/nf_conntrack.h
+++ b/include/net/netfilter/nf_conntrack.h
@@ -18,6 +18,7 @@
#include <linux/compiler.h>
#include <linux/atomic.h>
#include <linux/rhashtable.h>
+#include <linux/list.h>
#include <linux/netfilter/nf_conntrack_tcp.h>
#include <linux/netfilter/nf_conntrack_dccp.h>
@@ -27,6 +28,14 @@
#include <net/netfilter/nf_conntrack_tuple.h>
+#define SIP_LIST_ELEMENTS 2
+
+struct sip_length {
+ int msg_length[SIP_LIST_ELEMENTS];
+ int skb_len[SIP_LIST_ELEMENTS];
+ int data_len[SIP_LIST_ELEMENTS];
+};
+
/* per conntrack: protocol private data */
union nf_conntrack_proto {
/* insert conntrack proto private data here */
@@ -71,6 +80,11 @@ struct nf_conn_help {
#include <net/netfilter/ipv4/nf_conntrack_ipv4.h>
#include <net/netfilter/ipv6/nf_conntrack_ipv6.h>
+/* Handle NATTYPE stuff, only if the NATTYPE module was defined */
+#ifdef CONFIG_IP_NF_TARGET_NATTYPE_MODULE
+#include <linux/netfilter_ipv4/ipt_NATTYPE.h>
+#endif
+
struct nf_conn {
/* Usage count in here is 1 for hash table, 1 per skb,
* plus 1 for any connection(s) we are `master' for
@@ -101,7 +115,7 @@ struct nf_conn {
possible_net_t ct_net;
#if IS_ENABLED(CONFIG_NF_NAT)
- struct rhlist_head nat_bysource;
+ struct hlist_node nat_bysource;
#endif
/* all members below initialized via memset */
u8 __nfct_init_offset[0];
@@ -122,6 +136,15 @@ struct nf_conn {
void *sfe_entry;
+#ifdef CONFIG_IP_NF_TARGET_NATTYPE_MODULE
+ unsigned long nattype_entry;
+#endif
+ struct list_head sip_segment_list;
+ const char *dptr_prev;
+ struct sip_length segment;
+ bool sip_original_dir;
+ bool sip_reply_dir;
+
/* Storage reserved for other modules, must be the last member */
union nf_conntrack_proto proto;
};
diff --git a/include/net/netfilter/nf_conntrack_core.h b/include/net/netfilter/nf_conntrack_core.h
index af67969..abc090c 100644
--- a/include/net/netfilter/nf_conntrack_core.h
+++ b/include/net/netfilter/nf_conntrack_core.h
@@ -20,6 +20,9 @@
/* This header is used to share core functionality between the
standalone connection tracking module, and the compatibility layer's use
of connection tracking. */
+
+extern unsigned int nf_conntrack_hash_rnd;
+
unsigned int nf_conntrack_in(struct net *net, u_int8_t pf, unsigned int hooknum,
struct sk_buff *skb);
@@ -51,6 +54,9 @@ bool nf_ct_invert_tuple(struct nf_conntrack_tuple *inverse,
const struct nf_conntrack_l3proto *l3proto,
const struct nf_conntrack_l4proto *l4proto);
extern void (*delete_sfe_entry)(struct nf_conn *ct);
+extern bool (*nattype_refresh_timer)
+ (unsigned long nattype,
+ unsigned long timeout_value);
/* Find a connection corresponding to a tuple. */
struct nf_conntrack_tuple_hash *
@@ -87,4 +93,9 @@ void nf_conntrack_lock(spinlock_t *lock);
extern spinlock_t nf_conntrack_expect_lock;
+struct sip_list {
+ struct nf_queue_entry *entry;
+ struct list_head list;
+};
+
#endif /* _NF_CONNTRACK_CORE_H */
diff --git a/include/net/netfilter/nf_nat.h b/include/net/netfilter/nf_nat.h
index c327a43..02515f7 100644
--- a/include/net/netfilter/nf_nat.h
+++ b/include/net/netfilter/nf_nat.h
@@ -1,6 +1,5 @@
#ifndef _NF_NAT_H
#define _NF_NAT_H
-#include <linux/rhashtable.h>
#include <linux/netfilter_ipv4.h>
#include <linux/netfilter/nf_nat.h>
#include <net/netfilter/nf_conntrack_tuple.h>
diff --git a/include/net/tcp.h b/include/net/tcp.h
index 7b93ffd..775c3bd 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -1697,12 +1697,12 @@ static inline void tcp_highest_sack_reset(struct sock *sk)
tcp_sk(sk)->highest_sack = tcp_write_queue_head(sk);
}
-/* Called when old skb is about to be deleted (to be combined with new skb) */
-static inline void tcp_highest_sack_combine(struct sock *sk,
+/* Called when old skb is about to be deleted and replaced by new skb */
+static inline void tcp_highest_sack_replace(struct sock *sk,
struct sk_buff *old,
struct sk_buff *new)
{
- if (tcp_sk(sk)->sacked_out && (old == tcp_sk(sk)->highest_sack))
+ if (old == tcp_highest_sack(sk))
tcp_sk(sk)->highest_sack = new;
}
diff --git a/include/soc/qcom/cmd-db.h b/include/soc/qcom/cmd-db.h
index e2c72d1..3c2aff3 100644
--- a/include/soc/qcom/cmd-db.h
+++ b/include/soc/qcom/cmd-db.h
@@ -110,17 +110,18 @@ static inline u32 cmd_db_get_addr(const char *resource_id)
return 0;
}
-bool cmd_db_get_priority(u32 addr, u8 drv_id)
+static inline bool cmd_db_get_priority(u32 addr, u8 drv_id)
{
return false;
}
-int cmd_db_get_aux_data(const char *resource_id, u8 *data, int len)
+static inline int cmd_db_get_aux_data(const char *resource_id,
+ u8 *data, int len)
{
return -ENODEV;
}
-int cmd_db_get_aux_data_len(const char *resource_id)
+static inline int cmd_db_get_aux_data_len(const char *resource_id)
{
return -ENODEV;
}
diff --git a/include/soc/qcom/icnss.h b/include/soc/qcom/icnss.h
index e58a522..4a7b0d6 100644
--- a/include/soc/qcom/icnss.h
+++ b/include/soc/qcom/icnss.h
@@ -13,10 +13,15 @@
#define _ICNSS_WLAN_H_
#include <linux/interrupt.h>
+#include <linux/device.h>
#define ICNSS_MAX_IRQ_REGISTRATIONS 12
#define ICNSS_MAX_TIMESTAMP_LEN 32
+#ifndef ICNSS_API_WITH_DEV
+#define ICNSS_API_WITH_DEV
+#endif
+
enum icnss_uevent {
ICNSS_UEVENT_FW_READY,
ICNSS_UEVENT_FW_CRASHED,
@@ -34,6 +39,8 @@ struct icnss_uevent_data {
struct icnss_driver_ops {
char *name;
+ unsigned long drv_state;
+ struct device_driver driver;
int (*probe)(struct device *dev);
void (*remove)(struct device *dev);
void (*shutdown)(struct device *dev);
@@ -99,35 +106,41 @@ struct icnss_soc_info {
char fw_build_timestamp[ICNSS_MAX_TIMESTAMP_LEN + 1];
};
-extern int icnss_register_driver(struct icnss_driver_ops *driver);
-extern int icnss_unregister_driver(struct icnss_driver_ops *driver);
-extern int icnss_wlan_enable(struct icnss_wlan_enable_cfg *config,
+#define icnss_register_driver(ops) \
+ __icnss_register_driver(ops, THIS_MODULE, KBUILD_MODNAME)
+extern int __icnss_register_driver(struct icnss_driver_ops *ops,
+ struct module *owner, const char *mod_name);
+
+extern int icnss_unregister_driver(struct icnss_driver_ops *ops);
+
+extern int icnss_wlan_enable(struct device *dev,
+ struct icnss_wlan_enable_cfg *config,
enum icnss_driver_mode mode,
const char *host_version);
-extern int icnss_wlan_disable(enum icnss_driver_mode mode);
-extern void icnss_enable_irq(unsigned int ce_id);
-extern void icnss_disable_irq(unsigned int ce_id);
-extern int icnss_get_soc_info(struct icnss_soc_info *info);
-extern int icnss_ce_free_irq(unsigned int ce_id, void *ctx);
-extern int icnss_ce_request_irq(unsigned int ce_id,
+extern int icnss_wlan_disable(struct device *dev, enum icnss_driver_mode mode);
+extern void icnss_enable_irq(struct device *dev, unsigned int ce_id);
+extern void icnss_disable_irq(struct device *dev, unsigned int ce_id);
+extern int icnss_get_soc_info(struct device *dev, struct icnss_soc_info *info);
+extern int icnss_ce_free_irq(struct device *dev, unsigned int ce_id, void *ctx);
+extern int icnss_ce_request_irq(struct device *dev, unsigned int ce_id,
irqreturn_t (*handler)(int, void *),
unsigned long flags, const char *name, void *ctx);
-extern int icnss_get_ce_id(int irq);
-extern int icnss_set_fw_log_mode(uint8_t fw_log_mode);
+extern int icnss_get_ce_id(struct device *dev, int irq);
+extern int icnss_set_fw_log_mode(struct device *dev, uint8_t fw_log_mode);
extern int icnss_athdiag_read(struct device *dev, uint32_t offset,
uint32_t mem_type, uint32_t data_len,
uint8_t *output);
extern int icnss_athdiag_write(struct device *dev, uint32_t offset,
uint32_t mem_type, uint32_t data_len,
uint8_t *input);
-extern int icnss_get_irq(int ce_id);
+extern int icnss_get_irq(struct device *dev, int ce_id);
extern int icnss_power_on(struct device *dev);
extern int icnss_power_off(struct device *dev);
extern struct dma_iommu_mapping *icnss_smmu_get_mapping(struct device *dev);
extern int icnss_smmu_map(struct device *dev, phys_addr_t paddr,
uint32_t *iova_addr, size_t size);
extern unsigned int icnss_socinfo_get_serial_number(struct device *dev);
-extern bool icnss_is_qmi_disable(void);
+extern bool icnss_is_qmi_disable(struct device *dev);
extern bool icnss_is_fw_ready(void);
extern int icnss_trigger_recovery(struct device *dev);
#endif /* _ICNSS_WLAN_H_ */
diff --git a/include/soc/qcom/memory_dump.h b/include/soc/qcom/memory_dump.h
index 9fdc8ff..b4733d7 100644
--- a/include/soc/qcom/memory_dump.h
+++ b/include/soc/qcom/memory_dump.h
@@ -123,12 +123,19 @@ struct msm_dump_entry {
#ifdef CONFIG_QCOM_MEMORY_DUMP_V2
extern int msm_dump_data_register(enum msm_dump_table_ids id,
struct msm_dump_entry *entry);
+
+extern void *get_msm_dump_ptr(enum msm_dump_data_ids id);
#else
static inline int msm_dump_data_register(enum msm_dump_table_ids id,
struct msm_dump_entry *entry)
{
return -EINVAL;
}
+
+static inline void *get_msm_dump_ptr(enum msm_dump_data_ids id)
+{
+ return NULL;
+}
#endif
#endif
diff --git a/include/soc/qcom/minidump.h b/include/soc/qcom/minidump.h
new file mode 100644
index 0000000..5c751e8
--- /dev/null
+++ b/include/soc/qcom/minidump.h
@@ -0,0 +1,51 @@
+/* Copyright (c) 2017 The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef __MINIDUMP_H
+#define __MINIDUMP_H
+
+#define MAX_NAME_LENGTH 12
+/* md_region - Minidump table entry
+ * @name: Entry name, Minidump will dump binary with this name.
+ * @id: Entry ID, used only for SDI dumps.
+ * @virt_addr: Address of the entry.
+ * @phys_addr: Physical address of the entry to dump.
+ * @size: Number of bytes to dump from the @phys_addr location;
+ * it should be 4-byte aligned.
+ */
+struct md_region {
+ char name[MAX_NAME_LENGTH];
+ u32 id;
+ u64 virt_addr;
+ u64 phys_addr;
+ u64 size;
+};
+
+/* Register an entry in Minidump table
+ * Returns:
+ * Zero: on successful addition
+ * Negative error number on failures
+ */
+#ifdef CONFIG_QCOM_MINIDUMP
+extern int msm_minidump_add_region(const struct md_region *entry);
+extern bool msm_minidump_enabled(void);
+extern void dump_stack_minidump(u64 sp);
+#else
+static inline int msm_minidump_add_region(const struct md_region *entry)
+{
+ /* Return quietly, if minidump is not supported */
+ return 0;
+}
+static inline bool msm_minidump_enabled(void) { return false; }
+static inline void dump_stack_minidump(u64 sp) {}
+#endif
+#endif
diff --git a/include/soc/qcom/rpm-notifier.h b/include/soc/qcom/rpm-notifier.h
new file mode 100644
index 0000000..22e1e26
--- /dev/null
+++ b/include/soc/qcom/rpm-notifier.h
@@ -0,0 +1,63 @@
+/* Copyright (c) 2012-2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+#ifndef __ARCH_ARM_MACH_MSM_RPM_NOTIF_H
+#define __ARCH_ARM_MACH_MSM_RPM_NOTIF_H
+
+struct msm_rpm_notifier_data {
+ uint32_t rsc_type;
+ uint32_t rsc_id;
+ uint32_t key;
+ uint32_t size;
+ uint8_t *value;
+};
+/**
+ * msm_rpm_register_notifier - Register for sleep set notifications
+ *
+ * @nb - notifier block to register
+ *
+ * return 0 on success, errno on failure.
+ */
+int msm_rpm_register_notifier(struct notifier_block *nb);
+
+/**
+ * msm_rpm_unregister_notifier - Unregister previously registered notifications
+ *
+ * @nb - notifier block to unregister
+ *
+ * return 0 on success, errno on failure.
+ */
+int msm_rpm_unregister_notifier(struct notifier_block *nb);
+
+/**
+ * msm_rpm_enter_sleep - Notify RPM driver to prepare for entering sleep
+ *
+ * @print - flag to enable printing the contents of the sleep buffer.
+ * @cpumask - cpumask of next wakeup cpu
+ *
+ * return 0 on success, errno on failure.
+ */
+int msm_rpm_enter_sleep(bool print, const struct cpumask *cpumask);
+
+/**
+ * msm_rpm_exit_sleep - Notify RPM driver about resuming from power collapse
+ */
+void msm_rpm_exit_sleep(void);
+
+/**
+ * msm_rpm_waiting_for_ack - Indicate if there is RPM message
+ * pending acknowledgment.
+ * returns true for pending messages and false otherwise
+ */
+bool msm_rpm_waiting_for_ack(void);
+
+#endif /*__ARCH_ARM_MACH_MSM_RPM_NOTIF_H */
diff --git a/include/soc/qcom/rpm-smd.h b/include/soc/qcom/rpm-smd.h
new file mode 100644
index 0000000..c84d02d
--- /dev/null
+++ b/include/soc/qcom/rpm-smd.h
@@ -0,0 +1,308 @@
+/* Copyright (c) 2012, 2014-2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#ifndef __ARCH_ARM_MACH_MSM_RPM_SMD_H
+#define __ARCH_ARM_MACH_MSM_RPM_SMD_H
+
+/**
+ * enum msm_rpm_set - RPM enumerations for sleep/active set
+ * %MSM_RPM_CTX_ACTIVE_SET: Set resource parameters for active mode.
+ * %MSM_RPM_CTX_SLEEP_SET: Set resource parameters for sleep.
+ */
+enum msm_rpm_set {
+ MSM_RPM_CTX_ACTIVE_SET,
+ MSM_RPM_CTX_SLEEP_SET,
+};
+
+struct msm_rpm_request;
+
+struct msm_rpm_kvp {
+ uint32_t key;
+ uint32_t length;
+ uint8_t *data;
+};
+#ifdef CONFIG_MSM_RPM_SMD
+/**
+ * msm_rpm_create_request() - Creates a parent element to identify the
+ * resource on the RPM, that stores the KVPs for different fields modified
+ * for a hardware resource
+ *
+ * @set: if the device is setting the active/sleep set parameter
+ * for the resource
+ * @rsc_type: unsigned 32 bit integer that identifies the type of the resource
+ * @rsc_id: unsigned 32 bit that uniquely identifies a resource within a type
+ * @num_elements: number of KVP pairs associated with the resource
+ *
+ * returns pointer to a msm_rpm_request on success, NULL on error
+ */
+struct msm_rpm_request *msm_rpm_create_request(
+ enum msm_rpm_set set, uint32_t rsc_type,
+ uint32_t rsc_id, int num_elements);
+
+/**
+ * msm_rpm_create_request_noirq() - Creates a parent element to identify the
+ * resource on the RPM, that stores the KVPs for different fields modified
+ * for a hardware resource. This function is similar to msm_rpm_create_request
+ * except that it has to be called with interrupts masked.
+ *
+ * @set: if the device is setting the active/sleep set parameter
+ * for the resource
+ * @rsc_type: unsigned 32 bit integer that identifies the type of the resource
+ * @rsc_id: unsigned 32 bit that uniquely identifies a resource within a type
+ * @num_elements: number of KVP pairs associated with the resource
+ *
+ * returns pointer to a msm_rpm_request on success, NULL on error
+ */
+struct msm_rpm_request *msm_rpm_create_request_noirq(
+ enum msm_rpm_set set, uint32_t rsc_type,
+ uint32_t rsc_id, int num_elements);
+
+/**
+ * msm_rpm_add_kvp_data() - Adds a key-value pair to an existing RPM resource.
+ *
+ * @handle: RPM resource handle to which the data should be appended
+ * @key: unsigned integer identifying the parameter modified
+ * @data: byte array that contains the value corresponding to key.
+ * @size: size of data in bytes.
+ *
+ * returns 0 on success or errno
+ */
+int msm_rpm_add_kvp_data(struct msm_rpm_request *handle,
+ uint32_t key, const uint8_t *data, int size);
+
+/**
+ * msm_rpm_add_kvp_data_noirq() - Adds a key-value pair to an existing RPM
+ * resource. This function is similar to msm_rpm_add_kvp_data except that it
+ * has to be called with interrupts masked.
+ *
+ * @handle: RPM resource handle to which the data should be appended
+ * @key: unsigned integer identifying the parameter modified
+ * @data: byte array that contains the value corresponding to key.
+ * @size: size of data in bytes.
+ *
+ * returns 0 on success or errno
+ */
+int msm_rpm_add_kvp_data_noirq(struct msm_rpm_request *handle,
+ uint32_t key, const uint8_t *data, int size);
+
+/** msm_rpm_free_request() - clean up the RPM request handle created with
+ * msm_rpm_create_request
+ *
+ * @handle: RPM resource handle to be cleared.
+ */
+
+void msm_rpm_free_request(struct msm_rpm_request *handle);
+
+/**
+ * msm_rpm_send_request() - Send the RPM messages using SMD. The function
+ * assigns a message id before sending the data out to the RPM. RPM hardware
+ * uses the message id to acknowledge the messages.
+ *
+ * @handle: pointer to the msm_rpm_request for the resource being modified.
+ *
+ * returns non-zero message id on success and zero on a failed transaction.
+ * The drivers use message id to wait for ACK from RPM.
+ */
+int msm_rpm_send_request(struct msm_rpm_request *handle);
+
+/**
+ * msm_rpm_send_request_noack() - Send the RPM messages using SMD. The function
+ * assigns a message id before sending the data out to the RPM. RPM hardware
+ * uses the message id to acknowledge the messages, but this API does not wait
+ * on the ACK for this message id and it does not add the message id to the wait
+ * list.
+ *
+ * @handle: pointer to the msm_rpm_request for the resource being modified.
+ *
+ * returns NULL on success and PTR_ERR on a failed transaction.
+ */
+void *msm_rpm_send_request_noack(struct msm_rpm_request *handle);
+
+/**
+ * msm_rpm_send_request_noirq() - Send the RPM messages using SMD. The
+ * function assigns a message id before sending the data out to the RPM.
+ * RPM hardware uses the message id to acknowledge the messages. This function
+ * is similar to msm_rpm_send_request except that it has to be called with
+ * interrupts masked.
+ *
+ * @handle: pointer to the msm_rpm_request for the resource being modified.
+ *
+ * returns non-zero message id on success and zero on a failed transaction.
+ * The drivers use message id to wait for ACK from RPM.
+ */
+int msm_rpm_send_request_noirq(struct msm_rpm_request *handle);
+
+/**
+ * msm_rpm_wait_for_ack() - A blocking call that waits for acknowledgment of
+ * a message from RPM.
+ *
+ * @msg_id: the return from msm_rpm_send_request
+ *
+ * returns 0 on success or errno
+ */
+int msm_rpm_wait_for_ack(uint32_t msg_id);
+
+/**
+ * msm_rpm_wait_for_ack_noirq() - A blocking call that waits for acknowledgment
+ * of a message from RPM. This function is similar to msm_rpm_wait_for_ack
+ * except that it has to be called with interrupts masked.
+ *
+ * @msg_id: the return from msm_rpm_send_request
+ *
+ * returns 0 on success or errno
+ */
+int msm_rpm_wait_for_ack_noirq(uint32_t msg_id);
+
+/**
+ * msm_rpm_send_message() - Wrapper function for clients to send data given an
+ * array of key value pairs.
+ *
+ * @set: if the device is setting the active/sleep set parameter
+ * for the resource
+ * @rsc_type: unsigned 32 bit integer that identifies the type of the resource
+ * @rsc_id: unsigned 32 bit that uniquely identifies a resource within a type
+ * @kvp: array of KVP data.
+ * @nelem: number of KVP pairs associated with the message.
+ *
+ * returns 0 on success and errno on failure.
+ */
+int msm_rpm_send_message(enum msm_rpm_set set, uint32_t rsc_type,
+ uint32_t rsc_id, struct msm_rpm_kvp *kvp, int nelems);
+
+/**
+ * msm_rpm_send_message_noack() - Wrapper function for clients to send data
+ * given an array of key value pairs without waiting for ack.
+ *
+ * @set: if the device is setting the active/sleep set parameter
+ * for the resource
+ * @rsc_type: unsigned 32 bit integer that identifies the type of the resource
+ * @rsc_id: unsigned 32 bit that uniquely identifies a resource within a type
+ * @kvp: array of KVP data.
+ * @nelem: number of KVP pairs associated with the message.
+ *
+ * returns NULL on success and PTR_ERR(errno) on failure.
+ */
+void *msm_rpm_send_message_noack(enum msm_rpm_set set, uint32_t rsc_type,
+ uint32_t rsc_id, struct msm_rpm_kvp *kvp, int nelems);
+
+/**
+ * msm_rpm_send_message_noirq() - Wrapper function for clients to send data
+ * given an array of key value pairs. This function is similar to the
+ * msm_rpm_send_message() except that it has to be called with interrupts
+ * disabled. Clients should choose the irq version when possible for system
+ * performance.
+ *
+ * @set: if the device is setting the active/sleep set parameter
+ * for the resource
+ * @rsc_type: unsigned 32 bit integer that identifies the type of the resource
+ * @rsc_id: unsigned 32 bit that uniquely identifies a resource within a type
+ * @kvp: array of KVP data.
+ * @nelem: number of KVP pairs associated with the message.
+ *
+ * returns 0 on success and errno on failure.
+ */
+int msm_rpm_send_message_noirq(enum msm_rpm_set set, uint32_t rsc_type,
+ uint32_t rsc_id, struct msm_rpm_kvp *kvp, int nelems);
+
+/**
+ * msm_rpm_driver_init() - Initialization function that registers for a
+ * rpm platform driver.
+ *
+ * returns 0 on success.
+ */
+int __init msm_rpm_driver_init(void);
+
+#else
+
+static inline struct msm_rpm_request *msm_rpm_create_request(
+ enum msm_rpm_set set, uint32_t rsc_type,
+ uint32_t rsc_id, int num_elements)
+{
+ return NULL;
+}
+
+static inline struct msm_rpm_request *msm_rpm_create_request_noirq(
+ enum msm_rpm_set set, uint32_t rsc_type,
+ uint32_t rsc_id, int num_elements)
+{
+ return NULL;
+
+}
+static inline uint32_t msm_rpm_add_kvp_data(struct msm_rpm_request *handle,
+ uint32_t key, const uint8_t *data, int count)
+{
+ return 0;
+}
+static inline uint32_t msm_rpm_add_kvp_data_noirq(
+ struct msm_rpm_request *handle, uint32_t key,
+ const uint8_t *data, int count)
+{
+ return 0;
+}
+
+static inline void msm_rpm_free_request(struct msm_rpm_request *handle)
+{
+}
+
+static inline int msm_rpm_send_request(struct msm_rpm_request *handle)
+{
+ return 0;
+}
+
+static inline int msm_rpm_send_request_noirq(struct msm_rpm_request *handle)
+{
+ return 0;
+
+}
+
+static inline void *msm_rpm_send_request_noack(struct msm_rpm_request *handle)
+{
+ return NULL;
+}
+
+static inline int msm_rpm_send_message(enum msm_rpm_set set, uint32_t rsc_type,
+ uint32_t rsc_id, struct msm_rpm_kvp *kvp, int nelems)
+{
+ return 0;
+}
+
+static inline int msm_rpm_send_message_noirq(enum msm_rpm_set set,
+ uint32_t rsc_type, uint32_t rsc_id, struct msm_rpm_kvp *kvp,
+ int nelems)
+{
+ return 0;
+}
+
+static inline void *msm_rpm_send_message_noack(enum msm_rpm_set set,
+ uint32_t rsc_type, uint32_t rsc_id, struct msm_rpm_kvp *kvp,
+ int nelems)
+{
+ return NULL;
+}
+
+static inline int msm_rpm_wait_for_ack(uint32_t msg_id)
+{
+ return 0;
+}
+static inline int msm_rpm_wait_for_ack_noirq(uint32_t msg_id)
+{
+ return 0;
+}
+
+static inline int __init msm_rpm_driver_init(void)
+{
+ return 0;
+}
+#endif
+#endif /*__ARCH_ARM_MACH_MSM_RPM_SMD_H*/
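The message-based API above (`msm_rpm_send_message()` and friends) carries requests as an array of key/value pairs. A minimal sketch of filling one entry, assuming a key/length/data layout for `struct msm_rpm_kvp` (the real definition lives in the kernel sources and may differ; the `"swen"` key code is purely illustrative):

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-in for struct msm_rpm_kvp; the kernel's real
 * definition may differ -- this only models the key/length/data shape. */
struct msm_rpm_kvp {
	uint32_t key;		/* packed 4-char code */
	uint32_t length;	/* number of bytes in data */
	uint8_t *data;
};

/* Fill one KVP entry with a 4-char key code and a pointer to the value. */
static int build_kvp(struct msm_rpm_kvp *kvp, const char key4[4],
		     uint32_t *value)
{
	memcpy(&kvp->key, key4, sizeof(kvp->key));
	kvp->length = sizeof(*value);
	kvp->data = (uint8_t *)value;
	return 0;
}
```

A caller would then pass the array and its element count (`nelems`) to `msm_rpm_send_message()` or its `_noirq` variant.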
diff --git a/include/soc/qcom/scm.h b/include/soc/qcom/scm.h
index ac8b2eb..63698cf 100644
--- a/include/soc/qcom/scm.h
+++ b/include/soc/qcom/scm.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2010-2016, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2010-2017, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -229,7 +229,7 @@ static inline int scm_io_write(phys_addr_t address, u32 val)
return 0;
}
-inline bool scm_is_secure_device(void)
+static inline bool scm_is_secure_device(void)
{
return false;
}
diff --git a/include/soc/qcom/smd.h b/include/soc/qcom/smd.h
new file mode 100644
index 0000000..9853a93
--- /dev/null
+++ b/include/soc/qcom/smd.h
@@ -0,0 +1,381 @@
+/* include/soc/qcom/smd.h
+ *
+ * Copyright (C) 2007 Google, Inc.
+ * Copyright (c) 2009-2014, 2017, The Linux Foundation. All rights reserved.
+ * Author: Brian Swetland <swetland@google.com>
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#ifndef __ASM_ARCH_MSM_SMD_H
+#define __ASM_ARCH_MSM_SMD_H
+
+#include <linux/io.h>
+
+#include <soc/qcom/smem.h>
+
+typedef struct smd_channel smd_channel_t;
+struct cpumask;
+
+#define SMD_MAX_CH_NAME_LEN 20 /* includes null char at end */
+
+#define SMD_EVENT_DATA 1
+#define SMD_EVENT_OPEN 2
+#define SMD_EVENT_CLOSE 3
+#define SMD_EVENT_STATUS 4
+#define SMD_EVENT_REOPEN_READY 5
+
+/*
+ * SMD Processor IDs.
+ *
+ * For all processors that have both SMSM and SMD clients,
+ * the SMSM Processor ID and the SMD Processor ID will
+ * be the same. In cases where a processor only supports
+ * SMD, the entry will only exist in this enum.
+ */
+enum {
+ SMD_APPS = SMEM_APPS,
+ SMD_MODEM = SMEM_MODEM,
+ SMD_Q6 = SMEM_Q6,
+ SMD_DSPS = SMEM_DSPS,
+ SMD_TZ = SMEM_DSPS,
+ SMD_WCNSS = SMEM_WCNSS,
+ SMD_MODEM_Q6_FW = SMEM_MODEM_Q6_FW,
+ SMD_RPM = SMEM_RPM,
+ NUM_SMD_SUBSYSTEMS,
+};
+
+enum {
+ SMD_APPS_MODEM = 0,
+ SMD_APPS_QDSP,
+ SMD_MODEM_QDSP,
+ SMD_APPS_DSPS,
+ SMD_MODEM_DSPS,
+ SMD_QDSP_DSPS,
+ SMD_APPS_WCNSS,
+ SMD_MODEM_WCNSS,
+ SMD_QDSP_WCNSS,
+ SMD_DSPS_WCNSS,
+ SMD_APPS_Q6FW,
+ SMD_MODEM_Q6FW,
+ SMD_QDSP_Q6FW,
+ SMD_DSPS_Q6FW,
+ SMD_WCNSS_Q6FW,
+ SMD_APPS_RPM,
+ SMD_MODEM_RPM,
+ SMD_QDSP_RPM,
+ SMD_WCNSS_RPM,
+ SMD_TZ_RPM,
+ SMD_NUM_TYPE,
+};
+
+#ifdef CONFIG_MSM_SMD
+int smd_close(smd_channel_t *ch);
+
+/* passing a null pointer for data reads and discards */
+int smd_read(smd_channel_t *ch, void *data, int len);
+int smd_read_from_cb(smd_channel_t *ch, void *data, int len);
+
+/* Write to stream channels may do a partial write and return
+ * the length actually written.
+ * Write to packet channels will never do a partial write --
+ * it will return the requested length written or an error.
+ */
+int smd_write(smd_channel_t *ch, const void *data, int len);
+
+int smd_write_avail(smd_channel_t *ch);
+int smd_read_avail(smd_channel_t *ch);
+
+/* Returns the total size of the current packet being read.
+ * Returns 0 if no packet is available or the channel is a stream channel.
+ */
+int smd_cur_packet_size(smd_channel_t *ch);
+
+/* These are used to get and set the interface (IF) signals of a channel.
+ * DTR and RTS can be set; DSR, CTS, CD and RI can be read.
+ */
+int smd_tiocmget(smd_channel_t *ch);
+int smd_tiocmset(smd_channel_t *ch, unsigned int set, unsigned int clear);
+int
+smd_tiocmset_from_cb(smd_channel_t *ch, unsigned int set, unsigned int clear);
+int smd_named_open_on_edge(const char *name, uint32_t edge, smd_channel_t **_ch,
+ void *priv, void (*notify)(void *, unsigned int));
+
+/* Tells the other end of the smd channel that this end wants to receive
+ * interrupts when the written data is read. Read interrupts should only be
+ * enabled when there is no space left in the buffer to write to; thus the
+ * interrupt acts as notification that space may be available. If the
+ * other side does not support enabling/disabling interrupts on demand,
+ * then this function has no effect.
+ */
+void smd_enable_read_intr(smd_channel_t *ch);
+
+/* Tells the other end of the smd channel that this end does not want
+ * interrupts when written data is read. The interrupts should be
+ * disabled by default. If the other side does not support enabling/
+ * disabling interrupts on demand, then this function has no effect.
+ */
+void smd_disable_read_intr(smd_channel_t *ch);
+
+/**
+ * Enable/disable receive interrupts for the remote processor used by a
+ * particular channel.
+ * @ch: open channel handle to use for the edge
+ * @mask: 1 = mask interrupts; 0 = unmask interrupts
+ * @cpumask: cpumask for the next cpu scheduled to be woken up
+ * @returns: 0 for success; < 0 for failure
+ *
+ * Note that this enables/disables all interrupts from the remote subsystem for
+ * all channels. As such, it should be used with care and only for specific
+ * use cases such as power-collapse sequencing.
+ */
+int smd_mask_receive_interrupt(smd_channel_t *ch, bool mask,
+ const struct cpumask *cpumask);
+
+/* Starts a packet transaction. The size of the packet may exceed the total
+ * size of the smd ring buffer.
+ *
+ * @ch: channel to write the packet to
+ * @len: total length of the packet
+ *
+ * Returns:
+ * 0 - success
+ * -ENODEV - invalid smd channel
+ * -EACCES - non-packet channel specified
+ * -EINVAL - invalid length
+ * -EBUSY - transaction already in progress
+ * -EAGAIN - not enough memory in ring buffer to start transaction
+ * -EPERM - unable to successfully start transaction due to write error
+ */
+int smd_write_start(smd_channel_t *ch, int len);
+
+/* Writes a segment of the packet for a packet transaction.
+ *
+ * @ch: channel to write packet to
+ * @data: buffer of data to write
+ * @len: length of data buffer
+ *
+ * Returns:
+ * number of bytes written
+ * -ENODEV - invalid smd channel
+ * -EINVAL - invalid length
+ * -ENOEXEC - transaction not started
+ */
+int smd_write_segment(smd_channel_t *ch, const void *data, int len);
+
+/* Completes a packet transaction. Do not call from interrupt context.
+ *
+ * @ch: channel to complete transaction on
+ *
+ * Returns:
+ * 0 - success
+ * -ENODEV - invalid smd channel
+ * -E2BIG - some amount of the packet is not yet written
+ */
+int smd_write_end(smd_channel_t *ch);
+
+/**
+ * smd_write_segment_avail() - available write space for packet transactions
+ * @ch: channel to write packet to
+ * @returns: number of bytes available to write to, or -ENODEV for invalid ch
+ *
+ * This is a version of smd_write_avail() intended for use with packet
+ * transactions. This version correctly accounts for any internal reserved
+ * space at all stages of the transaction.
+ */
+int smd_write_segment_avail(smd_channel_t *ch);
+
+/*
+ * Returns a pointer to the subsystem name or NULL if no
+ * subsystem name is available.
+ *
+ * @type - Edge definition
+ */
+const char *smd_edge_to_subsystem(uint32_t type);
+
+/*
+ * Returns a pointer to the subsystem name given the
+ * remote processor ID.
+ *
+ * @pid: Remote processor ID
+ * @returns: Pointer to subsystem name or NULL if not found
+ */
+const char *smd_pid_to_subsystem(uint32_t pid);
+
+/*
+ * Checks to see if a new packet has arrived on the channel. Only to be
+ * called with interrupts disabled.
+ *
+ * @ch: channel to check if a packet has arrived
+ *
+ * Returns:
+ * 0 - packet not available
+ * 1 - packet available
+ * -EINVAL - NULL parameter or non-packet based channel provided
+ */
+int smd_is_pkt_avail(smd_channel_t *ch);
+
+/*
+ * SMD initialization function that registers an SMD platform driver.
+ *
+ * returns 0 on successful driver registration.
+ */
+int __init msm_smd_init(void);
+
+/**
+ * smd_remote_ss_to_edge() - return edge type from remote ss type
+ * @name: remote subsystem name
+ *
+ * Returns the edge type connected between the local subsystem(APPS)
+ * and remote subsystem @name.
+ */
+int smd_remote_ss_to_edge(const char *name);
+
+/**
+ * smd_edge_to_pil_str - Returns the PIL string used to load the remote side of
+ * the indicated edge.
+ *
+ * @type - Edge definition
+ * @returns - The PIL string to load the remote side of @type or NULL if the
+ * PIL string does not exist.
+ */
+const char *smd_edge_to_pil_str(uint32_t type);
+
+#else
+
+static inline int smd_close(smd_channel_t *ch)
+{
+ return -ENODEV;
+}
+
+static inline int smd_read(smd_channel_t *ch, void *data, int len)
+{
+ return -ENODEV;
+}
+
+static inline int smd_read_from_cb(smd_channel_t *ch, void *data, int len)
+{
+ return -ENODEV;
+}
+
+static inline int smd_write(smd_channel_t *ch, const void *data, int len)
+{
+ return -ENODEV;
+}
+
+static inline int smd_write_avail(smd_channel_t *ch)
+{
+ return -ENODEV;
+}
+
+static inline int smd_read_avail(smd_channel_t *ch)
+{
+ return -ENODEV;
+}
+
+static inline int smd_cur_packet_size(smd_channel_t *ch)
+{
+ return -ENODEV;
+}
+
+static inline int smd_tiocmget(smd_channel_t *ch)
+{
+ return -ENODEV;
+}
+
+static inline int
+smd_tiocmset(smd_channel_t *ch, unsigned int set, unsigned int clear)
+{
+ return -ENODEV;
+}
+
+static inline int
+smd_tiocmset_from_cb(smd_channel_t *ch, unsigned int set, unsigned int clear)
+{
+ return -ENODEV;
+}
+
+static inline int
+smd_named_open_on_edge(const char *name, uint32_t edge, smd_channel_t **_ch,
+ void *priv, void (*notify)(void *, unsigned int))
+{
+ return -ENODEV;
+}
+
+static inline void smd_enable_read_intr(smd_channel_t *ch)
+{
+}
+
+static inline void smd_disable_read_intr(smd_channel_t *ch)
+{
+}
+
+static inline int smd_mask_receive_interrupt(smd_channel_t *ch, bool mask,
+ const struct cpumask *cpumask)
+{
+ return -ENODEV;
+}
+
+static inline int smd_write_start(smd_channel_t *ch, int len)
+{
+ return -ENODEV;
+}
+
+static inline int
+smd_write_segment(smd_channel_t *ch, const void *data, int len)
+{
+ return -ENODEV;
+}
+
+static inline int smd_write_end(smd_channel_t *ch)
+{
+ return -ENODEV;
+}
+
+static inline int smd_write_segment_avail(smd_channel_t *ch)
+{
+ return -ENODEV;
+}
+
+static inline const char *smd_edge_to_subsystem(uint32_t type)
+{
+ return NULL;
+}
+
+static inline const char *smd_pid_to_subsystem(uint32_t pid)
+{
+ return NULL;
+}
+
+static inline int smd_is_pkt_avail(smd_channel_t *ch)
+{
+ return -ENODEV;
+}
+
+static inline int __init msm_smd_init(void)
+{
+ return 0;
+}
+
+static inline int smd_remote_ss_to_edge(const char *name)
+{
+ return -EINVAL;
+}
+
+static inline const char *smd_edge_to_pil_str(uint32_t type)
+{
+ return NULL;
+}
+#endif
+
+#endif
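The packet-transaction API in this header documents a three-step pattern: `smd_write_start()`, a loop of possibly-partial `smd_write_segment()` calls, then `smd_write_end()`. A sketch of that calling pattern using toy stand-in functions (the real implementations live in the SMD driver; the names and 4-byte chunk limit here are only for illustration):

```c
#include <string.h>

/* Toy stand-ins for the packet-transaction calls so the calling pattern
 * can be exercised in isolation; the real smd_write_start()/_segment()/
 * _end() live in the SMD driver, not here. */
#define FAKE_CHUNK 4		/* pretend only 4 bytes fit per segment call */

static char fake_ring[64];
static int fake_ring_len;

static int smd_write_start_stub(int len)
{
	fake_ring_len = 0;
	return (len > 0 && len <= (int)sizeof(fake_ring)) ? 0 : -1;
}

/* Like smd_write_segment(): may write fewer bytes than requested. */
static int smd_write_segment_stub(const void *data, int len)
{
	int n = len > FAKE_CHUNK ? FAKE_CHUNK : len;

	memcpy(fake_ring + fake_ring_len, data, n);
	fake_ring_len += n;
	return n;
}

static int smd_write_end_stub(void)
{
	return 0;
}

/* The documented pattern: start, loop on partial segment writes, end. */
static int send_packet(const void *pkt, int len)
{
	int sent = 0;

	if (smd_write_start_stub(len))
		return -1;
	while (sent < len)
		sent += smd_write_segment_stub((const char *)pkt + sent,
					       len - sent);
	return smd_write_end_stub();
}
```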
diff --git a/include/soc/qcom/smem.h b/include/soc/qcom/smem.h
index bef98d6..6bb76f7 100644
--- a/include/soc/qcom/smem.h
+++ b/include/soc/qcom/smem.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2013-2016, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2013-2016, 2017, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -21,7 +21,8 @@ enum {
SMEM_Q6,
SMEM_DSPS,
SMEM_WCNSS,
- SMEM_CDSP,
+ SMEM_MODEM_Q6_FW,
+ SMEM_CDSP = SMEM_MODEM_Q6_FW,
SMEM_RPM,
SMEM_TZ,
SMEM_SPSS,
diff --git a/include/soc/qcom/smsm.h b/include/soc/qcom/smsm.h
new file mode 100644
index 0000000..00d31e8
--- /dev/null
+++ b/include/soc/qcom/smsm.h
@@ -0,0 +1,147 @@
+/* Copyright (c) 2011-2013, 2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _ARCH_ARM_MACH_MSM_SMSM_H_
+#define _ARCH_ARM_MACH_MSM_SMSM_H_
+
+#include <soc/qcom/smem.h>
+
+enum {
+ SMSM_APPS_STATE,
+ SMSM_MODEM_STATE,
+ SMSM_Q6_STATE,
+ SMSM_APPS_DEM,
+ SMSM_WCNSS_STATE = SMSM_APPS_DEM,
+ SMSM_MODEM_DEM,
+ SMSM_DSPS_STATE = SMSM_MODEM_DEM,
+ SMSM_Q6_DEM,
+ SMSM_POWER_MASTER_DEM,
+ SMSM_TIME_MASTER_DEM,
+};
+extern uint32_t SMSM_NUM_ENTRIES;
+
+/*
+ * Ordered by when processors adopted the SMSM protocol. May not be 1-to-1
+ * with SMEM PIDs, despite initial expectations.
+ */
+enum {
+ SMSM_APPS = SMEM_APPS,
+ SMSM_MODEM = SMEM_MODEM,
+ SMSM_Q6 = SMEM_Q6,
+ SMSM_WCNSS,
+ SMSM_DSPS,
+};
+extern uint32_t SMSM_NUM_HOSTS;
+
+#define SMSM_INIT 0x00000001
+#define SMSM_SMDINIT 0x00000008
+#define SMSM_RPCINIT 0x00000020
+#define SMSM_RESET 0x00000040
+#define SMSM_TIMEWAIT 0x00000400
+#define SMSM_TIMEINIT 0x00000800
+#define SMSM_PROC_AWAKE 0x00001000
+#define SMSM_SMD_LOOPBACK 0x00800000
+
+#define SMSM_USB_PLUG_UNPLUG 0x00002000
+
+#define SMSM_A2_POWER_CONTROL 0x00000002
+#define SMSM_A2_POWER_CONTROL_ACK 0x00000800
+
+#ifdef CONFIG_MSM_SMD
+int smsm_change_state(uint32_t smsm_entry,
+ uint32_t clear_mask, uint32_t set_mask);
+
+/*
+ * Changes the global interrupt mask. The set and clear masks are re-applied
+ * every time the global interrupt mask is updated for callback registration
+ * and de-registration.
+ *
+ * The clear mask is applied first, so if a bit is set to 1 in both the clear
+ * mask and the set mask, the result will be that the interrupt is set.
+ *
+ * @smsm_entry SMSM entry to change
+ * @clear_mask 1 = clear bit, 0 = no-op
+ * @set_mask 1 = set bit, 0 = no-op
+ *
+ * @returns 0 for success, < 0 for error
+ */
+int smsm_change_intr_mask(uint32_t smsm_entry,
+ uint32_t clear_mask, uint32_t set_mask);
+int smsm_get_intr_mask(uint32_t smsm_entry, uint32_t *intr_mask);
+uint32_t smsm_get_state(uint32_t smsm_entry);
+int smsm_state_cb_register(uint32_t smsm_entry, uint32_t mask,
+ void (*notify)(void *, uint32_t old_state, uint32_t new_state),
+ void *data);
+int smsm_state_cb_deregister(uint32_t smsm_entry, uint32_t mask,
+ void (*notify)(void *, uint32_t, uint32_t), void *data);
+
+#else
+static inline int smsm_change_state(uint32_t smsm_entry,
+ uint32_t clear_mask, uint32_t set_mask)
+{
+ return -ENODEV;
+}
+
+/*
+ * Changes the global interrupt mask. The set and clear masks are re-applied
+ * every time the global interrupt mask is updated for callback registration
+ * and de-registration.
+ *
+ * The clear mask is applied first, so if a bit is set to 1 in both the clear
+ * mask and the set mask, the result will be that the interrupt is set.
+ *
+ * @smsm_entry SMSM entry to change
+ * @clear_mask 1 = clear bit, 0 = no-op
+ * @set_mask 1 = set bit, 0 = no-op
+ *
+ * @returns 0 for success, < 0 for error
+ */
+static inline int smsm_change_intr_mask(uint32_t smsm_entry,
+ uint32_t clear_mask, uint32_t set_mask)
+{
+ return -ENODEV;
+}
+
+static inline int smsm_get_intr_mask(uint32_t smsm_entry, uint32_t *intr_mask)
+{
+ return -ENODEV;
+}
+static inline uint32_t smsm_get_state(uint32_t smsm_entry)
+{
+ return 0;
+}
+static inline int smsm_state_cb_register(uint32_t smsm_entry, uint32_t mask,
+ void (*notify)(void *, uint32_t old_state, uint32_t new_state),
+ void *data)
+{
+ return -ENODEV;
+}
+static inline int smsm_state_cb_deregister(uint32_t smsm_entry, uint32_t mask,
+ void (*notify)(void *, uint32_t, uint32_t), void *data)
+{
+ return -ENODEV;
+}
+static inline void smsm_reset_modem(unsigned int mode)
+{
+}
+static inline void smsm_reset_modem_cont(void)
+{
+}
+static inline void smd_sleep_exit(void)
+{
+}
+static inline int smsm_check_for_modem_crash(void)
+{
+ return -ENODEV;
+}
+#endif
+#endif
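The `smsm_change_intr_mask()` documentation above specifies that the clear mask is applied before the set mask, so a bit present in both masks ends up set. A pure-bit-logic model of that rule, using the `SMSM_*` state bits defined in this header:

```c
#include <stdint.h>

#define SMSM_INIT	0x00000001
#define SMSM_SMDINIT	0x00000008
#define SMSM_RESET	0x00000040

/* Model of the documented mask update: the clear mask is applied first,
 * then the set mask, so a bit set in both masks ends up set. */
static uint32_t apply_intr_masks(uint32_t cur, uint32_t clear_mask,
				 uint32_t set_mask)
{
	cur &= ~clear_mask;	/* clear first */
	cur |= set_mask;	/* then set */
	return cur;
}
```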
diff --git a/include/sound/seq_kernel.h b/include/sound/seq_kernel.h
index feb58d4..4b9ee30 100644
--- a/include/sound/seq_kernel.h
+++ b/include/sound/seq_kernel.h
@@ -49,7 +49,8 @@ typedef union snd_seq_timestamp snd_seq_timestamp_t;
#define SNDRV_SEQ_DEFAULT_CLIENT_EVENTS 200
/* max delivery path length */
-#define SNDRV_SEQ_MAX_HOPS 10
+/* NOTE: this shouldn't be greater than MAX_LOCKDEP_SUBCLASSES */
+#define SNDRV_SEQ_MAX_HOPS 8
/* max size of event size */
#define SNDRV_SEQ_MAX_EVENT_LEN 0x3fffffff
diff --git a/include/target/target_core_base.h b/include/target/target_core_base.h
index 0383c60..a87e894 100644
--- a/include/target/target_core_base.h
+++ b/include/target/target_core_base.h
@@ -197,6 +197,7 @@ enum tcm_tmreq_table {
TMR_LUN_RESET = 5,
TMR_TARGET_WARM_RESET = 6,
TMR_TARGET_COLD_RESET = 7,
+ TMR_UNKNOWN = 0xff,
};
/* fabric independent task management response values */
diff --git a/include/trace/events/iommu.h b/include/trace/events/iommu.h
index 255e228..4be890a 100644
--- a/include/trace/events/iommu.h
+++ b/include/trace/events/iommu.h
@@ -201,7 +201,7 @@ DEFINE_EVENT(iommu_error, io_page_fault,
TP_ARGS(dev, iova, flags)
);
-DECLARE_EVENT_CLASS(iommu_errata_tlbi,
+DECLARE_EVENT_CLASS(iommu_tlbi,
TP_PROTO(struct device *dev, u64 time),
@@ -222,35 +222,35 @@ DECLARE_EVENT_CLASS(iommu_errata_tlbi,
)
);
-DEFINE_EVENT(iommu_errata_tlbi, errata_tlbi_start,
+DEFINE_EVENT(iommu_tlbi, tlbi_start,
TP_PROTO(struct device *dev, u64 time),
TP_ARGS(dev, time)
);
-DEFINE_EVENT(iommu_errata_tlbi, errata_tlbi_end,
+DEFINE_EVENT(iommu_tlbi, tlbi_end,
TP_PROTO(struct device *dev, u64 time),
TP_ARGS(dev, time)
);
-DEFINE_EVENT(iommu_errata_tlbi, errata_throttle_start,
+DEFINE_EVENT(iommu_tlbi, tlbi_throttle_start,
TP_PROTO(struct device *dev, u64 time),
TP_ARGS(dev, time)
);
-DEFINE_EVENT(iommu_errata_tlbi, errata_throttle_end,
+DEFINE_EVENT(iommu_tlbi, tlbi_throttle_end,
TP_PROTO(struct device *dev, u64 time),
TP_ARGS(dev, time)
);
-DEFINE_EVENT(iommu_errata_tlbi, errata_failed,
+DEFINE_EVENT(iommu_tlbi, tlbsync_timeout,
TP_PROTO(struct device *dev, u64 time),
diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h
index b4bcedf..8dc7ad5 100644
--- a/include/trace/events/sched.h
+++ b/include/trace/events/sched.h
@@ -262,9 +262,9 @@ TRACE_EVENT(sched_update_history,
TRACE_EVENT(sched_get_task_cpu_cycles,
- TP_PROTO(int cpu, int event, u64 cycles, u64 exec_time),
+ TP_PROTO(int cpu, int event, u64 cycles, u64 exec_time, struct task_struct *p),
- TP_ARGS(cpu, event, cycles, exec_time),
+ TP_ARGS(cpu, event, cycles, exec_time, p),
TP_STRUCT__entry(
__field(int, cpu )
@@ -273,6 +273,8 @@ TRACE_EVENT(sched_get_task_cpu_cycles,
__field(u64, exec_time )
__field(u32, freq )
__field(u32, legacy_freq )
+ __field(pid_t, pid )
+ __array(char, comm, TASK_COMM_LEN )
),
TP_fast_assign(
@@ -282,11 +284,13 @@ TRACE_EVENT(sched_get_task_cpu_cycles,
__entry->exec_time = exec_time;
__entry->freq = cpu_cycles_to_freq(cycles, exec_time);
__entry->legacy_freq = cpu_cur_freq(cpu);
+ __entry->pid = p->pid;
+ memcpy(__entry->comm, p->comm, TASK_COMM_LEN);
),
- TP_printk("cpu=%d event=%d cycles=%llu exec_time=%llu freq=%u legacy_freq=%u",
+ TP_printk("cpu=%d event=%d cycles=%llu exec_time=%llu freq=%u legacy_freq=%u task=%d (%s)",
__entry->cpu, __entry->event, __entry->cycles,
- __entry->exec_time, __entry->freq, __entry->legacy_freq)
+ __entry->exec_time, __entry->freq, __entry->legacy_freq, __entry->pid, __entry->comm)
);
TRACE_EVENT(sched_update_task_ravg,
@@ -782,6 +786,11 @@ DEFINE_EVENT(sched_task_util, sched_task_util_imbalance,
TP_PROTO(struct task_struct *p, int task_cpu, unsigned long task_util, int nominated_cpu, int target_cpu, int ediff, bool need_idle),
TP_ARGS(p, task_cpu, task_util, nominated_cpu, target_cpu, ediff, need_idle)
);
+
+DEFINE_EVENT(sched_task_util, sched_task_util_need_idle,
+ TP_PROTO(struct task_struct *p, int task_cpu, unsigned long task_util, int nominated_cpu, int target_cpu, int ediff, bool need_idle),
+ TP_ARGS(p, task_cpu, task_util, nominated_cpu, target_cpu, ediff, need_idle)
+);
#endif
/*
diff --git a/include/trace/events/trace_msm_low_power.h b/include/trace/events/trace_msm_low_power.h
index 97eefc6..c25da0e 100644
--- a/include/trace/events/trace_msm_low_power.h
+++ b/include/trace/events/trace_msm_low_power.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2012, 2014-2016, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2012, 2014-2017, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -250,24 +250,6 @@ TRACE_EVENT(cluster_pred_hist,
__entry->sample, __entry->tmr)
);
-TRACE_EVENT(pre_pc_cb,
-
- TP_PROTO(int tzflag),
-
- TP_ARGS(tzflag),
-
- TP_STRUCT__entry(
- __field(int, tzflag)
- ),
-
- TP_fast_assign(
- __entry->tzflag = tzflag;
- ),
-
- TP_printk("tzflag:%d",
- __entry->tzflag
- )
-);
#endif
#define TRACE_INCLUDE_FILE trace_msm_low_power
#include <trace/define_trace.h>
diff --git a/include/trace/events/trace_rpm_smd.h b/include/trace/events/trace_rpm_smd.h
new file mode 100644
index 0000000..1afc06b
--- /dev/null
+++ b/include/trace/events/trace_rpm_smd.h
@@ -0,0 +1,111 @@
+/* Copyright (c) 2012, 2014-2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM rpm_smd
+
+#if !defined(_TRACE_RPM_SMD_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_RPM_SMD_H
+
+#include <linux/tracepoint.h>
+
+TRACE_EVENT(rpm_smd_ack_recvd,
+
+ TP_PROTO(unsigned int irq, unsigned int msg_id, int errno),
+
+ TP_ARGS(irq, msg_id, errno),
+
+ TP_STRUCT__entry(
+ __field(int, irq)
+ __field(int, msg_id)
+ __field(int, errno)
+ ),
+
+ TP_fast_assign(
+ __entry->irq = irq;
+ __entry->msg_id = msg_id;
+ __entry->errno = errno;
+ ),
+
+ TP_printk("ctx:%s msg_id:%d errno:%08x",
+ __entry->irq ? "noslp" : "sleep",
+ __entry->msg_id,
+ __entry->errno)
+);
+
+TRACE_EVENT(rpm_smd_interrupt_notify,
+
+ TP_PROTO(char *dummy),
+
+ TP_ARGS(dummy),
+
+ TP_STRUCT__entry(
+ __field(char *, dummy)
+ ),
+
+ TP_fast_assign(
+ __entry->dummy = dummy;
+ ),
+
+ TP_printk("%s", __entry->dummy)
+);
+
+DECLARE_EVENT_CLASS(rpm_send_msg,
+
+ TP_PROTO(unsigned int msg_id, unsigned int rsc_type,
+ unsigned int rsc_id),
+
+ TP_ARGS(msg_id, rsc_type, rsc_id),
+
+ TP_STRUCT__entry(
+ __field(u32, msg_id)
+ __field(u32, rsc_type)
+ __field(u32, rsc_id)
+ __array(char, name, 5)
+ ),
+
+ TP_fast_assign(
+ __entry->msg_id = msg_id;
+ __entry->name[4] = 0;
+ __entry->rsc_type = rsc_type;
+ __entry->rsc_id = rsc_id;
+ memcpy(__entry->name, &rsc_type, sizeof(uint32_t));
+
+ ),
+
+ TP_printk("msg_id:%d, rsc_type:0x%08x(%s), rsc_id:0x%08x",
+ __entry->msg_id,
+ __entry->rsc_type, __entry->name,
+ __entry->rsc_id)
+);
+
+DEFINE_EVENT(rpm_send_msg, rpm_smd_sleep_set,
+ TP_PROTO(unsigned int msg_id, unsigned int rsc_type,
+ unsigned int rsc_id),
+ TP_ARGS(msg_id, rsc_type, rsc_id)
+);
+
+DEFINE_EVENT(rpm_send_msg, rpm_smd_send_sleep_set,
+ TP_PROTO(unsigned int msg_id, unsigned int rsc_type,
+ unsigned int rsc_id),
+ TP_ARGS(msg_id, rsc_type, rsc_id)
+);
+
+DEFINE_EVENT(rpm_send_msg, rpm_smd_send_active_set,
+ TP_PROTO(unsigned int msg_id, unsigned int rsc_type,
+ unsigned int rsc_id),
+ TP_ARGS(msg_id, rsc_type, rsc_id)
+);
+
+#endif
+#define TRACE_INCLUDE_FILE trace_rpm_smd
+#include <trace/define_trace.h>
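The `rpm_send_msg` trace class above prints `rsc_type` as a 4-character code by `memcpy()`ing the 32-bit value into a NUL-terminated 5-byte array. A userspace round-trip sketch of the same trick (the `"clka"` code is only an illustrative value, not a documented resource type):

```c
#include <stdint.h>
#include <string.h>

/* Pack a 4-character resource code into a 32-bit value. */
static uint32_t rsc_code(const char *four_chars)
{
	uint32_t t;

	memcpy(&t, four_chars, sizeof(t));
	return t;
}

/* Decode for display, mirroring the TP_fast_assign() above. */
static void rsc_name(uint32_t rsc_type, char name[5])
{
	memcpy(name, &rsc_type, sizeof(uint32_t));
	name[4] = 0;
}

/* Returns 1 when encode->decode reproduces the original code. */
static int rsc_roundtrip_ok(const char *four_chars)
{
	char name[5];

	rsc_name(rsc_code(four_chars), name);
	return strcmp(name, four_chars) == 0;
}
```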
diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h
index 6f33a4a..d0341cd 100644
--- a/include/uapi/drm/msm_drm.h
+++ b/include/uapi/drm/msm_drm.h
@@ -69,6 +69,12 @@ struct drm_msm_timespec {
#define HDR_PRIMARIES_COUNT 3
+/* HDR EOTF */
+#define HDR_EOTF_SDR_LUM_RANGE 0x0
+#define HDR_EOTF_HDR_LUM_RANGE 0x1
+#define HDR_EOTF_SMTPE_ST2084 0x2
+#define HDR_EOTF_HLG 0x3
+
#define DRM_MSM_EXT_HDR_METADATA
struct drm_msm_ext_hdr_metadata {
__u32 hdr_state; /* HDR state */
diff --git a/include/uapi/linux/msm_kgsl.h b/include/uapi/linux/msm_kgsl.h
index f05155b..9ee2a8b 100644
--- a/include/uapi/linux/msm_kgsl.h
+++ b/include/uapi/linux/msm_kgsl.h
@@ -142,6 +142,7 @@
#define KGSL_MEMFLAGS_USE_CPU_MAP 0x10000000ULL
#define KGSL_MEMFLAGS_SPARSE_PHYS 0x20000000ULL
#define KGSL_MEMFLAGS_SPARSE_VIRT 0x40000000ULL
+#define KGSL_MEMFLAGS_IOCOHERENT 0x80000000ULL
/* Memory types for which allocations are made */
#define KGSL_MEMTYPE_MASK 0x0000FF00
diff --git a/include/uapi/linux/rds.h b/include/uapi/linux/rds.h
index 7af20a1..804c9b2 100644
--- a/include/uapi/linux/rds.h
+++ b/include/uapi/linux/rds.h
@@ -104,8 +104,8 @@
#define RDS_INFO_LAST 10010
struct rds_info_counter {
- uint8_t name[32];
- uint64_t value;
+ __u8 name[32];
+ __u64 value;
} __attribute__((packed));
#define RDS_INFO_CONNECTION_FLAG_SENDING 0x01
@@ -115,35 +115,35 @@ struct rds_info_counter {
#define TRANSNAMSIZ 16
struct rds_info_connection {
- uint64_t next_tx_seq;
- uint64_t next_rx_seq;
+ __u64 next_tx_seq;
+ __u64 next_rx_seq;
__be32 laddr;
__be32 faddr;
- uint8_t transport[TRANSNAMSIZ]; /* null term ascii */
- uint8_t flags;
+ __u8 transport[TRANSNAMSIZ]; /* null term ascii */
+ __u8 flags;
} __attribute__((packed));
#define RDS_INFO_MESSAGE_FLAG_ACK 0x01
#define RDS_INFO_MESSAGE_FLAG_FAST_ACK 0x02
struct rds_info_message {
- uint64_t seq;
- uint32_t len;
+ __u64 seq;
+ __u32 len;
__be32 laddr;
__be32 faddr;
__be16 lport;
__be16 fport;
- uint8_t flags;
+ __u8 flags;
} __attribute__((packed));
struct rds_info_socket {
- uint32_t sndbuf;
+ __u32 sndbuf;
__be32 bound_addr;
__be32 connected_addr;
__be16 bound_port;
__be16 connected_port;
- uint32_t rcvbuf;
- uint64_t inum;
+ __u32 rcvbuf;
+ __u64 inum;
} __attribute__((packed));
struct rds_info_tcp_socket {
@@ -151,25 +151,25 @@ struct rds_info_tcp_socket {
__be16 local_port;
__be32 peer_addr;
__be16 peer_port;
- uint64_t hdr_rem;
- uint64_t data_rem;
- uint32_t last_sent_nxt;
- uint32_t last_expected_una;
- uint32_t last_seen_una;
+ __u64 hdr_rem;
+ __u64 data_rem;
+ __u32 last_sent_nxt;
+ __u32 last_expected_una;
+ __u32 last_seen_una;
} __attribute__((packed));
#define RDS_IB_GID_LEN 16
struct rds_info_rdma_connection {
__be32 src_addr;
__be32 dst_addr;
- uint8_t src_gid[RDS_IB_GID_LEN];
- uint8_t dst_gid[RDS_IB_GID_LEN];
+ __u8 src_gid[RDS_IB_GID_LEN];
+ __u8 dst_gid[RDS_IB_GID_LEN];
- uint32_t max_send_wr;
- uint32_t max_recv_wr;
- uint32_t max_send_sge;
- uint32_t rdma_mr_max;
- uint32_t rdma_mr_size;
+ __u32 max_send_wr;
+ __u32 max_recv_wr;
+ __u32 max_send_sge;
+ __u32 rdma_mr_max;
+ __u32 rdma_mr_size;
};
/*
@@ -210,70 +210,70 @@ struct rds_info_rdma_connection {
* (so that the application does not have to worry about
* alignment).
*/
-typedef uint64_t rds_rdma_cookie_t;
+typedef __u64 rds_rdma_cookie_t;
struct rds_iovec {
- uint64_t addr;
- uint64_t bytes;
+ __u64 addr;
+ __u64 bytes;
};
struct rds_get_mr_args {
struct rds_iovec vec;
- uint64_t cookie_addr;
- uint64_t flags;
+ __u64 cookie_addr;
+ __u64 flags;
};
struct rds_get_mr_for_dest_args {
struct __kernel_sockaddr_storage dest_addr;
struct rds_iovec vec;
- uint64_t cookie_addr;
- uint64_t flags;
+ __u64 cookie_addr;
+ __u64 flags;
};
struct rds_free_mr_args {
rds_rdma_cookie_t cookie;
- uint64_t flags;
+ __u64 flags;
};
struct rds_rdma_args {
rds_rdma_cookie_t cookie;
struct rds_iovec remote_vec;
- uint64_t local_vec_addr;
- uint64_t nr_local;
- uint64_t flags;
- uint64_t user_token;
+ __u64 local_vec_addr;
+ __u64 nr_local;
+ __u64 flags;
+ __u64 user_token;
};
struct rds_atomic_args {
rds_rdma_cookie_t cookie;
- uint64_t local_addr;
- uint64_t remote_addr;
+ __u64 local_addr;
+ __u64 remote_addr;
union {
struct {
- uint64_t compare;
- uint64_t swap;
+ __u64 compare;
+ __u64 swap;
} cswp;
struct {
- uint64_t add;
+ __u64 add;
} fadd;
struct {
- uint64_t compare;
- uint64_t swap;
- uint64_t compare_mask;
- uint64_t swap_mask;
+ __u64 compare;
+ __u64 swap;
+ __u64 compare_mask;
+ __u64 swap_mask;
} m_cswp;
struct {
- uint64_t add;
- uint64_t nocarry_mask;
+ __u64 add;
+ __u64 nocarry_mask;
} m_fadd;
};
- uint64_t flags;
- uint64_t user_token;
+ __u64 flags;
+ __u64 user_token;
};
struct rds_rdma_notify {
- uint64_t user_token;
- int32_t status;
+ __u64 user_token;
+ __s32 status;
};
#define RDS_RDMA_SUCCESS 0
diff --git a/include/uapi/linux/usb/video.h b/include/uapi/linux/usb/video.h
index 69ab695..dc9380b 100644
--- a/include/uapi/linux/usb/video.h
+++ b/include/uapi/linux/usb/video.h
@@ -54,6 +54,8 @@
#define UVC_VS_FORMAT_FRAME_BASED 0x10
#define UVC_VS_FRAME_FRAME_BASED 0x11
#define UVC_VS_FORMAT_STREAM_BASED 0x12
+#define UVC_VS_FORMAT_H264 0x13
+#define UVC_VS_FRAME_H264 0x14
/* A.7. Video Class-Specific Endpoint Descriptor Subtypes */
#define UVC_EP_UNDEFINED 0x00
@@ -299,11 +301,12 @@ struct uvc_processing_unit_descriptor {
__u8 bSourceID;
__u16 wMaxMultiplier;
__u8 bControlSize;
- __u8 bmControls[2];
+ __u8 bmControls[3];
__u8 iProcessing;
+ __u8 bmVideoStandards;
} __attribute__((__packed__));
-#define UVC_DT_PROCESSING_UNIT_SIZE(n) (9+(n))
+#define UVC_DT_PROCESSING_UNIT_SIZE(n) (10+(n))
/* 3.7.2.6. Extension Unit Descriptor */
struct uvc_extension_unit_descriptor {
@@ -565,5 +568,96 @@ struct UVC_FRAME_MJPEG(n) { \
__u32 dwFrameInterval[n]; \
} __attribute__ ((packed))
+/* H264 Payload - 3.1.1. H264 Video Format Descriptor */
+struct uvc_format_h264 {
+ __u8 bLength;
+ __u8 bDescriptorType;
+ __u8 bDescriptorSubType;
+ __u8 bFormatIndex;
+ __u8 bNumFrameDescriptors;
+ __u8 bDefaultFrameIndex;
+ __u8 bMaxCodecConfigDelay;
+ __u8 bmSupportedSliceModes;
+ __u8 bmSupportedSyncFrameTypes;
+ __u8 bResolutionScaling;
+ __u8 Reserved1;
+ __u8 bmSupportedRateControlModes;
+ __u16 wMaxMBperSecOneResNoScalability;
+ __u16 wMaxMBperSecTwoResNoScalability;
+ __u16 wMaxMBperSecThreeResNoScalability;
+ __u16 wMaxMBperSecFourResNoScalability;
+ __u16 wMaxMBperSecOneResTemporalScalability;
+ __u16 wMaxMBperSecTwoResTemporalScalability;
+ __u16 wMaxMBperSecThreeResTemporalScalability;
+ __u16 wMaxMBperSecFourResTemporalScalability;
+ __u16 wMaxMBperSecOneResTemporalQualityScalability;
+ __u16 wMaxMBperSecTwoResTemporalQualityScalability;
+ __u16 wMaxMBperSecThreeResTemporalQualityScalability;
+ __u16 wMaxMBperSecFourResTemporalQualityScalability;
+ __u16 wMaxMBperSecOneResTemporalSpatialScalability;
+ __u16 wMaxMBperSecTwoResTemporalSpatialScalability;
+ __u16 wMaxMBperSecThreeResTemporalSpatialScalability;
+ __u16 wMaxMBperSecFourResTemporalSpatialScalability;
+ __u16 wMaxMBperSecOneResFullScalability;
+ __u16 wMaxMBperSecTwoResFullScalability;
+ __u16 wMaxMBperSecThreeResFullScalability;
+ __u16 wMaxMBperSecFourResFullScalability;
+} __attribute__((__packed__));
+
+#define UVC_DT_FORMAT_H264_SIZE 52
+
+/* H264 Payload - 3.1.2. H264 Video Frame Descriptor */
+struct uvc_frame_h264 {
+ __u8 bLength;
+ __u8 bDescriptorType;
+ __u8 bDescriptorSubType;
+ __u8 bFrameIndex;
+ __u16 wWidth;
+ __u16 wHeight;
+ __u16 wSARwidth;
+ __u16 wSARheight;
+ __u16 wProfile;
+ __u8 bLevelIDC;
+ __u16 wConstrainedToolset;
+ __u32 bmSupportedUsages;
+ __u16 bmCapabilities;
+ __u32 bmSVCCapabilities;
+ __u32 bmMVCCapabilities;
+ __u32 dwMinBitRate;
+ __u32 dwMaxBitRate;
+ __u32 dwDefaultFrameInterval;
+ __u8 bNumFrameIntervals;
+ __u32 dwFrameInterval[];
+} __attribute__((__packed__));
+
+#define UVC_DT_FRAME_H264_SIZE(n) (44+4*(n))
+
+#define UVC_FRAME_H264(n) \
+ uvc_frame_h264_##n
+
+#define DECLARE_UVC_FRAME_H264(n) \
+struct UVC_FRAME_H264(n) { \
+ __u8 bLength; \
+ __u8 bDescriptorType; \
+ __u8 bDescriptorSubType; \
+ __u8 bFrameIndex; \
+ __u16 wWidth; \
+ __u16 wHeight; \
+ __u16 wSARwidth; \
+ __u16 wSARheight; \
+ __u16 wProfile; \
+ __u8 bLevelIDC; \
+ __u16 wConstrainedToolset; \
+ __u32 bmSupportedUsages; \
+ __u16 bmCapabilities; \
+ __u32 bmSVCCapabilities; \
+ __u32 bmMVCCapabilities; \
+ __u32 dwMinBitRate; \
+ __u32 dwMaxBitRate; \
+ __u32 dwDefaultFrameInterval; \
+ __u8 bNumFrameIntervals; \
+ __u32 dwFrameInterval[n]; \
+} __attribute__ ((packed))
+
#endif /* __LINUX_USB_VIDEO_H */
diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h
index 85b7e87..229dd25 100644
--- a/include/uapi/linux/videodev2.h
+++ b/include/uapi/linux/videodev2.h
@@ -718,6 +718,8 @@ struct v4l2_pix_format {
v4l2_fourcc('T', 'P', '1', '0') /* Y/CbCr 4:2:0 TP10 */
#define V4L2_PIX_FMT_SDE_Y_CBCR_H2V2_P010 \
v4l2_fourcc('P', '0', '1', '0') /* Y/CbCr 4:2:0 P10 */
+#define V4L2_PIX_FMT_SDE_Y_CBCR_H2V2_P010_VENUS \
+ v4l2_fourcc('Q', 'P', '1', '0') /* Y/CbCr 4:2:0 P10 Venus */
/* SDR formats - used only for Software Defined Radio devices */
#define V4L2_SDR_FMT_CU8 v4l2_fourcc('C', 'U', '0', '8') /* IQ u8 */
diff --git a/include/uapi/media/Kbuild b/include/uapi/media/Kbuild
index e72a1f0..1e087a1 100644
--- a/include/uapi/media/Kbuild
+++ b/include/uapi/media/Kbuild
@@ -14,3 +14,4 @@
header-y += msm_sde_rotator.h
header-y += radio-iris.h
header-y += radio-iris-commands.h
+header-y += cam_lrme.h
diff --git a/include/uapi/media/cam_isp.h b/include/uapi/media/cam_isp.h
index 4a63292..afd109f 100644
--- a/include/uapi/media/cam_isp.h
+++ b/include/uapi/media/cam_isp.h
@@ -84,7 +84,9 @@
#define CAM_ISP_DSP_MODE_ROUND 2
/* ISP Generic Cmd Buffer Blob types */
-#define CAM_ISP_GENERIC_BLOB_TYPE_HFR_CONFIG 0
+#define CAM_ISP_GENERIC_BLOB_TYPE_HFR_CONFIG 0
+#define CAM_ISP_GENERIC_BLOB_TYPE_CLOCK_CONFIG 1
+#define CAM_ISP_GENERIC_BLOB_TYPE_BW_CONFIG 2
/* Query devices */
/**
@@ -248,7 +250,7 @@ struct cam_isp_port_hfr_config {
uint32_t framedrop_pattern;
uint32_t framedrop_period;
uint32_t reserved;
-};
+} __attribute__((packed));
/**
* struct cam_isp_resource_hfr_config - Resource HFR configuration
@@ -261,7 +263,7 @@ struct cam_isp_resource_hfr_config {
uint32_t num_ports;
uint32_t reserved;
struct cam_isp_port_hfr_config port_hfr_config[1];
-};
+} __attribute__((packed));
/**
* struct cam_isp_dual_split_params - dual isp split parameters
@@ -317,6 +319,60 @@ struct cam_isp_dual_config {
uint32_t reserved;
struct cam_isp_dual_split_params split_params;
struct cam_isp_dual_stripe_config stripes[1];
-};
+} __attribute__((packed));
+
+/**
+ * struct cam_isp_clock_config - Clock configuration
+ *
+ * @usage_type: Usage type (Single/Dual)
+ * @num_rdi: Number of RDI votes
+ * @left_pix_hz: Pixel Clock for Left ISP
+ * @right_pix_hz: Pixel Clock for Right ISP, valid only if Dual
+ * @rdi_hz: RDI clock. The ISP clock will be the max of the
+ * RDI and PIX clocks. For a given context, the UMD
+ * does not know which ISP HW the RDI is allocated
+ * to, so pass the clock and let the KMD decide.
+ */
+struct cam_isp_clock_config {
+ uint32_t usage_type;
+ uint32_t num_rdi;
+ uint64_t left_pix_hz;
+ uint64_t right_pix_hz;
+ uint64_t rdi_hz[1];
+} __attribute__((packed));
+
+/**
+ * struct cam_isp_bw_vote - Bandwidth vote information
+ *
+ * @resource_id: Resource ID
+ * @reserved: Reserved field for alignment
+ * @cam_bw_bps: Bandwidth vote for CAMNOC
+ * @ext_bw_bps: Bandwidth vote for path-to-DDR after CAMNOC
+ */
+
+struct cam_isp_bw_vote {
+ uint32_t resource_id;
+ uint32_t reserved;
+ uint64_t cam_bw_bps;
+ uint64_t ext_bw_bps;
+} __attribute__((packed));
+
+/**
+ * struct cam_isp_bw_config - Bandwidth configuration
+ *
+ * @usage_type: Usage type (Single/Dual)
+ * @num_rdi: Number of RDI votes
+ * @left_pix_vote: Bandwidth vote for left ISP
+ * @right_pix_vote: Bandwidth vote for right ISP
+ * @rdi_vote: RDI bandwidth requirements
+ */
+
+struct cam_isp_bw_config {
+ uint32_t usage_type;
+ uint32_t num_rdi;
+ struct cam_isp_bw_vote left_pix_vote;
+ struct cam_isp_bw_vote right_pix_vote;
+ struct cam_isp_bw_vote rdi_vote[1];
+} __attribute__((packed));
#endif /* __UAPI_CAM_ISP_H__ */
diff --git a/include/uapi/media/cam_lrme.h b/include/uapi/media/cam_lrme.h
new file mode 100644
index 0000000..97d9578
--- /dev/null
+++ b/include/uapi/media/cam_lrme.h
@@ -0,0 +1,65 @@
+#ifndef __UAPI_CAM_LRME_H__
+#define __UAPI_CAM_LRME_H__
+
+#include "cam_defs.h"
+
+/* LRME Resource Types */
+
+enum CAM_LRME_IO_TYPE {
+ CAM_LRME_IO_TYPE_TAR,
+ CAM_LRME_IO_TYPE_REF,
+ CAM_LRME_IO_TYPE_RES,
+ CAM_LRME_IO_TYPE_DS2,
+};
+
+#define CAM_LRME_INPUT_PORT_TYPE_TAR (1 << 0)
+#define CAM_LRME_INPUT_PORT_TYPE_REF (1 << 1)
+
+#define CAM_LRME_OUTPUT_PORT_TYPE_DS2 (1 << 0)
+#define CAM_LRME_OUTPUT_PORT_TYPE_RES (1 << 1)
+
+#define CAM_LRME_DEV_MAX 1
+
+
+struct cam_lrme_hw_version {
+ uint32_t gen;
+ uint32_t rev;
+ uint32_t step;
+};
+
+struct cam_lrme_dev_cap {
+ struct cam_lrme_hw_version clc_hw_version;
+ struct cam_lrme_hw_version bus_rd_hw_version;
+ struct cam_lrme_hw_version bus_wr_hw_version;
+ struct cam_lrme_hw_version top_hw_version;
+ struct cam_lrme_hw_version top_titan_version;
+};
+
+/**
+ * struct cam_lrme_query_cap_cmd - LRME query device capability payload
+ *
+ * @device_iommu: LRME iommu handles for secure/non-secure
+ * modes
+ * @cdm_iommu: CDM iommu handles for secure/non-secure modes
+ * @num_devices: Number of hardware devices
+ * @dev_caps: Returned device capability array
+ */
+struct cam_lrme_query_cap_cmd {
+ struct cam_iommu_handle device_iommu;
+ struct cam_iommu_handle cdm_iommu;
+ uint32_t num_devices;
+ struct cam_lrme_dev_cap dev_caps[CAM_LRME_DEV_MAX];
+};
+
+struct cam_lrme_soc_info {
+ uint64_t clock_rate;
+ uint64_t bandwidth;
+ uint64_t reserved[4];
+};
+
+struct cam_lrme_acquire_args {
+ struct cam_lrme_soc_info lrme_soc_info;
+};
+
+#endif /* __UAPI_CAM_LRME_H__ */
+
diff --git a/include/uapi/media/cam_req_mgr.h b/include/uapi/media/cam_req_mgr.h
index 9b7d055..6846b8f 100644
--- a/include/uapi/media/cam_req_mgr.h
+++ b/include/uapi/media/cam_req_mgr.h
@@ -49,6 +49,10 @@
#define CAM_REQ_MGR_SOF_EVENT_SUCCESS 0
#define CAM_REQ_MGR_SOF_EVENT_ERROR 1
+/* Link control operations */
+#define CAM_REQ_MGR_LINK_ACTIVATE 0
+#define CAM_REQ_MGR_LINK_DEACTIVATE 1
+
/**
* Request Manager : flush_type
* @CAM_REQ_MGR_FLUSH_TYPE_ALL: Req mgr will remove all the pending
@@ -63,6 +67,15 @@
#define CAM_REQ_MGR_FLUSH_TYPE_MAX 2
/**
+ * Request Manager : Sync Mode type
+ * @CAM_REQ_MGR_SYNC_MODE_NO_SYNC: Req mgr will apply non-sync mode for this
+ * request.
+ * @CAM_REQ_MGR_SYNC_MODE_SYNC: Req mgr will apply sync mode for this request.
+ */
+#define CAM_REQ_MGR_SYNC_MODE_NO_SYNC 0
+#define CAM_REQ_MGR_SYNC_MODE_SYNC 1
+
+/**
* struct cam_req_mgr_event_data
* @session_hdl: session handle
* @link_hdl: link handle
@@ -148,33 +161,35 @@ struct cam_req_mgr_flush_info {
* including itself.
* @bubble_enable: Input Param - Cam req mgr will do bubble recovery if this
* flag is set.
- * @reserved: reserved field for alignment
+ * @sync_mode: Type of Sync mode for this request
* @req_id: Input Param - Request Id from which all requests will be flushed
*/
struct cam_req_mgr_sched_request {
int32_t session_hdl;
int32_t link_hdl;
int32_t bubble_enable;
- int32_t reserved;
+ int32_t sync_mode;
int64_t req_id;
};
/**
* struct cam_req_mgr_sync_mode
* @session_hdl: Input param - Identifier for CSL session
- * @sync_enable: Input Param -Enable sync mode or disable
+ * @sync_mode: Input Param - Type of sync mode
* @num_links: Input Param - Num of links in sync mode (Valid only
- * when sync_enable is TRUE)
+ * when sync_mode is one of SYNC enabled modes)
* @link_hdls: Input Param - Array of link handles to be in sync mode
- * (Valid only when sync_enable is TRUE)
+ * (Valid only when sync_mode is one of SYNC
+ * enabled modes)
* @master_link_hdl: Input Param - To dictate which link's SOF drives system
- * (Valid only when sync_enable is TRUE)
+ * (Valid only when sync_mode is one of SYNC
+ * enabled modes)
*
* @opcode: CAM_REQ_MGR_SYNC_MODE
*/
struct cam_req_mgr_sync_mode {
int32_t session_hdl;
- int32_t sync_enable;
+ int32_t sync_mode;
int32_t num_links;
int32_t link_hdls[MAX_LINKS_PER_SESSION];
int32_t master_link_hdl;
@@ -182,6 +197,24 @@ struct cam_req_mgr_sync_mode {
};
/**
+ * struct cam_req_mgr_link_control
+ * @ops: Link operations: activate/deactivate
+ * @session_hdl: Input param - Identifier for CSL session
+ * @num_links: Input Param - Num of links
+ * @reserved: reserved field
+ * @link_hdls: Input Param - Links to be activated/deactivated
+ *
+ * @opcode: CAM_REQ_MGR_LINK_CONTROL
+ */
+struct cam_req_mgr_link_control {
+ int32_t ops;
+ int32_t session_hdl;
+ int32_t num_links;
+ int32_t reserved;
+ int32_t link_hdls[MAX_LINKS_PER_SESSION];
+};
+
+/**
* cam_req_mgr specific opcode ids
*/
#define CAM_REQ_MGR_CREATE_DEV_NODES (CAM_COMMON_OPCODE_MAX + 1)
@@ -196,6 +229,7 @@ struct cam_req_mgr_sync_mode {
#define CAM_REQ_MGR_MAP_BUF (CAM_COMMON_OPCODE_MAX + 10)
#define CAM_REQ_MGR_RELEASE_BUF (CAM_COMMON_OPCODE_MAX + 11)
#define CAM_REQ_MGR_CACHE_OPS (CAM_COMMON_OPCODE_MAX + 12)
+#define CAM_REQ_MGR_LINK_CONTROL (CAM_COMMON_OPCODE_MAX + 13)
/* end of cam_req_mgr opcodes */
#define CAM_MEM_FLAG_HW_READ_WRITE (1<<0)
@@ -355,14 +389,14 @@ struct cam_mem_cache_ops_cmd {
* @error_type: type of error
* @request_id: request id of frame
* @device_hdl: device handle
- * @reserved: reserved field
+ * @link_hdl: link handle
* @resource_size: size of the resource
*/
struct cam_req_mgr_error_msg {
uint32_t error_type;
uint32_t request_id;
int32_t device_hdl;
- int32_t reserved;
+ int32_t link_hdl;
uint64_t resource_size;
};
diff --git a/include/uapi/media/msm_sde_rotator.h b/include/uapi/media/msm_sde_rotator.h
index 212eb26..dcdbb85 100644
--- a/include/uapi/media/msm_sde_rotator.h
+++ b/include/uapi/media/msm_sde_rotator.h
@@ -61,6 +61,8 @@
#define SDE_PIX_FMT_RGBA_1010102_UBWC V4L2_PIX_FMT_SDE_RGBA_1010102_UBWC
#define SDE_PIX_FMT_RGBX_1010102_UBWC V4L2_PIX_FMT_SDE_RGBX_1010102_UBWC
#define SDE_PIX_FMT_Y_CBCR_H2V2_P010 V4L2_PIX_FMT_SDE_Y_CBCR_H2V2_P010
+#define SDE_PIX_FMT_Y_CBCR_H2V2_P010_VENUS \
+ V4L2_PIX_FMT_SDE_Y_CBCR_H2V2_P010_VENUS
#define SDE_PIX_FMT_Y_CBCR_H2V2_TP10 V4L2_PIX_FMT_SDE_Y_CBCR_H2V2_TP10
#define SDE_PIX_FMT_Y_CBCR_H2V2_TP10_UBWC V4L2_PIX_FMT_NV12_TP10_UBWC
#define SDE_PIX_FMT_Y_CBCR_H2V2_P010_UBWC V4L2_PIX_FMT_NV12_P010_UBWC
diff --git a/init/Kconfig b/init/Kconfig
index 7b3006a..d4a2e32 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1869,6 +1869,16 @@
Say Y if unsure.
+config PERF_USER_SHARE
+ bool "Perf event sharing with user-space"
+ help
+ Say yes here to enable sharing of perf events with user space.
+ Events can be shared with other user-space events or with
+ kernel-created events that have the same config and type attributes.
+
+ Say N if unsure.
+
+
config DEBUG_PERF_USE_VMALLOC
default n
bool "Debug: use vmalloc to back perf mmap() buffers"
diff --git a/kernel/events/core.c b/kernel/events/core.c
index b784662..712ba4e 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -1747,6 +1747,10 @@ static void perf_group_detach(struct perf_event *event)
if (event->group_leader != event) {
list_del_init(&event->group_entry);
event->group_leader->nr_siblings--;
+
+ if (event->shared)
+ event->group_leader = event;
+
goto out;
}
@@ -4259,6 +4263,14 @@ static void put_event(struct perf_event *event)
}
/*
+ * Maintain a zombie list to collect all the zombie events
+ */
+#if defined CONFIG_HOTPLUG_CPU || defined CONFIG_KEXEC_CORE
+static LIST_HEAD(zombie_list);
+static DEFINE_SPINLOCK(zombie_list_lock);
+#endif
+
+/*
* Kill an event dead; while event:refcount will preserve the event
* object, it will not preserve its functionality. Once the last 'user'
* gives up the object, we'll destroy the thing.
@@ -4269,6 +4281,26 @@ int perf_event_release_kernel(struct perf_event *event)
struct perf_event *child, *tmp;
/*
+ * If the CPU associated with this event is offline, mark the
+ * event as a zombie. The cleanup will be done when the CPU
+ * comes back online.
+ */
+#if defined CONFIG_HOTPLUG_CPU || defined CONFIG_KEXEC_CORE
+ if (!cpu_online(event->cpu)) {
+ if (event->state == PERF_EVENT_STATE_ZOMBIE)
+ return 0;
+
+ event->state = PERF_EVENT_STATE_ZOMBIE;
+
+ spin_lock(&zombie_list_lock);
+ list_add_tail(&event->zombie_entry, &zombie_list);
+ spin_unlock(&zombie_list_lock);
+
+ return 0;
+ }
+#endif
+
+ /*
* If we got here through err_file: fput(event_file); we will not have
* attached to a context yet.
*/
@@ -4280,15 +4312,23 @@ int perf_event_release_kernel(struct perf_event *event)
if (!is_kernel_event(event)) {
perf_remove_from_owner(event);
- } else {
- if (perf_event_delete_kernel_shared(event) > 0)
- return 0;
}
ctx = perf_event_ctx_lock(event);
WARN_ON_ONCE(ctx->parent_ctx);
perf_remove_from_context(event, DETACH_GROUP);
+ if (perf_event_delete_kernel_shared(event) > 0) {
+ perf_event__state_init(event);
+ perf_install_in_context(ctx, event, event->cpu);
+
+ perf_event_ctx_unlock(event, ctx);
+
+ perf_event_enable(event);
+
+ return 0;
+ }
+
raw_spin_lock_irq(&ctx->lock);
/*
* Mark this even as STATE_DEAD, there is no external reference to it
@@ -9218,6 +9258,122 @@ static void account_event(struct perf_event *event)
account_pmu_sb_event(event);
}
+static struct perf_event *
+perf_event_create_kernel_shared_check(struct perf_event_attr *attr, int cpu,
+ struct task_struct *task,
+ perf_overflow_handler_t overflow_handler,
+ struct perf_event *group_leader)
+{
+ unsigned long idx;
+ struct perf_event *event;
+ struct shared_events_str *shrd_events;
+
+ /*
+ * Have to be per cpu events for sharing
+ */
+ if (!shared_events || (u32)cpu >= nr_cpu_ids)
+ return NULL;
+
+ /*
+ * Can't handle these types of requests for sharing right now.
+ */
+ if (task || overflow_handler || attr->sample_period ||
+ (attr->type != PERF_TYPE_HARDWARE &&
+ attr->type != PERF_TYPE_RAW)) {
+ return NULL;
+ }
+
+ /*
+ * Use per_cpu_ptr() (alternatively, a cross-CPU call, which is
+ * how most of perf accesses per-CPU data structures).
+ */
+ shrd_events = per_cpu_ptr(shared_events, cpu);
+
+ mutex_lock(&shrd_events->list_mutex);
+
+ event = NULL;
+ for_each_set_bit(idx, shrd_events->used_mask, SHARED_EVENTS_MAX) {
+ /*
+ * Compare the attr structure field by field. User space
+ * and the kernel might be built against different perf
+ * versions, so the fields' offsets in memory and the
+ * structure size may differ, which makes memcmp()
+ * unreliable for this comparison.
+ */
+ if (attr->type == shrd_events->attr[idx].type &&
+ attr->config == shrd_events->attr[idx].config) {
+
+ event = shrd_events->events[idx];
+
+ /* Do not change the group for this shared event */
+ if (group_leader && event->group_leader != event) {
+ event = NULL;
+ continue;
+ }
+
+ event->shared = true;
+ atomic_inc(&shrd_events->refcount[idx]);
+ break;
+ }
+ }
+ mutex_unlock(&shrd_events->list_mutex);
+
+ return event;
+}
+
+static void
+perf_event_create_kernel_shared_add(struct perf_event_attr *attr, int cpu,
+ struct task_struct *task,
+ perf_overflow_handler_t overflow_handler,
+ void *context,
+ struct perf_event *event)
+{
+ unsigned long idx;
+ struct shared_events_str *shrd_events;
+
+ /*
+ * Have to be per cpu events for sharing
+ */
+ if (!shared_events || (u32)cpu >= nr_cpu_ids)
+ return;
+
+ /*
+ * Can't handle these types of requests for sharing right now.
+ */
+ if (overflow_handler || attr->sample_period ||
+ (attr->type != PERF_TYPE_HARDWARE &&
+ attr->type != PERF_TYPE_RAW)) {
+ return;
+ }
+
+ /*
+ * Use per_cpu_ptr() (alternatively, a cross-CPU call, which is
+ * how most of perf accesses per-CPU data structures).
+ */
+ shrd_events = per_cpu_ptr(shared_events, cpu);
+
+ mutex_lock(&shrd_events->list_mutex);
+
+ /*
+ * If we are in this routine, the event isn't already in the
+ * shared list. Check whether a slot is available in it.
+ */
+ idx = find_first_zero_bit(shrd_events->used_mask, SHARED_EVENTS_MAX);
+
+ if (idx >= SHARED_EVENTS_MAX)
+ goto out;
+
+ /*
+ * The event isn't in the list and there is an empty slot so add it.
+ */
+ shrd_events->attr[idx] = *attr;
+ shrd_events->events[idx] = event;
+ set_bit(idx, shrd_events->used_mask);
+ atomic_set(&shrd_events->refcount[idx], 1);
+out:
+ mutex_unlock(&shrd_events->list_mutex);
+}
+
/*
* Allocate and initialize an event structure
*/
@@ -9260,6 +9416,7 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
INIT_LIST_HEAD(&event->rb_entry);
INIT_LIST_HEAD(&event->active_entry);
INIT_LIST_HEAD(&event->addr_filters.list);
+ INIT_LIST_HEAD(&event->zombie_entry);
INIT_HLIST_NODE(&event->hlist_entry);
@@ -9684,6 +9841,31 @@ __perf_event_ctx_lock_double(struct perf_event *group_leader,
return gctx;
}
+#ifdef CONFIG_PERF_USER_SHARE
+static void perf_group_shared_event(struct perf_event *event,
+ struct perf_event *group_leader)
+{
+ if (!event->shared || !group_leader)
+ return;
+
+ /* Do not attempt to change the group for this shared event */
+ if (event->group_leader != event)
+ return;
+
+ /*
+ * A single event is its own group leader. Now that there is
+ * a new group to attach to, remove the event from its
+ * previous group and attach it to the new one.
+ */
+ perf_remove_from_context(event, DETACH_GROUP);
+
+ event->group_leader = group_leader;
+ perf_event__state_init(event);
+
+ perf_install_in_context(group_leader->ctx, event, event->cpu);
+}
+#endif
+
/**
* sys_perf_event_open - open a performance event, associate it to a task/cpu
*
@@ -9697,7 +9879,7 @@ SYSCALL_DEFINE5(perf_event_open,
pid_t, pid, int, cpu, int, group_fd, unsigned long, flags)
{
struct perf_event *group_leader = NULL, *output_event = NULL;
- struct perf_event *event, *sibling;
+ struct perf_event *event = NULL, *sibling;
struct perf_event_attr attr;
struct perf_event_context *ctx, *uninitialized_var(gctx);
struct file *event_file = NULL;
@@ -9811,11 +9993,17 @@ SYSCALL_DEFINE5(perf_event_open,
if (flags & PERF_FLAG_PID_CGROUP)
cgroup_fd = pid;
- event = perf_event_alloc(&attr, cpu, task, group_leader, NULL,
- NULL, NULL, cgroup_fd);
- if (IS_ERR(event)) {
- err = PTR_ERR(event);
- goto err_cred;
+#ifdef CONFIG_PERF_USER_SHARE
+ event = perf_event_create_kernel_shared_check(&attr, cpu, task, NULL,
+ group_leader);
+#endif
+ if (!event) {
+ event = perf_event_alloc(&attr, cpu, task, group_leader, NULL,
+ NULL, NULL, cgroup_fd);
+ if (IS_ERR(event)) {
+ err = PTR_ERR(event);
+ goto err_cred;
+ }
}
if (is_sampling_event(event)) {
@@ -9982,7 +10170,7 @@ SYSCALL_DEFINE5(perf_event_open,
* Must be under the same ctx::mutex as perf_install_in_context(),
* because we need to serialize with concurrent event creation.
*/
- if (!exclusive_event_installable(event, ctx)) {
+ if (!event->shared && !exclusive_event_installable(event, ctx)) {
/* exclusive and group stuff are assumed mutually exclusive */
WARN_ON_ONCE(move_group);
@@ -10059,10 +10247,17 @@ SYSCALL_DEFINE5(perf_event_open,
perf_event__header_size(event);
perf_event__id_header_size(event);
- event->owner = current;
+#ifdef CONFIG_PERF_USER_SHARE
+ if (event->shared && group_leader)
+ perf_group_shared_event(event, group_leader);
+#endif
- perf_install_in_context(ctx, event, event->cpu);
- perf_unpin_context(ctx);
+ if (!event->shared) {
+ event->owner = current;
+
+ perf_install_in_context(ctx, event, event->cpu);
+ perf_unpin_context(ctx);
+ }
if (move_group)
perf_event_ctx_unlock(group_leader, gctx);
@@ -10077,9 +10272,11 @@ SYSCALL_DEFINE5(perf_event_open,
put_online_cpus();
- mutex_lock(&current->perf_event_mutex);
- list_add_tail(&event->owner_entry, &current->perf_event_list);
- mutex_unlock(&current->perf_event_mutex);
+ if (!event->shared) {
+ mutex_lock(&current->perf_event_mutex);
+ list_add_tail(&event->owner_entry, &current->perf_event_list);
+ mutex_unlock(&current->perf_event_mutex);
+ }
/*
* Drop the reference on the group_event after placing the
@@ -10089,6 +10286,14 @@ SYSCALL_DEFINE5(perf_event_open,
*/
fdput(group);
fd_install(event_fd, event_file);
+
+#ifdef CONFIG_PERF_USER_SHARE
+ /* Add the event to the shared events list */
+ if (!event->shared)
+ perf_event_create_kernel_shared_add(&attr, cpu,
+ task, NULL, ctx, event);
+#endif
+
return event_fd;
err_locked:
@@ -10124,102 +10329,6 @@ SYSCALL_DEFINE5(perf_event_open,
return err;
}
-static struct perf_event *
-perf_event_create_kernel_shared_check(struct perf_event_attr *attr, int cpu,
- struct task_struct *task,
- perf_overflow_handler_t overflow_handler,
- void *context)
-{
- unsigned long idx;
- struct perf_event *event;
- struct shared_events_str *shrd_events;
-
- /*
- * Have to be per cpu events for sharing
- */
- if (!shared_events || (u32)cpu >= nr_cpu_ids)
- return NULL;
-
- /*
- * Can't handle these type requests for sharing right now.
- */
- if (task || context || overflow_handler ||
- (attr->type != PERF_TYPE_HARDWARE &&
- attr->type != PERF_TYPE_RAW))
- return NULL;
-
- /*
- * Using per_cpu_ptr (or could do cross cpu call which is what most of
- * perf does to access per cpu data structures
- */
- shrd_events = per_cpu_ptr(shared_events, cpu);
-
- mutex_lock(&shrd_events->list_mutex);
-
- event = NULL;
- for_each_set_bit(idx, shrd_events->used_mask, SHARED_EVENTS_MAX) {
- if (memcmp(attr, &shrd_events->attr[idx],
- sizeof(shrd_events->attr[idx])) == 0) {
- atomic_inc(&shrd_events->refcount[idx]);
- event = shrd_events->events[idx];
- break;
- }
- }
- mutex_unlock(&shrd_events->list_mutex);
- return event;
-}
-
-static void
-perf_event_create_kernel_shared_add(struct perf_event_attr *attr, int cpu,
- struct task_struct *task,
- perf_overflow_handler_t overflow_handler,
- void *context,
- struct perf_event *event)
-{
- unsigned long idx;
- struct shared_events_str *shrd_events;
-
- /*
- * Have to be per cpu events for sharing
- */
- if (!shared_events || (u32)cpu >= nr_cpu_ids)
- return;
-
- /*
- * Can't handle these type requests for sharing right now.
- */
- if (task || context || overflow_handler ||
- (attr->type != PERF_TYPE_HARDWARE &&
- attr->type != PERF_TYPE_RAW))
- return;
-
- /*
- * Using per_cpu_ptr (or could do cross cpu call which is what most of
- * perf does to access per cpu data structures
- */
- shrd_events = per_cpu_ptr(shared_events, cpu);
-
- mutex_lock(&shrd_events->list_mutex);
-
- /*
- * If we are in this routine, we know that this event isn't already in
- * the shared list. Check if slot available in shared list
- */
- idx = find_first_zero_bit(shrd_events->used_mask, SHARED_EVENTS_MAX);
-
- if (idx >= SHARED_EVENTS_MAX)
- goto out;
-
- /*
- * The event isn't in the list and there is an empty slot so add it.
- */
- shrd_events->attr[idx] = *attr;
- shrd_events->events[idx] = event;
- set_bit(idx, shrd_events->used_mask);
- atomic_set(&shrd_events->refcount[idx], 1);
-out:
- mutex_unlock(&shrd_events->list_mutex);
-}
/**
* perf_event_create_kernel_counter
@@ -10238,28 +10347,26 @@ perf_event_create_kernel_counter(struct perf_event_attr *attr, int cpu,
struct perf_event *event;
int err;
- /*
- * Check if the requested attributes match a shared event
- */
- event = perf_event_create_kernel_shared_check(attr, cpu,
- task, overflow_handler, context);
- if (event)
- return event;
-
- /*
- * Get the target context (task or percpu):
- */
-
- event = perf_event_alloc(attr, cpu, task, NULL, NULL,
- overflow_handler, context, -1);
- if (IS_ERR(event)) {
- err = PTR_ERR(event);
- goto err;
+ event = perf_event_create_kernel_shared_check(attr, cpu, task,
+ overflow_handler, NULL);
+ if (!event) {
+ event = perf_event_alloc(attr, cpu, task, NULL, NULL,
+ overflow_handler, context, -1);
+ if (IS_ERR(event)) {
+ err = PTR_ERR(event);
+ goto err;
+ }
}
/* Mark owner so we could distinguish it from user events. */
event->owner = TASK_TOMBSTONE;
+ if (event->shared)
+ return event;
+
+ /*
+ * Get the target context (task or percpu):
+ */
ctx = find_get_context(event->pmu, task, event);
if (IS_ERR(ctx)) {
err = PTR_ERR(ctx);
@@ -10965,6 +11072,32 @@ check_hotplug_start_event(struct perf_event *event)
event->pmu->start(event, 0);
}
+static void perf_event_zombie_cleanup(unsigned int cpu)
+{
+ struct perf_event *event, *tmp;
+
+ spin_lock(&zombie_list_lock);
+
+ list_for_each_entry_safe(event, tmp, &zombie_list, zombie_entry) {
+ if (event->cpu != cpu)
+ continue;
+
+ list_del(&event->zombie_entry);
+ spin_unlock(&zombie_list_lock);
+
+ /*
+ * Detaching the event from the PMU expects it to be in an
+ * active state.
+ */
+ event->state = PERF_EVENT_STATE_ACTIVE;
+ perf_event_release_kernel(event);
+
+ spin_lock(&zombie_list_lock);
+ }
+
+ spin_unlock(&zombie_list_lock);
+}
+
static int perf_event_start_swevents(unsigned int cpu)
{
struct perf_event_context *ctx;
@@ -10972,6 +11105,8 @@ static int perf_event_start_swevents(unsigned int cpu)
struct perf_event *event;
int idx;
+ perf_event_zombie_cleanup(cpu);
+
idx = srcu_read_lock(&pmus_srcu);
list_for_each_entry_rcu(pmu, &pmus, entry) {
ctx = &per_cpu_ptr(pmu->pmu_cpu_context, cpu)->ctx;
diff --git a/kernel/panic.c b/kernel/panic.c
index fcc8786..d797170 100644
--- a/kernel/panic.c
+++ b/kernel/panic.c
@@ -27,6 +27,7 @@
#include <linux/bug.h>
#define CREATE_TRACE_POINTS
#include <trace/events/exception.h>
+#include <soc/qcom/minidump.h>
#define PANIC_TIMER_STEP 100
#define PANIC_BLINK_SPD 18
@@ -174,6 +175,7 @@ void panic(const char *fmt, ...)
va_start(args, fmt);
vsnprintf(buf, sizeof(buf), fmt, args);
va_end(args);
+ dump_stack_minidump(0);
pr_emerg("Kernel panic - not syncing: %s\n", buf);
#ifdef CONFIG_DEBUG_BUGVERBOSE
/*
diff --git a/kernel/sched/boost.c b/kernel/sched/boost.c
index 1ccd19d..09ad1f0 100644
--- a/kernel/sched/boost.c
+++ b/kernel/sched/boost.c
@@ -90,7 +90,7 @@ static void set_boost_policy(int type)
return;
}
- if (min_possible_efficiency != max_possible_efficiency) {
+ if (sysctl_sched_is_big_little) {
boost_policy = SCHED_BOOST_ON_BIG;
return;
}
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 01a589c..bbe783e 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8660,6 +8660,7 @@ void sched_move_task(struct task_struct *tsk)
struct rq *rq;
rq = task_rq_lock(tsk, &rf);
+ update_rq_clock(rq);
running = task_current(rq, tsk);
queued = task_on_rq_queued(tsk);
diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 3192612..32b67eb 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -235,7 +235,9 @@ static void sugov_track_cycles(struct sugov_policy *sg_policy,
/* Track cycles in current window */
delta_ns = upto - sg_policy->last_cyc_update_time;
- cycles = (prev_freq * delta_ns) / (NSEC_PER_SEC / KHZ);
+ delta_ns *= prev_freq;
+ do_div(delta_ns, (NSEC_PER_SEC / KHZ));
+ cycles = delta_ns;
sg_policy->curr_cycles += cycles;
sg_policy->last_cyc_update_time = upto;
}
diff --git a/kernel/sched/energy.c b/kernel/sched/energy.c
index c32defa..420cb52 100644
--- a/kernel/sched/energy.c
+++ b/kernel/sched/energy.c
@@ -28,6 +28,8 @@
#include <linux/pm_opp.h>
#include <linux/platform_device.h>
+#include "sched.h"
+
struct sched_group_energy *sge_array[NR_CPUS][NR_SD_LEVELS];
static void free_resources(void)
@@ -269,6 +271,7 @@ static int sched_energy_probe(struct platform_device *pdev)
kfree(max_frequencies);
+ walt_sched_energy_populated_callback();
dev_info(&pdev->dev, "Sched-energy-costs capacity updated\n");
return 0;
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8cae0c4..130bbb7 100755
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2769,6 +2769,7 @@ u32 sched_get_wake_up_idle(struct task_struct *p)
return !!enabled;
}
+EXPORT_SYMBOL(sched_get_wake_up_idle);
int sched_set_wake_up_idle(struct task_struct *p, int wake_up_idle)
{
@@ -2781,6 +2782,7 @@ int sched_set_wake_up_idle(struct task_struct *p, int wake_up_idle)
return 0;
}
+EXPORT_SYMBOL(sched_set_wake_up_idle);
/* Precomputed fixed inverse multiplies for multiplication by y^n */
static const u32 runnable_avg_yN_inv[] = {
@@ -5593,13 +5595,6 @@ static unsigned long __cpu_norm_util(int cpu, unsigned long capacity, int delta)
return DIV_ROUND_UP(util << SCHED_CAPACITY_SHIFT, capacity);
}
-static inline bool bias_to_waker_cpu_enabled(struct task_struct *wakee,
- struct task_struct *waker)
-{
- return task_util(waker) > sched_big_waker_task_load &&
- task_util(wakee) < sched_small_wakee_task_load;
-}
-
static inline bool
bias_to_waker_cpu(struct task_struct *p, int cpu, struct cpumask *rtg_target)
{
@@ -6751,107 +6746,6 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
return target;
}
-static inline int find_best_target(struct task_struct *p, bool boosted, bool prefer_idle)
-{
- int iter_cpu;
- int target_cpu = -1;
- int target_util = 0;
- int backup_capacity = 0;
- int best_idle_cpu = -1;
- int best_idle_cstate = INT_MAX;
- int backup_cpu = -1;
- unsigned long task_util_boosted, new_util;
-
- task_util_boosted = boosted_task_util(p);
- for (iter_cpu = 0; iter_cpu < NR_CPUS; iter_cpu++) {
- int cur_capacity;
- struct rq *rq;
- int idle_idx;
-
- /*
- * Iterate from higher cpus for boosted tasks.
- */
- int i = boosted ? NR_CPUS-iter_cpu-1 : iter_cpu;
-
- if (!cpu_online(i) || !cpumask_test_cpu(i, tsk_cpus_allowed(p)))
- continue;
-
- /*
- * p's blocked utilization is still accounted for on prev_cpu
- * so prev_cpu will receive a negative bias due to the double
- * accounting. However, the blocked utilization may be zero.
- */
- new_util = cpu_util(i) + task_util_boosted;
-
- /*
- * Ensure minimum capacity to grant the required boost.
- * The target CPU can be already at a capacity level higher
- * than the one required to boost the task.
- */
- if (new_util > capacity_orig_of(i))
- continue;
-
-#ifdef CONFIG_SCHED_WALT
- if (sched_cpu_high_irqload(i))
- continue;
-#endif
- /*
- * Unconditionally favoring tasks that prefer idle cpus to
- * improve latency.
- */
- if (idle_cpu(i) && prefer_idle) {
- if (best_idle_cpu < 0)
- best_idle_cpu = i;
- continue;
- }
-
- cur_capacity = capacity_curr_of(i);
- rq = cpu_rq(i);
- idle_idx = idle_get_state_idx(rq);
-
- if (new_util < cur_capacity) {
- if (cpu_rq(i)->nr_running) {
- if(prefer_idle) {
- // Find a target cpu with lowest
- // utilization.
- if (target_util == 0 ||
- target_util < new_util) {
- target_cpu = i;
- target_util = new_util;
- }
- } else {
- // Find a target cpu with highest
- // utilization.
- if (target_util == 0 ||
- target_util > new_util) {
- target_cpu = i;
- target_util = new_util;
- }
- }
- } else if (!prefer_idle) {
- if (best_idle_cpu < 0 ||
- (sysctl_sched_cstate_aware &&
- best_idle_cstate > idle_idx)) {
- best_idle_cstate = idle_idx;
- best_idle_cpu = i;
- }
- }
- } else if (backup_capacity == 0 ||
- backup_capacity > cur_capacity) {
- // Find a backup cpu with least capacity.
- backup_capacity = cur_capacity;
- backup_cpu = i;
- }
- }
-
- if (prefer_idle && best_idle_cpu >= 0)
- target_cpu = best_idle_cpu;
- else if (target_cpu < 0)
- target_cpu = best_idle_cpu >= 0 ? best_idle_cpu : backup_cpu;
-
- return target_cpu;
-}
-
/*
* Should task be woken to any available idle cpu?
*
@@ -6925,15 +6819,17 @@ is_packing_eligible(struct task_struct *p, unsigned long task_util,
return cpu_cap_idx_pack == cpu_cap_idx_spread;
}
+unsigned int sched_smp_overlap_capacity = SCHED_CAPACITY_SCALE;
+
static int energy_aware_wake_cpu(struct task_struct *p, int target, int sync)
{
struct sched_domain *sd;
- struct sched_group *sg, *sg_target;
+ struct sched_group *sg, *sg_target, *start_sg;
int target_max_cap = INT_MAX;
- int target_cpu, targeted_cpus = 0;
+ int target_cpu = -1, targeted_cpus = 0;
unsigned long task_util_boosted = 0, curr_util = 0;
long new_util, new_util_cum;
- int i = -1;
+ int i;
int ediff = -1;
int cpu = smp_processor_id();
int min_util_cpu = -1;
@@ -6954,16 +6850,11 @@ static int energy_aware_wake_cpu(struct task_struct *p, int target, int sync)
struct related_thread_group *grp;
cpumask_t search_cpus;
int prev_cpu = task_cpu(p);
- struct task_struct *curr = cpu_rq(cpu)->curr;
-#ifdef CONFIG_SCHED_CORE_ROTATE
+ int start_cpu = walt_start_cpu(prev_cpu);
bool do_rotate = false;
bool avoid_prev_cpu = false;
-#else
-#define do_rotate false
-#define avoid_prev_cpu false
-#endif
- sd = rcu_dereference(per_cpu(sd_ea, prev_cpu));
+ sd = rcu_dereference(per_cpu(sd_ea, start_cpu));
if (!sd)
return target;
@@ -6976,21 +6867,20 @@ static int energy_aware_wake_cpu(struct task_struct *p, int target, int sync)
curr_util = boosted_task_util(cpu_rq(cpu)->curr);
need_idle = wake_to_idle(p) || schedtune_prefer_idle(p);
-
+ if (need_idle)
+ sync = 0;
grp = task_related_thread_group(p);
if (grp && grp->preferred_cluster)
rtg_target = &grp->preferred_cluster->cpus;
- if (sync && bias_to_waker_cpu_enabled(p, curr) &&
- bias_to_waker_cpu(p, cpu, rtg_target)) {
+ if (sync && bias_to_waker_cpu(p, cpu, rtg_target)) {
trace_sched_task_util_bias_to_waker(p, prev_cpu,
task_util(p), cpu, cpu, 0, need_idle);
return cpu;
}
+ task_util_boosted = boosted_task_util(p);
if (sysctl_sched_is_big_little) {
- task_util_boosted = boosted_task_util(p);
-
/*
* Find group with sufficient capacity. We only get here if no cpu is
* overutilized. We may end up overutilizing a cpu by adding the task,
@@ -7043,204 +6933,195 @@ static int energy_aware_wake_cpu(struct task_struct *p, int target, int sync)
target_max_cap = capacity_of(max_cap_cpu);
}
} while (sg = sg->next, sg != sd->groups);
+ }
- target_cpu = -1;
+ start_sg = sg_target;
+next_sg:
+ cpumask_copy(&search_cpus, tsk_cpus_allowed(p));
+ cpumask_and(&search_cpus, &search_cpus,
+ sched_group_cpus(sg_target));
- cpumask_copy(&search_cpus, tsk_cpus_allowed(p));
- cpumask_and(&search_cpus, &search_cpus,
- sched_group_cpus(sg_target));
-
-#ifdef CONFIG_SCHED_CORE_ROTATE
- i = find_first_cpu_bit(p, &search_cpus, sg_target,
- &avoid_prev_cpu, &do_rotate,
- &first_cpu_bit_env);
+ i = find_first_cpu_bit(p, &search_cpus, sg_target,
+ &avoid_prev_cpu, &do_rotate,
+ &first_cpu_bit_env);
retry:
-#endif
- /* Find cpu with sufficient capacity */
- while ((i = cpumask_next(i, &search_cpus)) < nr_cpu_ids) {
- cpumask_clear_cpu(i, &search_cpus);
+ /* Find cpu with sufficient capacity */
+ while ((i = cpumask_next(i, &search_cpus)) < nr_cpu_ids) {
+ cpumask_clear_cpu(i, &search_cpus);
- if (cpu_isolated(i))
- continue;
+ if (cpu_isolated(i))
+ continue;
- if (isolated_candidate == -1)
- isolated_candidate = i;
+ if (isolated_candidate == -1)
+ isolated_candidate = i;
- if (avoid_prev_cpu && i == prev_cpu)
- continue;
+ if (avoid_prev_cpu && i == prev_cpu)
+ continue;
- if (is_reserved(i))
- continue;
+ if (is_reserved(i))
+ continue;
- if (sched_cpu_high_irqload(i))
- continue;
+ if (sched_cpu_high_irqload(i))
+ continue;
- /*
- * Since this code is inside sched_is_big_little,
- * we are going to assume that boost policy is
- * SCHED_BOOST_ON_BIG.
- */
- if (placement_boost != SCHED_BOOST_NONE) {
- new_util = cpu_util(i);
- if (new_util < min_util) {
- min_util_cpu = i;
- min_util = new_util;
- }
- continue;
- }
-
- /*
- * p's blocked utilization is still accounted for on prev_cpu
- * so prev_cpu will receive a negative bias due to the double
- * accounting. However, the blocked utilization may be zero.
- */
- new_util = cpu_util(i) + task_util_boosted;
-
- if (task_in_cum_window_demand(cpu_rq(i), p))
- new_util_cum = cpu_util_cum(i, 0) +
- task_util_boosted - task_util(p);
- else
- new_util_cum = cpu_util_cum(i, 0) +
- task_util_boosted;
-
- if (sync && i == cpu)
- new_util -= curr_util;
-
- trace_sched_cpu_util(p, i, task_util_boosted, curr_util,
- new_util_cum, sync);
-
- /*
- * Ensure minimum capacity to grant the required boost.
- * The target CPU can be already at a capacity level higher
- * than the one required to boost the task.
- */
- if (new_util > capacity_orig_of(i))
- continue;
-
- cpu_idle_idx = idle_get_state_idx(cpu_rq(i));
-
- if (!need_idle &&
- add_capacity_margin(new_util_cum, i) <
- capacity_curr_of(i)) {
- if (sysctl_sched_cstate_aware) {
- if (cpu_idle_idx < min_idle_idx) {
- min_idle_idx = cpu_idle_idx;
- min_idle_idx_cpu = i;
- target_cpu = i;
- target_cpu_util = new_util;
- target_cpu_new_util_cum =
- new_util_cum;
- targeted_cpus = 1;
- } else if (cpu_idle_idx ==
- min_idle_idx &&
- (target_cpu_util >
- new_util ||
- (target_cpu_util ==
- new_util &&
- (i == prev_cpu ||
- (target_cpu !=
- prev_cpu &&
- target_cpu_new_util_cum >
- new_util_cum))))) {
- min_idle_idx_cpu = i;
- target_cpu = i;
- target_cpu_util = new_util;
- target_cpu_new_util_cum =
- new_util_cum;
- targeted_cpus++;
- }
- } else if (cpu_rq(i)->nr_running) {
- target_cpu = i;
-#ifdef CONFIG_SCHED_CORE_ROTATE
- do_rotate = false;
-#endif
- break;
- }
- } else if (!need_idle) {
- /*
- * At least one CPU other than target_cpu is
- * going to raise CPU's OPP higher than current
- * because current CPU util is more than current
- * capacity + margin. We can safely do task
- * packing without worrying about doing such
- * itself raises OPP.
- */
- safe_to_pack = true;
- }
-
- /*
- * cpu has capacity at higher OPP, keep it as
- * fallback.
- */
+ /*
+ * Since this code is inside sched_is_big_little,
+ * we are going to assume that boost policy is
+ * SCHED_BOOST_ON_BIG.
+ */
+ if (placement_boost != SCHED_BOOST_NONE) {
+ new_util = cpu_util(i);
if (new_util < min_util) {
min_util_cpu = i;
min_util = new_util;
+ }
+ continue;
+ }
+
+ /*
+ * p's blocked utilization is still accounted for on prev_cpu
+ * so prev_cpu will receive a negative bias due to the double
+ * accounting. However, the blocked utilization may be zero.
+ */
+ new_util = cpu_util(i) + task_util_boosted;
+
+ if (task_in_cum_window_demand(cpu_rq(i), p))
+ new_util_cum = cpu_util_cum(i, 0) +
+ task_util_boosted - task_util(p);
+ else
+ new_util_cum = cpu_util_cum(i, 0) +
+ task_util_boosted;
+
+ if (sync && i == cpu)
+ new_util -= curr_util;
+
+ trace_sched_cpu_util(p, i, task_util_boosted, curr_util,
+ new_util_cum, sync);
+
+ /*
+ * Ensure minimum capacity to grant the required boost.
+ * The target CPU can be already at a capacity level higher
+ * than the one required to boost the task.
+ */
+ if (new_util > capacity_orig_of(i))
+ continue;
+
+ cpu_idle_idx = idle_get_state_idx(cpu_rq(i));
+
+ if (!need_idle &&
+ add_capacity_margin(new_util_cum, i) <
+ capacity_curr_of(i)) {
+ if (sysctl_sched_cstate_aware) {
+ if (cpu_idle_idx < min_idle_idx) {
+ min_idle_idx = cpu_idle_idx;
+ min_idle_idx_cpu = i;
+ target_cpu = i;
+ target_cpu_util = new_util;
+ target_cpu_new_util_cum =
+ new_util_cum;
+ targeted_cpus = 1;
+ } else if (cpu_idle_idx ==
+ min_idle_idx &&
+ (target_cpu_util >
+ new_util ||
+ (target_cpu_util ==
+ new_util &&
+ (i == prev_cpu ||
+ (target_cpu !=
+ prev_cpu &&
+ target_cpu_new_util_cum >
+ new_util_cum))))) {
+ min_idle_idx_cpu = i;
+ target_cpu = i;
+ target_cpu_util = new_util;
+ target_cpu_new_util_cum =
+ new_util_cum;
+ targeted_cpus++;
+ }
+ } else if (cpu_rq(i)->nr_running) {
+ target_cpu = i;
+ do_rotate = false;
+ break;
+ }
+ } else if (!need_idle) {
+ /*
+ * At least one CPU other than target_cpu is
+ * going to raise its OPP above the current
+ * level, because its utilization already exceeds
+ * capacity + margin. We can safely pack the task
+ * here without worrying that doing so will
+ * itself raise the OPP.
+ */
+ safe_to_pack = true;
+ }
+
+ /*
+ * cpu has capacity at higher OPP, keep it as
+ * fallback.
+ */
+ if (new_util < min_util) {
+ min_util_cpu = i;
+ min_util = new_util;
+ min_util_cpu_idle_idx = cpu_idle_idx;
+ min_util_cpu_util_cum = new_util_cum;
+ } else if (sysctl_sched_cstate_aware &&
+ min_util == new_util) {
+ if (min_util_cpu == task_cpu(p))
+ continue;
+
+ if (i == task_cpu(p) ||
+ (cpu_idle_idx < min_util_cpu_idle_idx ||
+ (cpu_idle_idx == min_util_cpu_idle_idx &&
+ min_util_cpu_util_cum > new_util_cum))) {
+ min_util_cpu = i;
min_util_cpu_idle_idx = cpu_idle_idx;
min_util_cpu_util_cum = new_util_cum;
- } else if (sysctl_sched_cstate_aware &&
- min_util == new_util) {
- if (min_util_cpu == task_cpu(p))
- continue;
-
- if (i == task_cpu(p) ||
- (cpu_idle_idx < min_util_cpu_idle_idx ||
- (cpu_idle_idx == min_util_cpu_idle_idx &&
- min_util_cpu_util_cum > new_util_cum))) {
- min_util_cpu = i;
- min_util_cpu_idle_idx = cpu_idle_idx;
- min_util_cpu_util_cum = new_util_cum;
- }
}
}
+ }
-#ifdef CONFIG_SCHED_CORE_ROTATE
- if (do_rotate) {
- /*
- * We started iteration somewhere in the middle of
- * cpumask. Iterate once again from bit 0 to the
- * previous starting point bit.
- */
- do_rotate = false;
- i = -1;
- goto retry;
- }
-#endif
-
- if (target_cpu == -1 ||
- (target_cpu != min_util_cpu && !safe_to_pack &&
- !is_packing_eligible(p, task_util_boosted, sg_target,
- target_cpu_new_util_cum,
- targeted_cpus))) {
- if (likely(min_util_cpu != -1))
- target_cpu = min_util_cpu;
- else if (cpu_isolated(task_cpu(p)) &&
- isolated_candidate != -1)
- target_cpu = isolated_candidate;
- else
- target_cpu = task_cpu(p);
- }
- } else {
+ if (do_rotate) {
/*
- * Find a cpu with sufficient capacity
+ * We started iteration somewhere in the middle of
+ * cpumask. Iterate once again from bit 0 to the
+ * previous starting point bit.
*/
-#ifdef CONFIG_CGROUP_SCHEDTUNE
- bool boosted = schedtune_task_boost(p) > 0;
- bool prefer_idle = schedtune_prefer_idle(p) > 0;
-#else
- bool boosted = 0;
- bool prefer_idle = 0;
-#endif
- int tmp_target = find_best_target(p, boosted, prefer_idle);
+ do_rotate = false;
+ i = -1;
+ goto retry;
+ }
- target_cpu = task_cpu(p);
- if (tmp_target >= 0) {
- target_cpu = tmp_target;
- if ((boosted || prefer_idle) && idle_cpu(target_cpu))
- return target_cpu;
+ /*
+ * If we don't find a CPU that fits this task without
+ * increasing OPP above sched_smp_overlap_capacity or
+ * when placement boost is active, expand the search to
+ * the other groups on a SMP system.
+ */
+ if (!sysctl_sched_is_big_little &&
+ (placement_boost == SCHED_BOOST_ON_ALL ||
+ (target_cpu == -1 && min_util_cpu_util_cum >
+ sched_smp_overlap_capacity))) {
+ if (sg_target->next != start_sg) {
+ sg_target = sg_target->next;
+ goto next_sg;
}
}
+ if (target_cpu == -1 ||
+ (target_cpu != min_util_cpu && !safe_to_pack &&
+ !is_packing_eligible(p, task_util_boosted, sg_target,
+ target_cpu_new_util_cum,
+ targeted_cpus))) {
+ if (likely(min_util_cpu != -1))
+ target_cpu = min_util_cpu;
+ else if (cpu_isolated(task_cpu(p)) &&
+ isolated_candidate != -1)
+ target_cpu = isolated_candidate;
+ else
+ target_cpu = task_cpu(p);
+ }
+
if (target_cpu != task_cpu(p) && !avoid_prev_cpu &&
!cpu_isolated(task_cpu(p))) {
struct energy_env eenv = {
@@ -7263,6 +7144,14 @@ static int energy_aware_wake_cpu(struct task_struct *p, int target, int sync)
return target_cpu;
}
+ if (need_idle) {
+ trace_sched_task_util_need_idle(p, task_cpu(p),
+ task_util(p),
+ target_cpu, target_cpu,
+ 0, need_idle);
+ return target_cpu;
+ }
+
/*
* We always want to migrate the task to the best CPU when
* placement boost is active.
@@ -9420,7 +9309,7 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
if (energy_aware() && !env->dst_rq->rd->overutilized) {
int cpu_local, cpu_busiest;
long util_cum;
- unsigned long capacity_local, capacity_busiest;
+ unsigned long energy_local, energy_busiest;
if (env->idle != CPU_NEWLY_IDLE)
goto out_balanced;
@@ -9431,12 +9320,12 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
cpu_local = group_first_cpu(sds.local);
cpu_busiest = group_first_cpu(sds.busiest);
- /* TODO: don't assume same cap cpus are in same domain */
- capacity_local = capacity_orig_of(cpu_local);
- capacity_busiest = capacity_orig_of(cpu_busiest);
- if (capacity_local > capacity_busiest) {
+ /* TODO: don't assume same energy cpus are in same domain */
+ energy_local = cpu_max_power_cost(cpu_local);
+ energy_busiest = cpu_max_power_cost(cpu_busiest);
+ if (energy_local > energy_busiest) {
goto out_balanced;
- } else if (capacity_local == capacity_busiest) {
+ } else if (energy_local == energy_busiest) {
if (cpu_rq(cpu_busiest)->nr_running < 2)
goto out_balanced;
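The energy_aware_wake_cpu() hunks above keep a running minimum over candidate CPUs: prefer the lowest projected utilization, and break utilization ties by the shallowest idle state (smallest idle-state index) when cstate-aware placement is enabled. A minimal standalone sketch of that tie-breaking order follows — the struct and function names are illustrative, and the real kernel code adds further biases (cumulative demand, prev_cpu preference) not modeled here:

```c
#include <assert.h>
#include <limits.h>

struct cpu_stat {
	unsigned long util;	/* projected utilization after placing the task */
	int idle_idx;		/* shallower idle state = smaller index */
};

/*
 * Prefer the CPU with the lowest projected utilization; on a tie,
 * prefer the CPU in the shallowest idle state. Mirrors the
 * min_util/min_util_cpu_idle_idx bookkeeping in the hunk above.
 */
static int pick_min_util_cpu(const struct cpu_stat *cpus, int n)
{
	int best = -1;
	unsigned long best_util = ULONG_MAX;
	int best_idle = INT_MAX;
	int i;

	for (i = 0; i < n; i++) {
		if (cpus[i].util < best_util ||
		    (cpus[i].util == best_util &&
		     cpus[i].idle_idx < best_idle)) {
			best = i;
			best_util = cpus[i].util;
			best_idle = cpus[i].idle_idx;
		}
	}
	return best;
}
```

With candidates `{300,2}, {100,1}, {100,0}`, the last CPU wins: it ties on utilization with the second but sits in a shallower idle state.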
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 1bf8e63..1294950 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1757,14 +1757,11 @@ static int find_lowest_rq(struct task_struct *task)
unsigned long tutil = task_util(task);
int best_cpu_idle_idx = INT_MAX;
int cpu_idle_idx = -1;
- bool placement_boost;
-#ifdef CONFIG_SCHED_CORE_ROTATE
+ enum sched_boost_policy placement_boost;
+ int prev_cpu = task_cpu(task);
+ int start_cpu = walt_start_cpu(prev_cpu);
bool do_rotate = false;
bool avoid_prev_cpu = false;
-#else
-#define do_rotate false
-#define avoid_prev_cpu false
-#endif
/* Make sure the mask is initialized first */
if (unlikely(!lowest_mask))
@@ -1776,19 +1773,16 @@ static int find_lowest_rq(struct task_struct *task)
if (!cpupri_find(&task_rq(task)->rd->cpupri, task, lowest_mask))
return -1; /* No targets found */
- if (energy_aware() && sysctl_sched_is_big_little) {
+ if (energy_aware()) {
sg_target = NULL;
best_cpu = -1;
- /*
- * Since this code is inside sched_is_big_little, we are going
- * to assume that boost policy is SCHED_BOOST_ON_BIG
- */
- placement_boost = sched_boost() == FULL_THROTTLE_BOOST;
+ placement_boost = sched_boost() == FULL_THROTTLE_BOOST ?
+ sched_boost_policy() : SCHED_BOOST_NONE;
best_capacity = placement_boost ? 0 : ULONG_MAX;
rcu_read_lock();
- sd = rcu_dereference(per_cpu(sd_ea, task_cpu(task)));
+ sd = rcu_dereference(per_cpu(sd_ea, start_cpu));
if (!sd) {
rcu_read_unlock();
goto noea;
@@ -1800,6 +1794,11 @@ static int find_lowest_rq(struct task_struct *task)
sched_group_cpus(sg)))
continue;
+ if (!sysctl_sched_is_big_little) {
+ sg_target = sg;
+ break;
+ }
+
cpu = group_first_cpu(sg);
cpu_capacity = capacity_orig_of(cpu);
@@ -1824,11 +1823,9 @@ static int find_lowest_rq(struct task_struct *task)
cpumask_andnot(&backup_search_cpu, &backup_search_cpu,
&search_cpu);
-#ifdef CONFIG_SCHED_CORE_ROTATE
cpu = find_first_cpu_bit(task, &search_cpu, sg_target,
&avoid_prev_cpu, &do_rotate,
&first_cpu_bit_env);
-#endif
} else {
cpumask_copy(&search_cpu, lowest_mask);
cpumask_clear(&backup_search_cpu);
@@ -1845,7 +1842,7 @@ static int find_lowest_rq(struct task_struct *task)
*/
util = cpu_util(cpu);
- if (avoid_prev_cpu && cpu == task_cpu(task))
+ if (avoid_prev_cpu && cpu == prev_cpu)
continue;
if (__cpu_overutilized(cpu, util + tutil))
@@ -1894,7 +1891,6 @@ static int find_lowest_rq(struct task_struct *task)
best_cpu = cpu;
}
-#ifdef CONFIG_SCHED_CORE_ROTATE
if (do_rotate) {
/*
* We started iteration somewhere in the middle of
@@ -1905,13 +1901,14 @@ static int find_lowest_rq(struct task_struct *task)
cpu = -1;
goto retry;
}
-#endif
- if (best_cpu != -1) {
+ if (best_cpu != -1 && placement_boost != SCHED_BOOST_ON_ALL) {
return best_cpu;
} else if (!cpumask_empty(&backup_search_cpu)) {
cpumask_copy(&search_cpu, &backup_search_cpu);
cpumask_clear(&backup_search_cpu);
+ cpu = -1;
+ placement_boost = SCHED_BOOST_NONE;
goto retry;
}
}
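The find_lowest_rq() change above retries the search over `backup_search_cpu` when the primary set yields nothing acceptable (and resets placement boost for the second pass). A hedged sketch of that two-pass pattern, with an illustrative `pick()` predicate standing in for the kernel's fitness checks:

```c
#include <assert.h>

/*
 * Search the primary CPU set first; fall back to the backup set if
 * nothing in the primary set qualifies. The kernel version reuses one
 * loop via a goto-retry and clears placement_boost before the second
 * pass; this sketch just makes the two passes explicit.
 */
static int search_with_backup(const int *primary, int np,
			      const int *backup, int nb,
			      int (*pick)(int cpu))
{
	int i;

	for (i = 0; i < np; i++)
		if (pick(primary[i]))
			return primary[i];
	for (i = 0; i < nb; i++)
		if (pick(backup[i]))
			return backup[i];
	return -1;
}

/* illustrative predicate: accept even-numbered CPUs only */
static int is_even(int cpu)
{
	return cpu % 2 == 0;
}
```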
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 9801487..c85928b 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -11,6 +11,7 @@
#include <linux/irq_work.h>
#include <linux/tick.h>
#include <linux/slab.h>
+#include <linux/sched_energy.h>
#include "cpupri.h"
#include "cpudeadline.h"
@@ -26,6 +27,7 @@ struct rq;
struct cpuidle_state;
extern __read_mostly bool sched_predl;
+extern unsigned int sched_smp_overlap_capacity;
#ifdef CONFIG_SCHED_WALT
extern unsigned int sched_ravg_window;
@@ -2708,11 +2710,21 @@ extern void sched_boost_parse_dt(void);
extern void clear_ed_task(struct task_struct *p, struct rq *rq);
extern bool early_detection_notify(struct rq *rq, u64 wallclock);
-static inline unsigned int power_cost(int cpu, u64 demand)
+static inline unsigned int power_cost(int cpu, bool max)
{
- return cpu_max_possible_capacity(cpu);
+ struct sched_group_energy *sge = sge_array[cpu][SD_LEVEL1];
+
+ if (!sge || !sge->nr_cap_states)
+ return cpu_max_possible_capacity(cpu);
+
+ if (max)
+ return sge->cap_states[sge->nr_cap_states - 1].power;
+ else
+ return sge->cap_states[0].power;
}
+extern void walt_sched_energy_populated_callback(void);
+
#else /* CONFIG_SCHED_WALT */
struct walt_sched_stats;
@@ -2804,6 +2816,11 @@ static inline unsigned long thermal_cap(int cpu)
{
return cpu_rq(cpu)->cpu_capacity_orig;
}
+
+static inline int cpu_max_power_cost(int cpu)
+{
+ return capacity_orig_of(cpu);
+}
#endif
static inline void clear_walt_request(int cpu) { }
@@ -2829,11 +2846,13 @@ static inline bool early_detection_notify(struct rq *rq, u64 wallclock)
return 0;
}
-static inline unsigned int power_cost(int cpu, u64 demand)
+static inline unsigned int power_cost(int cpu, bool max)
{
return SCHED_CAPACITY_SCALE;
}
+static inline void walt_sched_energy_populated_callback(void) { }
+
#endif /* CONFIG_SCHED_WALT */
static inline bool energy_aware(void)
@@ -2853,4 +2872,6 @@ int
find_first_cpu_bit(struct task_struct *p, const cpumask_t *search_cpus,
struct sched_group *sg_target, bool *avoid_prev_cpu,
bool *do_rotate, struct find_first_cpu_bit_env *env);
+#else
+#define find_first_cpu_bit(...) -1
#endif
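The reworked power_cost() above now consults the energy model's capacity-state table: entries are ordered from the lowest to the highest OPP, so the cheapest power value is the first entry and the most expensive is the last, with a fallback when no table is registered. A simplified sketch under those assumptions (the struct and parameter names are illustrative, not the kernel's `sge_array` plumbing):

```c
#include <assert.h>
#include <stddef.h>

struct cap_state {
	unsigned long cap;	/* capacity at this OPP */
	unsigned long power;	/* power cost at this OPP */
};

/*
 * cap_states are assumed sorted by OPP: index 0 is the cheapest
 * operating point, index nr-1 the most expensive. Fall back to a
 * default cost when no energy model is available.
 */
static unsigned long power_cost_sketch(const struct cap_state *cs,
				       size_t nr, int max,
				       unsigned long fallback)
{
	if (!cs || nr == 0)
		return fallback;
	return max ? cs[nr - 1].power : cs[0].power;
}
```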
diff --git a/kernel/sched/walt.c b/kernel/sched/walt.c
index 1f5639c..f941d92 100644
--- a/kernel/sched/walt.c
+++ b/kernel/sched/walt.c
@@ -162,13 +162,6 @@ static const unsigned int top_tasks_bitmap_size =
*/
__read_mostly unsigned int sysctl_sched_freq_reporting_policy;
-
-#define SCHED_BIG_WAKER_TASK_LOAD_PCT 25UL
-#define SCHED_SMALL_WAKEE_TASK_LOAD_PCT 10UL
-
-__read_mostly unsigned int sched_big_waker_task_load;
-__read_mostly unsigned int sched_small_wakee_task_load;
-
static int __init set_sched_ravg_window(char *str)
{
unsigned int window_size;
@@ -1874,7 +1867,7 @@ update_task_rq_cpu_cycles(struct task_struct *p, struct rq *rq, int event,
p->cpu_cycles = cur_cycles;
- trace_sched_get_task_cpu_cycles(cpu, event, rq->cc.cycles, rq->cc.time);
+ trace_sched_get_task_cpu_cycles(cpu, event, rq->cc.cycles, rq->cc.time, p);
}
static inline void run_walt_irq_work(u64 old_window_start, struct rq *rq)
@@ -2152,7 +2145,7 @@ compare_clusters(void *priv, struct list_head *a, struct list_head *b)
return ret;
}
-void sort_clusters(void)
+static void sort_clusters(void)
{
struct sched_cluster *cluster;
struct list_head new_head;
@@ -2162,9 +2155,9 @@ void sort_clusters(void)
for_each_sched_cluster(cluster) {
cluster->max_power_cost = power_cost(cluster_first_cpu(cluster),
- max_task_load());
+ true);
cluster->min_power_cost = power_cost(cluster_first_cpu(cluster),
- 0);
+ false);
if (cluster->max_power_cost > tmp_max)
tmp_max = cluster->max_power_cost;
@@ -2183,6 +2176,59 @@ void sort_clusters(void)
move_list(&cluster_head, &new_head, false);
}
+int __read_mostly min_power_cpu;
+
+void walt_sched_energy_populated_callback(void)
+{
+ struct sched_cluster *cluster;
+ int prev_max = 0, next_min = 0;
+
+ mutex_lock(&cluster_lock);
+
+ if (num_clusters == 1) {
+ sysctl_sched_is_big_little = 0;
+ mutex_unlock(&cluster_lock);
+ return;
+ }
+
+ sort_clusters();
+
+ for_each_sched_cluster(cluster) {
+ if (cluster->min_power_cost > prev_max) {
+ prev_max = cluster->max_power_cost;
+ continue;
+ }
+ /*
+ * Overlapping power curves mean this is not a
+ * big.LITTLE system, whose clusters are assumed
+ * to have non-overlapping power curves.
+ */
+ sysctl_sched_is_big_little = 0;
+ next_min = cluster->min_power_cost;
+ }
+
+ /*
+ * Find the OPP at which the lower power cluster
+ * power is overlapping with the next cluster.
+ */
+ if (!sysctl_sched_is_big_little) {
+ int cpu = cluster_first_cpu(sched_cluster[0]);
+ struct sched_group_energy *sge = sge_array[cpu][SD_LEVEL1];
+ int i;
+
+ for (i = 1; i < sge->nr_cap_states; i++) {
+ if (sge->cap_states[i].power >= next_min) {
+ sched_smp_overlap_capacity =
+ sge->cap_states[i-1].cap;
+ break;
+ }
+ }
+
+ min_power_cpu = cpu;
+ }
+
+ mutex_unlock(&cluster_lock);
+}
+
static void update_all_clusters_stats(void)
{
struct sched_cluster *cluster;
@@ -2344,10 +2390,58 @@ static struct notifier_block notifier_policy_block = {
.notifier_call = cpufreq_notifier_policy
};
+static int cpufreq_notifier_trans(struct notifier_block *nb,
+ unsigned long val, void *data)
+{
+ struct cpufreq_freqs *freq = (struct cpufreq_freqs *)data;
+ unsigned int cpu = freq->cpu, new_freq = freq->new;
+ unsigned long flags;
+ struct sched_cluster *cluster;
+ struct cpumask policy_cpus = cpu_rq(cpu)->freq_domain_cpumask;
+ int i, j;
+
+ if (val != CPUFREQ_POSTCHANGE)
+ return NOTIFY_DONE;
+
+ if (cpu_cur_freq(cpu) == new_freq)
+ return NOTIFY_OK;
+
+ for_each_cpu(i, &policy_cpus) {
+ cluster = cpu_rq(i)->cluster;
+
+ if (!use_cycle_counter) {
+ for_each_cpu(j, &cluster->cpus) {
+ struct rq *rq = cpu_rq(j);
+
+ raw_spin_lock_irqsave(&rq->lock, flags);
+ update_task_ravg(rq->curr, rq, TASK_UPDATE,
+ ktime_get_ns(), 0);
+ raw_spin_unlock_irqrestore(&rq->lock, flags);
+ }
+ }
+
+ cluster->cur_freq = new_freq;
+ cpumask_andnot(&policy_cpus, &policy_cpus, &cluster->cpus);
+ }
+
+ return NOTIFY_OK;
+}
+
+static struct notifier_block notifier_trans_block = {
+ .notifier_call = cpufreq_notifier_trans
+};
+
static int register_walt_callback(void)
{
- return cpufreq_register_notifier(¬ifier_policy_block,
- CPUFREQ_POLICY_NOTIFIER);
+ int ret;
+
+ ret = cpufreq_register_notifier(¬ifier_policy_block,
+ CPUFREQ_POLICY_NOTIFIER);
+ if (!ret)
+ ret = cpufreq_register_notifier(¬ifier_trans_block,
+ CPUFREQ_TRANSITION_NOTIFIER);
+
+ return ret;
}
/*
* cpufreq callbacks can be registered at core_initcall or later time.
@@ -2466,6 +2560,11 @@ static void _set_preferred_cluster(struct related_thread_group *grp)
if (list_empty(&grp->tasks))
return;
+ if (!sysctl_sched_is_big_little) {
+ grp->preferred_cluster = sched_cluster[0];
+ return;
+ }
+
wallclock = ktime_get_ns();
/*
@@ -3121,8 +3220,7 @@ void walt_sched_init(struct rq *rq)
walt_cpu_util_freq_divisor =
(sched_ravg_window >> SCHED_CAPACITY_SHIFT) * 100;
- sched_big_waker_task_load =
- (SCHED_BIG_WAKER_TASK_LOAD_PCT << SCHED_CAPACITY_SHIFT) / 100;
- sched_small_wakee_task_load =
- (SCHED_SMALL_WAKEE_TASK_LOAD_PCT << SCHED_CAPACITY_SHIFT) / 100;
+ sched_init_task_load_windows =
+ div64_u64((u64)sysctl_sched_init_task_load_pct *
+ (u64)sched_ravg_window, 100);
}
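walt_sched_energy_populated_callback() above computes `sched_smp_overlap_capacity` by scanning the low-power cluster's capacity states for the first OPP whose power meets or exceeds the next cluster's minimum power, then taking the capacity of the OPP just below it. A standalone sketch of that search (illustrative names; the kernel reads the table from `sge_array`):

```c
#include <assert.h>
#include <stddef.h>

struct cap_state {
	unsigned long cap;
	unsigned long power;
};

/*
 * Return the capacity of the last OPP whose power stays below
 * next_min (the next cluster's cheapest power). If no OPP crosses
 * next_min, return the caller's default.
 */
static unsigned long overlap_capacity(const struct cap_state *cs, size_t nr,
				      unsigned long next_min,
				      unsigned long dflt)
{
	size_t i;

	for (i = 1; i < nr; i++)
		if (cs[i].power >= next_min)
			return cs[i - 1].cap;
	return dflt;
}
```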
diff --git a/kernel/sched/walt.h b/kernel/sched/walt.h
index 86d5bfd..c8780cf 100644
--- a/kernel/sched/walt.h
+++ b/kernel/sched/walt.h
@@ -219,7 +219,7 @@ static inline unsigned int max_task_load(void)
return sched_ravg_window;
}
-static inline u32 cpu_cycles_to_freq(u64 cycles, u32 period)
+static inline u32 cpu_cycles_to_freq(u64 cycles, u64 period)
{
return div64_u64(cycles, period);
}
@@ -281,12 +281,16 @@ static inline int same_cluster(int src_cpu, int dst_cpu)
return cpu_rq(src_cpu)->cluster == cpu_rq(dst_cpu)->cluster;
}
-void sort_clusters(void);
-
void walt_irq_work(struct irq_work *irq_work);
void walt_sched_init(struct rq *rq);
+extern int __read_mostly min_power_cpu;
+static inline int walt_start_cpu(int prev_cpu)
+{
+ return sysctl_sched_is_big_little ? prev_cpu : min_power_cpu;
+}
+
#else /* CONFIG_SCHED_WALT */
static inline void walt_sched_init(struct rq *rq) { }
@@ -358,6 +362,11 @@ fixup_walt_sched_stats_common(struct rq *rq, struct task_struct *p,
{
}
+static inline int walt_start_cpu(int prev_cpu)
+{
+ return prev_cpu;
+}
+
#endif /* CONFIG_SCHED_WALT */
#endif
diff --git a/kernel/trace/msm_rtb.c b/kernel/trace/msm_rtb.c
index 9d9f0bf..d3bcd5c 100644
--- a/kernel/trace/msm_rtb.c
+++ b/kernel/trace/msm_rtb.c
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2013-2016, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2013-2017, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -28,6 +28,7 @@
#include <asm-generic/sizes.h>
#include <linux/msm_rtb.h>
#include <asm/timex.h>
+#include <soc/qcom/minidump.h>
#define SENTINEL_BYTE_1 0xFF
#define SENTINEL_BYTE_2 0xAA
@@ -242,6 +243,7 @@ EXPORT_SYMBOL(uncached_logk);
static int msm_rtb_probe(struct platform_device *pdev)
{
struct msm_rtb_platform_data *d = pdev->dev.platform_data;
+ struct md_region md_entry;
#if defined(CONFIG_QCOM_RTB_SEPARATE_CPUS)
unsigned int cpu;
#endif
@@ -293,6 +295,12 @@ static int msm_rtb_probe(struct platform_device *pdev)
memset(msm_rtb.rtb, 0, msm_rtb.size);
+ strlcpy(md_entry.name, "KRTB_BUF", sizeof(md_entry.name));
+ md_entry.virt_addr = (uintptr_t)msm_rtb.rtb;
+ md_entry.phys_addr = msm_rtb.phys;
+ md_entry.size = msm_rtb.size;
+ if (msm_minidump_add_region(&md_entry))
+ pr_info("Failed to add RTB region to Minidump\n");
#if defined(CONFIG_QCOM_RTB_SEPARATE_CPUS)
for_each_possible_cpu(cpu) {
diff --git a/kernel/workqueue_internal.h b/kernel/workqueue_internal.h
index 8635417..29fa81f 100644
--- a/kernel/workqueue_internal.h
+++ b/kernel/workqueue_internal.h
@@ -9,6 +9,7 @@
#include <linux/workqueue.h>
#include <linux/kthread.h>
+#include <linux/preempt.h>
struct worker_pool;
@@ -59,7 +60,7 @@ struct worker {
*/
static inline struct worker *current_wq_worker(void)
{
- if (current->flags & PF_WQ_WORKER)
+ if (in_task() && (current->flags & PF_WQ_WORKER))
return kthread_data(current);
return NULL;
}
diff --git a/lib/asn1_decoder.c b/lib/asn1_decoder.c
index 0bd8a61..1ef0cec 100644
--- a/lib/asn1_decoder.c
+++ b/lib/asn1_decoder.c
@@ -228,7 +228,7 @@ int asn1_ber_decoder(const struct asn1_decoder *decoder,
hdr = 2;
/* Extract a tag from the data */
- if (unlikely(dp >= datalen - 1))
+ if (unlikely(datalen - dp < 2))
goto data_overrun_error;
tag = data[dp++];
if (unlikely((tag & 0x1f) == ASN1_LONG_TAG))
@@ -274,7 +274,7 @@ int asn1_ber_decoder(const struct asn1_decoder *decoder,
int n = len - 0x80;
if (unlikely(n > 2))
goto length_too_long;
- if (unlikely(dp >= datalen - n))
+ if (unlikely(n > datalen - dp))
goto data_overrun_error;
hdr += n;
for (len = 0; n > 0; n--) {
@@ -284,6 +284,9 @@ int asn1_ber_decoder(const struct asn1_decoder *decoder,
if (unlikely(len > datalen - dp))
goto data_overrun_error;
}
+ } else {
+ if (unlikely(len > datalen - dp))
+ goto data_overrun_error;
}
if (flags & FLAG_CONS) {
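The asn1_ber_decoder() bound-check rewrites above are unsigned-underflow fixes: with unsigned offsets, `dp >= datalen - 1` wraps when `datalen == 0`, silently disabling the overrun guard, whereas `datalen - dp < 2` cannot underflow as long as `dp <= datalen` holds (which the decoder maintains). A minimal illustration of the two forms:

```c
#include <assert.h>
#include <stddef.h>

/* broken: datalen == 0 makes (datalen - 1) wrap to SIZE_MAX,
 * so the check never fires and the overrun goes undetected */
static int need_two_bytes_bad(size_t dp, size_t datalen)
{
	return dp >= datalen - 1;	/* nonzero => overrun */
}

/* safe: no subtraction can underflow while dp <= datalen */
static int need_two_bytes_good(size_t dp, size_t datalen)
{
	return datalen - dp < 2;	/* nonzero => overrun */
}
```

With `dp == 0, datalen == 0` the broken form reports no overrun at all; the rewritten form correctly flags it. The same reasoning applies to the `n > datalen - dp` long-form length check.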
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index acf411c..bcfc58b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -285,28 +285,37 @@ EXPORT_SYMBOL(nr_online_nodes);
int page_group_by_mobility_disabled __read_mostly;
#ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
+
+/*
+ * Determine how many pages need to be initialized during early boot
+ * (non-deferred initialization).
+ * The value of first_deferred_pfn will be set later, once non-deferred pages
+ * are initialized, but for now set it to ULONG_MAX.
+ */
static inline void reset_deferred_meminit(pg_data_t *pgdat)
{
- unsigned long max_initialise;
- unsigned long reserved_lowmem;
+ phys_addr_t start_addr, end_addr;
+ unsigned long max_pgcnt;
+ unsigned long reserved;
/*
* Initialise at least 2G of a node, but also take into account the
* two large system hashes that can take up 1GB for 0.25TB/node.
*/
- max_initialise = max(2UL << (30 - PAGE_SHIFT),
- (pgdat->node_spanned_pages >> 8));
+ max_pgcnt = max(2UL << (30 - PAGE_SHIFT),
+ (pgdat->node_spanned_pages >> 8));
/*
* Compensate for all the memblock reservations (e.g. crash kernel)
* from the initial estimation to make sure we will initialize enough
* memory to boot.
*/
- reserved_lowmem = memblock_reserved_memory_within(pgdat->node_start_pfn,
- pgdat->node_start_pfn + max_initialise);
- max_initialise += reserved_lowmem;
+ start_addr = PFN_PHYS(pgdat->node_start_pfn);
+ end_addr = PFN_PHYS(pgdat->node_start_pfn + max_pgcnt);
+ reserved = memblock_reserved_memory_within(start_addr, end_addr);
+ max_pgcnt += PHYS_PFN(reserved);
- pgdat->static_init_size = min(max_initialise, pgdat->node_spanned_pages);
+ pgdat->static_init_pgcnt = min(max_pgcnt, pgdat->node_spanned_pages);
pgdat->first_deferred_pfn = ULONG_MAX;
}
@@ -333,7 +342,7 @@ static inline bool update_defer_init(pg_data_t *pgdat,
if (zone_end < pgdat_end_pfn(pgdat))
return true;
(*nr_initialised)++;
- if ((*nr_initialised > pgdat->static_init_size) &&
+ if ((*nr_initialised > pgdat->static_init_pgcnt) &&
(pfn & (PAGES_PER_SECTION - 1)) == 0) {
pgdat->first_deferred_pfn = pfn;
return false;
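The reset_deferred_meminit() hunk above switches the bookkeeping from bytes to page counts: the memblock reservation query takes physical addresses, so the PFN range is converted with PFN_PHYS() before the query and the reserved byte total is converted back to pages with PHYS_PFN() before being added to `max_pgcnt`. A quick self-contained check of that arithmetic, assuming 4K pages (PAGE_SHIFT values vary by architecture):

```c
#include <assert.h>

#define PAGE_SHIFT 12
#define PFN_PHYS(pfn)  ((unsigned long long)(pfn) << PAGE_SHIFT)
#define PHYS_PFN(addr) ((unsigned long)((addr) >> PAGE_SHIFT))

/*
 * pages to initialize = base estimate + reserved bytes converted
 * back to a page count, mirroring the max_pgcnt update above.
 */
static unsigned long max_pgcnt_with_reserved(unsigned long base_pgcnt,
					     unsigned long long reserved_bytes)
{
	return base_pgcnt + PHYS_PFN(reserved_bytes);
}
```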
diff --git a/mm/page_owner.c b/mm/page_owner.c
index fe850b9..c4381d93 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -554,11 +554,17 @@ static void init_pages_in_zone(pg_data_t *pgdat, struct zone *zone)
continue;
/*
- * We are safe to check buddy flag and order, because
- * this is init stage and only single thread runs.
+ * To avoid having to grab zone->lock, be a little
+ * careful when reading buddy page order. The only
+ * danger is that we skip too much and potentially miss
+ * some early allocated pages, which is better than
+ * heavy lock contention.
*/
if (PageBuddy(page)) {
- pfn += (1UL << page_order(page)) - 1;
+ unsigned long order = page_order_unsafe(page);
+
+ if (order > 0 && order < MAX_ORDER)
+ pfn += (1UL << order) - 1;
continue;
}
@@ -577,6 +583,7 @@ static void init_pages_in_zone(pg_data_t *pgdat, struct zone *zone)
set_page_owner(page, 0, 0);
count++;
}
+ cond_resched();
}
pr_info("Node %d, zone %8s: page owner found early allocated %lu pages\n",
@@ -587,15 +594,12 @@ static void init_zones_in_node(pg_data_t *pgdat)
{
struct zone *zone;
struct zone *node_zones = pgdat->node_zones;
- unsigned long flags;
for (zone = node_zones; zone - node_zones < MAX_NR_ZONES; ++zone) {
if (!populated_zone(zone))
continue;
- spin_lock_irqsave(&zone->lock, flags);
init_pages_in_zone(pgdat, zone);
- spin_unlock_irqrestore(&zone->lock, flags);
}
}
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index 2072444..d95341c 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -142,8 +142,12 @@ static int walk_hugetlb_range(unsigned long addr, unsigned long end,
do {
next = hugetlb_entry_end(h, addr, end);
pte = huge_pte_offset(walk->mm, addr & hmask);
- if (pte && walk->hugetlb_entry)
+
+ if (pte)
err = walk->hugetlb_entry(pte, hmask, addr, next, walk);
+ else if (walk->pte_hole)
+ err = walk->pte_hole(addr, next, walk);
+
if (err)
break;
} while (addr = next, addr != end);
diff --git a/net/8021q/vlan.c b/net/8021q/vlan.c
index 8d213f9..4a47074 100644
--- a/net/8021q/vlan.c
+++ b/net/8021q/vlan.c
@@ -376,6 +376,9 @@ static int vlan_device_event(struct notifier_block *unused, unsigned long event,
dev->name);
vlan_vid_add(dev, htons(ETH_P_8021Q), 0);
}
+ if (event == NETDEV_DOWN &&
+ (dev->features & NETIF_F_HW_VLAN_CTAG_FILTER))
+ vlan_vid_del(dev, htons(ETH_P_8021Q), 0);
vlan_info = rtnl_dereference(dev->vlan_info);
if (!vlan_info)
@@ -423,9 +426,6 @@ static int vlan_device_event(struct notifier_block *unused, unsigned long event,
struct net_device *tmp;
LIST_HEAD(close_list);
- if (dev->features & NETIF_F_HW_VLAN_CTAG_FILTER)
- vlan_vid_del(dev, htons(ETH_P_8021Q), 0);
-
/* Put all VLANs for this dev in the down state too. */
vlan_group_for_each_dev(grp, i, vlandev) {
flgs = vlandev->flags;
diff --git a/net/core/dev.c b/net/core/dev.c
index 18de74e..5685744 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -1117,9 +1117,8 @@ static int dev_alloc_name_ns(struct net *net,
return ret;
}
-static int dev_get_valid_name(struct net *net,
- struct net_device *dev,
- const char *name)
+int dev_get_valid_name(struct net *net, struct net_device *dev,
+ const char *name)
{
BUG_ON(!net);
@@ -1135,6 +1134,7 @@ static int dev_get_valid_name(struct net *net,
return 0;
}
+EXPORT_SYMBOL(dev_get_valid_name);
/**
* dev_change_name - change name of a device
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index ba1146c..d7ecf40 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -4381,6 +4381,7 @@ void skb_scrub_packet(struct sk_buff *skb, bool xnet)
if (!xnet)
return;
+ ipvs_reset(skb);
skb_orphan(skb);
skb->mark = 0;
}
diff --git a/net/core/sock.c b/net/core/sock.c
index c6f42ee..1d88335 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -1534,6 +1534,7 @@ struct sock *sk_clone_lock(const struct sock *sk, const gfp_t priority)
newsk->sk_userlocks = sk->sk_userlocks & ~SOCK_BINDPORT_LOCK;
sock_reset_flag(newsk, SOCK_DONE);
+ cgroup_sk_alloc(&newsk->sk_cgrp_data);
skb_queue_head_init(&newsk->sk_error_queue);
filter = rcu_dereference_protected(newsk->sk_filter, 1);
@@ -1568,8 +1569,6 @@ struct sock *sk_clone_lock(const struct sock *sk, const gfp_t priority)
atomic64_set(&newsk->sk_cookie, 0);
mem_cgroup_sk_alloc(newsk);
- cgroup_sk_alloc(&newsk->sk_cgrp_data);
-
/*
* Before updating sk_refcnt, we must commit prior changes to memory
* (Documentation/RCU/rculist_nulls.txt for details)
diff --git a/net/core/sock_reuseport.c b/net/core/sock_reuseport.c
index 9a1a352..77f396b 100644
--- a/net/core/sock_reuseport.c
+++ b/net/core/sock_reuseport.c
@@ -36,9 +36,14 @@ int reuseport_alloc(struct sock *sk)
* soft irq of receive path or setsockopt from process context
*/
spin_lock_bh(&reuseport_lock);
- WARN_ONCE(rcu_dereference_protected(sk->sk_reuseport_cb,
- lockdep_is_held(&reuseport_lock)),
- "multiple allocations for the same socket");
+
+ /* Allocation attempts can occur concurrently via the setsockopt path
+ * and the bind/hash path. Nothing to do when we lose the race.
+ */
+ if (rcu_dereference_protected(sk->sk_reuseport_cb,
+ lockdep_is_held(&reuseport_lock)))
+ goto out;
+
reuse = __reuseport_alloc(INIT_SOCKS);
if (!reuse) {
spin_unlock_bh(&reuseport_lock);
@@ -49,6 +54,7 @@ int reuseport_alloc(struct sock *sk)
reuse->num_socks = 1;
rcu_assign_pointer(sk->sk_reuseport_cb, reuse);
+out:
spin_unlock_bh(&reuseport_lock);
return 0;
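The sock_reuseport.c hunk replaces a `WARN_ONCE` with a benign early-out: two paths (setsockopt and bind/hash) can race to allocate the reuseport callback, and losing the race is now simply a no-op. A simplified single-threaded sketch of that idempotent-allocation pattern (locking elided into comments; types are stand-ins, not the kernel's):

```c
#include <assert.h>
#include <stdlib.h>

/* Sketch of the reuseport_alloc() race fix: if another path already
 * installed the callback, skip allocation instead of warning. */
struct sock { void *reuseport_cb; };

static int reuseport_alloc(struct sock *sk)
{
    /* spin_lock_bh(&reuseport_lock); */
    if (sk->reuseport_cb)
        goto out;                  /* lost the race: nothing to do */
    sk->reuseport_cb = malloc(16); /* __reuseport_alloc(INIT_SOCKS) */
    if (!sk->reuseport_cb)
        return -1;                 /* -ENOMEM in the kernel */
out:
    /* spin_unlock_bh(&reuseport_lock); */
    return 0;
}
```

Calling it twice on the same socket leaves the first allocation in place, which is exactly why the later inet_hashtables.c and udp.c hunks can drop their own "already allocated" checks and call `reuseport_alloc()` unconditionally.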
diff --git a/net/dccp/ipv4.c b/net/dccp/ipv4.c
index 8fc1600..8c7799cd 100644
--- a/net/dccp/ipv4.c
+++ b/net/dccp/ipv4.c
@@ -414,8 +414,7 @@ struct sock *dccp_v4_request_recv_sock(const struct sock *sk,
sk_daddr_set(newsk, ireq->ir_rmt_addr);
sk_rcv_saddr_set(newsk, ireq->ir_loc_addr);
newinet->inet_saddr = ireq->ir_loc_addr;
- newinet->inet_opt = ireq->opt;
- ireq->opt = NULL;
+ RCU_INIT_POINTER(newinet->inet_opt, rcu_dereference(ireq->ireq_opt));
newinet->mc_index = inet_iif(skb);
newinet->mc_ttl = ip_hdr(skb)->ttl;
newinet->inet_id = jiffies;
@@ -430,7 +429,10 @@ struct sock *dccp_v4_request_recv_sock(const struct sock *sk,
if (__inet_inherit_port(sk, newsk) < 0)
goto put_and_exit;
*own_req = inet_ehash_nolisten(newsk, req_to_sk(req_unhash));
-
+ if (*own_req)
+ ireq->ireq_opt = NULL;
+ else
+ newinet->inet_opt = NULL;
return newsk;
exit_overflow:
@@ -441,6 +443,7 @@ struct sock *dccp_v4_request_recv_sock(const struct sock *sk,
__NET_INC_STATS(sock_net(sk), LINUX_MIB_LISTENDROPS);
return NULL;
put_and_exit:
+ newinet->inet_opt = NULL;
inet_csk_prepare_forced_close(newsk);
dccp_done(newsk);
goto exit;
@@ -492,7 +495,7 @@ static int dccp_v4_send_response(const struct sock *sk, struct request_sock *req
ireq->ir_rmt_addr);
err = ip_build_and_send_pkt(skb, sk, ireq->ir_loc_addr,
ireq->ir_rmt_addr,
- ireq->opt);
+ ireq_opt_deref(ireq));
err = net_xmit_eval(err);
}
@@ -548,7 +551,7 @@ static void dccp_v4_ctl_send_reset(const struct sock *sk, struct sk_buff *rxskb)
static void dccp_v4_reqsk_destructor(struct request_sock *req)
{
dccp_feat_list_purge(&dccp_rsk(req)->dreq_featneg);
- kfree(inet_rsk(req)->opt);
+ kfree(rcu_dereference_protected(inet_rsk(req)->ireq_opt, 1));
}
void dccp_syn_ack_timeout(const struct request_sock *req)
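The dccp/ipv4.c hunks convert `ireq->opt` to the RCU-managed `ireq_opt` and, crucially, make ownership of the IP options explicit: exactly one of the request socket and the child socket keeps the pointer depending on whether the ehash insert (`*own_req`) succeeded, so the options are freed at most once. A hedged illustration of that handoff rule (plain pointers and `malloc`/`free` in place of RCU and `kfree_rcu`; not kernel API):

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative model of the ireq_opt ownership handoff in
 * dccp_v4_request_recv_sock() / tcp_v4_syn_recv_sock(). */
struct opts { int dummy; };

static void handoff(struct opts **req_opt, struct opts **child_opt, int own_req)
{
    *child_opt = *req_opt;  /* RCU_INIT_POINTER(newinet->inet_opt, ...) */
    if (own_req)
        *req_opt = NULL;    /* child owns it; reqsk destructor frees nothing */
    else
        *child_opt = NULL;  /* request keeps it; child must not free it */
}
```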
diff --git a/net/dsa/Kconfig b/net/dsa/Kconfig
index 96e47c5..39bb5b3 100644
--- a/net/dsa/Kconfig
+++ b/net/dsa/Kconfig
@@ -1,12 +1,13 @@
config HAVE_NET_DSA
def_bool y
- depends on NETDEVICES && !S390
+ depends on INET && NETDEVICES && !S390
# Drivers must select NET_DSA and the appropriate tagging format
config NET_DSA
tristate "Distributed Switch Architecture"
- depends on HAVE_NET_DSA && NET_SWITCHDEV
+ depends on HAVE_NET_DSA
+ select NET_SWITCHDEV
select PHYLIB
---help---
Say Y if you want to enable support for the hardware switches supported
diff --git a/net/embms_kernel/embms_kernel.c b/net/embms_kernel/embms_kernel.c
index 7b79574..3bbe51b 100644
--- a/net/embms_kernel/embms_kernel.c
+++ b/net/embms_kernel/embms_kernel.c
@@ -62,7 +62,6 @@ static int handle_multicast_stream(struct sk_buff *skb)
{
struct iphdr *iph;
struct udphdr *udph;
- struct in_device *in_dev;
unsigned char *tmp_ptr = NULL;
struct sk_buff *skb_new = NULL;
struct sk_buff *skb_cpy = NULL;
@@ -396,12 +395,9 @@ static void print_tmgi_to_client_table(void)
int delete_tmgi_entry_from_table(char *buffer)
{
- int i;
struct tmgi_to_clnt_info_update *info_update;
- char message_buffer[sizeof(struct tmgi_to_clnt_info_update)];
struct clnt_info *temp_client = NULL;
struct tmgi_to_clnt_info *temp_tmgi = NULL;
- struct list_head *tmgi_entry_ptr, *prev_tmgi_entry_ptr;
struct list_head *clnt_ptr, *prev_clnt_ptr;
embms_debug("delete_tmgi_entry_from_table: Enter\n");
@@ -477,13 +473,10 @@ int delete_tmgi_entry_from_table(char *buffer)
*/
int delete_client_entry_from_all_tmgi(char *buffer)
{
- int i;
struct tmgi_to_clnt_info_update *info_update;
- char message_buffer[sizeof(struct tmgi_to_clnt_info_update)];
struct clnt_info *temp_client = NULL;
struct tmgi_to_clnt_info *tmgi = NULL;
struct list_head *tmgi_entry_ptr, *prev_tmgi_entry_ptr;
- struct list_head *clnt_ptr, *prev_clnt_ptr;
/* We use this function when we want to delete any
* client entry from all TMGI entries. This scenario
@@ -574,18 +567,11 @@ int delete_client_entry_from_all_tmgi(char *buffer)
*/
int add_client_entry_to_table(char *buffer)
{
- int i, ret;
+ int ret;
struct tmgi_to_clnt_info_update *info_update;
- char message_buffer[sizeof(struct tmgi_to_clnt_info_update)];
struct clnt_info *new_client = NULL;
- struct clnt_info *temp_client = NULL;
- struct tmgi_to_clnt_info *new_tmgi = NULL;
struct tmgi_to_clnt_info *tmgi = NULL;
- struct list_head *tmgi_entry_ptr, *prev_tmgi_entry_ptr;
- struct list_head *clnt_ptr, *prev_clnt_ptr;
struct neighbour *neigh_entry;
- struct in_device *iface_dev;
- struct in_ifaddr *iface_info;
embms_debug("add_client_entry_to_table: Enter\n");
@@ -699,13 +685,9 @@ int add_client_entry_to_table(char *buffer)
*/
int delete_client_entry_from_table(char *buffer)
{
- int i;
struct tmgi_to_clnt_info_update *info_update;
- char message_buffer[sizeof(struct tmgi_to_clnt_info_update)];
struct clnt_info *temp_client = NULL;
struct tmgi_to_clnt_info *temp_tmgi = NULL;
- struct list_head *tmgi_entry_ptr, *prev_tmgi_entry_ptr;
- struct list_head *clnt_ptr, *prev_clnt_ptr;
embms_debug("delete_client_entry_from_table: Enter\n");
@@ -796,11 +778,10 @@ int delete_client_entry_from_table(char *buffer)
 * Return: Success if function call returns SUCCESS, error otherwise.
*/
-int embms_device_ioctl(struct file *file, unsigned int ioctl_num,
- unsigned long ioctl_param)
+long embms_device_ioctl(struct file *file, unsigned int ioctl_num,
+ unsigned long ioctl_param)
{
- int i, ret, error;
- char *temp;
+ int ret;
char buffer[BUF_LEN];
struct in_device *iface_dev;
struct in_ifaddr *iface_info;
diff --git a/net/ipv4/ah4.c b/net/ipv4/ah4.c
index f2a7102..22377c8 100644
--- a/net/ipv4/ah4.c
+++ b/net/ipv4/ah4.c
@@ -270,6 +270,9 @@ static void ah_input_done(struct crypto_async_request *base, int err)
int ihl = ip_hdrlen(skb);
int ah_hlen = (ah->hdrlen + 2) << 2;
+ if (err)
+ goto out;
+
work_iph = AH_SKB_CB(skb)->tmp;
auth_data = ah_tmp_auth(work_iph, ihl);
icv = ah_tmp_icv(ahp->ahash, auth_data, ahp->icv_trunc_len);
diff --git a/net/ipv4/cipso_ipv4.c b/net/ipv4/cipso_ipv4.c
index ae20616..972353c 100644
--- a/net/ipv4/cipso_ipv4.c
+++ b/net/ipv4/cipso_ipv4.c
@@ -1943,7 +1943,7 @@ int cipso_v4_req_setattr(struct request_sock *req,
buf = NULL;
req_inet = inet_rsk(req);
- opt = xchg(&req_inet->opt, opt);
+ opt = xchg((__force struct ip_options_rcu **)&req_inet->ireq_opt, opt);
if (opt)
kfree_rcu(opt, rcu);
@@ -1965,11 +1965,13 @@ int cipso_v4_req_setattr(struct request_sock *req,
* values on failure.
*
*/
-static int cipso_v4_delopt(struct ip_options_rcu **opt_ptr)
+static int cipso_v4_delopt(struct ip_options_rcu __rcu **opt_ptr)
{
+ struct ip_options_rcu *opt = rcu_dereference_protected(*opt_ptr, 1);
int hdr_delta = 0;
- struct ip_options_rcu *opt = *opt_ptr;
+ if (!opt || opt->opt.cipso == 0)
+ return 0;
if (opt->opt.srr || opt->opt.rr || opt->opt.ts || opt->opt.router_alert) {
u8 cipso_len;
u8 cipso_off;
@@ -2031,14 +2033,10 @@ static int cipso_v4_delopt(struct ip_options_rcu **opt_ptr)
*/
void cipso_v4_sock_delattr(struct sock *sk)
{
- int hdr_delta;
- struct ip_options_rcu *opt;
struct inet_sock *sk_inet;
+ int hdr_delta;
sk_inet = inet_sk(sk);
- opt = rcu_dereference_protected(sk_inet->inet_opt, 1);
- if (!opt || opt->opt.cipso == 0)
- return;
hdr_delta = cipso_v4_delopt(&sk_inet->inet_opt);
if (sk_inet->is_icsk && hdr_delta > 0) {
@@ -2058,15 +2056,7 @@ void cipso_v4_sock_delattr(struct sock *sk)
*/
void cipso_v4_req_delattr(struct request_sock *req)
{
- struct ip_options_rcu *opt;
- struct inet_request_sock *req_inet;
-
- req_inet = inet_rsk(req);
- opt = req_inet->opt;
- if (!opt || opt->opt.cipso == 0)
- return;
-
- cipso_v4_delopt(&req_inet->opt);
+ cipso_v4_delopt(&inet_rsk(req)->ireq_opt);
}
/**
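The cipso_ipv4.c hunks move the "no options / no CIPSO tag" guard from the two callers into `cipso_v4_delopt()` itself, so both `cipso_v4_sock_delattr()` and `cipso_v4_req_delattr()` can simply hand over the pointer slot. A loose stand-alone model of that consolidation (the `struct ip_opts` type, the return value standing in for `hdr_delta`, and the NULL-out standing in for `kfree_rcu` are all assumptions for illustration):

```c
#include <assert.h>
#include <stddef.h>

/* Sketch: the helper now validates the slot itself before stripping. */
struct ip_opts { int cipso; };  /* nonzero: CIPSO option bytes present */

static int delopt(struct ip_opts **slot)
{
    struct ip_opts *opt = *slot;
    if (!opt || opt->cipso == 0)
        return 0;             /* nothing to strip: callers need no pre-check */
    *slot = NULL;             /* kfree_rcu(opt, rcu) in the kernel */
    return opt->cipso;        /* plays the role of hdr_delta */
}
```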
diff --git a/net/ipv4/gre_offload.c b/net/ipv4/gre_offload.c
index d5cac99..8c72034 100644
--- a/net/ipv4/gre_offload.c
+++ b/net/ipv4/gre_offload.c
@@ -98,7 +98,7 @@ static struct sk_buff *gre_gso_segment(struct sk_buff *skb,
greh = (struct gre_base_hdr *)skb_transport_header(skb);
pcsum = (__sum16 *)(greh + 1);
- if (gso_partial) {
+ if (gso_partial && skb_is_gso(skb)) {
unsigned int partial_adj;
/* Adjust checksum to account for the fact that
diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
index c094ac9..11558ca 100644
--- a/net/ipv4/inet_connection_sock.c
+++ b/net/ipv4/inet_connection_sock.c
@@ -414,9 +414,11 @@ struct dst_entry *inet_csk_route_req(const struct sock *sk,
{
const struct inet_request_sock *ireq = inet_rsk(req);
struct net *net = read_pnet(&ireq->ireq_net);
- struct ip_options_rcu *opt = ireq->opt;
+ struct ip_options_rcu *opt;
struct rtable *rt;
+ opt = ireq_opt_deref(ireq);
+
flowi4_init_output(fl4, ireq->ir_iif, ireq->ir_mark,
RT_CONN_FLAGS(sk), RT_SCOPE_UNIVERSE,
sk->sk_protocol, inet_sk_flowi_flags(sk),
@@ -450,10 +452,9 @@ struct dst_entry *inet_csk_route_child_sock(const struct sock *sk,
struct flowi4 *fl4;
struct rtable *rt;
+ opt = rcu_dereference(ireq->ireq_opt);
fl4 = &newinet->cork.fl.u.ip4;
- rcu_read_lock();
- opt = rcu_dereference(newinet->inet_opt);
flowi4_init_output(fl4, ireq->ir_iif, ireq->ir_mark,
RT_CONN_FLAGS(sk), RT_SCOPE_UNIVERSE,
sk->sk_protocol, inet_sk_flowi_flags(sk),
@@ -466,13 +467,11 @@ struct dst_entry *inet_csk_route_child_sock(const struct sock *sk,
goto no_route;
if (opt && opt->opt.is_strictroute && rt->rt_uses_gateway)
goto route_err;
- rcu_read_unlock();
return &rt->dst;
route_err:
ip_rt_put(rt);
no_route:
- rcu_read_unlock();
__IP_INC_STATS(net, IPSTATS_MIB_OUTNOROUTES);
return NULL;
}
diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
index ca97835..b9bcf3d 100644
--- a/net/ipv4/inet_hashtables.c
+++ b/net/ipv4/inet_hashtables.c
@@ -455,10 +455,7 @@ static int inet_reuseport_add_sock(struct sock *sk,
return reuseport_add_sock(sk, sk2);
}
- /* Initial allocation may have already happened via setsockopt */
- if (!rcu_access_pointer(sk->sk_reuseport_cb))
- return reuseport_alloc(sk);
- return 0;
+ return reuseport_alloc(sk);
}
int __inet_hash(struct sock *sk, struct sock *osk,
diff --git a/net/ipv4/ipip.c b/net/ipv4/ipip.c
index c939258..56d71a0 100644
--- a/net/ipv4/ipip.c
+++ b/net/ipv4/ipip.c
@@ -128,43 +128,68 @@ static struct rtnl_link_ops ipip_link_ops __read_mostly;
static int ipip_err(struct sk_buff *skb, u32 info)
{
-
-/* All the routers (except for Linux) return only
- 8 bytes of packet payload. It means, that precise relaying of
- ICMP in the real Internet is absolutely infeasible.
- */
+ /* All the routers (except for Linux) return only
+ * 8 bytes of packet payload. It means, that precise relaying of
+ * ICMP in the real Internet is absolutely infeasible.
+ */
struct net *net = dev_net(skb->dev);
struct ip_tunnel_net *itn = net_generic(net, ipip_net_id);
const struct iphdr *iph = (const struct iphdr *)skb->data;
- struct ip_tunnel *t;
- int err;
const int type = icmp_hdr(skb)->type;
const int code = icmp_hdr(skb)->code;
+ struct ip_tunnel *t;
+ int err = 0;
- err = -ENOENT;
+ switch (type) {
+ case ICMP_DEST_UNREACH:
+ switch (code) {
+ case ICMP_SR_FAILED:
+ /* Impossible event. */
+ goto out;
+ default:
+ /* All others are translated to HOST_UNREACH.
+ * rfc2003 contains "deep thoughts" about NET_UNREACH,
+ * I believe they are just ether pollution. --ANK
+ */
+ break;
+ }
+ break;
+
+ case ICMP_TIME_EXCEEDED:
+ if (code != ICMP_EXC_TTL)
+ goto out;
+ break;
+
+ case ICMP_REDIRECT:
+ break;
+
+ default:
+ goto out;
+ }
+
t = ip_tunnel_lookup(itn, skb->dev->ifindex, TUNNEL_NO_KEY,
iph->daddr, iph->saddr, 0);
- if (!t)
+ if (!t) {
+ err = -ENOENT;
goto out;
+ }
if (type == ICMP_DEST_UNREACH && code == ICMP_FRAG_NEEDED) {
- ipv4_update_pmtu(skb, dev_net(skb->dev), info,
- t->parms.link, 0, iph->protocol, 0);
- err = 0;
+ ipv4_update_pmtu(skb, net, info, t->parms.link, 0,
+ iph->protocol, 0);
goto out;
}
if (type == ICMP_REDIRECT) {
- ipv4_redirect(skb, dev_net(skb->dev), t->parms.link, 0,
- iph->protocol, 0);
- err = 0;
+ ipv4_redirect(skb, net, t->parms.link, 0, iph->protocol, 0);
goto out;
}
- if (t->parms.iph.daddr == 0)
+ if (t->parms.iph.daddr == 0) {
+ err = -ENOENT;
goto out;
+ }
- err = 0;
if (t->parms.iph.ttl == 0 && type == ICMP_TIME_EXCEEDED)
goto out;
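The ipip.c rewrite hoists the ICMP type/code filtering to the top of `ipip_err()`, so uninteresting errors bail out before the tunnel lookup. The decision table it implements can be captured as a small predicate (constants mirror the `<linux/icmp.h>` values; the function name is illustrative):

```c
#include <assert.h>
#include <stdbool.h>

/* Which ICMP errors ipip_err() keeps processing after this patch. */
#define ICMP_DEST_UNREACH  3
#define ICMP_REDIRECT      5
#define ICMP_TIME_EXCEEDED 11
#define ICMP_SR_FAILED     5   /* code under DEST_UNREACH */
#define ICMP_EXC_TTL       0   /* code under TIME_EXCEEDED */

static bool ipip_err_wanted(int type, int code)
{
    switch (type) {
    case ICMP_DEST_UNREACH:
        return code != ICMP_SR_FAILED; /* "Impossible event." */
    case ICMP_TIME_EXCEEDED:
        return code == ICMP_EXC_TTL;
    case ICMP_REDIRECT:
        return true;
    default:
        return false;
    }
}
```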
diff --git a/net/ipv4/netfilter/ipt_NATTYPE.c b/net/ipv4/netfilter/ipt_NATTYPE.c
index b8597d2..bed569f 100644
--- a/net/ipv4/netfilter/ipt_NATTYPE.c
+++ b/net/ipv4/netfilter/ipt_NATTYPE.c
@@ -24,6 +24,7 @@
* Ubicom32 implementation derived from
* Cameo's implementation(with many thanks):
*/
+
#include <linux/types.h>
#include <linux/ip.h>
#include <linux/udp.h>
@@ -36,21 +37,17 @@
#include <linux/tcp.h>
#include <net/netfilter/nf_conntrack.h>
#include <net/netfilter/nf_conntrack_core.h>
-#include <net/netfilter/nf_nat_rule.h>
+#include <net/netfilter/nf_nat.h>
#include <linux/netfilter/x_tables.h>
#include <linux/netfilter_ipv4/ipt_NATTYPE.h>
#include <linux/atomic.h>
-#if !defined(NATTYPE_DEBUG)
-#define DEBUGP(type, args...)
-#else
static const char * const types[] = {"TYPE_PORT_ADDRESS_RESTRICTED",
"TYPE_ENDPOINT_INDEPENDENT",
"TYPE_ADDRESS_RESTRICTED"};
static const char * const modes[] = {"MODE_DNAT", "MODE_FORWARD_IN",
"MODE_FORWARD_OUT"};
#define DEBUGP(args...) pr_debug(args)
-#endif
/* netfilter NATTYPE TODO:
* Add magic value checks to data structure.
@@ -58,13 +55,17 @@ static const char * const modes[] = {"MODE_DNAT", "MODE_FORWARD_IN",
struct ipt_nattype {
struct list_head list;
struct timer_list timeout;
+ unsigned long timeout_value;
+ unsigned int nattype_cookie;
unsigned short proto; /* Protocol: TCP or UDP */
- struct nf_nat_ipv4_range range; /* LAN side src info*/
+ struct nf_nat_range range; /* LAN side source information */
unsigned short nat_port; /* Routed NAT port */
unsigned int dest_addr; /* Original egress packets dst addr */
unsigned short dest_port;/* Original egress packets destination port */
};
+#define NATTYPE_COOKIE 0x11abcdef
+
/* TODO: It might be better to use a hash table for performance in
* heavy traffic.
*/
@@ -77,11 +78,13 @@ static DEFINE_SPINLOCK(nattype_lock);
static void nattype_nte_debug_print(const struct ipt_nattype *nte,
const char *s)
{
- DEBUGP("%p: %s - proto[%d], src[%pI4:%d], nat[<x>:%d], dest[%pI4:%d]\n",
+ DEBUGP("%p:%s-proto[%d],src[%pI4:%d],nat[%d],dest[%pI4:%d]\n",
nte, s, nte->proto,
- &nte->range.min_ip, ntohs(nte->range.min.all),
+ &nte->range.min_addr.ip, ntohs(nte->range.min_proto.all),
ntohs(nte->nat_port),
&nte->dest_addr, ntohs(nte->dest_port));
+ DEBUGP("Timeout[%lx], Expires[%lx]\n", nte->timeout_value,
+ nte->timeout.expires);
}
/* netfilter NATTYPE nattype_free()
@@ -89,20 +92,31 @@ static void nattype_nte_debug_print(const struct ipt_nattype *nte,
*/
static void nattype_free(struct ipt_nattype *nte)
{
- nattype_nte_debug_print(nte, "free");
kfree(nte);
}
/* netfilter NATTYPE nattype_refresh_timer()
* Refresh the timer for this object.
*/
-static bool nattype_refresh_timer(struct ipt_nattype *nte)
+bool nattype_refresh_timer(unsigned long nat_type, unsigned long timeout_value)
{
+ struct ipt_nattype *nte = (struct ipt_nattype *)nat_type;
+
+ if (!nte)
+ return false;
+ spin_lock_bh(&nattype_lock);
+ if (nte->nattype_cookie != NATTYPE_COOKIE) {
+ spin_unlock_bh(&nattype_lock);
+ return false;
+ }
if (del_timer(&nte->timeout)) {
- nte->timeout.expires = jiffies + NATTYPE_TIMEOUT * HZ;
+ nte->timeout.expires = timeout_value;
add_timer(&nte->timeout);
+ spin_unlock_bh(&nattype_lock);
+ nattype_nte_debug_print(nte, "refresh");
return true;
}
+ spin_unlock_bh(&nattype_lock);
return false;
}
@@ -121,6 +135,7 @@ static void nattype_timer_timeout(unsigned long in_nattype)
nattype_nte_debug_print(nte, "timeout");
spin_lock_bh(&nattype_lock);
list_del(&nte->list);
+ memset(nte, 0, sizeof(struct ipt_nattype));
spin_unlock_bh(&nattype_lock);
nattype_free(nte);
}
@@ -200,7 +215,8 @@ static bool nattype_packet_in_match(const struct ipt_nattype *nte,
/* netfilter NATTYPE nattype_compare
* Compare two entries, return true if relevant fields are the same.
*/
-static bool nattype_compare(struct ipt_nattype *n1, struct ipt_nattype *n2)
+static bool nattype_compare(struct ipt_nattype *n1, struct ipt_nattype *n2,
+ const struct ipt_nattype_info *info)
{
/* netfilter NATTYPE Protocol
* compare.
@@ -215,16 +231,16 @@ static bool nattype_compare(struct ipt_nattype *n1, struct ipt_nattype *n2)
* Since we always keep min/max values the same,
* just compare the min values.
*/
- if (n1->range.min_ip != n2->range.min_ip) {
- DEBUGP("nattype_compare: r.min_ip mismatch: %pI4:%pI4\n",
- &n1->range.min_ip, &n2->range.min_ip);
+ if (n1->range.min_addr.ip != n2->range.min_addr.ip) {
+ DEBUGP("nattype_compare: r.min_addr.ip mismatch: %pI4:%pI4\n",
+ &n1->range.min_addr.ip, &n2->range.min_addr.ip);
return false;
}
- if (n1->range.min.all != n2->range.min.all) {
+ if (n1->range.min_proto.all != n2->range.min_proto.all) {
DEBUGP("nattype_compare: r.min mismatch: %d:%d\n",
- ntohs(n1->range.min.all),
- ntohs(n2->range.min.all));
+ ntohs(n1->range.min_proto.all),
+ ntohs(n2->range.min_proto.all));
return false;
}
@@ -237,20 +253,16 @@ static bool nattype_compare(struct ipt_nattype *n1, struct ipt_nattype *n2)
return false;
}
- /* netfilter NATTYPE
- * Destination compare
+ /* netfilter NATTYPE Destination compare
+	 * Destination Compare for Address Restricted Cone NAT.
*/
- if (n1->dest_addr != n2->dest_addr) {
+ if ((info->type == TYPE_ADDRESS_RESTRICTED) &&
+ (n1->dest_addr != n2->dest_addr)) {
DEBUGP("nattype_compare: dest_addr mismatch: %pI4:%pI4\n",
&n1->dest_addr, &n2->dest_addr);
return false;
}
- if (n1->dest_port != n2->dest_port) {
- DEBUGP("nattype_compare: dest_port mismatch: %d:%d\n",
- ntohs(n1->dest_port), ntohs(n2->dest_port));
- return false;
- }
return true;
}
@@ -270,7 +282,7 @@ static unsigned int nattype_nat(struct sk_buff *skb,
list_for_each_entry(nte, &nattype_list, list) {
struct nf_conn *ct;
enum ip_conntrack_info ctinfo;
- struct nf_nat_ipv4_range newrange;
+ struct nf_nat_range newrange;
unsigned int ret;
if (!nattype_packet_in_match(nte, skb, par->targinfo))
@@ -291,11 +303,22 @@ static unsigned int nattype_nat(struct sk_buff *skb,
return XT_CONTINUE;
}
- /* Expand the ingress conntrack
- * to include the reply as source
+ /* netfilter
+ * Refresh the timer, if we fail, break
+ * out and forward fail as though we never
+ * found the entry.
+ */
+ if (!nattype_refresh_timer((unsigned long)nte,
+ jiffies + nte->timeout_value))
+ break;
+
+ /* netfilter
+ * Expand the ingress conntrack to include the reply as source
*/
DEBUGP("Expand ingress conntrack=%p, type=%d, src[%pI4:%d]\n",
- ct, ctinfo, &newrange.min_ip, ntohs(newrange.min.all));
+ ct, ctinfo, &newrange.min_addr.ip,
+ ntohs(newrange.min_proto.all));
+ ct->nattype_entry = (unsigned long)nte;
ret = nf_nat_setup_info(ct, &newrange, NF_NAT_MANIP_DST);
DEBUGP("Expand returned: %d\n", ret);
return ret;
@@ -318,12 +341,22 @@ static unsigned int nattype_forward(struct sk_buff *skb,
enum ip_conntrack_info ctinfo;
const struct ipt_nattype_info *info = par->targinfo;
u16 nat_port;
+ enum ip_conntrack_dir dir;
- if (par->hooknum != NF_INET_FORWARD)
+
+ if (par->hooknum != NF_INET_POST_ROUTING)
return XT_CONTINUE;
- /* Ingress packet,
- * refresh the timer if we find an entry.
+ /* netfilter
+ * Egress packet, create a new rule in our list. If conntrack does
+ * not have an entry, skip this packet.
+ */
+ ct = nf_ct_get(skb, &ctinfo);
+ if (!ct)
+ return XT_CONTINUE;
+
+ /* netfilter
+ * Ingress packet, refresh the timer if we find an entry.
*/
if (info->mode == MODE_FORWARD_IN) {
spin_lock_bh(&nattype_lock);
@@ -335,12 +368,14 @@ static unsigned int nattype_forward(struct sk_buff *skb,
if (!nattype_packet_in_match(nte, skb, info))
continue;
+ spin_unlock_bh(&nattype_lock);
/* netfilter NATTYPE
* Refresh the timer, if we fail, break
* out and forward fail as though we never
* found the entry.
*/
- if (!nattype_refresh_timer(nte))
+ if (!nattype_refresh_timer((unsigned long)nte,
+ ct->timeout.expires))
break;
/* netfilter NATTYPE
@@ -348,7 +383,6 @@ static unsigned int nattype_forward(struct sk_buff *skb,
* entry values should not change so print
* them outside the lock.
*/
- spin_unlock_bh(&nattype_lock);
nattype_nte_debug_print(nte, "refresh");
DEBUGP("FORWARD_IN_ACCEPT\n");
return NF_ACCEPT;
@@ -358,15 +392,9 @@ static unsigned int nattype_forward(struct sk_buff *skb,
return XT_CONTINUE;
}
- /* netfilter NATTYPE
- * Egress packet, create a new rule in our list. If conntrack does
- * not have an entry, skip this packet.
- */
- ct = nf_ct_get(skb, &ctinfo);
- if (!ct || (ctinfo == IP_CT_NEW && ctinfo == IP_CT_RELATED))
- return XT_CONTINUE;
+ dir = CTINFO2DIR(ctinfo);
- nat_port = ct->tuplehash[IP_CT_DIR_REPLY].tuple.dst.u.all;
+ nat_port = ct->tuplehash[!dir].tuple.dst.u.all;
/* netfilter NATTYPE
* Allocate a new entry
@@ -382,20 +410,22 @@ static unsigned int nattype_forward(struct sk_buff *skb,
nte->proto = iph->protocol;
nte->nat_port = nat_port;
nte->dest_addr = iph->daddr;
- nte->range.min_ip = iph->saddr;
- nte->range.max_ip = nte->range.min_ip;
+ nte->range.min_addr.ip = iph->saddr;
+ nte->range.max_addr.ip = nte->range.min_addr.ip;
/* netfilter NATTYPE
	 * TODO: Would it be better to get this information from the
	 * conntrack instead of the headers?
*/
if (iph->protocol == IPPROTO_TCP) {
- nte->range.min.tcp.port = ((struct tcphdr *)protoh)->source;
- nte->range.max.tcp.port = nte->range.min.tcp.port;
+ nte->range.min_proto.tcp.port =
+ ((struct tcphdr *)protoh)->source;
+ nte->range.max_proto.tcp.port = nte->range.min_proto.tcp.port;
nte->dest_port = ((struct tcphdr *)protoh)->dest;
} else if (iph->protocol == IPPROTO_UDP) {
- nte->range.min.udp.port = ((struct udphdr *)protoh)->source;
- nte->range.max.udp.port = nte->range.min.udp.port;
+ nte->range.min_proto.udp.port =
+ ((struct udphdr *)protoh)->source;
+ nte->range.max_proto.udp.port = nte->range.min_proto.udp.port;
nte->dest_port = ((struct udphdr *)protoh)->dest;
}
nte->range.flags = (NF_NAT_RANGE_MAP_IPS |
@@ -416,15 +446,17 @@ static unsigned int nattype_forward(struct sk_buff *skb,
*/
spin_lock_bh(&nattype_lock);
list_for_each_entry(nte2, &nattype_list, list) {
- if (!nattype_compare(nte, nte2))
+ if (!nattype_compare(nte, nte2, info))
continue;
-
+ spin_unlock_bh(&nattype_lock);
/* netfilter NATTYPE
* If we can not refresh this entry, insert our new
* entry as this one is timed out and will be removed
* from the list shortly.
*/
- if (!nattype_refresh_timer(nte2))
+ if (!nattype_refresh_timer(
+ (unsigned long)nte2,
+ jiffies + nte2->timeout_value))
break;
/* netfilter NATTYPE
@@ -433,7 +465,6 @@ static unsigned int nattype_forward(struct sk_buff *skb,
*
* Free up the new entry.
*/
- spin_unlock_bh(&nattype_lock);
nattype_nte_debug_print(nte2, "refresh");
nattype_free(nte);
return XT_CONTINUE;
@@ -442,9 +473,12 @@ static unsigned int nattype_forward(struct sk_buff *skb,
/* netfilter NATTYPE
* Add the new entry to the list.
*/
- nte->timeout.expires = jiffies + (NATTYPE_TIMEOUT * HZ);
+ nte->timeout_value = ct->timeout.expires;
+ nte->timeout.expires = ct->timeout.expires + jiffies;
add_timer(&nte->timeout);
list_add(&nte->list, &nattype_list);
+ ct->nattype_entry = (unsigned long)nte;
+ nte->nattype_cookie = NATTYPE_COOKIE;
spin_unlock_bh(&nattype_lock);
nattype_nte_debug_print(nte, "ADD");
return XT_CONTINUE;
@@ -534,7 +568,7 @@ static int nattype_check(const struct xt_tgchk_param *par)
types[info->type], modes[info->mode]);
if (par->hook_mask & ~((1 << NF_INET_PRE_ROUTING) |
- (1 << NF_INET_FORWARD))) {
+ (1 << NF_INET_POST_ROUTING))) {
DEBUGP("nattype_check: bad hooks %x.\n", par->hook_mask);
return -EINVAL;
}
@@ -575,12 +609,14 @@ static struct xt_target nattype = {
.checkentry = nattype_check,
.targetsize = sizeof(struct ipt_nattype_info),
.hooks = ((1 << NF_INET_PRE_ROUTING) |
- (1 << NF_INET_FORWARD)),
+ (1 << NF_INET_POST_ROUTING)),
.me = THIS_MODULE,
};
static int __init init(void)
{
+ WARN_ON(nattype_refresh_timer);
+ RCU_INIT_POINTER(nattype_refresh_timer, nattype_refresh_timer_impl);
return xt_register_target(&nattype);
}
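A central hardening in the NATTYPE hunks is the `nattype_cookie` magic value: entries are zeroed on timeout before being freed, so a stale pointer handed back by conntrack fails the cookie check under the lock instead of refreshing freed memory. A simplified single-threaded model of that validate-then-refresh pattern (locking reduced to comments; `struct nattype_entry` and `refresh_timer` are illustrative names):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define NATTYPE_COOKIE 0x11abcdef

struct nattype_entry {
    unsigned int  cookie;   /* wiped by memset() in the timeout path */
    unsigned long expires;
};

static bool refresh_timer(struct nattype_entry *nte, unsigned long timeout)
{
    if (!nte)
        return false;
    /* spin_lock_bh(&nattype_lock); */
    if (nte->cookie != NATTYPE_COOKIE)
        return false;          /* entry already torn down: ignore */
    nte->expires = timeout;    /* del_timer()/add_timer() in the kernel */
    /* spin_unlock_bh(&nattype_lock); */
    return true;
}
```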
diff --git a/net/ipv4/netfilter/nf_nat_masquerade_ipv4.c b/net/ipv4/netfilter/nf_nat_masquerade_ipv4.c
index ea91058..1eda519 100644
--- a/net/ipv4/netfilter/nf_nat_masquerade_ipv4.c
+++ b/net/ipv4/netfilter/nf_nat_masquerade_ipv4.c
@@ -68,7 +68,13 @@ nf_nat_masquerade_ipv4(struct sk_buff *skb, unsigned int hooknum,
newrange.max_proto = range->max_proto;
/* Hand modified range to generic setup. */
+#if defined(CONFIG_IP_NF_TARGET_NATTYPE_MODULE)
+ nf_nat_setup_info(ct, &newrange, NF_NAT_MANIP_SRC);
+ return XT_CONTINUE;
+#else
return nf_nat_setup_info(ct, &newrange, NF_NAT_MANIP_SRC);
+#endif
+
}
EXPORT_SYMBOL_GPL(nf_nat_masquerade_ipv4);
diff --git a/net/ipv4/syncookies.c b/net/ipv4/syncookies.c
index f56a668..4487c71 100644
--- a/net/ipv4/syncookies.c
+++ b/net/ipv4/syncookies.c
@@ -354,7 +354,7 @@ struct sock *cookie_v4_check(struct sock *sk, struct sk_buff *skb)
	/* We threw the options of the initial SYN away, so we hope
* the ACK carries the same options again (see RFC1122 4.2.3.8)
*/
- ireq->opt = tcp_v4_save_options(skb);
+ RCU_INIT_POINTER(ireq->ireq_opt, tcp_v4_save_options(skb));
if (security_inet_conn_request(sk, skb, req)) {
reqsk_free(req);
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 4946e8f..0a57417 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -437,7 +437,7 @@ EXPORT_SYMBOL(tcp_init_sock);
static void tcp_tx_timestamp(struct sock *sk, u16 tsflags, struct sk_buff *skb)
{
- if (tsflags) {
+ if (tsflags && skb) {
struct skb_shared_info *shinfo = skb_shinfo(skb);
struct tcp_skb_cb *tcb = TCP_SKB_CB(skb);
@@ -972,10 +972,8 @@ static ssize_t do_tcp_sendpages(struct sock *sk, struct page *page, int offset,
copied += copy;
offset += copy;
size -= copy;
- if (!size) {
- tcp_tx_timestamp(sk, sk->sk_tsflags, skb);
+ if (!size)
goto out;
- }
if (skb->len < size_goal || (flags & MSG_OOB))
continue;
@@ -1001,8 +999,11 @@ static ssize_t do_tcp_sendpages(struct sock *sk, struct page *page, int offset,
}
out:
- if (copied && !(flags & MSG_SENDPAGE_NOTLAST))
- tcp_push(sk, flags, mss_now, tp->nonagle, size_goal);
+ if (copied) {
+ tcp_tx_timestamp(sk, sk->sk_tsflags, tcp_write_queue_tail(sk));
+ if (!(flags & MSG_SENDPAGE_NOTLAST))
+ tcp_push(sk, flags, mss_now, tp->nonagle, size_goal);
+ }
return copied;
do_error:
@@ -1295,7 +1296,6 @@ int tcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
copied += copy;
if (!msg_data_left(msg)) {
- tcp_tx_timestamp(sk, sockc.tsflags, skb);
if (unlikely(flags & MSG_EOR))
TCP_SKB_CB(skb)->eor = 1;
goto out;
@@ -1326,8 +1326,10 @@ int tcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
}
out:
- if (copied)
+ if (copied) {
+ tcp_tx_timestamp(sk, sockc.tsflags, tcp_write_queue_tail(sk));
tcp_push(sk, flags, mss_now, tp->nonagle, size_goal);
+ }
out_nopush:
release_sock(sk);
return copied + copied_syn;
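The tcp.c hunks move TX timestamping out of the per-chunk copy loop: the flags are now applied once, after all data is queued, to the last skb on the write queue (`tcp_write_queue_tail()`), and the helper gains a NULL check for the empty-queue case. A reduced sketch of that restructuring (types are simplified stand-ins, not the kernel's skb machinery):

```c
#include <assert.h>
#include <stddef.h>

/* Model of the reworked tcp_tx_timestamp() call site: stamp only the
 * tail buffer, tolerate a NULL tail. */
struct buf { int tsflags; };

static void tx_timestamp(int tsflags, struct buf *tail)
{
    if (tsflags && tail)     /* the patch adds the NULL (skb) check */
        tail->tsflags = tsflags;
}
```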
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 491b03a..ec9e58b 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -6239,7 +6239,7 @@ struct request_sock *inet_reqsk_alloc(const struct request_sock_ops *ops,
struct inet_request_sock *ireq = inet_rsk(req);
kmemcheck_annotate_bitfield(ireq, flags);
- ireq->opt = NULL;
+ ireq->ireq_opt = NULL;
#if IS_ENABLED(CONFIG_IPV6)
ireq->pktopts = NULL;
#endif
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index e8ab585..49d32fbc 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -864,7 +864,7 @@ static int tcp_v4_send_synack(const struct sock *sk, struct dst_entry *dst,
err = ip_build_and_send_pkt(skb, sk, ireq->ir_loc_addr,
ireq->ir_rmt_addr,
- ireq->opt);
+ ireq_opt_deref(ireq));
err = net_xmit_eval(err);
}
@@ -876,7 +876,7 @@ static int tcp_v4_send_synack(const struct sock *sk, struct dst_entry *dst,
*/
static void tcp_v4_reqsk_destructor(struct request_sock *req)
{
- kfree(inet_rsk(req)->opt);
+ kfree(rcu_dereference_protected(inet_rsk(req)->ireq_opt, 1));
}
#ifdef CONFIG_TCP_MD5SIG
@@ -1202,7 +1202,7 @@ static void tcp_v4_init_req(struct request_sock *req,
sk_rcv_saddr_set(req_to_sk(req), ip_hdr(skb)->daddr);
sk_daddr_set(req_to_sk(req), ip_hdr(skb)->saddr);
- ireq->opt = tcp_v4_save_options(skb);
+ RCU_INIT_POINTER(ireq->ireq_opt, tcp_v4_save_options(skb));
}
static struct dst_entry *tcp_v4_route_req(const struct sock *sk,
@@ -1298,10 +1298,9 @@ struct sock *tcp_v4_syn_recv_sock(const struct sock *sk, struct sk_buff *skb,
sk_daddr_set(newsk, ireq->ir_rmt_addr);
sk_rcv_saddr_set(newsk, ireq->ir_loc_addr);
newsk->sk_bound_dev_if = ireq->ir_iif;
- newinet->inet_saddr = ireq->ir_loc_addr;
- inet_opt = ireq->opt;
- rcu_assign_pointer(newinet->inet_opt, inet_opt);
- ireq->opt = NULL;
+ newinet->inet_saddr = ireq->ir_loc_addr;
+ inet_opt = rcu_dereference(ireq->ireq_opt);
+ RCU_INIT_POINTER(newinet->inet_opt, inet_opt);
newinet->mc_index = inet_iif(skb);
newinet->mc_ttl = ip_hdr(skb)->ttl;
newinet->rcv_tos = ip_hdr(skb)->tos;
@@ -1349,9 +1348,12 @@ struct sock *tcp_v4_syn_recv_sock(const struct sock *sk, struct sk_buff *skb,
if (__inet_inherit_port(sk, newsk) < 0)
goto put_and_exit;
*own_req = inet_ehash_nolisten(newsk, req_to_sk(req_unhash));
- if (*own_req)
+ if (likely(*own_req)) {
tcp_move_syn(newtp, req);
-
+ ireq->ireq_opt = NULL;
+ } else {
+ newinet->inet_opt = NULL;
+ }
return newsk;
exit_overflow:
@@ -1362,6 +1364,7 @@ struct sock *tcp_v4_syn_recv_sock(const struct sock *sk, struct sk_buff *skb,
tcp_listendrop(sk);
return NULL;
put_and_exit:
+ newinet->inet_opt = NULL;
inet_csk_prepare_forced_close(newsk);
tcp_done(newsk);
goto exit;
diff --git a/net/ipv4/tcp_nv.c b/net/ipv4/tcp_nv.c
index 5de82a8..e45e2c4 100644
--- a/net/ipv4/tcp_nv.c
+++ b/net/ipv4/tcp_nv.c
@@ -263,7 +263,7 @@ static void tcpnv_acked(struct sock *sk, const struct ack_sample *sample)
/* rate in 100's bits per second */
rate64 = ((u64)sample->in_flight) * 8000000;
- rate = (u32)div64_u64(rate64, (u64)(avg_rtt * 100));
+ rate = (u32)div64_u64(rate64, (u64)(avg_rtt ?: 1) * 100);
/* Remember the maximum rate seen during this RTT
* Note: It may be more than one RTT. This function should be
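The tcp_nv.c one-liner guards a 64-bit division against `avg_rtt == 0` using GCC's `a ?: b` shorthand. The same guard written in portable C, with the constants from the patch:

```c
#include <assert.h>
#include <stdint.h>

/* rate in 100's of bits per second, as in tcpnv_acked(); avg_rtt == 0
 * falls back to 1 instead of dividing by zero. */
static uint32_t rate_100bps(uint64_t in_flight, uint32_t avg_rtt)
{
    uint64_t rate64 = in_flight * 8000000ULL;
    return (uint32_t)(rate64 / ((uint64_t)(avg_rtt ? avg_rtt : 1) * 100));
}
```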
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index dd08d16..3438faa 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -1996,6 +1996,7 @@ static int tcp_mtu_probe(struct sock *sk)
nskb->ip_summed = skb->ip_summed;
tcp_insert_write_queue_before(nskb, skb, sk);
+ tcp_highest_sack_replace(sk, skb, nskb);
len = 0;
tcp_for_write_queue_from_safe(skb, next, sk) {
@@ -2535,7 +2536,7 @@ static void tcp_collapse_retrans(struct sock *sk, struct sk_buff *skb)
BUG_ON(tcp_skb_pcount(skb) != 1 || tcp_skb_pcount(next_skb) != 1);
- tcp_highest_sack_combine(sk, next_skb, skb);
+ tcp_highest_sack_replace(sk, next_skb, skb);
tcp_unlink_write_queue(next_skb, sk);
@@ -3109,13 +3110,8 @@ struct sk_buff *tcp_make_synack(const struct sock *sk, struct dst_entry *dst,
tcp_ecn_make_synack(req, th);
th->source = htons(ireq->ir_num);
th->dest = ireq->ir_rmt_port;
- /* Setting of flags are superfluous here for callers (and ECE is
- * not even correctly set)
- */
- tcp_init_nondata_skb(skb, tcp_rsk(req)->snt_isn,
- TCPHDR_SYN | TCPHDR_ACK);
-
- th->seq = htonl(TCP_SKB_CB(skb)->seq);
+ skb->ip_summed = CHECKSUM_PARTIAL;
+ th->seq = htonl(tcp_rsk(req)->snt_isn);
/* XXX data is queued and acked as is. No buffer/window check */
th->ack_seq = htonl(tcp_rsk(req)->rcv_nxt);
diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index 200c9b6..1589e6a 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -222,10 +222,7 @@ static int udp_reuseport_add_sock(struct sock *sk, struct udp_hslot *hslot,
}
}
- /* Initial allocation may have already happened via setsockopt */
- if (!rcu_access_pointer(sk->sk_reuseport_cb))
- return reuseport_alloc(sk);
- return 0;
+ return reuseport_alloc(sk);
}
/**
diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
index 0932c85..6401574 100644
--- a/net/ipv4/udp_offload.c
+++ b/net/ipv4/udp_offload.c
@@ -122,7 +122,7 @@ static struct sk_buff *__skb_udp_tunnel_segment(struct sk_buff *skb,
* will be using a length value equal to only one MSS sized
* segment instead of the entire frame.
*/
- if (gso_partial) {
+ if (gso_partial && skb_is_gso(skb)) {
uh->len = htons(skb_shinfo(skb)->gso_size +
SKB_GSO_CB(skb)->data_offset +
skb->head - (unsigned char *)uh);
diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
index 371312b..2abaa2e 100644
--- a/net/ipv6/addrconf.c
+++ b/net/ipv6/addrconf.c
@@ -3339,6 +3339,7 @@ static void addrconf_permanent_addr(struct net_device *dev)
if ((ifp->flags & IFA_F_PERMANENT) &&
fixup_permanent_addr(idev, ifp) < 0) {
write_unlock_bh(&idev->lock);
+ in6_ifa_hold(ifp);
ipv6_del_addr(ifp);
write_lock_bh(&idev->lock);
diff --git a/net/ipv6/ip6_flowlabel.c b/net/ipv6/ip6_flowlabel.c
index b912f0d..b82e439 100644
--- a/net/ipv6/ip6_flowlabel.c
+++ b/net/ipv6/ip6_flowlabel.c
@@ -315,6 +315,7 @@ struct ipv6_txoptions *fl6_merge_options(struct ipv6_txoptions *opt_space,
}
opt_space->dst1opt = fopt->dst1opt;
opt_space->opt_flen = fopt->opt_flen;
+ opt_space->tot_len = fopt->tot_len;
return opt_space;
}
EXPORT_SYMBOL_GPL(fl6_merge_options);
diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c
index 48e6e75..65a58fe 100644
--- a/net/ipv6/ip6_gre.c
+++ b/net/ipv6/ip6_gre.c
@@ -408,13 +408,16 @@ static void ip6gre_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
case ICMPV6_DEST_UNREACH:
net_dbg_ratelimited("%s: Path to destination invalid or inactive!\n",
t->parms.name);
- break;
+ if (code != ICMPV6_PORT_UNREACH)
+ break;
+ return;
case ICMPV6_TIME_EXCEED:
if (code == ICMPV6_EXC_HOPLIMIT) {
net_dbg_ratelimited("%s: Too small hop limit or routing loop in tunnel!\n",
t->parms.name);
+ break;
}
- break;
+ return;
case ICMPV6_PARAMPROB:
teli = 0;
if (code == ICMPV6_HDR_FIELD)
@@ -430,7 +433,7 @@ static void ip6gre_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
net_dbg_ratelimited("%s: Recipient unable to parse tunneled packet!\n",
t->parms.name);
}
- break;
+ return;
case ICMPV6_PKT_TOOBIG:
mtu = be32_to_cpu(info) - offset - t->tun_hlen;
if (t->dev->type == ARPHRD_ETHER)
@@ -438,7 +441,7 @@ static void ip6gre_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
if (mtu < IPV6_MIN_MTU)
mtu = IPV6_MIN_MTU;
t->dev->mtu = mtu;
- break;
+ return;
}
if (time_before(jiffies, t->err_time + IP6TUNNEL_ERR_TIMEO))
@@ -505,8 +508,8 @@ static netdev_tx_t __gre6_xmit(struct sk_buff *skb,
__u32 *pmtu, __be16 proto)
{
struct ip6_tnl *tunnel = netdev_priv(dev);
- __be16 protocol = (dev->type == ARPHRD_ETHER) ?
- htons(ETH_P_TEB) : proto;
+ struct dst_entry *dst = skb_dst(skb);
+ __be16 protocol;
if (dev->type == ARPHRD_ETHER)
IPCB(skb)->flags = 0;
@@ -520,9 +523,14 @@ static netdev_tx_t __gre6_xmit(struct sk_buff *skb,
tunnel->o_seqno++;
/* Push GRE header. */
+ protocol = (dev->type == ARPHRD_ETHER) ? htons(ETH_P_TEB) : proto;
gre_build_header(skb, tunnel->tun_hlen, tunnel->parms.o_flags,
protocol, tunnel->parms.o_key, htonl(tunnel->o_seqno));
+ /* TooBig packet may have updated dst->dev's mtu */
+ if (dst && dst_mtu(dst) > dst->dev->mtu)
+ dst->ops->update_pmtu(dst, NULL, skb, dst->dev->mtu);
+
return ip6_tnl_xmit(skb, dev, dsfield, fl6, encap_limit, pmtu,
NEXTHDR_GRE);
}
diff --git a/net/ipv6/ip6_offload.c b/net/ipv6/ip6_offload.c
index 424fbe1..649f4d8 100644
--- a/net/ipv6/ip6_offload.c
+++ b/net/ipv6/ip6_offload.c
@@ -105,7 +105,7 @@ static struct sk_buff *ipv6_gso_segment(struct sk_buff *skb,
for (skb = segs; skb; skb = skb->next) {
ipv6h = (struct ipv6hdr *)(skb_mac_header(skb) + nhoff);
- if (gso_partial)
+ if (gso_partial && skb_is_gso(skb))
payload_len = skb_shinfo(skb)->gso_size +
SKB_GSO_CB(skb)->data_offset +
skb->head - (unsigned char *)(ipv6h + 1);
diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
index 821aa0b..4e6c439 100644
--- a/net/ipv6/ip6_output.c
+++ b/net/ipv6/ip6_output.c
@@ -1223,11 +1223,11 @@ static int ip6_setup_cork(struct sock *sk, struct inet_cork_full *cork,
if (WARN_ON(v6_cork->opt))
return -EINVAL;
- v6_cork->opt = kzalloc(opt->tot_len, sk->sk_allocation);
+ v6_cork->opt = kzalloc(sizeof(*opt), sk->sk_allocation);
if (unlikely(!v6_cork->opt))
return -ENOBUFS;
- v6_cork->opt->tot_len = opt->tot_len;
+ v6_cork->opt->tot_len = sizeof(*opt);
v6_cork->opt->opt_flen = opt->opt_flen;
v6_cork->opt->opt_nflen = opt->opt_nflen;
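The ip6_output change replaces `kzalloc(opt->tot_len, ...)` with `kzalloc(sizeof(*opt), ...)`: `tot_len` is caller-influenced and can be smaller than the struct being populated, so the subsequent field writes could land past the end of the allocation. A userspace sketch of the corrected pattern, using a hypothetical `txopts` stand-in for `ipv6_txoptions`:

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified stand-in for ipv6_txoptions: a header whose tot_len is
 * supposed to describe the allocation holding it. The bug fixed above
 * allocated by opt->tot_len, a value that may be smaller than the
 * struct itself; the fix allocates by the struct's real size.
 */
struct txopts {
	int tot_len;
	int opt_flen;
	int opt_nflen;
};

static struct txopts *cork_copy_opts(const struct txopts *opt)
{
	/* allocate by the struct's real size, never by opt->tot_len */
	struct txopts *copy = calloc(1, sizeof(*copy));

	if (!copy)
		return NULL;
	copy->tot_len = sizeof(*copy);
	copy->opt_flen = opt->opt_flen;
	copy->opt_nflen = opt->opt_nflen;
	return copy;
}
```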
diff --git a/net/l2tp/l2tp_ppp.c b/net/l2tp/l2tp_ppp.c
index 1696f1f..163f1fa 100644
--- a/net/l2tp/l2tp_ppp.c
+++ b/net/l2tp/l2tp_ppp.c
@@ -993,6 +993,9 @@ static int pppol2tp_session_ioctl(struct l2tp_session *session,
session->name, cmd, arg);
sk = ps->sock;
+ if (!sk)
+ return -EBADR;
+
sock_hold(sk);
switch (cmd) {
diff --git a/net/mac80211/key.c b/net/mac80211/key.c
index edd6f29..4c625a3 100644
--- a/net/mac80211/key.c
+++ b/net/mac80211/key.c
@@ -4,7 +4,7 @@
* Copyright 2006-2007 Jiri Benc <jbenc@suse.cz>
* Copyright 2007-2008 Johannes Berg <johannes@sipsolutions.net>
* Copyright 2013-2014 Intel Mobile Communications GmbH
- * Copyright 2015 Intel Deutschland GmbH
+ * Copyright 2015-2017 Intel Deutschland GmbH
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
@@ -19,6 +19,7 @@
#include <linux/slab.h>
#include <linux/export.h>
#include <net/mac80211.h>
+#include <crypto/algapi.h>
#include <asm/unaligned.h>
#include "ieee80211_i.h"
#include "driver-ops.h"
@@ -608,6 +609,39 @@ void ieee80211_key_free_unused(struct ieee80211_key *key)
ieee80211_key_free_common(key);
}
+static bool ieee80211_key_identical(struct ieee80211_sub_if_data *sdata,
+ struct ieee80211_key *old,
+ struct ieee80211_key *new)
+{
+ u8 tkip_old[WLAN_KEY_LEN_TKIP], tkip_new[WLAN_KEY_LEN_TKIP];
+ u8 *tk_old, *tk_new;
+
+ if (!old || new->conf.keylen != old->conf.keylen)
+ return false;
+
+ tk_old = old->conf.key;
+ tk_new = new->conf.key;
+
+ /*
+ * In station mode, don't compare the TX MIC key, as it's never used
+ * and offloaded rekeying may not care to send it to the host. This
+ * is the case in iwlwifi, for example.
+ */
+ if (sdata->vif.type == NL80211_IFTYPE_STATION &&
+ new->conf.cipher == WLAN_CIPHER_SUITE_TKIP &&
+ new->conf.keylen == WLAN_KEY_LEN_TKIP &&
+ !(new->conf.flags & IEEE80211_KEY_FLAG_PAIRWISE)) {
+ memcpy(tkip_old, tk_old, WLAN_KEY_LEN_TKIP);
+ memcpy(tkip_new, tk_new, WLAN_KEY_LEN_TKIP);
+ memset(tkip_old + NL80211_TKIP_DATA_OFFSET_TX_MIC_KEY, 0, 8);
+ memset(tkip_new + NL80211_TKIP_DATA_OFFSET_TX_MIC_KEY, 0, 8);
+ tk_old = tkip_old;
+ tk_new = tkip_new;
+ }
+
+ return !crypto_memneq(tk_old, tk_new, new->conf.keylen);
+}
+
int ieee80211_key_link(struct ieee80211_key *key,
struct ieee80211_sub_if_data *sdata,
struct sta_info *sta)
@@ -619,9 +653,6 @@ int ieee80211_key_link(struct ieee80211_key *key,
pairwise = key->conf.flags & IEEE80211_KEY_FLAG_PAIRWISE;
idx = key->conf.keyidx;
- key->local = sdata->local;
- key->sdata = sdata;
- key->sta = sta;
mutex_lock(&sdata->local->key_mtx);
@@ -632,6 +663,20 @@ int ieee80211_key_link(struct ieee80211_key *key,
else
old_key = key_mtx_dereference(sdata->local, sdata->keys[idx]);
+ /*
+ * Silently accept key re-installation without really installing the
+ * new version of the key to avoid nonce reuse or replay issues.
+ */
+ if (ieee80211_key_identical(sdata, old_key, key)) {
+ ieee80211_key_free_unused(key);
+ ret = 0;
+ goto out;
+ }
+
+ key->local = sdata->local;
+ key->sdata = sdata;
+ key->sta = sta;
+
increment_tailroom_need_count(sdata);
ieee80211_key_replace(sdata, sta, pairwise, old_key, key);
@@ -647,6 +692,7 @@ int ieee80211_key_link(struct ieee80211_key *key,
ret = 0;
}
+ out:
mutex_unlock(&sdata->local->key_mtx);
return ret;
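The key-reinstallation check above compares keys with `crypto_memneq()` rather than `memcmp()`, and blanks the TKIP TX MIC bytes in local copies first. A userspace sketch of both ideas; `memneq` here is a simplified stand-in for the kernel's `crypto_memneq`, and the offset follows the nl80211 TKIP key layout (16-byte TK, 8-byte TX MIC, 8-byte RX MIC):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

#define KEY_LEN_TKIP 32
#define TKIP_TX_MIC_OFFSET 16 /* NL80211_TKIP_DATA_OFFSET_TX_MIC_KEY */

/* Constant-time inequality in the spirit of crypto_memneq():
 * accumulate XOR differences so the loop's timing does not depend on
 * where the first mismatch occurs.
 */
static bool memneq(const void *a, const void *b, size_t n)
{
	const unsigned char *pa = a, *pb = b;
	unsigned char diff = 0;
	size_t i;

	for (i = 0; i < n; i++)
		diff |= pa[i] ^ pb[i];
	return diff != 0;
}

/* Compare two TKIP keys the way the station-mode check does: zero the
 * 8-byte TX MIC region in local copies first, since offloaded
 * rekeying may not resend it.
 */
static bool tkip_keys_identical(const unsigned char *old_key,
				const unsigned char *new_key)
{
	unsigned char a[KEY_LEN_TKIP], b[KEY_LEN_TKIP];

	memcpy(a, old_key, KEY_LEN_TKIP);
	memcpy(b, new_key, KEY_LEN_TKIP);
	memset(a + TKIP_TX_MIC_OFFSET, 0, 8);
	memset(b + TKIP_TX_MIC_OFFSET, 0, 8);
	return !memneq(a, b, KEY_LEN_TKIP);
}
```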
diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
index 072e80a..255a797 100644
--- a/net/netfilter/nf_conntrack_core.c
+++ b/net/netfilter/nf_conntrack_core.c
@@ -72,6 +72,12 @@ EXPORT_SYMBOL_GPL(nf_conntrack_expect_lock);
struct hlist_nulls_head *nf_conntrack_hash __read_mostly;
EXPORT_SYMBOL_GPL(nf_conntrack_hash);
+bool (*nattype_refresh_timer)
+ (unsigned long nattype,
+ unsigned long timeout_value)
+ __rcu __read_mostly;
+EXPORT_SYMBOL(nattype_refresh_timer);
+
struct conntrack_gc_work {
struct delayed_work dwork;
u32 last_bucket;
@@ -185,6 +191,7 @@ unsigned int nf_conntrack_htable_size __read_mostly;
EXPORT_SYMBOL_GPL(nf_conntrack_htable_size);
unsigned int nf_conntrack_max __read_mostly;
+
seqcount_t nf_conntrack_generation __read_mostly;
unsigned int nf_conntrack_pkt_threshold __read_mostly;
@@ -193,7 +200,8 @@ EXPORT_SYMBOL(nf_conntrack_pkt_threshold);
DEFINE_PER_CPU(struct nf_conn, nf_conntrack_untracked);
EXPORT_PER_CPU_SYMBOL(nf_conntrack_untracked);
-static unsigned int nf_conntrack_hash_rnd __read_mostly;
+unsigned int nf_conntrack_hash_rnd __read_mostly;
+EXPORT_SYMBOL(nf_conntrack_hash_rnd);
static u32 hash_conntrack_raw(const struct nf_conntrack_tuple *tuple,
const struct net *net)
@@ -396,6 +404,9 @@ destroy_conntrack(struct nf_conntrack *nfct)
struct nf_conn *ct = (struct nf_conn *)nfct;
struct nf_conntrack_l4proto *l4proto;
void (*delete_entry)(struct nf_conn *ct);
+ struct sip_list *sip_node = NULL;
+ struct list_head *sip_node_list;
+ struct list_head *sip_node_save_list;
pr_debug("destroy_conntrack(%pK)\n", ct);
NF_CT_ASSERT(atomic_read(&nfct->use) == 0);
@@ -423,6 +434,14 @@ destroy_conntrack(struct nf_conntrack *nfct)
rcu_read_unlock();
local_bh_disable();
+
+ pr_debug("freeing item in the SIP list\n");
+ list_for_each_safe(sip_node_list, sip_node_save_list,
+ &ct->sip_segment_list) {
+ sip_node = list_entry(sip_node_list, struct sip_list, list);
+ list_del(&sip_node->list);
+ kfree(sip_node);
+ }
/* Expectations will have been removed in clean_from_lists,
* except TFTP can create an expectation on the first packet,
* before connection is in the list, so we need to clean here,
@@ -707,7 +726,7 @@ static int nf_ct_resolve_clash(struct net *net, struct sk_buff *skb,
l4proto = __nf_ct_l4proto_find(nf_ct_l3num(ct), nf_ct_protonum(ct));
if (l4proto->allow_clash &&
- !nfct_nat(ct) &&
+ ((ct->status & IPS_NAT_DONE_MASK) == 0) &&
!nf_ct_is_dying(ct) &&
atomic_inc_not_zero(&ct->ct_general.use)) {
nf_ct_acct_merge(ct, ctinfo, (struct nf_conn *)skb->nfct);
@@ -1094,6 +1113,9 @@ __nf_conntrack_alloc(struct net *net,
nf_ct_zone_add(ct, zone);
+#if defined(CONFIG_IP_NF_TARGET_NATTYPE_MODULE)
+ ct->nattype_entry = 0;
+#endif
/* Because we use RCU lookups, we set ct_general.use to zero before
* this is inserted in any list.
*/
@@ -1197,6 +1219,7 @@ init_conntrack(struct net *net, struct nf_conn *tmpl,
GFP_ATOMIC);
local_bh_disable();
+ INIT_LIST_HEAD(&ct->sip_segment_list);
if (net->ct.expect_count) {
spin_lock(&nf_conntrack_expect_lock);
exp = nf_ct_find_expectation(net, zone, tuple);
@@ -1220,6 +1243,10 @@ init_conntrack(struct net *net, struct nf_conn *tmpl,
#ifdef CONFIG_NF_CONNTRACK_SECMARK
ct->secmark = exp->master->secmark;
#endif
+/* Initialize the NAT type entry. */
+#if defined(CONFIG_IP_NF_TARGET_NATTYPE_MODULE)
+ ct->nattype_entry = 0;
+#endif
NF_CT_STAT_INC(net, expect_new);
}
spin_unlock(&nf_conntrack_expect_lock);
@@ -1460,6 +1487,11 @@ void __nf_ct_refresh_acct(struct nf_conn *ct,
{
struct nf_conn_acct *acct;
u64 pkts;
+#if defined(CONFIG_IP_NF_TARGET_NATTYPE_MODULE)
+ bool (*nattype_ref_timer)
+ (unsigned long nattype,
+ unsigned long timeout_value);
+#endif
NF_CT_ASSERT(skb);
@@ -1472,6 +1504,13 @@ void __nf_ct_refresh_acct(struct nf_conn *ct,
extra_jiffies += nfct_time_stamp;
ct->timeout = extra_jiffies;
+/* Refresh the NAT type entry. */
+#if defined(CONFIG_IP_NF_TARGET_NATTYPE_MODULE)
+ nattype_ref_timer = rcu_dereference(nattype_refresh_timer);
+ if (nattype_ref_timer)
+		nattype_ref_timer(ct->nattype_entry, ct->timeout);
+#endif
+
acct:
if (do_acct) {
acct = nf_conn_acct_find(ct);
diff --git a/net/netfilter/nf_conntrack_netlink.c b/net/netfilter/nf_conntrack_netlink.c
index 6bd58eea..1ce25f5 100644
--- a/net/netfilter/nf_conntrack_netlink.c
+++ b/net/netfilter/nf_conntrack_netlink.c
@@ -1540,12 +1540,23 @@ static int ctnetlink_change_timeout(struct nf_conn *ct,
const struct nlattr * const cda[])
{
u_int32_t timeout = ntohl(nla_get_be32(cda[CTA_TIMEOUT]));
+#if defined(CONFIG_IP_NF_TARGET_NATTYPE_MODULE)
+ bool (*nattype_ref_timer)
+ (unsigned long nattype,
+ unsigned long timeout_value);
+#endif
ct->timeout = nfct_time_stamp + timeout * HZ;
if (test_bit(IPS_DYING_BIT, &ct->status))
return -ETIME;
+/* Refresh the NAT type entry. */
+#if defined(CONFIG_IP_NF_TARGET_NATTYPE_MODULE)
+ nattype_ref_timer = rcu_dereference(nattype_refresh_timer);
+ if (nattype_ref_timer)
+		nattype_ref_timer(ct->nattype_entry, ct->timeout);
+#endif
return 0;
}
diff --git a/net/netfilter/nf_conntrack_sip.c b/net/netfilter/nf_conntrack_sip.c
index f132ef9..6d6731f 100644
--- a/net/netfilter/nf_conntrack_sip.c
+++ b/net/netfilter/nf_conntrack_sip.c
@@ -1,5 +1,6 @@
/* SIP extension for IP connection tracking.
*
+ * Copyright (c) 2015,2017, The Linux Foundation. All rights reserved.
* (C) 2005 by Christian Hentschel <chentschel@arnet.com.ar>
* based on RR's ip_conntrack_ftp.c and other modules.
* (C) 2007 United Security Providers
@@ -20,13 +21,18 @@
#include <linux/udp.h>
#include <linux/tcp.h>
#include <linux/netfilter.h>
-
+#include <net/tcp.h>
#include <net/netfilter/nf_conntrack.h>
#include <net/netfilter/nf_conntrack_core.h>
#include <net/netfilter/nf_conntrack_expect.h>
#include <net/netfilter/nf_conntrack_helper.h>
#include <net/netfilter/nf_conntrack_zones.h>
#include <linux/netfilter/nf_conntrack_sip.h>
+#include <net/netfilter/nf_nat.h>
+#include <net/netfilter/nf_nat_l3proto.h>
+#include <net/netfilter/nf_nat_l4proto.h>
+#include <net/netfilter/nf_queue.h>
+
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Christian Hentschel <chentschel@arnet.com.ar>");
@@ -54,6 +60,12 @@ EXPORT_SYMBOL_GPL(nf_nat_sip_hooks);
static struct ctl_table_header *sip_sysctl_header;
static unsigned int nf_ct_disable_sip_alg;
static int sip_direct_media = 1;
+static unsigned int nf_ct_enable_sip_segmentation;
+static int packet_count;
+static
+int proc_sip_segment(struct ctl_table *ctl, int write,
+ void __user *buffer, size_t *lenp, loff_t *ppos);
+
static struct ctl_table sip_sysctl_tbl[] = {
{
.procname = "nf_conntrack_disable_sip_alg",
@@ -69,9 +81,289 @@ static struct ctl_table sip_sysctl_tbl[] = {
.mode = 0644,
.proc_handler = proc_dointvec,
},
+ {
+ .procname = "nf_conntrack_enable_sip_segmentation",
+ .data = &nf_ct_enable_sip_segmentation,
+ .maxlen = sizeof(unsigned int),
+ .mode = 0644,
+ .proc_handler = proc_sip_segment,
+ },
{}
};
+unsigned int (*nf_nat_sip_hook)
+ (struct sk_buff *skb,
+ unsigned int protoff,
+ unsigned int dataoff,
+ const char **dptr,
+ unsigned int *datalen)
+ __read_mostly;
+EXPORT_SYMBOL(nf_nat_sip_hook);
+static void sip_calculate_parameters(s16 *diff, s16 *tdiff,
+ unsigned int *dataoff, const char **dptr,
+ unsigned int *datalen,
+ unsigned int msglen, unsigned int origlen)
+{
+ *diff = msglen - origlen;
+ *tdiff += *diff;
+ *dataoff += msglen;
+ *dptr += msglen;
+ *datalen = *datalen + *diff - msglen;
+}
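`sip_calculate_parameters()` above is pure cursor bookkeeping: after a message of `msglen` bytes (originally `origlen` before NAT mangling) is processed, advance past it and fold the size change into the remaining `datalen`. A userspace replica of the arithmetic (names mirror the patch; this is a sketch, not the kernel code):

```c
#include <assert.h>
#include <stdint.h>

/* After a SIP message of msglen bytes (origlen before NAT mangling)
 * is processed, record the size change (diff), accumulate it in
 * tdiff, advance the data cursor past the message, and adjust the
 * remaining datalen: datalen + diff - msglen.
 */
static void sip_advance(int16_t *diff, int16_t *tdiff,
			unsigned int *dataoff, unsigned int *dptr_off,
			unsigned int *datalen,
			unsigned int msglen, unsigned int origlen)
{
	*diff = (int16_t)(msglen - origlen);
	*tdiff += *diff;
	*dataoff += msglen;
	*dptr_off += msglen;
	*datalen = *datalen + *diff - msglen;
}
```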
+
+static void sip_update_params(enum ip_conntrack_dir dir,
+ unsigned int *msglen, unsigned int *origlen,
+ const char **dptr, unsigned int *datalen,
+ bool skb_is_combined, struct nf_conn *ct)
+{
+ if (skb_is_combined) {
+		/* The msglen of the first skb has the total msg length of
+		 * the two fragments; hence, after combining, we update
+		 * msglen to the msglen of the first skb.
+		 */
+ *msglen = (dir == IP_CT_DIR_ORIGINAL) ?
+ ct->segment.msg_length[0] : ct->segment.msg_length[1];
+ *origlen = *msglen;
+ *dptr = ct->dptr_prev;
+ *datalen = *msglen;
+ }
+}
+
+/* This function saves all the information of the first segment that
+ * will be needed for combining the two segments.
+ */
+static bool sip_save_segment_info(struct nf_conn *ct, struct sk_buff *skb,
+ unsigned int msglen, unsigned int datalen,
+ const char *dptr,
+ enum ip_conntrack_info ctinfo)
+{
+ enum ip_conntrack_dir dir = IP_CT_DIR_MAX;
+ bool skip = false;
+
+	/* One set of information is saved per direction; also, only one
+	 * segment per direction is queued, based on the assumption that
+	 * only after the first complete message leaves the kernel will
+	 * the next fragmented segment reach the kernel.
+	 */
+ dir = CTINFO2DIR(ctinfo);
+ if (dir == IP_CT_DIR_ORIGINAL) {
+		/* Check whether there is already an element queued for
+		 * this direction; in that case do not queue the next
+		 * element, and set skip. Ideally this scenario should
+		 * never be hit.
+		 */
+ if (ct->sip_original_dir == 1) {
+ skip = true;
+ } else {
+ ct->segment.msg_length[0] = msglen;
+ ct->segment.data_len[0] = datalen;
+ ct->segment.skb_len[0] = skb->len;
+ ct->dptr_prev = dptr;
+ ct->sip_original_dir = 1;
+ skip = false;
+ }
+ } else {
+		if (ct->sip_reply_dir == 1) {
+			skip = true;
+		} else {
+			ct->segment.msg_length[1] = msglen;
+			ct->segment.data_len[1] = datalen;
+			ct->segment.skb_len[1] = skb->len;
+			ct->dptr_prev = dptr;
+			ct->sip_reply_dir = 1;
+			skip = false;
+		}
+ }
+ return skip;
+}
+
+static struct sip_list *sip_coalesce_segments(struct nf_conn *ct,
+ struct sk_buff **skb_ref,
+ unsigned int dataoff,
+ struct sk_buff **combined_skb_ref,
+ bool *skip_sip_process,
+ bool do_not_process,
+ enum ip_conntrack_info ctinfo,
+ bool *success)
+
+{
+ struct list_head *list_trav_node;
+ struct list_head *list_backup_node;
+ struct nf_conn *ct_list;
+ enum ip_conntrack_info ctinfo_list;
+ enum ip_conntrack_dir dir_list;
+ enum ip_conntrack_dir dir = IP_CT_DIR_MAX;
+ const struct tcphdr *th_old;
+ unsigned int prev_data_len;
+ unsigned int seq_no, seq_old, exp_seq_no;
+ const struct tcphdr *th_new;
+ bool fragstolen = false;
+ int delta_truesize = 0;
+ struct sip_list *sip_entry = NULL;
+
+ th_new = (struct tcphdr *)(skb_network_header(*skb_ref) +
+ ip_hdrlen(*skb_ref));
+ seq_no = ntohl(th_new->seq);
+
+ if (ct) {
+ dir = CTINFO2DIR(ctinfo);
+		/* Traverse the list; it will have at most one element
+		 * per direction, so one or two elements in total.
+		 */
+		list_for_each_safe(list_trav_node, list_backup_node,
+				   &ct->sip_segment_list) {
+ sip_entry = list_entry(list_trav_node, struct sip_list,
+ list);
+ ct_list = nf_ct_get(sip_entry->entry->skb,
+ &ctinfo_list);
+ dir_list = CTINFO2DIR(ctinfo_list);
+ /* take an element and check if its direction matches
+ * with the current one
+ */
+ if (dir_list == dir) {
+			/* Once we have the two elements to be combined,
+			 * we do another check: match the next expected
+			 * seq no of the packet in the list with the
+			 * seq no of the current packet. This protects
+			 * against out-of-order fragments.
+			 */
+ th_old = ((struct tcphdr *)(skb_network_header
+ (sip_entry->entry->skb) +
+ ip_hdrlen(sip_entry->entry->skb)));
+
+ prev_data_len = (dir == IP_CT_DIR_ORIGINAL) ?
+ ct->segment.data_len[0] :
+ ct->segment.data_len[1];
+ seq_old = (ntohl(th_old->seq));
+ exp_seq_no = seq_old + prev_data_len;
+
+ if (exp_seq_no == seq_no) {
+				/* Found packets to be combined. Pull the
+				 * header from the second skb when
+				 * preparing the combined skb. This shifts
+				 * the second skb's start pointer to its
+				 * data, which was initially at the start
+				 * of its headers, so that the combined
+				 * skb has the TCP/IP header of the first
+				 * skb followed by the data of the first
+				 * skb followed by the data of the second
+				 * skb.
+				 */
+ skb_pull(*skb_ref, dataoff);
+ if (skb_try_coalesce(
+ sip_entry->entry->skb,
+ *skb_ref, &fragstolen,
+ &delta_truesize)) {
+ pr_debug(" Combining segments\n");
+ *combined_skb_ref =
+ sip_entry->entry->skb;
+ *success = true;
+ list_del(list_trav_node);
+				} else {
+ skb_push(*skb_ref, dataoff);
+ }
+ }
+ } else if (do_not_process) {
+ *skip_sip_process = true;
+ }
+ }
+ }
+ return sip_entry;
+}
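The in-order check inside `sip_coalesce_segments()` accepts a second fragment only when its TCP sequence number equals the first fragment's sequence number plus the first fragment's payload length. A userspace sketch of that predicate (hypothetical helper name; unsigned arithmetic wraps mod 2^32, which keeps the equality correct across sequence-number wraparound):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* The second fragment is the expected continuation of the first iff
 * seq_new == seq_old + prev_data_len (mod 2^32). Anything else is an
 * out-of-order or unrelated segment and must not be coalesced.
 */
static bool sip_segment_in_order(uint32_t seq_old,
				 uint32_t prev_data_len,
				 uint32_t seq_new)
{
	return seq_new == seq_old + prev_data_len; /* wraps mod 2^32 */
}
```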
+
+static void recalc_header(struct sk_buff *skb, unsigned int skblen,
+ unsigned int oldlen, unsigned int protoff)
+{
+ unsigned int datalen;
+ struct tcphdr *tcph;
+ const struct nf_nat_l3proto *l3proto;
+
+ /* here we recalculate ip and tcp headers */
+ if (nf_ct_l3num((struct nf_conn *)skb->nfct) == NFPROTO_IPV4) {
+ /* fix IP hdr checksum information */
+ ip_hdr(skb)->tot_len = htons(skblen);
+ ip_send_check(ip_hdr(skb));
+ } else {
+ ipv6_hdr(skb)->payload_len =
+ htons(skblen - sizeof(struct ipv6hdr));
+ }
+ datalen = skb->len - protoff;
+ tcph = (struct tcphdr *)((void *)skb->data + protoff);
+ l3proto = __nf_nat_l3proto_find(nf_ct_l3num
+ ((struct nf_conn *)skb->nfct));
+ l3proto->csum_recalc(skb, IPPROTO_TCP, tcph, &tcph->check,
+ datalen, oldlen);
+}
+
+void (*nf_nat_sip_seq_adjust_hook)
+ (struct sk_buff *skb,
+ unsigned int protoff,
+ s16 off);
+
+unsigned int (*nf_nat_sip_expect_hook)
+ (struct sk_buff *skb,
+ unsigned int protoff,
+ unsigned int dataoff,
+ const char **dptr,
+ unsigned int *datalen,
+ struct nf_conntrack_expect *exp,
+ unsigned int matchoff,
+ unsigned int matchlen)
+ __read_mostly;
+EXPORT_SYMBOL(nf_nat_sip_expect_hook);
+
+unsigned int (*nf_nat_sdp_addr_hook)
+ (struct sk_buff *skb,
+ unsigned int protoff,
+ unsigned int dataoff,
+ const char **dptr,
+ unsigned int *datalen,
+ unsigned int sdpoff,
+ enum sdp_header_types type,
+ enum sdp_header_types term,
+ const union nf_inet_addr *addr)
+ __read_mostly;
+EXPORT_SYMBOL(nf_nat_sdp_addr_hook);
+
+unsigned int (*nf_nat_sdp_port_hook)
+ (struct sk_buff *skb,
+ unsigned int protoff,
+ unsigned int dataoff,
+ const char **dptr,
+ unsigned int *datalen,
+ unsigned int matchoff,
+ unsigned int matchlen,
+ u_int16_t port) __read_mostly;
+EXPORT_SYMBOL(nf_nat_sdp_port_hook);
+
+unsigned int (*nf_nat_sdp_session_hook)
+ (struct sk_buff *skb,
+ unsigned int protoff,
+ unsigned int dataoff,
+ const char **dptr,
+ unsigned int *datalen,
+ unsigned int sdpoff,
+ const union nf_inet_addr *addr)
+ __read_mostly;
+EXPORT_SYMBOL(nf_nat_sdp_session_hook);
+
+unsigned int (*nf_nat_sdp_media_hook)
+ (struct sk_buff *skb,
+ unsigned int protoff,
+ unsigned int dataoff,
+ const char **dptr,
+ unsigned int *datalen,
+ struct nf_conntrack_expect *rtp_exp,
+ struct nf_conntrack_expect *rtcp_exp,
+ unsigned int mediaoff,
+ unsigned int medialen,
+ union nf_inet_addr *rtp_addr)
+ __read_mostly;
+EXPORT_SYMBOL(nf_nat_sdp_media_hook);
+
static int string_len(const struct nf_conn *ct, const char *dptr,
const char *limit, int *shift)
{
@@ -84,6 +376,43 @@ static int string_len(const struct nf_conn *ct, const char *dptr,
return len;
}
+static int nf_sip_enqueue_packet(struct nf_queue_entry *entry,
+ unsigned int queuenum)
+{
+ enum ip_conntrack_info ctinfo_list;
+ struct nf_conn *ct_temp;
+ struct sip_list *node = kzalloc(sizeof(*node),
+ GFP_ATOMIC | __GFP_NOWARN);
+ if (!node)
+ return XT_CONTINUE;
+
+ ct_temp = nf_ct_get(entry->skb, &ctinfo_list);
+ node->entry = entry;
+ list_add(&node->list, &ct_temp->sip_segment_list);
+ return 0;
+}
+
+static const struct nf_queue_handler nf_sip_qh = {
+ .outfn = &nf_sip_enqueue_packet,
+};
+
+static
+int proc_sip_segment(struct ctl_table *ctl, int write,
+ void __user *buffer, size_t *lenp, loff_t *ppos)
+{
+ int ret;
+
+ ret = proc_dointvec(ctl, write, buffer, lenp, ppos);
+ if (nf_ct_enable_sip_segmentation) {
+ pr_debug("registering queue handler\n");
+ nf_register_queue_handler(&init_net, &nf_sip_qh);
+ } else {
+ pr_debug("de-registering queue handler\n");
+ nf_unregister_queue_handler(&init_net);
+ }
+ return ret;
+}
+
static int digits_len(const struct nf_conn *ct, const char *dptr,
const char *limit, int *shift)
{
@@ -1505,13 +1834,29 @@ static int sip_help_tcp(struct sk_buff *skb, unsigned int protoff,
struct nf_conn *ct, enum ip_conntrack_info ctinfo)
{
struct tcphdr *th, _tcph;
- unsigned int dataoff, datalen;
+ unsigned int dataoff;
unsigned int matchoff, matchlen, clen;
- unsigned int msglen, origlen;
const char *dptr, *end;
s16 diff, tdiff = 0;
int ret = NF_ACCEPT;
bool term;
+ unsigned int datalen = 0, msglen = 0, origlen = 0;
+ unsigned int dataoff_orig = 0;
+ unsigned int splitlen, oldlen, oldlen1;
+ struct sip_list *sip_entry = NULL;
+ bool skip_sip_process = false;
+ bool do_not_process = false;
+ bool skip = false;
+ bool skb_is_combined = false;
+ enum ip_conntrack_dir dir = IP_CT_DIR_MAX;
+ struct sk_buff *combined_skb = NULL;
+	bool content_len_exists = true;
+
+ packet_count++;
+ pr_debug("packet count %d\n", packet_count);
+
+ if (nf_ct_disable_sip_alg)
+ return NF_ACCEPT;
if (ctinfo != IP_CT_ESTABLISHED &&
ctinfo != IP_CT_ESTABLISHED_REPLY)
@@ -1535,11 +1880,26 @@ static int sip_help_tcp(struct sk_buff *skb, unsigned int protoff,
if (datalen < strlen("SIP/2.0 200"))
return NF_ACCEPT;
+	/* Save the original data length and data offset of the skb;
+	 * these are needed later to split combined skbs.
+	 */
+ oldlen1 = skb->len - protoff;
+ dataoff_orig = dataoff;
+
+ if (!ct)
+ return NF_DROP;
while (1) {
if (ct_sip_get_header(ct, dptr, 0, datalen,
SIP_HDR_CONTENT_LENGTH,
- &matchoff, &matchlen) <= 0)
+				      &matchoff, &matchlen) <= 0) {
+ if (nf_ct_enable_sip_segmentation) {
+ do_not_process = true;
+				content_len_exists = false;
+ goto destination;
+ } else {
break;
+ }
+ }
clen = simple_strtoul(dptr + matchoff, (char **)&end, 10);
if (dptr + matchoff == end)
@@ -1555,26 +1915,111 @@ static int sip_help_tcp(struct sk_buff *skb, unsigned int protoff,
}
if (!term)
break;
+
end += strlen("\r\n\r\n") + clen;
+destination:
- msglen = origlen = end - dptr;
- if (msglen > datalen)
+		if (!content_len_exists) {
+ origlen = datalen;
+ msglen = origlen;
+ } else {
+ origlen = end - dptr;
+ msglen = origlen;
+ }
+		pr_debug("msglen %d datalen %d\n", msglen, datalen);
+ dir = CTINFO2DIR(ctinfo);
+ combined_skb = skb;
+ if (nf_ct_enable_sip_segmentation) {
+ /* Segmented Packet */
+ if (msglen > datalen) {
+ skip = sip_save_segment_info(ct, skb, msglen,
+ datalen, dptr,
+ ctinfo);
+ if (!skip)
+ return NF_QUEUE;
+ }
+		/* Traverse the list, if non-empty, to find the previous
+		 * segment.
+		 */
+		if (!list_empty(&ct->sip_segment_list)) {
+ /* Combine segments if they are fragments of
+ * the same message.
+ */
+ sip_entry = sip_coalesce_segments(ct, &skb,
+ dataoff,
+ &combined_skb,
+ &skip_sip_process,
+ do_not_process,
+ ctinfo,
+ &skb_is_combined);
+ sip_update_params(dir, &msglen, &origlen, &dptr,
+ &datalen,
+ skb_is_combined, ct);
+
+ if (skip_sip_process)
+ goto here;
+ } else if (do_not_process) {
+ goto here;
+ }
+ } else if (msglen > datalen) {
return NF_ACCEPT;
-
- ret = process_sip_msg(skb, ct, protoff, dataoff,
+ }
+ /* process the combined skb having the complete SIP message */
+ ret = process_sip_msg(combined_skb, ct, protoff, dataoff,
&dptr, &msglen);
+
/* process_sip_* functions report why this packet is dropped */
if (ret != NF_ACCEPT)
break;
- diff = msglen - origlen;
- tdiff += diff;
-
- dataoff += msglen;
- dptr += msglen;
- datalen = datalen + diff - msglen;
+ sip_calculate_parameters(&diff, &tdiff, &dataoff, &dptr,
+ &datalen, msglen, origlen);
+ if (nf_ct_enable_sip_segmentation && skb_is_combined)
+ break;
+ }
+ if (skb_is_combined) {
+		/* Once the combined skb is processed, split the skbs
+		 * again. The length to split at is the same as the length
+		 * of the first skb. Any changes in the combined skb length
+		 * due to SIP processing will be reflected in the second
+		 * fragment.
+		 */
+ splitlen = (dir == IP_CT_DIR_ORIGINAL) ?
+ ct->segment.skb_len[0] : ct->segment.skb_len[1];
+ oldlen = combined_skb->len - protoff;
+ skb_split(combined_skb, skb, splitlen);
+ /* Headers need to be recalculated since during SIP processing
+ * headers are calculated based on the change in length of the
+ * combined message
+ */
+ recalc_header(combined_skb, splitlen, oldlen, protoff);
+ /* Reinject the first skb now that the processing is complete */
+ if (sip_entry) {
+ nf_reinject(sip_entry->entry, NF_ACCEPT);
+ kfree(sip_entry);
+ }
+ skb->len = (oldlen1 + protoff) + tdiff - dataoff_orig;
+		/* After splitting, push back the headers that were removed
+		 * before combining the skbs. This moves the skb begin
+		 * pointer back to the beginning of its headers.
+		 */
+ skb_push(skb, dataoff_orig);
+		/* Since the length of this second segment will be affected
+		 * by SIP processing, we need to recalculate its header
+		 * as well.
+		 */
+ recalc_header(skb, skb->len, oldlen1, protoff);
+		/* Now that the processing is done and the first skb has
+		 * been reinjected, we allow addition of fragmented skbs
+		 * to the list for this direction.
+		 */
+ if (dir == IP_CT_DIR_ORIGINAL)
+ ct->sip_original_dir = 0;
+ else
+ ct->sip_reply_dir = 0;
}
- if (ret == NF_ACCEPT && ct->status & IPS_NAT_MASK) {
+here:
+
+ if (ret == NF_ACCEPT && ct && ct->status & IPS_NAT_MASK) {
const struct nf_nat_sip_hooks *hooks;
hooks = rcu_dereference(nf_nat_sip_hooks);
diff --git a/net/netfilter/nf_nat_core.c b/net/netfilter/nf_nat_core.c
index 2916f48..624d6e4 100644
--- a/net/netfilter/nf_nat_core.c
+++ b/net/netfilter/nf_nat_core.c
@@ -30,19 +30,17 @@
#include <net/netfilter/nf_conntrack_zones.h>
#include <linux/netfilter/nf_nat.h>
+static DEFINE_SPINLOCK(nf_nat_lock);
+
static DEFINE_MUTEX(nf_nat_proto_mutex);
static const struct nf_nat_l3proto __rcu *nf_nat_l3protos[NFPROTO_NUMPROTO]
__read_mostly;
static const struct nf_nat_l4proto __rcu **nf_nat_l4protos[NFPROTO_NUMPROTO]
__read_mostly;
-struct nf_nat_conn_key {
- const struct net *net;
- const struct nf_conntrack_tuple *tuple;
- const struct nf_conntrack_zone *zone;
-};
-
-static struct rhltable nf_nat_bysource_table;
+static struct hlist_head *nf_nat_bysource __read_mostly;
+static unsigned int nf_nat_htable_size __read_mostly;
+static unsigned int nf_nat_hash_rnd __read_mostly;
inline const struct nf_nat_l3proto *
__nf_nat_l3proto_find(u8 family)
@@ -121,17 +119,19 @@ int nf_xfrm_me_harder(struct net *net, struct sk_buff *skb, unsigned int family)
EXPORT_SYMBOL(nf_xfrm_me_harder);
#endif /* CONFIG_XFRM */
-static u32 nf_nat_bysource_hash(const void *data, u32 len, u32 seed)
+/* We keep an extra hash for each conntrack, for fast searching. */
+static inline unsigned int
+hash_by_src(const struct net *n, const struct nf_conntrack_tuple *tuple)
{
- const struct nf_conntrack_tuple *t;
- const struct nf_conn *ct = data;
+ unsigned int hash;
- t = &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple;
+ get_random_once(&nf_nat_hash_rnd, sizeof(nf_nat_hash_rnd));
+
/* Original src, to ensure we map it consistently if poss. */
+ hash = jhash2((u32 *)&tuple->src, sizeof(tuple->src) / sizeof(u32),
+ tuple->dst.protonum ^ nf_nat_hash_rnd ^ net_hash_mix(n));
- seed ^= net_hash_mix(nf_ct_net(ct));
- return jhash2((const u32 *)&t->src, sizeof(t->src) / sizeof(u32),
- t->dst.protonum ^ seed);
+ return reciprocal_scale(hash, nf_nat_htable_size);
}
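The reworked `hash_by_src()` folds the jhash result into the table with `reciprocal_scale()`, which maps a 32-bit value uniformly onto `[0, size)` with a multiply and shift instead of a modulo. A userspace replica of that helper (the kernel version lives in `linux/kernel.h`; the seed mixing with `jhash2` and `net_hash_mix` is omitted here):

```c
#include <assert.h>
#include <stdint.h>

/* Map a 32-bit hash value onto [0, size): (val * size) is a 64-bit
 * product, and taking its top 32 bits scales val proportionally, so
 * no division or modulo is needed and any table size works (it need
 * not be a power of two).
 */
static uint32_t reciprocal_scale(uint32_t val, uint32_t size)
{
	return (uint32_t)(((uint64_t)val * size) >> 32);
}
```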
/* Is this tuple already taken? (not by us) */
@@ -187,28 +187,6 @@ same_src(const struct nf_conn *ct,
t->src.u.all == tuple->src.u.all);
}
-static int nf_nat_bysource_cmp(struct rhashtable_compare_arg *arg,
- const void *obj)
-{
- const struct nf_nat_conn_key *key = arg->key;
- const struct nf_conn *ct = obj;
-
- if (!same_src(ct, key->tuple) ||
- !net_eq(nf_ct_net(ct), key->net) ||
- !nf_ct_zone_equal(ct, key->zone, IP_CT_DIR_ORIGINAL))
- return 1;
-
- return 0;
-}
-
-static struct rhashtable_params nf_nat_bysource_params = {
- .head_offset = offsetof(struct nf_conn, nat_bysource),
- .obj_hashfn = nf_nat_bysource_hash,
- .obj_cmpfn = nf_nat_bysource_cmp,
- .nelem_hint = 256,
- .min_size = 1024,
-};
-
/* Only called for SRC manip */
static int
find_appropriate_src(struct net *net,
@@ -219,26 +197,22 @@ find_appropriate_src(struct net *net,
struct nf_conntrack_tuple *result,
const struct nf_nat_range *range)
{
+ unsigned int h = hash_by_src(net, tuple);
const struct nf_conn *ct;
- struct nf_nat_conn_key key = {
- .net = net,
- .tuple = tuple,
- .zone = zone
- };
- struct rhlist_head *hl, *h;
- hl = rhltable_lookup(&nf_nat_bysource_table, &key,
- nf_nat_bysource_params);
+ hlist_for_each_entry_rcu(ct, &nf_nat_bysource[h], nat_bysource) {
+ if (same_src(ct, tuple) &&
+ net_eq(net, nf_ct_net(ct)) &&
+ nf_ct_zone_equal(ct, zone, IP_CT_DIR_ORIGINAL)) {
+ /* Copy source part from reply tuple. */
+ nf_ct_invert_tuplepr(result,
+ &ct->tuplehash[IP_CT_DIR_REPLY].tuple);
+ result->dst = tuple->dst;
- rhl_for_each_entry_rcu(ct, h, hl, nat_bysource) {
- nf_ct_invert_tuplepr(result,
- &ct->tuplehash[IP_CT_DIR_REPLY].tuple);
- result->dst = tuple->dst;
-
- if (in_range(l3proto, l4proto, result, range))
- return 1;
+ if (in_range(l3proto, l4proto, result, range))
+ return 1;
+ }
}
-
return 0;
}
@@ -411,6 +385,7 @@ nf_nat_setup_info(struct nf_conn *ct,
const struct nf_nat_range *range,
enum nf_nat_manip_type maniptype)
{
+ struct net *net = nf_ct_net(ct);
struct nf_conntrack_tuple curr_tuple, new_tuple;
struct nf_conn_nat *nat;
@@ -452,19 +427,16 @@ nf_nat_setup_info(struct nf_conn *ct,
}
if (maniptype == NF_NAT_MANIP_SRC) {
- struct nf_nat_conn_key key = {
- .net = nf_ct_net(ct),
- .tuple = &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple,
- .zone = nf_ct_zone(ct),
- };
- int err;
+ unsigned int srchash;
- err = rhltable_insert_key(&nf_nat_bysource_table,
- &key,
- &ct->nat_bysource,
- nf_nat_bysource_params);
- if (err)
- return NF_DROP;
+ srchash = hash_by_src(net,
+ &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple);
+ spin_lock_bh(&nf_nat_lock);
+ /* nf_conntrack_alter_reply might re-allocate extension area */
+ nat = nfct_nat(ct);
+ hlist_add_head_rcu(&ct->nat_bysource,
+ &nf_nat_bysource[srchash]);
+ spin_unlock_bh(&nf_nat_lock);
}
/* It's done. */
@@ -550,10 +522,6 @@ struct nf_nat_proto_clean {
static int nf_nat_proto_remove(struct nf_conn *i, void *data)
{
const struct nf_nat_proto_clean *clean = data;
- struct nf_conn_nat *nat = nfct_nat(i);
-
- if (!nat)
- return 0;
if ((clean->l3proto && nf_ct_l3num(i) != clean->l3proto) ||
(clean->l4proto && nf_ct_protonum(i) != clean->l4proto))
@@ -564,12 +532,10 @@ static int nf_nat_proto_remove(struct nf_conn *i, void *data)
static int nf_nat_proto_clean(struct nf_conn *ct, void *data)
{
- struct nf_conn_nat *nat = nfct_nat(ct);
-
if (nf_nat_proto_remove(ct, data))
return 1;
- if (!nat)
+ if ((ct->status & IPS_SRC_NAT_DONE) == 0)
return 0;
/* This netns is being destroyed, and conntrack has nat null binding.
@@ -578,9 +544,10 @@ static int nf_nat_proto_clean(struct nf_conn *ct, void *data)
* Else, when the conntrack is destoyed, nf_nat_cleanup_conntrack()
* will delete entry from already-freed table.
*/
+ spin_lock_bh(&nf_nat_lock);
+ hlist_del_rcu(&ct->nat_bysource);
ct->status &= ~IPS_NAT_DONE_MASK;
- rhltable_remove(&nf_nat_bysource_table, &ct->nat_bysource,
- nf_nat_bysource_params);
+ spin_unlock_bh(&nf_nat_lock);
/* don't delete conntrack. Although that would make things a lot
* simpler, we'd end up flushing all conntracks on nat rmmod.
@@ -705,13 +672,11 @@ EXPORT_SYMBOL_GPL(nf_nat_l3proto_unregister);
/* No one using conntrack by the time this called. */
static void nf_nat_cleanup_conntrack(struct nf_conn *ct)
{
- struct nf_conn_nat *nat = nf_ct_ext_find(ct, NF_CT_EXT_NAT);
-
- if (!nat)
- return;
-
- rhltable_remove(&nf_nat_bysource_table, &ct->nat_bysource,
- nf_nat_bysource_params);
+ if (ct->status & IPS_SRC_NAT_DONE) {
+ spin_lock_bh(&nf_nat_lock);
+ hlist_del_rcu(&ct->nat_bysource);
+ spin_unlock_bh(&nf_nat_lock);
+ }
}
static struct nf_ct_ext_type nat_extend __read_mostly = {
@@ -846,13 +811,16 @@ static int __init nf_nat_init(void)
{
int ret;
- ret = rhltable_init(&nf_nat_bysource_table, &nf_nat_bysource_params);
- if (ret)
- return ret;
+ /* Leave them the same for the moment. */
+ nf_nat_htable_size = nf_conntrack_htable_size;
+
+ nf_nat_bysource = nf_ct_alloc_hashtable(&nf_nat_htable_size, 0);
+ if (!nf_nat_bysource)
+ return -ENOMEM;
ret = nf_ct_extend_register(&nat_extend);
if (ret < 0) {
- rhltable_destroy(&nf_nat_bysource_table);
+ nf_ct_free_hashtable(nf_nat_bysource, nf_nat_htable_size);
printk(KERN_ERR "nf_nat_core: Unable to register extension\n");
return ret;
}
@@ -876,7 +844,7 @@ static int __init nf_nat_init(void)
return 0;
cleanup_extend:
- rhltable_destroy(&nf_nat_bysource_table);
+ nf_ct_free_hashtable(nf_nat_bysource, nf_nat_htable_size);
nf_ct_extend_unregister(&nat_extend);
return ret;
}
@@ -896,8 +864,8 @@ static void __exit nf_nat_cleanup(void)
for (i = 0; i < NFPROTO_NUMPROTO; i++)
kfree(nf_nat_l4protos[i]);
-
- rhltable_destroy(&nf_nat_bysource_table);
+ synchronize_net();
+ nf_ct_free_hashtable(nf_nat_bysource, nf_nat_htable_size);
}
MODULE_LICENSE("GPL");
diff --git a/net/netfilter/nft_meta.c b/net/netfilter/nft_meta.c
index 6c1e024..7c33955 100644
--- a/net/netfilter/nft_meta.c
+++ b/net/netfilter/nft_meta.c
@@ -159,8 +159,34 @@ void nft_meta_get_eval(const struct nft_expr *expr,
else
*dest = PACKET_BROADCAST;
break;
+ case NFPROTO_NETDEV:
+ switch (skb->protocol) {
+ case htons(ETH_P_IP): {
+ int noff = skb_network_offset(skb);
+ struct iphdr *iph, _iph;
+
+ iph = skb_header_pointer(skb, noff,
+ sizeof(_iph), &_iph);
+ if (!iph)
+ goto err;
+
+ if (ipv4_is_multicast(iph->daddr))
+ *dest = PACKET_MULTICAST;
+ else
+ *dest = PACKET_BROADCAST;
+
+ break;
+ }
+ case htons(ETH_P_IPV6):
+ *dest = PACKET_MULTICAST;
+ break;
+ default:
+ WARN_ON_ONCE(1);
+ goto err;
+ }
+ break;
default:
- WARN_ON(1);
+ WARN_ON_ONCE(1);
goto err;
}
break;
diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
index 2a5775f..c9fac08 100644
--- a/net/netlink/af_netlink.c
+++ b/net/netlink/af_netlink.c
@@ -2077,7 +2077,7 @@ static int netlink_dump(struct sock *sk)
struct sk_buff *skb = NULL;
struct nlmsghdr *nlh;
struct module *module;
- int len, err = -ENOBUFS;
+ int err = -ENOBUFS;
int alloc_min_size;
int alloc_size;
@@ -2124,9 +2124,11 @@ static int netlink_dump(struct sock *sk)
skb_reserve(skb, skb_tailroom(skb) - alloc_size);
netlink_skb_set_owner_r(skb, sk);
- len = cb->dump(skb, cb);
+ if (nlk->dump_done_errno > 0)
+ nlk->dump_done_errno = cb->dump(skb, cb);
- if (len > 0) {
+ if (nlk->dump_done_errno > 0 ||
+ skb_tailroom(skb) < nlmsg_total_size(sizeof(nlk->dump_done_errno))) {
mutex_unlock(nlk->cb_mutex);
if (sk_filter(sk, skb))
@@ -2136,13 +2138,15 @@ static int netlink_dump(struct sock *sk)
return 0;
}
- nlh = nlmsg_put_answer(skb, cb, NLMSG_DONE, sizeof(len), NLM_F_MULTI);
- if (!nlh)
+ nlh = nlmsg_put_answer(skb, cb, NLMSG_DONE,
+ sizeof(nlk->dump_done_errno), NLM_F_MULTI);
+ if (WARN_ON(!nlh))
goto errout_skb;
nl_dump_check_consistent(cb, nlh);
- memcpy(nlmsg_data(nlh), &len, sizeof(len));
+ memcpy(nlmsg_data(nlh), &nlk->dump_done_errno,
+ sizeof(nlk->dump_done_errno));
if (sk_filter(sk, skb))
kfree_skb(skb);
@@ -2207,16 +2211,18 @@ int __netlink_dump_start(struct sock *ssk, struct sk_buff *skb,
cb->min_dump_alloc = control->min_dump_alloc;
cb->skb = skb;
+ if (cb->start) {
+ ret = cb->start(cb);
+ if (ret)
+ goto error_unlock;
+ }
+
nlk->cb_running = true;
+ nlk->dump_done_errno = INT_MAX;
mutex_unlock(nlk->cb_mutex);
- ret = 0;
- if (cb->start)
- ret = cb->start(cb);
-
- if (!ret)
- ret = netlink_dump(sk);
+ ret = netlink_dump(sk);
sock_put(sk);
diff --git a/net/netlink/af_netlink.h b/net/netlink/af_netlink.h
index 4fdb383..bae961c 100644
--- a/net/netlink/af_netlink.h
+++ b/net/netlink/af_netlink.h
@@ -24,6 +24,7 @@ struct netlink_sock {
wait_queue_head_t wait;
bool bound;
bool cb_running;
+ int dump_done_errno;
struct netlink_callback cb;
struct mutex *cb_mutex;
struct mutex cb_def_mutex;
diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
index b17f909..e7f6657 100644
--- a/net/packet/af_packet.c
+++ b/net/packet/af_packet.c
@@ -1720,7 +1720,7 @@ static int fanout_add(struct sock *sk, u16 id, u16 type_flags)
out:
if (err && rollover) {
- kfree(rollover);
+ kfree_rcu(rollover, rcu);
po->rollover = NULL;
}
mutex_unlock(&fanout_mutex);
@@ -1747,8 +1747,10 @@ static struct packet_fanout *fanout_release(struct sock *sk)
else
f = NULL;
- if (po->rollover)
+ if (po->rollover) {
kfree_rcu(po->rollover, rcu);
+ po->rollover = NULL;
+ }
}
mutex_unlock(&fanout_mutex);
@@ -3851,6 +3853,7 @@ static int packet_getsockopt(struct socket *sock, int level, int optname,
void *data = &val;
union tpacket_stats_u st;
struct tpacket_rollover_stats rstats;
+ struct packet_rollover *rollover;
if (level != SOL_PACKET)
return -ENOPROTOOPT;
@@ -3929,13 +3932,18 @@ static int packet_getsockopt(struct socket *sock, int level, int optname,
0);
break;
case PACKET_ROLLOVER_STATS:
- if (!po->rollover)
+ rcu_read_lock();
+ rollover = rcu_dereference(po->rollover);
+ if (rollover) {
+ rstats.tp_all = atomic_long_read(&rollover->num);
+ rstats.tp_huge = atomic_long_read(&rollover->num_huge);
+ rstats.tp_failed = atomic_long_read(&rollover->num_failed);
+ data = &rstats;
+ lv = sizeof(rstats);
+ }
+ rcu_read_unlock();
+ if (!rollover)
return -EINVAL;
- rstats.tp_all = atomic_long_read(&po->rollover->num);
- rstats.tp_huge = atomic_long_read(&po->rollover->num_huge);
- rstats.tp_failed = atomic_long_read(&po->rollover->num_failed);
- data = &rstats;
- lv = sizeof(rstats);
break;
case PACKET_TX_HAS_OFF:
val = po->tp_tx_has_off;
diff --git a/net/rmnet_data/rmnet_data_handlers.c b/net/rmnet_data/rmnet_data_handlers.c
index e38b08d..a5b22c4 100644
--- a/net/rmnet_data/rmnet_data_handlers.c
+++ b/net/rmnet_data/rmnet_data_handlers.c
@@ -573,7 +573,8 @@ static int rmnet_map_egress_handler(struct sk_buff *skb,
skb_is_nonlinear(skb);
if ((!(config->egress_data_format &
- RMNET_EGRESS_FORMAT_AGGREGATION)) || non_linear_skb)
+ RMNET_EGRESS_FORMAT_AGGREGATION)) || csum_required ||
+ non_linear_skb)
map_header = rmnet_map_add_map_header
(skb, additional_header_length, RMNET_MAP_NO_PAD_BYTES);
else
@@ -595,11 +596,14 @@ static int rmnet_map_egress_handler(struct sk_buff *skb,
skb->protocol = htons(ETH_P_MAP);
- if ((config->egress_data_format & RMNET_EGRESS_FORMAT_AGGREGATION) &&
- !non_linear_skb) {
+ if (config->egress_data_format & RMNET_EGRESS_FORMAT_AGGREGATION) {
if (rmnet_ul_aggregation_skip(skb, required_headroom))
return RMNET_MAP_SUCCESS;
+ if (non_linear_skb)
+ if (unlikely(__skb_linearize(skb)))
+ return RMNET_MAP_SUCCESS;
+
rmnet_map_aggregate(skb, config);
return RMNET_MAP_CONSUMED;
}
diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
index a74d32e..8b1dfe6 100644
--- a/net/sched/sch_api.c
+++ b/net/sched/sch_api.c
@@ -296,6 +296,8 @@ struct Qdisc *qdisc_lookup(struct net_device *dev, u32 handle)
{
struct Qdisc *q;
+ if (!handle)
+ return NULL;
q = qdisc_match_from_root(dev->qdisc, handle);
if (q)
goto out;
diff --git a/net/sctp/input.c b/net/sctp/input.c
index 6c79915..68b84d3 100644
--- a/net/sctp/input.c
+++ b/net/sctp/input.c
@@ -421,7 +421,7 @@ void sctp_icmp_redirect(struct sock *sk, struct sctp_transport *t,
{
struct dst_entry *dst;
- if (!t)
+ if (sock_owned_by_user(sk) || !t)
return;
dst = sctp_transport_dst_check(t);
if (dst)
diff --git a/net/sctp/ipv6.c b/net/sctp/ipv6.c
index ca4a63e..5d01527 100644
--- a/net/sctp/ipv6.c
+++ b/net/sctp/ipv6.c
@@ -806,9 +806,10 @@ static void sctp_inet6_skb_msgname(struct sk_buff *skb, char *msgname,
addr->v6.sin6_flowinfo = 0;
addr->v6.sin6_port = sh->source;
addr->v6.sin6_addr = ipv6_hdr(skb)->saddr;
- if (ipv6_addr_type(&addr->v6.sin6_addr) & IPV6_ADDR_LINKLOCAL) {
+ if (ipv6_addr_type(&addr->v6.sin6_addr) & IPV6_ADDR_LINKLOCAL)
addr->v6.sin6_scope_id = sctp_v6_skb_iif(skb);
- }
+ else
+ addr->v6.sin6_scope_id = 0;
}
*addr_len = sctp_v6_addr_to_user(sctp_sk(skb->sk), addr);
@@ -881,8 +882,10 @@ static int sctp_inet6_bind_verify(struct sctp_sock *opt, union sctp_addr *addr)
net = sock_net(&opt->inet.sk);
rcu_read_lock();
dev = dev_get_by_index_rcu(net, addr->v6.sin6_scope_id);
- if (!dev ||
- !ipv6_chk_addr(net, &addr->v6.sin6_addr, dev, 0)) {
+ if (!dev || !(opt->inet.freebind ||
+ net->ipv6.sysctl.ip_nonlocal_bind ||
+ ipv6_chk_addr(net, &addr->v6.sin6_addr,
+ dev, 0))) {
rcu_read_unlock();
return 0;
}
diff --git a/net/sctp/socket.c b/net/sctp/socket.c
index 3ef7252..c062cea 100644
--- a/net/sctp/socket.c
+++ b/net/sctp/socket.c
@@ -168,6 +168,36 @@ static inline void sctp_set_owner_w(struct sctp_chunk *chunk)
sk_mem_charge(sk, chunk->skb->truesize);
}
+static void sctp_clear_owner_w(struct sctp_chunk *chunk)
+{
+ skb_orphan(chunk->skb);
+}
+
+static void sctp_for_each_tx_datachunk(struct sctp_association *asoc,
+ void (*cb)(struct sctp_chunk *))
+
+{
+ struct sctp_outq *q = &asoc->outqueue;
+ struct sctp_transport *t;
+ struct sctp_chunk *chunk;
+
+ list_for_each_entry(t, &asoc->peer.transport_addr_list, transports)
+ list_for_each_entry(chunk, &t->transmitted, transmitted_list)
+ cb(chunk);
+
+ list_for_each_entry(chunk, &q->retransmit, list)
+ cb(chunk);
+
+ list_for_each_entry(chunk, &q->sacked, list)
+ cb(chunk);
+
+ list_for_each_entry(chunk, &q->abandoned, list)
+ cb(chunk);
+
+ list_for_each_entry(chunk, &q->out_chunk_list, list)
+ cb(chunk);
+}
+
/* Verify that this is a valid address. */
static inline int sctp_verify_addr(struct sock *sk, union sctp_addr *addr,
int len)
@@ -4734,6 +4764,10 @@ int sctp_do_peeloff(struct sock *sk, sctp_assoc_t id, struct socket **sockp)
struct socket *sock;
int err = 0;
+ /* Do not peel off from one netns to another one. */
+ if (!net_eq(current->nsproxy->net_ns, sock_net(sk)))
+ return -EINVAL;
+
if (!asoc)
return -EINVAL;
@@ -7826,7 +7860,9 @@ static void sctp_sock_migrate(struct sock *oldsk, struct sock *newsk,
* paths won't try to lock it and then oldsk.
*/
lock_sock_nested(newsk, SINGLE_DEPTH_NESTING);
+ sctp_for_each_tx_datachunk(assoc, sctp_clear_owner_w);
sctp_assoc_migrate(assoc, newsk);
+ sctp_for_each_tx_datachunk(assoc, sctp_set_owner_w);
/* If the association on the newsk is already closed before accept()
* is called, set RCV_SHUTDOWN flag.
diff --git a/net/unix/diag.c b/net/unix/diag.c
index 4d96797..384c84e 100644
--- a/net/unix/diag.c
+++ b/net/unix/diag.c
@@ -257,6 +257,8 @@ static int unix_diag_get_exact(struct sk_buff *in_skb,
err = -ENOENT;
if (sk == NULL)
goto out_nosk;
+ if (!net_eq(sock_net(sk), net))
+ goto out;
err = sock_diag_check_cookie(sk, req->udiag_cookie);
if (err)
diff --git a/net/wireless/db.txt b/net/wireless/db.txt
index f323faf..ff9887f 100644
--- a/net/wireless/db.txt
+++ b/net/wireless/db.txt
@@ -1477,9 +1477,9 @@
country VN: DFS-FCC
(2402 - 2482 @ 40), (20)
- (5170 - 5250 @ 80), (24), AUTO-BW
- (5250 - 5330 @ 80), (24), DFS, AUTO-BW
- (5490 - 5730 @ 160), (24), DFS
+ (5170 - 5250 @ 80), (24)
+ (5250 - 5330 @ 80), (24), DFS
+ (5490 - 5730 @ 80), (24), DFS
(5735 - 5835 @ 80), (30)
# 60 gHz band channels 1-4
(57240 - 65880 @ 2160), (40)
diff --git a/net/wireless/util.c b/net/wireless/util.c
index a4ab20a..29c5661 100644
--- a/net/wireless/util.c
+++ b/net/wireless/util.c
@@ -532,123 +532,6 @@ int ieee80211_data_to_8023_exthdr(struct sk_buff *skb, struct ethhdr *ehdr,
}
EXPORT_SYMBOL(ieee80211_data_to_8023_exthdr);
-int ieee80211_data_from_8023(struct sk_buff *skb, const u8 *addr,
- enum nl80211_iftype iftype,
- const u8 *bssid, bool qos)
-{
- struct ieee80211_hdr hdr;
- u16 hdrlen, ethertype;
- __le16 fc;
- const u8 *encaps_data;
- int encaps_len, skip_header_bytes;
- int nh_pos, h_pos;
- int head_need;
-
- if (unlikely(skb->len < ETH_HLEN))
- return -EINVAL;
-
- nh_pos = skb_network_header(skb) - skb->data;
- h_pos = skb_transport_header(skb) - skb->data;
-
- /* convert Ethernet header to proper 802.11 header (based on
- * operation mode) */
- ethertype = (skb->data[12] << 8) | skb->data[13];
- fc = cpu_to_le16(IEEE80211_FTYPE_DATA | IEEE80211_STYPE_DATA);
-
- switch (iftype) {
- case NL80211_IFTYPE_AP:
- case NL80211_IFTYPE_AP_VLAN:
- case NL80211_IFTYPE_P2P_GO:
- fc |= cpu_to_le16(IEEE80211_FCTL_FROMDS);
- /* DA BSSID SA */
- memcpy(hdr.addr1, skb->data, ETH_ALEN);
- memcpy(hdr.addr2, addr, ETH_ALEN);
- memcpy(hdr.addr3, skb->data + ETH_ALEN, ETH_ALEN);
- hdrlen = 24;
- break;
- case NL80211_IFTYPE_STATION:
- case NL80211_IFTYPE_P2P_CLIENT:
- fc |= cpu_to_le16(IEEE80211_FCTL_TODS);
- /* BSSID SA DA */
- memcpy(hdr.addr1, bssid, ETH_ALEN);
- memcpy(hdr.addr2, skb->data + ETH_ALEN, ETH_ALEN);
- memcpy(hdr.addr3, skb->data, ETH_ALEN);
- hdrlen = 24;
- break;
- case NL80211_IFTYPE_OCB:
- case NL80211_IFTYPE_ADHOC:
- /* DA SA BSSID */
- memcpy(hdr.addr1, skb->data, ETH_ALEN);
- memcpy(hdr.addr2, skb->data + ETH_ALEN, ETH_ALEN);
- memcpy(hdr.addr3, bssid, ETH_ALEN);
- hdrlen = 24;
- break;
- default:
- return -EOPNOTSUPP;
- }
-
- if (qos) {
- fc |= cpu_to_le16(IEEE80211_STYPE_QOS_DATA);
- hdrlen += 2;
- }
-
- hdr.frame_control = fc;
- hdr.duration_id = 0;
- hdr.seq_ctrl = 0;
-
- skip_header_bytes = ETH_HLEN;
- if (ethertype == ETH_P_AARP || ethertype == ETH_P_IPX) {
- encaps_data = bridge_tunnel_header;
- encaps_len = sizeof(bridge_tunnel_header);
- skip_header_bytes -= 2;
- } else if (ethertype >= ETH_P_802_3_MIN) {
- encaps_data = rfc1042_header;
- encaps_len = sizeof(rfc1042_header);
- skip_header_bytes -= 2;
- } else {
- encaps_data = NULL;
- encaps_len = 0;
- }
-
- skb_pull(skb, skip_header_bytes);
- nh_pos -= skip_header_bytes;
- h_pos -= skip_header_bytes;
-
- head_need = hdrlen + encaps_len - skb_headroom(skb);
-
- if (head_need > 0 || skb_cloned(skb)) {
- head_need = max(head_need, 0);
- if (head_need)
- skb_orphan(skb);
-
- if (pskb_expand_head(skb, head_need, 0, GFP_ATOMIC))
- return -ENOMEM;
-
- skb->truesize += head_need;
- }
-
- if (encaps_data) {
- memcpy(skb_push(skb, encaps_len), encaps_data, encaps_len);
- nh_pos += encaps_len;
- h_pos += encaps_len;
- }
-
- memcpy(skb_push(skb, hdrlen), &hdr, hdrlen);
-
- nh_pos += hdrlen;
- h_pos += hdrlen;
-
- /* Update skb pointers to various headers since this modified frame
- * is going to go through Linux networking code that may potentially
- * need things like pointer to IP header. */
- skb_reset_mac_header(skb);
- skb_set_network_header(skb, nh_pos);
- skb_set_transport_header(skb, h_pos);
-
- return 0;
-}
-EXPORT_SYMBOL(ieee80211_data_from_8023);
-
static void
__frame_add_frag(struct sk_buff *skb, struct page *page,
void *ptr, int len, int size)
diff --git a/samples/trace_events/trace-events-sample.c b/samples/trace_events/trace-events-sample.c
index 880a7d1..4ccff66 100644
--- a/samples/trace_events/trace-events-sample.c
+++ b/samples/trace_events/trace-events-sample.c
@@ -78,28 +78,36 @@ static int simple_thread_fn(void *arg)
}
static DEFINE_MUTEX(thread_mutex);
+static int simple_thread_cnt;
void foo_bar_reg(void)
{
+ mutex_lock(&thread_mutex);
+ if (simple_thread_cnt++)
+ goto out;
+
pr_info("Starting thread for foo_bar_fn\n");
/*
* We shouldn't be able to start a trace when the module is
* unloading (there's other locks to prevent that). But
* for consistency sake, we still take the thread_mutex.
*/
- mutex_lock(&thread_mutex);
simple_tsk_fn = kthread_run(simple_thread_fn, NULL, "event-sample-fn");
+ out:
mutex_unlock(&thread_mutex);
}
void foo_bar_unreg(void)
{
- pr_info("Killing thread for foo_bar_fn\n");
- /* protect against module unloading */
mutex_lock(&thread_mutex);
+ if (--simple_thread_cnt)
+ goto out;
+
+ pr_info("Killing thread for foo_bar_fn\n");
if (simple_tsk_fn)
kthread_stop(simple_tsk_fn);
simple_tsk_fn = NULL;
+ out:
mutex_unlock(&thread_mutex);
}
diff --git a/scripts/Makefile.dtbo b/scripts/Makefile.dtbo
index b298f4a..d7938c3 100644
--- a/scripts/Makefile.dtbo
+++ b/scripts/Makefile.dtbo
@@ -10,7 +10,12 @@
ifneq ($(DTC_OVERLAY_TEST_EXT),)
DTC_OVERLAY_TEST = $(DTC_OVERLAY_TEST_EXT)
quiet_cmd_dtbo_verify = VERIFY $@
-cmd_dtbo_verify = $(DTC_OVERLAY_TEST) $(addprefix $(obj)/,$($(@F)-base)) $@ $(dot-target).tmp
+cmd_dtbo_verify = $(foreach m,\
+ $(addprefix $(obj)/,$($(@F)-base)),\
+ $(if $(m),\
+ $(DTC_OVERLAY_TEST) $(m) $@ \
+ $(dot-target).$(patsubst $(obj)/%.dtb,%,$(m)).tmp;))\
+ true
else
cmd_dtbo_verify = true
endif
diff --git a/security/Kconfig b/security/Kconfig
index 5693989..4415de2 100644
--- a/security/Kconfig
+++ b/security/Kconfig
@@ -6,6 +6,11 @@
source security/keys/Kconfig
+if ARCH_QCOM
+source security/pfe/Kconfig
+endif
+
+
config SECURITY_DMESG_RESTRICT
bool "Restrict unprivileged access to the kernel syslog"
default n
diff --git a/security/Makefile b/security/Makefile
index f2d71cd..79166ba 100644
--- a/security/Makefile
+++ b/security/Makefile
@@ -9,6 +9,7 @@
subdir-$(CONFIG_SECURITY_APPARMOR) += apparmor
subdir-$(CONFIG_SECURITY_YAMA) += yama
subdir-$(CONFIG_SECURITY_LOADPIN) += loadpin
+subdir-$(CONFIG_ARCH_QCOM) += pfe
# always enable default capabilities
obj-y += commoncap.o
@@ -24,6 +25,7 @@
obj-$(CONFIG_SECURITY_APPARMOR) += apparmor/
obj-$(CONFIG_SECURITY_YAMA) += yama/
obj-$(CONFIG_SECURITY_LOADPIN) += loadpin/
+obj-$(CONFIG_ARCH_QCOM) += pfe/
obj-$(CONFIG_CGROUP_DEVICE) += device_cgroup.o
# Object integrity file lists
diff --git a/security/apparmor/lsm.c b/security/apparmor/lsm.c
index 57bc405..935752c 100644
--- a/security/apparmor/lsm.c
+++ b/security/apparmor/lsm.c
@@ -671,9 +671,9 @@ enum profile_mode aa_g_profile_mode = APPARMOR_ENFORCE;
module_param_call(mode, param_set_mode, param_get_mode,
&aa_g_profile_mode, S_IRUSR | S_IWUSR);
-#ifdef CONFIG_SECURITY_APPARMOR_HASH
/* whether policy verification hashing is enabled */
bool aa_g_hash_policy = IS_ENABLED(CONFIG_SECURITY_APPARMOR_HASH_DEFAULT);
+#ifdef CONFIG_SECURITY_APPARMOR_HASH
module_param_named(hash_policy, aa_g_hash_policy, aabool, S_IRUSR | S_IWUSR);
#endif
diff --git a/security/integrity/ima/ima_appraise.c b/security/integrity/ima/ima_appraise.c
index 0974598..6830d24 100644
--- a/security/integrity/ima/ima_appraise.c
+++ b/security/integrity/ima/ima_appraise.c
@@ -303,6 +303,9 @@ void ima_update_xattr(struct integrity_iint_cache *iint, struct file *file)
if (iint->flags & IMA_DIGSIG)
return;
+ if (iint->ima_file_status != INTEGRITY_PASS)
+ return;
+
rc = ima_collect_measurement(iint, file, NULL, 0, ima_hash_algo);
if (rc < 0)
return;
diff --git a/security/keys/Kconfig b/security/keys/Kconfig
index e0a3978..0832f63 100644
--- a/security/keys/Kconfig
+++ b/security/keys/Kconfig
@@ -20,6 +20,10 @@
If you are unsure as to whether this is required, answer N.
+config KEYS_COMPAT
+ def_bool y
+ depends on COMPAT && KEYS
+
config PERSISTENT_KEYRINGS
bool "Enable register of persistent per-UID keyrings"
depends on KEYS
diff --git a/security/keys/keyring.c b/security/keys/keyring.c
index 32969f6..4e9b4d2 100644
--- a/security/keys/keyring.c
+++ b/security/keys/keyring.c
@@ -452,34 +452,33 @@ static long keyring_read(const struct key *keyring,
char __user *buffer, size_t buflen)
{
struct keyring_read_iterator_context ctx;
- unsigned long nr_keys;
- int ret;
+ long ret;
kenter("{%d},,%zu", key_serial(keyring), buflen);
if (buflen & (sizeof(key_serial_t) - 1))
return -EINVAL;
- nr_keys = keyring->keys.nr_leaves_on_tree;
- if (nr_keys == 0)
- return 0;
-
- /* Calculate how much data we could return */
- if (!buffer || !buflen)
- return nr_keys * sizeof(key_serial_t);
-
- /* Copy the IDs of the subscribed keys into the buffer */
- ctx.buffer = (key_serial_t __user *)buffer;
- ctx.buflen = buflen;
- ctx.count = 0;
- ret = assoc_array_iterate(&keyring->keys, keyring_read_iterator, &ctx);
- if (ret < 0) {
- kleave(" = %d [iterate]", ret);
- return ret;
+ /* Copy as many key IDs as fit into the buffer */
+ if (buffer && buflen) {
+ ctx.buffer = (key_serial_t __user *)buffer;
+ ctx.buflen = buflen;
+ ctx.count = 0;
+ ret = assoc_array_iterate(&keyring->keys,
+ keyring_read_iterator, &ctx);
+ if (ret < 0) {
+ kleave(" = %ld [iterate]", ret);
+ return ret;
+ }
}
- kleave(" = %zu [ok]", ctx.count);
- return ctx.count;
+ /* Return the size of the buffer needed */
+ ret = keyring->keys.nr_leaves_on_tree * sizeof(key_serial_t);
+ if (ret <= buflen)
+ kleave("= %ld [ok]", ret);
+ else
+ kleave("= %ld [buffer too small]", ret);
+ return ret;
}
/*
diff --git a/security/keys/trusted.c b/security/keys/trusted.c
index f4db42e..4ba2f6b 100644
--- a/security/keys/trusted.c
+++ b/security/keys/trusted.c
@@ -70,7 +70,7 @@ static int TSS_sha1(const unsigned char *data, unsigned int datalen,
}
ret = crypto_shash_digest(&sdesc->shash, data, datalen, digest);
- kfree(sdesc);
+ kzfree(sdesc);
return ret;
}
@@ -114,7 +114,7 @@ static int TSS_rawhmac(unsigned char *digest, const unsigned char *key,
if (!ret)
ret = crypto_shash_final(&sdesc->shash, digest);
out:
- kfree(sdesc);
+ kzfree(sdesc);
return ret;
}
@@ -165,7 +165,7 @@ static int TSS_authhmac(unsigned char *digest, const unsigned char *key,
paramdigest, TPM_NONCE_SIZE, h1,
TPM_NONCE_SIZE, h2, 1, &c, 0, 0);
out:
- kfree(sdesc);
+ kzfree(sdesc);
return ret;
}
@@ -246,7 +246,7 @@ static int TSS_checkhmac1(unsigned char *buffer,
if (memcmp(testhmac, authdata, SHA1_DIGEST_SIZE))
ret = -EINVAL;
out:
- kfree(sdesc);
+ kzfree(sdesc);
return ret;
}
@@ -347,7 +347,7 @@ static int TSS_checkhmac2(unsigned char *buffer,
if (memcmp(testhmac2, authdata2, SHA1_DIGEST_SIZE))
ret = -EINVAL;
out:
- kfree(sdesc);
+ kzfree(sdesc);
return ret;
}
@@ -564,7 +564,7 @@ static int tpm_seal(struct tpm_buf *tb, uint16_t keytype,
*bloblen = storedsize;
}
out:
- kfree(td);
+ kzfree(td);
return ret;
}
@@ -678,7 +678,7 @@ static int key_seal(struct trusted_key_payload *p,
if (ret < 0)
pr_info("trusted_key: srkseal failed (%d)\n", ret);
- kfree(tb);
+ kzfree(tb);
return ret;
}
@@ -703,7 +703,7 @@ static int key_unseal(struct trusted_key_payload *p,
/* pull migratable flag out of sealed key */
p->migratable = p->key[--p->key_len];
- kfree(tb);
+ kzfree(tb);
return ret;
}
@@ -1037,12 +1037,12 @@ static int trusted_instantiate(struct key *key,
if (!ret && options->pcrlock)
ret = pcrlock(options->pcrlock);
out:
- kfree(datablob);
- kfree(options);
+ kzfree(datablob);
+ kzfree(options);
if (!ret)
rcu_assign_keypointer(key, payload);
else
- kfree(payload);
+ kzfree(payload);
return ret;
}
@@ -1051,8 +1051,7 @@ static void trusted_rcu_free(struct rcu_head *rcu)
struct trusted_key_payload *p;
p = container_of(rcu, struct trusted_key_payload, rcu);
- memset(p->key, 0, p->key_len);
- kfree(p);
+ kzfree(p);
}
/*
@@ -1094,13 +1093,13 @@ static int trusted_update(struct key *key, struct key_preparsed_payload *prep)
ret = datablob_parse(datablob, new_p, new_o);
if (ret != Opt_update) {
ret = -EINVAL;
- kfree(new_p);
+ kzfree(new_p);
goto out;
}
if (!new_o->keyhandle) {
ret = -EINVAL;
- kfree(new_p);
+ kzfree(new_p);
goto out;
}
@@ -1114,22 +1113,22 @@ static int trusted_update(struct key *key, struct key_preparsed_payload *prep)
ret = key_seal(new_p, new_o);
if (ret < 0) {
pr_info("trusted_key: key_seal failed (%d)\n", ret);
- kfree(new_p);
+ kzfree(new_p);
goto out;
}
if (new_o->pcrlock) {
ret = pcrlock(new_o->pcrlock);
if (ret < 0) {
pr_info("trusted_key: pcrlock failed (%d)\n", ret);
- kfree(new_p);
+ kzfree(new_p);
goto out;
}
}
rcu_assign_keypointer(key, new_p);
call_rcu(&p->rcu, trusted_rcu_free);
out:
- kfree(datablob);
- kfree(new_o);
+ kzfree(datablob);
+ kzfree(new_o);
return ret;
}
@@ -1148,34 +1147,30 @@ static long trusted_read(const struct key *key, char __user *buffer,
p = rcu_dereference_key(key);
if (!p)
return -EINVAL;
- if (!buffer || buflen <= 0)
- return 2 * p->blob_len;
- ascii_buf = kmalloc(2 * p->blob_len, GFP_KERNEL);
- if (!ascii_buf)
- return -ENOMEM;
- bufp = ascii_buf;
- for (i = 0; i < p->blob_len; i++)
- bufp = hex_byte_pack(bufp, p->blob[i]);
- if ((copy_to_user(buffer, ascii_buf, 2 * p->blob_len)) != 0) {
- kfree(ascii_buf);
- return -EFAULT;
+ if (buffer && buflen >= 2 * p->blob_len) {
+ ascii_buf = kmalloc(2 * p->blob_len, GFP_KERNEL);
+ if (!ascii_buf)
+ return -ENOMEM;
+
+ bufp = ascii_buf;
+ for (i = 0; i < p->blob_len; i++)
+ bufp = hex_byte_pack(bufp, p->blob[i]);
+ if (copy_to_user(buffer, ascii_buf, 2 * p->blob_len) != 0) {
+ kzfree(ascii_buf);
+ return -EFAULT;
+ }
+ kzfree(ascii_buf);
}
- kfree(ascii_buf);
return 2 * p->blob_len;
}
/*
- * trusted_destroy - before freeing the key, clear the decrypted data
+ * trusted_destroy - clear and free the key's payload
*/
static void trusted_destroy(struct key *key)
{
- struct trusted_key_payload *p = key->payload.data[0];
-
- if (!p)
- return;
- memset(p->key, 0, p->key_len);
- kfree(key->payload.data[0]);
+ kzfree(key->payload.data[0]);
}
struct key_type key_type_trusted = {
diff --git a/security/pfe/Kconfig b/security/pfe/Kconfig
new file mode 100644
index 0000000..0cd9e81
--- /dev/null
+++ b/security/pfe/Kconfig
@@ -0,0 +1,28 @@
+menu "Qualcomm Technologies, Inc Per File Encryption security device drivers"
+ depends on ARCH_QCOM
+
+config PFT
+ bool "Per-File-Tagger driver"
+ depends on SECURITY
+ default n
+ help
+ This driver is used for tagging enterprise files.
+ It is part of the Per-File-Encryption (PFE) feature.
+ The driver tags files created by
+ registered applications.
+ Tagged files are encrypted using the dm-req-crypt driver.
+
+config PFK
+ bool "Per-File-Key driver"
+ depends on SECURITY
+ depends on SECURITY_SELINUX
+ default n
+ help
+ This driver is used for storing eCryptfs information
+ in the file node.
+ This is part of the eCryptfs hardware-enhanced solution
+ provided by Qualcomm Technologies, Inc.
+ The information is used when the file is later encrypted
+ using the ICE or dm crypto engine.
+
+endmenu
diff --git a/security/pfe/Makefile b/security/pfe/Makefile
new file mode 100644
index 0000000..242a216
--- /dev/null
+++ b/security/pfe/Makefile
@@ -0,0 +1,10 @@
+#
+# Makefile for the MSM specific security device drivers.
+#
+
+ccflags-y += -Isecurity/selinux -Isecurity/selinux/include
+ccflags-y += -Ifs/ext4
+ccflags-y += -Ifs/crypto
+
+obj-$(CONFIG_PFT) += pft.o
+obj-$(CONFIG_PFK) += pfk.o pfk_kc.o pfk_ice.o pfk_ext4.o
diff --git a/security/pfe/pfk.c b/security/pfe/pfk.c
new file mode 100644
index 0000000..615353e
--- /dev/null
+++ b/security/pfe/pfk.c
@@ -0,0 +1,483 @@
+/*
+ * Copyright (c) 2015-2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+/*
+ * Per-File-Key (PFK).
+ *
+ * This driver is responsible for overall management of various
+ * Per File Encryption variants that work on top of or as part of different
+ * file systems.
+ *
+ * The driver has the following purposes:
+ * 1) Define priorities between PFE's if more than one is enabled
+ * 2) Extract key information from inode
+ * 3) Load and manage various keys in ICE HW engine
+ * 4) It should be invoked from various layers in FS/BLOCK/STORAGE DRIVER
+ * that need to make decisions about HW encryption management of the data
+ * Some examples:
+ * BLOCK LAYER: when it decides whether two chunks can be merged
+ * into one encryption / decryption request sent to the HW
+ *
+ * UFS DRIVER: when it needs to configure ICE HW with a particular key slot
+ * to be used for encryption / decryption
+ *
+ * PFE variants can differ on particular way of storing the cryptographic info
+ * inside inode, actions to be taken upon file operations, etc., but the common
+ * properties are described above
+ *
+ */
+
+
+/* Uncomment the line below to enable debug messages */
+/* #define DEBUG 1 */
+#define pr_fmt(fmt) "pfk [%s]: " fmt, __func__
+
+#include <linux/module.h>
+#include <linux/fs.h>
+#include <linux/errno.h>
+#include <linux/printk.h>
+#include <linux/bio.h>
+#include <linux/security.h>
+#include <crypto/ice.h>
+
+#include <linux/pfk.h>
+
+#include "pfk_kc.h"
+#include "objsec.h"
+#include "pfk_ice.h"
+#include "pfk_ext4.h"
+#include "pfk_internal.h"
+#include "ext4.h"
+
+static bool pfk_ready;
+
+
+/* might be replaced by a table when more than one cipher is supported */
+#define PFK_SUPPORTED_KEY_SIZE 32
+#define PFK_SUPPORTED_SALT_SIZE 32
+
+/* Various PFE types and function tables to support each one of them */
+enum pfe_type {EXT4_CRYPT_PFE, INVALID_PFE};
+
+typedef int (*pfk_parse_inode_type)(const struct bio *bio,
+ const struct inode *inode,
+ struct pfk_key_info *key_info,
+ enum ice_cryto_algo_mode *algo,
+ bool *is_pfe);
+
+typedef bool (*pfk_allow_merge_bio_type)(const struct bio *bio1,
+ const struct bio *bio2, const struct inode *inode1,
+ const struct inode *inode2);
+
+static const pfk_parse_inode_type pfk_parse_inode_ftable[] = {
+ /* EXT4_CRYPT_PFE */ &pfk_ext4_parse_inode,
+};
+
+static const pfk_allow_merge_bio_type pfk_allow_merge_bio_ftable[] = {
+ /* EXT4_CRYPT_PFE */ &pfk_ext4_allow_merge_bio,
+};
+
+static void __exit pfk_exit(void)
+{
+ pfk_ready = false;
+ pfk_ext4_deinit();
+ pfk_kc_deinit();
+}
+
+static int __init pfk_init(void)
+{
+
+ int ret = 0;
+
+ ret = pfk_ext4_init();
+ if (ret != 0)
+ goto fail;
+
+ ret = pfk_kc_init();
+ if (ret != 0) {
+		pr_err("could not init pfk key cache, error %d\n", ret);
+ pfk_ext4_deinit();
+ goto fail;
+ }
+
+ pfk_ready = true;
+ pr_info("Driver initialized successfully\n");
+
+ return 0;
+
+fail:
+ pr_err("Failed to init driver\n");
+ return -ENODEV;
+}
+
+/*
+ * If more than one type is supported simultaneously, this function will also
+ * set the priority between them
+ */
+static enum pfe_type pfk_get_pfe_type(const struct inode *inode)
+{
+ if (!inode)
+ return INVALID_PFE;
+
+ if (pfk_is_ext4_type(inode))
+ return EXT4_CRYPT_PFE;
+
+ return INVALID_PFE;
+}
+
+/**
+ * inode_to_filename() - get the filename from inode pointer.
+ * @inode: inode pointer
+ *
+ * It is used for debug prints.
+ *
+ * Return: filename string or "unknown".
+ */
+char *inode_to_filename(const struct inode *inode)
+{
+ struct dentry *dentry = NULL;
+ char *filename = NULL;
+
+ if (hlist_empty(&inode->i_dentry))
+ return "unknown";
+
+ dentry = hlist_entry(inode->i_dentry.first, struct dentry, d_u.d_alias);
+ filename = dentry->d_iname;
+
+ return filename;
+}
+
+/**
+ * pfk_is_ready() - driver is initialized and ready.
+ *
+ * Return: true if the driver is ready.
+ */
+static inline bool pfk_is_ready(void)
+{
+ return pfk_ready;
+}
+
+/**
+ * pfk_bio_get_inode() - get the inode from a bio.
+ * @bio: Pointer to BIO structure.
+ *
+ * Walk the bio struct links to get the inode.
+ * Please note that in general a bio may consist of several pages from
+ * several files, but in our case we always assume that all pages come
+ * from the same file, since our logic ensures it. That is why we only
+ * walk through the first page to look for the inode.
+ *
+ * Return: pointer to the inode struct if successful, or NULL otherwise.
+ *
+ */
+static struct inode *pfk_bio_get_inode(const struct bio *bio)
+{
+ struct address_space *mapping;
+
+ if (!bio)
+ return NULL;
+ if (!bio->bi_io_vec)
+ return NULL;
+ if (!bio->bi_io_vec->bv_page)
+ return NULL;
+ if (!bio_has_data((struct bio *)bio))
+ return NULL;
+
+ if (PageAnon(bio->bi_io_vec->bv_page)) {
+ struct inode *inode;
+
+		/* Using direct-io (O_DIRECT) without page cache */
+ inode = dio_bio_get_inode((struct bio *)bio);
+ pr_debug("inode on direct-io, inode = 0x%pK.\n", inode);
+
+ return inode;
+ }
+
+ mapping = page_mapping(bio->bi_io_vec->bv_page);
+ if (!mapping)
+ return NULL;
+
+ if (!mapping->host)
+ return NULL;
+
+ return bio->bi_io_vec->bv_page->mapping->host;
+}
+
+/**
+ * pfk_key_size_to_key_type() - translate key size to key size enum
+ * @key_size: key size in bytes
+ * @key_size_type: pointer to store the output enum (can be null)
+ *
+ * Return: 0 in case of success, error otherwise (i.e. unsupported key size)
+ */
+int pfk_key_size_to_key_type(size_t key_size,
+ enum ice_crpto_key_size *key_size_type)
+{
+ /*
+	 * currently only a 32 byte (256 bit) key size is supported;
+ * in the future, table with supported key sizes might
+ * be introduced
+ */
+
+ if (key_size != PFK_SUPPORTED_KEY_SIZE) {
+ pr_err("not supported key size %zu\n", key_size);
+ return -EINVAL;
+ }
+
+ if (key_size_type)
+ *key_size_type = ICE_CRYPTO_KEY_SIZE_256;
+
+ return 0;
+}
+
+/*
+ * Retrieves filesystem type from inode's superblock
+ */
+bool pfe_is_inode_filesystem_type(const struct inode *inode,
+ const char *fs_type)
+{
+ if (!inode || !fs_type)
+ return false;
+
+ if (!inode->i_sb)
+ return false;
+
+ if (!inode->i_sb->s_type)
+ return false;
+
+ return (strcmp(inode->i_sb->s_type->name, fs_type) == 0);
+}
+
+
+/**
+ * pfk_load_key_start() - loads the PFE encryption key into the ICE.
+ *			  Can also be invoked from a non-PFE context;
+ *			  in that case it is not relevant and the
+ *			  is_pfe flag is set to false
+ *
+ * @bio: Pointer to the BIO structure
+ * @ice_setting: Pointer to ice setting structure that will be filled with
+ * ice configuration values, including the index to which the key was loaded
+ * @is_pfe: will be false if inode is not relevant to PFE, in such a case
+ * it should be treated as non PFE by the block layer
+ *
+ * Returns the index where the key is stored in encryption hw and additional
+ * information that will be used later for configuration of the encryption hw.
+ *
+ * Must be followed by pfk_load_key_end() when the key is no longer used by ICE
+ *
+ */
+int pfk_load_key_start(const struct bio *bio,
+ struct ice_crypto_setting *ice_setting, bool *is_pfe,
+ bool async)
+{
+ int ret = 0;
+ struct pfk_key_info key_info = {NULL, NULL, 0, 0};
+ enum ice_cryto_algo_mode algo_mode = ICE_CRYPTO_ALGO_MODE_AES_XTS;
+ enum ice_crpto_key_size key_size_type = 0;
+ u32 key_index = 0;
+ struct inode *inode = NULL;
+ enum pfe_type which_pfe = INVALID_PFE;
+
+ if (!is_pfe) {
+ pr_err("is_pfe is NULL\n");
+ return -EINVAL;
+ }
+
+ /*
+ * only a few errors below can indicate that
+ * this function was not invoked within PFE context,
+ * otherwise we will consider it PFE
+ */
+ *is_pfe = true;
+
+ if (!pfk_is_ready())
+ return -ENODEV;
+
+ if (!ice_setting) {
+ pr_err("ice setting is NULL\n");
+ return -EINVAL;
+ }
+ inode = pfk_bio_get_inode(bio);
+ if (!inode) {
+ *is_pfe = false;
+ return -EINVAL;
+ }
+ which_pfe = pfk_get_pfe_type(inode);
+ if (which_pfe == INVALID_PFE) {
+ *is_pfe = false;
+ return -EPERM;
+ }
+
+ pr_debug("parsing file %s with PFE %d\n",
+ inode_to_filename(inode), which_pfe);
+ ret = (*(pfk_parse_inode_ftable[which_pfe]))
+ (bio, inode, &key_info, &algo_mode, is_pfe);
+ if (ret != 0)
+ return ret;
+ ret = pfk_key_size_to_key_type(key_info.key_size, &key_size_type);
+ if (ret != 0)
+ return ret;
+ ret = pfk_kc_load_key_start(key_info.key, key_info.key_size,
+ key_info.salt, key_info.salt_size, &key_index, async);
+ if (ret) {
+ if (ret != -EBUSY && ret != -EAGAIN)
+ pr_err("start: could not load key into pfk key cache, error %d\n",
+ ret);
+
+ return ret;
+ }
+
+ ice_setting->key_size = key_size_type;
+ ice_setting->algo_mode = algo_mode;
+ /* hardcoded for now */
+ ice_setting->key_mode = ICE_CRYPTO_USE_LUT_SW_KEY;
+ ice_setting->key_index = key_index;
+
+ pr_debug("loaded key for file %s key_index %d\n",
+ inode_to_filename(inode), key_index);
+
+ return 0;
+}
+
+/**
+ * pfk_load_key_end() - marks the PFE key as no longer used by the ICE.
+ *			Can also be invoked from a non-PFE context;
+ *			in that case it is not relevant and the
+ *			is_pfe flag is set to false
+ *
+ * @bio: Pointer to the BIO structure
+ * @is_pfe: Pointer to is_pfe flag, which will be true if function was invoked
+ * from PFE context
+ */
+int pfk_load_key_end(const struct bio *bio, bool *is_pfe)
+{
+ int ret = 0;
+ struct pfk_key_info key_info = {0};
+ enum pfe_type which_pfe = INVALID_PFE;
+ struct inode *inode = NULL;
+
+ if (!is_pfe) {
+ pr_err("is_pfe is NULL\n");
+ return -EINVAL;
+ }
+
+ /* only a few errors below can indicate that
+ * this function was not invoked within PFE context,
+ * otherwise we will consider it PFE
+ */
+ *is_pfe = true;
+
+ if (!pfk_is_ready())
+ return -ENODEV;
+
+ inode = pfk_bio_get_inode(bio);
+ if (!inode) {
+ *is_pfe = false;
+ return -EINVAL;
+ }
+
+ which_pfe = pfk_get_pfe_type(inode);
+ if (which_pfe == INVALID_PFE) {
+ *is_pfe = false;
+ return -EPERM;
+ }
+
+ ret = (*(pfk_parse_inode_ftable[which_pfe]))
+ (bio, inode, &key_info, NULL, is_pfe);
+ if (ret != 0)
+ return ret;
+
+ pfk_kc_load_key_end(key_info.key, key_info.key_size,
+ key_info.salt, key_info.salt_size);
+
+ pr_debug("finished using key for file %s\n",
+ inode_to_filename(inode));
+
+ return 0;
+}
+
+/**
+ * pfk_allow_merge_bio() - Check if 2 BIOs can be merged.
+ * @bio1: Pointer to first BIO structure.
+ * @bio2: Pointer to second BIO structure.
+ *
+ * Prevent merging of BIOs from encrypted and non-encrypted
+ * files, or from files encrypted with different keys.
+ * Also prevent merging of non-encrypted and encrypted data from the
+ * same file (the ecryptfs header, if stored inside the file, should
+ * stay non-encrypted).
+ * This API is called by the file system block layer.
+ *
+ * Return: true if the BIOs allowed to be merged, false
+ * otherwise.
+ */
+bool pfk_allow_merge_bio(const struct bio *bio1, const struct bio *bio2)
+{
+ struct inode *inode1 = NULL;
+ struct inode *inode2 = NULL;
+ enum pfe_type which_pfe1 = INVALID_PFE;
+ enum pfe_type which_pfe2 = INVALID_PFE;
+
+ if (!pfk_is_ready())
+ return false;
+
+ if (!bio1 || !bio2)
+ return false;
+
+ if (bio1 == bio2)
+ return true;
+
+ inode1 = pfk_bio_get_inode(bio1);
+ inode2 = pfk_bio_get_inode(bio2);
+
+ which_pfe1 = pfk_get_pfe_type(inode1);
+ which_pfe2 = pfk_get_pfe_type(inode2);
+
+ /* nodes with different encryption, do not merge */
+ if (which_pfe1 != which_pfe2)
+ return false;
+
+ /* both nodes do not have encryption, allow merge */
+ if (which_pfe1 == INVALID_PFE)
+ return true;
+
+ return (*(pfk_allow_merge_bio_ftable[which_pfe1]))(bio1, bio2,
+ inode1, inode2);
+}
+/**
+ * Flush the key table on storage core reset. During a core reset the key
+ * configuration is lost in ICE. We need to flush the cache so that the
+ * keys will be reconfigured for every subsequent transaction
+ */
+void pfk_clear_on_reset(void)
+{
+ if (!pfk_is_ready())
+ return;
+
+ pfk_kc_clear_on_reset();
+}
+
+module_init(pfk_init);
+module_exit(pfk_exit);
+
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("Per-File-Key driver");
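The dispatch scheme pfk.c uses — an enum of PFE variants plus function tables indexed by that enum — can be sketched in plain userspace C. This is a toy model with illustrative names, not the kernel API; the real tables are pfk_parse_inode_ftable[] and pfk_allow_merge_bio_ftable[]:

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-ins for the PFE variants; only EXT4 exists in this patch,
 * but the table scales to more entries as new variants are added. */
enum pfe_type { EXT4_CRYPT_PFE, INVALID_PFE };

typedef int (*parse_inode_fn)(int inode_tag);

/* Toy parser standing in for pfk_ext4_parse_inode() */
static int toy_ext4_parse(int inode_tag)
{
	return inode_tag == 4 ? 0 : -1; /* pretend tag 4 means "ext4" */
}

/* Table indexed by the enum value, mirroring pfk_parse_inode_ftable[] */
static const parse_inode_fn parse_ftable[] = {
	/* EXT4_CRYPT_PFE */ toy_ext4_parse,
};

/* Callers first classify the inode to a pfe_type, then dispatch
 * through the table, exactly like pfk_load_key_start() does. */
static int dispatch_parse(enum pfe_type which, int inode_tag)
{
	if (which == INVALID_PFE)
		return -1;
	return parse_ftable[which](inode_tag);
}
```

The enum doubles as both a priority order (classification checks run in enum order) and a table index, which keeps adding a new PFE variant to a two-line change.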
diff --git a/security/pfe/pfk_ext4.c b/security/pfe/pfk_ext4.c
new file mode 100644
index 0000000..7ce70bc
--- /dev/null
+++ b/security/pfe/pfk_ext4.c
@@ -0,0 +1,212 @@
+/*
+ * Copyright (c) 2015-2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+/*
+ * Per-File-Key (PFK) - EXT4
+ *
+ * This driver is used for working with the EXT4 crypt extension.
+ *
+ * The key information is stored in the inode by EXT4 when the file is first
+ * opened and will later be accessed by the block device driver to actually
+ * load the key into the encryption hw.
+ *
+ * PFK exposes APIs for loading and removing keys from the encryption hw,
+ * and also an API to determine whether 2 adjacent blocks can be aggregated
+ * by the block layer in one request to the encryption hw.
+ *
+ */
+
+
+/* Uncomment the line below to enable debug messages */
+/* #define DEBUG 1 */
+#define pr_fmt(fmt) "pfk_ext4 [%s]: " fmt, __func__
+
+#include <linux/module.h>
+#include <linux/fs.h>
+#include <linux/errno.h>
+#include <linux/printk.h>
+
+#include "ext4_ice.h"
+#include "pfk_ext4.h"
+
+static bool pfk_ext4_ready;
+
+/*
+ * pfk_ext4_deinit() - Deinit function, should be invoked by upper PFK layer
+ */
+void pfk_ext4_deinit(void)
+{
+ pfk_ext4_ready = false;
+}
+
+/*
+ * pfk_ext4_init() - Init function, should be invoked by upper PFK layer
+ */
+int __init pfk_ext4_init(void)
+{
+ pfk_ext4_ready = true;
+	pr_info("PFK EXT4 initialized successfully\n");
+
+ return 0;
+}
+
+/**
+ * pfk_ext4_is_ready() - driver is initialized and ready.
+ *
+ * Return: true if the driver is ready.
+ */
+static inline bool pfk_ext4_is_ready(void)
+{
+ return pfk_ext4_ready;
+}
+
+/**
+ * pfk_ext4_dump_inode() - dumps all interesting info about inode to the screen
+ */
+/*
+ * static void pfk_ext4_dump_inode(const struct inode* inode)
+ * {
+ * struct ext4_crypt_info *ci = ext4_encryption_info((struct inode*)inode);
+ *
+ * pr_debug("dumping inode with address 0x%p\n", inode);
+ * pr_debug("S_ISREG is %d\n", S_ISREG(inode->i_mode));
+ * pr_debug("EXT4_INODE_ENCRYPT flag is %d\n",
+ * ext4_test_inode_flag((struct inode*)inode, EXT4_INODE_ENCRYPT));
+ * if (ci) {
+ * pr_debug("crypt_info address 0x%p\n", ci);
+ * pr_debug("ci->ci_data_mode %d\n", ci->ci_data_mode);
+ * } else {
+ * pr_debug("crypt_info is NULL\n");
+ * }
+ * }
+ */
+
+/**
+ * pfk_is_ext4_type() - return true if inode belongs to ICE EXT4 PFE
+ * @inode: inode pointer
+ */
+bool pfk_is_ext4_type(const struct inode *inode)
+{
+ if (!pfe_is_inode_filesystem_type(inode, "ext4"))
+ return false;
+
+ return ext4_should_be_processed_by_ice(inode);
+}
+
+/**
+ * pfk_ext4_parse_cipher() - parse cipher from inode to enum
+ * @inode: inode
+ * @algo: pointer to store the output enum (can be null)
+ *
+ * Return: 0 in case of success, error otherwise (i.e. unsupported cipher)
+ */
+static int pfk_ext4_parse_cipher(const struct inode *inode,
+ enum ice_cryto_algo_mode *algo)
+{
+ /*
+ * currently only AES XTS algo is supported
+ * in the future, table with supported ciphers might
+ * be introduced
+ */
+
+ if (!inode)
+ return -EINVAL;
+
+ if (!ext4_is_aes_xts_cipher(inode)) {
+		pr_err("ext4 algorithm is not supported by pfk\n");
+ return -EINVAL;
+ }
+
+ if (algo)
+ *algo = ICE_CRYPTO_ALGO_MODE_AES_XTS;
+
+ return 0;
+}
+
+
+int pfk_ext4_parse_inode(const struct bio *bio,
+ const struct inode *inode,
+ struct pfk_key_info *key_info,
+ enum ice_cryto_algo_mode *algo,
+ bool *is_pfe)
+{
+ int ret = 0;
+
+ if (!is_pfe)
+ return -EINVAL;
+
+ /*
+ * only a few errors below can indicate that
+ * this function was not invoked within PFE context,
+ * otherwise we will consider it PFE
+ */
+ *is_pfe = true;
+
+ if (!pfk_ext4_is_ready())
+ return -ENODEV;
+
+ if (!inode)
+ return -EINVAL;
+
+ if (!key_info)
+ return -EINVAL;
+
+ key_info->key = ext4_get_ice_encryption_key(inode);
+ if (!key_info->key) {
+ pr_err("could not parse key from ext4\n");
+ return -EINVAL;
+ }
+
+ key_info->key_size = ext4_get_ice_encryption_key_size(inode);
+ if (!key_info->key_size) {
+ pr_err("could not parse key size from ext4\n");
+ return -EINVAL;
+ }
+
+ key_info->salt = ext4_get_ice_encryption_salt(inode);
+ if (!key_info->salt) {
+ pr_err("could not parse salt from ext4\n");
+ return -EINVAL;
+ }
+
+ key_info->salt_size = ext4_get_ice_encryption_salt_size(inode);
+ if (!key_info->salt_size) {
+ pr_err("could not parse salt size from ext4\n");
+ return -EINVAL;
+ }
+
+ ret = pfk_ext4_parse_cipher(inode, algo);
+ if (ret != 0) {
+ pr_err("not supported cipher\n");
+ return ret;
+ }
+
+ return 0;
+}
+
+bool pfk_ext4_allow_merge_bio(const struct bio *bio1,
+ const struct bio *bio2, const struct inode *inode1,
+ const struct inode *inode2)
+{
+ /* if there is no ext4 pfk, don't disallow merging blocks */
+ if (!pfk_ext4_is_ready())
+ return true;
+
+ if (!inode1 || !inode2)
+ return false;
+
+ return ext4_is_ice_encryption_info_equal(inode1, inode2);
+}
+
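The merge policy implemented across pfk_allow_merge_bio() and pfk_ext4_allow_merge_bio() boils down to: plain data merges with plain data, encrypted merges with encrypted only when the key material matches, and mixed pairs never merge. A hedged userspace sketch (toy structs, not the real ext4 crypt info):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Toy per-inode crypto info; the real code compares the key and salt
 * stored by ext4 via ext4_is_ice_encryption_info_equal(). */
struct toy_crypt_info {
	bool encrypted;
	unsigned char key[4];
};

/* Mirror of the pfk_allow_merge_bio() policy:
 *  - both plain             -> merge allowed
 *  - mixed plain/encrypted  -> refused
 *  - both encrypted         -> allowed only with identical keys
 */
static bool toy_allow_merge(const struct toy_crypt_info *a,
			    const struct toy_crypt_info *b)
{
	if (a->encrypted != b->encrypted)
		return false;
	if (!a->encrypted)
		return true;
	return memcmp(a->key, b->key, sizeof(a->key)) == 0;
}
```

The refusal on mismatched keys matters because the block layer would otherwise hand a single merged request to ICE, which can only apply one key slot per request.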
diff --git a/security/pfe/pfk_ext4.h b/security/pfe/pfk_ext4.h
new file mode 100644
index 0000000..1f33632
--- /dev/null
+++ b/security/pfe/pfk_ext4.h
@@ -0,0 +1,37 @@
+/* Copyright (c) 2015-2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _PFK_EXT4_H_
+#define _PFK_EXT4_H_
+
+#include <linux/types.h>
+#include <linux/fs.h>
+#include <crypto/ice.h>
+#include "pfk_internal.h"
+
+bool pfk_is_ext4_type(const struct inode *inode);
+
+int pfk_ext4_parse_inode(const struct bio *bio,
+ const struct inode *inode,
+ struct pfk_key_info *key_info,
+ enum ice_cryto_algo_mode *algo,
+ bool *is_pfe);
+
+bool pfk_ext4_allow_merge_bio(const struct bio *bio1,
+ const struct bio *bio2, const struct inode *inode1,
+ const struct inode *inode2);
+
+int __init pfk_ext4_init(void);
+
+void pfk_ext4_deinit(void);
+
+#endif /* _PFK_EXT4_H_ */
diff --git a/security/pfe/pfk_ice.c b/security/pfe/pfk_ice.c
new file mode 100644
index 0000000..f0bbf9c
--- /dev/null
+++ b/security/pfe/pfk_ice.c
@@ -0,0 +1,188 @@
+/* Copyright (c) 2015-2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/errno.h>
+#include <linux/io.h>
+#include <linux/interrupt.h>
+#include <linux/delay.h>
+#include <linux/async.h>
+#include <linux/mm.h>
+#include <linux/of.h>
+#include <soc/qcom/scm.h>
+#include <linux/device-mapper.h>
+#include <soc/qcom/qseecomi.h>
+#include <crypto/ice.h>
+#include "pfk_ice.h"
+
+
+/**********************************/
+/** global definitions **/
+/**********************************/
+
+#define TZ_ES_SET_ICE_KEY 0x2
+#define TZ_ES_INVALIDATE_ICE_KEY 0x3
+
+/* index 0 and 1 is reserved for FDE */
+#define MIN_ICE_KEY_INDEX 2
+
+#define MAX_ICE_KEY_INDEX 31
+
+
+#define TZ_ES_SET_ICE_KEY_ID \
+ TZ_SYSCALL_CREATE_SMC_ID(TZ_OWNER_SIP, TZ_SVC_ES, TZ_ES_SET_ICE_KEY)
+
+
+#define TZ_ES_INVALIDATE_ICE_KEY_ID \
+ TZ_SYSCALL_CREATE_SMC_ID(TZ_OWNER_SIP, \
+ TZ_SVC_ES, TZ_ES_INVALIDATE_ICE_KEY)
+
+
+#define TZ_ES_SET_ICE_KEY_PARAM_ID \
+ TZ_SYSCALL_CREATE_PARAM_ID_5( \
+ TZ_SYSCALL_PARAM_TYPE_VAL, \
+ TZ_SYSCALL_PARAM_TYPE_BUF_RW, TZ_SYSCALL_PARAM_TYPE_VAL, \
+ TZ_SYSCALL_PARAM_TYPE_BUF_RW, TZ_SYSCALL_PARAM_TYPE_VAL)
+
+#define TZ_ES_INVALIDATE_ICE_KEY_PARAM_ID \
+ TZ_SYSCALL_CREATE_PARAM_ID_1( \
+ TZ_SYSCALL_PARAM_TYPE_VAL)
+
+#define ICE_KEY_SIZE 32
+#define ICE_SALT_SIZE 32
+
+static uint8_t ice_key[ICE_KEY_SIZE];
+static uint8_t ice_salt[ICE_SALT_SIZE];
+
+int qti_pfk_ice_set_key(uint32_t index, uint8_t *key, uint8_t *salt,
+ char *storage_type)
+{
+ struct scm_desc desc = {0};
+ int ret, ret1;
+ char *tzbuf_key = (char *)ice_key;
+ char *tzbuf_salt = (char *)ice_salt;
+ char *s_type = storage_type;
+
+ uint32_t smc_id = 0;
+ u32 tzbuflen_key = sizeof(ice_key);
+ u32 tzbuflen_salt = sizeof(ice_salt);
+
+ if (index < MIN_ICE_KEY_INDEX || index > MAX_ICE_KEY_INDEX) {
+ pr_err("%s Invalid index %d\n", __func__, index);
+ return -EINVAL;
+ }
+ if (!key || !salt) {
+ pr_err("%s Invalid key/salt\n", __func__);
+ return -EINVAL;
+ }
+
+ if (!tzbuf_key || !tzbuf_salt) {
+ pr_err("%s No Memory\n", __func__);
+ return -ENOMEM;
+ }
+
+ if (s_type == NULL) {
+ pr_err("%s Invalid Storage type\n", __func__);
+ return -EINVAL;
+ }
+
+ memset(tzbuf_key, 0, tzbuflen_key);
+ memset(tzbuf_salt, 0, tzbuflen_salt);
+
+ memcpy(ice_key, key, tzbuflen_key);
+ memcpy(ice_salt, salt, tzbuflen_salt);
+
+ dmac_flush_range(tzbuf_key, tzbuf_key + tzbuflen_key);
+ dmac_flush_range(tzbuf_salt, tzbuf_salt + tzbuflen_salt);
+
+ smc_id = TZ_ES_SET_ICE_KEY_ID;
+
+ desc.arginfo = TZ_ES_SET_ICE_KEY_PARAM_ID;
+ desc.args[0] = index;
+ desc.args[1] = virt_to_phys(tzbuf_key);
+ desc.args[2] = tzbuflen_key;
+ desc.args[3] = virt_to_phys(tzbuf_salt);
+ desc.args[4] = tzbuflen_salt;
+
+ ret = qcom_ice_setup_ice_hw((const char *)s_type, true);
+
+ if (ret) {
+ pr_err("%s: could not enable clocks: %d\n", __func__, ret);
+ goto out;
+ }
+
+ ret = scm_call2(smc_id, &desc);
+
+ if (ret) {
+ pr_err("%s: Set Key Error: %d\n", __func__, ret);
+ if (ret == -EBUSY) {
+ if (qcom_ice_setup_ice_hw((const char *)s_type, false))
+ pr_err("%s: clock disable failed\n", __func__);
+ goto out;
+ }
+		/* Try to invalidate the key to keep ICE in a proper state */
+ smc_id = TZ_ES_INVALIDATE_ICE_KEY_ID;
+ desc.arginfo = TZ_ES_INVALIDATE_ICE_KEY_PARAM_ID;
+ desc.args[0] = index;
+ ret1 = scm_call2(smc_id, &desc);
+ if (ret1)
+ pr_err("%s: Invalidate Key Error: %d\n", __func__,
+ ret1);
+ }
+ ret = qcom_ice_setup_ice_hw((const char *)s_type, false);
+
+out:
+ return ret;
+}
+
+int qti_pfk_ice_invalidate_key(uint32_t index, char *storage_type)
+{
+ struct scm_desc desc = {0};
+ int ret;
+
+ uint32_t smc_id = 0;
+
+ if (index < MIN_ICE_KEY_INDEX || index > MAX_ICE_KEY_INDEX) {
+ pr_err("%s Invalid index %d\n", __func__, index);
+ return -EINVAL;
+ }
+
+ if (storage_type == NULL) {
+ pr_err("%s Invalid Storage type\n", __func__);
+ return -EINVAL;
+ }
+
+ smc_id = TZ_ES_INVALIDATE_ICE_KEY_ID;
+
+ desc.arginfo = TZ_ES_INVALIDATE_ICE_KEY_PARAM_ID;
+ desc.args[0] = index;
+
+ ret = qcom_ice_setup_ice_hw((const char *)storage_type, true);
+
+ if (ret) {
+ pr_err("%s: could not enable clocks: 0x%x\n", __func__, ret);
+ return ret;
+ }
+
+ ret = scm_call2(smc_id, &desc);
+
+ if (ret) {
+ pr_err("%s: Error: 0x%x\n", __func__, ret);
+ if (qcom_ice_setup_ice_hw((const char *)storage_type, false))
+ pr_err("%s: could not disable clocks\n", __func__);
+ } else {
+ ret = qcom_ice_setup_ice_hw((const char *)storage_type, false);
+ }
+
+ return ret;
+}
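qti_pfk_ice_set_key() above follows a slot-management pattern worth making explicit: slots 0-1 are reserved for FDE, so only indices 2..31 are accepted, and when programming a slot fails the code invalidates that slot rather than leave ICE half-configured. A toy model of that recovery policy (illustrative names, not the real SCM interface):

```c
#include <assert.h>

/* Toy model of qti_pfk_ice_set_key()'s slot policy. Slots 0-1 are
 * reserved for FDE, mirroring MIN_ICE_KEY_INDEX/MAX_ICE_KEY_INDEX. */
enum { TOY_SLOT_FREE, TOY_SLOT_LOADED };

static int toy_slots[32];

static int toy_program_slot(unsigned int index, int simulate_failure)
{
	if (index < 2 || index > 31)	/* reject reserved/out-of-range */
		return -22;		/* -EINVAL */

	toy_slots[index] = TOY_SLOT_LOADED;

	if (simulate_failure) {
		/* On a set-key error the real code issues an invalidate
		 * SCM call so the slot is not left half-set. */
		toy_slots[index] = TOY_SLOT_FREE;
		return -5;		/* -EIO */
	}
	return 0;
}
```

The invalidate-on-failure step is what the "Try to invalidate the key to keep ICE in proper state" branch implements with a second scm_call2().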
diff --git a/security/pfe/pfk_ice.h b/security/pfe/pfk_ice.h
new file mode 100644
index 0000000..fb7c0d1
--- /dev/null
+++ b/security/pfe/pfk_ice.h
@@ -0,0 +1,33 @@
+/* Copyright (c) 2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef PFK_ICE_H_
+#define PFK_ICE_H_
+
+/*
+ * PFK ICE
+ *
+ * ICE keys configuration through scm calls.
+ *
+ */
+
+#include <linux/types.h>
+
+int pfk_ice_init(void);
+int pfk_ice_deinit(void);
+
+int qti_pfk_ice_set_key(uint32_t index, uint8_t *key, uint8_t *salt,
+ char *storage_type);
+int qti_pfk_ice_invalidate_key(uint32_t index, char *storage_type);
+
+
+#endif /* PFK_ICE_H_ */
diff --git a/security/pfe/pfk_internal.h b/security/pfe/pfk_internal.h
new file mode 100644
index 0000000..86526fa
--- /dev/null
+++ b/security/pfe/pfk_internal.h
@@ -0,0 +1,34 @@
+/* Copyright (c) 2015-2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _PFK_INTERNAL_H_
+#define _PFK_INTERNAL_H_
+
+#include <linux/types.h>
+#include <crypto/ice.h>
+
+struct pfk_key_info {
+ const unsigned char *key;
+ const unsigned char *salt;
+ size_t key_size;
+ size_t salt_size;
+};
+
+int pfk_key_size_to_key_type(size_t key_size,
+ enum ice_crpto_key_size *key_size_type);
+
+bool pfe_is_inode_filesystem_type(const struct inode *inode,
+ const char *fs_type);
+
+char *inode_to_filename(const struct inode *inode);
+
+#endif /* _PFK_INTERNAL_H_ */
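pfk_key_size_to_key_type(), declared in this header, maps a raw key size to the ICE key-size enum and rejects anything but the single supported size. A hedged toy version of that contract (stand-in constants, not the real crypto/ice.h enum values):

```c
#include <assert.h>
#include <stddef.h>

/* Mirrors pfk_key_size_to_key_type(): only a 32-byte (AES-256 XTS) key
 * is accepted today; the driver comment anticipates a lookup table once
 * more sizes are supported. TOY_KEY_SIZE_256 is an illustrative value. */
#define TOY_SUPPORTED_KEY_SIZE 32
#define TOY_KEY_SIZE_256 1

static int toy_key_size_to_type(size_t key_size, int *type)
{
	if (key_size != TOY_SUPPORTED_KEY_SIZE)
		return -22; /* -EINVAL */

	/* The out-pointer is optional, matching the real API */
	if (type)
		*type = TOY_KEY_SIZE_256;
	return 0;
}
```

Making the out-parameter optional lets callers use the function as a pure validity check, which is how pfk_load_key_end()'s path effectively treats key parsing.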
diff --git a/security/pfe/pfk_kc.c b/security/pfe/pfk_kc.c
new file mode 100644
index 0000000..da71f80
--- /dev/null
+++ b/security/pfe/pfk_kc.c
@@ -0,0 +1,905 @@
+/*
+ * Copyright (c) 2015-2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+/*
+ * PFK Key Cache
+ *
+ * Key Cache used internally in PFK.
+ * The purpose of the cache is to save access time to QSEE when loading keys.
+ * Currently the cache is the same size as the total number of keys that can
+ * be loaded to ICE. Since this number is relatively small, the algorithms for
+ * cache eviction are simple, linear and based on the last usage timestamp, i.e.
+ * the node that will be evicted is the one with the oldest timestamp.
+ * Empty entries always have the oldest timestamp.
+ */
+
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/spinlock.h>
+#include <crypto/ice.h>
+#include <linux/errno.h>
+#include <linux/string.h>
+#include <linux/jiffies.h>
+#include <linux/slab.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+
+#include "pfk_kc.h"
+#include "pfk_ice.h"
+
+
+/** the first available index in ice engine */
+#define PFK_KC_STARTING_INDEX 2
+
+/** currently the only supported key and salt sizes */
+#define PFK_KC_KEY_SIZE 32
+#define PFK_KC_SALT_SIZE 32
+
+/** Table size */
+/* TODO replace by some constant from ice.h */
+#define PFK_KC_TABLE_SIZE ((32) - (PFK_KC_STARTING_INDEX))
+
+/** The maximum key and salt size */
+#define PFK_MAX_KEY_SIZE PFK_KC_KEY_SIZE
+#define PFK_MAX_SALT_SIZE PFK_KC_SALT_SIZE
+#define PFK_UFS "ufs"
+
+static DEFINE_SPINLOCK(kc_lock);
+static unsigned long flags;
+static bool kc_ready;
+static char *s_type = "sdcc";
+
+/**
+ * enum pfk_kc_entry_state - state of the entry inside kc table
+ *
+ * @FREE: entry is free
+ * @ACTIVE_ICE_PRELOAD: entry is actively used by the ICE engine
+ *			and cannot be used by others. SCM call
+ *			to load the key to ICE is pending to be performed
+ * @ACTIVE_ICE_LOADED: entry is actively used by the ICE engine and
+ *			cannot be used by others. SCM call to load the
+ *			key to ICE was successfully executed and the key
+ *			is now loaded
+ * @INACTIVE_INVALIDATING: entry is being invalidated during file close
+ *			and cannot be used by others until invalidation
+ *			is complete
+ * @INACTIVE: entry's key is already loaded, but is not
+ *			currently being used. It can be re-used for
+ *			optimization and to avoid SCM call cost, or
+ *			it can be taken by another key if there are
+ *			no FREE entries
+ * @SCM_ERROR: an error occurred while the SCM call was performed to
+ *			load the key to ICE
+ */
+enum pfk_kc_entry_state {
+ FREE,
+ ACTIVE_ICE_PRELOAD,
+ ACTIVE_ICE_LOADED,
+ INACTIVE_INVALIDATING,
+ INACTIVE,
+ SCM_ERROR
+};
+
+struct kc_entry {
+ unsigned char key[PFK_MAX_KEY_SIZE];
+ size_t key_size;
+
+ unsigned char salt[PFK_MAX_SALT_SIZE];
+ size_t salt_size;
+
+ u64 time_stamp;
+ u32 key_index;
+
+ struct task_struct *thread_pending;
+
+ enum pfk_kc_entry_state state;
+
+ /* ref count for the number of requests in the HW queue for this key */
+ int loaded_ref_cnt;
+ int scm_error;
+};
+
+static struct kc_entry kc_table[PFK_KC_TABLE_SIZE];
+
+/**
+ * kc_is_ready() - driver is initialized and ready.
+ *
+ * Return: true if the key cache is ready.
+ */
+static inline bool kc_is_ready(void)
+{
+ return kc_ready;
+}
+
+static inline void kc_spin_lock(void)
+{
+ spin_lock_irqsave(&kc_lock, flags);
+}
+
+static inline void kc_spin_unlock(void)
+{
+ spin_unlock_irqrestore(&kc_lock, flags);
+}
+
+/**
+ * kc_entry_is_available() - checks whether the entry is available
+ *
+ * Return true if it is, false otherwise or if the entry is invalid.
+ * Should be invoked under spinlock
+ */
+static bool kc_entry_is_available(const struct kc_entry *entry)
+{
+ if (!entry)
+ return false;
+
+ return (entry->state == FREE || entry->state == INACTIVE);
+}
+
+/**
+ * kc_entry_wait_till_available() - waits till entry is available
+ *
+ * Returns 0 in case of success or -ERESTARTSYS if the wait was interrupted
+ * by signal
+ *
+ * Should be invoked under spinlock
+ */
+static int kc_entry_wait_till_available(struct kc_entry *entry)
+{
+ int res = 0;
+
+ while (!kc_entry_is_available(entry)) {
+ set_current_state(TASK_INTERRUPTIBLE);
+ if (signal_pending(current)) {
+ res = -ERESTARTSYS;
+ break;
+ }
+ /* assuming only one thread can try to invalidate
+ * the same entry
+ */
+ entry->thread_pending = current;
+ kc_spin_unlock();
+ schedule();
+ kc_spin_lock();
+ }
+ set_current_state(TASK_RUNNING);
+
+ return res;
+}
+
+/**
+ * kc_entry_start_invalidating() - moves entry to state
+ * INACTIVE_INVALIDATING
+ * If entry is in use, waits till
+ * it gets available
+ * @entry: pointer to entry
+ *
+ * Return 0 in case of success, otherwise error
+ * Should be invoked under spinlock
+ */
+static int kc_entry_start_invalidating(struct kc_entry *entry)
+{
+ int res;
+
+ res = kc_entry_wait_till_available(entry);
+ if (res)
+ return res;
+
+ entry->state = INACTIVE_INVALIDATING;
+
+ return 0;
+}
+
+/**
+ * kc_entry_finish_invalidating() - moves the entry to state FREE and
+ *					wakes up all the tasks waiting
+ *					on it
+ *
+ * @entry: pointer to entry
+ *
+ * Should be invoked under spinlock
+ */
+static void kc_entry_finish_invalidating(struct kc_entry *entry)
+{
+ if (!entry)
+ return;
+
+ if (entry->state != INACTIVE_INVALIDATING)
+ return;
+
+ entry->state = FREE;
+}
+
+/**
+ * kc_min_entry() - compare two entries to find one with minimal time
+ * @a: ptr to the first entry. If NULL the other entry will be returned
+ * @b: pointer to the second entry
+ *
+ * Return the entry whose timestamp is minimal, or b if a is NULL
+ */
+static inline struct kc_entry *kc_min_entry(struct kc_entry *a,
+ struct kc_entry *b)
+{
+ if (!a)
+ return b;
+
+ if (time_before64(b->time_stamp, a->time_stamp))
+ return b;
+
+ return a;
+}
+
+/**
+ * kc_entry_at_index() - return entry at specific index
+ * @index: index of entry to be accessed
+ *
+ * Return entry
+ * Should be invoked under spinlock
+ */
+static struct kc_entry *kc_entry_at_index(int index)
+{
+ return &(kc_table[index]);
+}
+
+/**
+ * kc_find_key_at_index() - find kc entry starting at specific index
+ * @key: key to look for
+ * @key_size: the key size
+ * @salt: salt to look for
+ * @salt_size: the salt size
+ * @starting_index: index to start the search with; if an entry is found,
+ *		updated with the index of that entry
+ *
+ * Return entry or NULL in case of error
+ * Should be invoked under spinlock
+ */
+static struct kc_entry *kc_find_key_at_index(const unsigned char *key,
+ size_t key_size, const unsigned char *salt, size_t salt_size,
+ int *starting_index)
+{
+ struct kc_entry *entry = NULL;
+ int i = 0;
+
+ for (i = *starting_index; i < PFK_KC_TABLE_SIZE; i++) {
+ entry = kc_entry_at_index(i);
+
+ if (salt != NULL) {
+ if (entry->salt_size != salt_size)
+ continue;
+
+ if (memcmp(entry->salt, salt, salt_size) != 0)
+ continue;
+ }
+
+ if (entry->key_size != key_size)
+ continue;
+
+ if (memcmp(entry->key, key, key_size) == 0) {
+ *starting_index = i;
+ return entry;
+ }
+ }
+
+ return NULL;
+}
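Editorial note: the lookup above is a linear scan over a fixed-size table, resumable from a caller-supplied index so that duplicate-key entries can all be found in turn (which is what `pfk_kc_remove_key()` relies on). A self-contained sketch of the same scan, with invented table and key sizes:

```c
#include <string.h>
#include <stddef.h>

#define TABLE_SIZE 4	/* hypothetical; the driver uses PFK_KC_TABLE_SIZE */
#define KEY_SIZE 8	/* hypothetical; the driver uses PFK_KC_KEY_SIZE */

struct entry {
	unsigned char key[KEY_SIZE];
	size_t key_size;
};

static struct entry table[TABLE_SIZE];	/* zero-initialized: all "free" */

/* Return the index of the first entry at or after *start whose key
 * matches, or -1 if none does. *start is updated on a hit so the
 * caller can bump it and resume the scan to catch duplicates. */
static int find_key_at_index(const unsigned char *key, size_t key_size,
			     int *start)
{
	int i;

	for (i = *start; i < TABLE_SIZE; i++) {
		if (table[i].key_size != key_size)
			continue;
		if (memcmp(table[i].key, key, key_size) == 0) {
			*start = i;
			return i;
		}
	}
	return -1;
}
```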
+
+/**
+ * kc_find_key() - find kc entry
+ * @key: key to look for
+ * @key_size: the key size
+ * @salt: salt to look for
+ * @salt_size: the salt size
+ *
+ * Return entry or NULL in case of error
+ * Should be invoked under spinlock
+ */
+static struct kc_entry *kc_find_key(const unsigned char *key, size_t key_size,
+ const unsigned char *salt, size_t salt_size)
+{
+ int index = 0;
+
+ return kc_find_key_at_index(key, key_size, salt, salt_size, &index);
+}
+
+/**
+ * kc_find_oldest_entry_non_locked() - finds the entry with minimal timestamp
+ * that is not locked
+ *
+ * Returns entry with minimal timestamp. Empty entries have timestamp
+ * of 0, therefore they are returned first.
+ * If all the entries are locked, will return NULL
+ * Should be invoked under spin lock
+ */
+static struct kc_entry *kc_find_oldest_entry_non_locked(void)
+{
+ struct kc_entry *curr_min_entry = NULL;
+ struct kc_entry *entry = NULL;
+ int i = 0;
+
+ for (i = 0; i < PFK_KC_TABLE_SIZE; i++) {
+ entry = kc_entry_at_index(i);
+
+ if (entry->state == FREE)
+ return entry;
+
+ if (entry->state == INACTIVE)
+ curr_min_entry = kc_min_entry(curr_min_entry, entry);
+ }
+
+ return curr_min_entry;
+}
+
+/**
+ * kc_update_timestamp() - updates timestamp of entry to current
+ *
+ * @entry: entry to update
+ *
+ */
+static void kc_update_timestamp(struct kc_entry *entry)
+{
+ if (!entry)
+ return;
+
+ entry->time_stamp = get_jiffies_64();
+}
+
+/**
+ * kc_clear_entry() - clear the key from entry and mark entry not in use
+ *
+ * @entry: pointer to entry
+ *
+ * Should be invoked under spinlock
+ */
+static void kc_clear_entry(struct kc_entry *entry)
+{
+ if (!entry)
+ return;
+
+ memset(entry->key, 0, entry->key_size);
+ memset(entry->salt, 0, entry->salt_size);
+
+ entry->key_size = 0;
+ entry->salt_size = 0;
+
+ entry->time_stamp = 0;
+ entry->scm_error = 0;
+
+ entry->state = FREE;
+
+ entry->loaded_ref_cnt = 0;
+ entry->thread_pending = NULL;
+}
+
+/**
+ * kc_update_entry() - replace the key in the given entry and
+ * load the new key to ICE
+ *
+ * @entry: entry to replace key in
+ * @key: key
+ * @key_size: key_size
+ * @salt: salt
+ * @salt_size: salt_size
+ *
+ * The previous key is securely released and wiped, the new one is loaded
+ * to ICE.
+ * Return 0 in case of success, error otherwise
+ * Should be invoked under spinlock; the lock is dropped and re-taken
+ * around the QSEE call
+ */
+static int kc_update_entry(struct kc_entry *entry, const unsigned char *key,
+ size_t key_size, const unsigned char *salt, size_t salt_size)
+{
+ int ret;
+
+ kc_clear_entry(entry);
+
+ memcpy(entry->key, key, key_size);
+ entry->key_size = key_size;
+
+ memcpy(entry->salt, salt, salt_size);
+ entry->salt_size = salt_size;
+
+ /* Mark entry as no longer free before releasing the lock */
+ entry->state = ACTIVE_ICE_PRELOAD;
+ kc_spin_unlock();
+
+ ret = qti_pfk_ice_set_key(entry->key_index, entry->key,
+ entry->salt, s_type);
+
+ kc_spin_lock();
+ return ret;
+}
+
+/**
+ * pfk_kc_init() - init function
+ *
+ * Return 0 in case of success, error otherwise
+ */
+int pfk_kc_init(void)
+{
+ int i = 0;
+ struct kc_entry *entry = NULL;
+
+ kc_spin_lock();
+ for (i = 0; i < PFK_KC_TABLE_SIZE; i++) {
+ entry = kc_entry_at_index(i);
+ entry->key_index = PFK_KC_STARTING_INDEX + i;
+ }
+ kc_ready = true;
+ kc_spin_unlock();
+ return 0;
+}
+
+/**
+ * pfk_kc_deinit() - deinit function
+ *
+ * Return 0 in case of success, error otherwise
+ */
+int pfk_kc_deinit(void)
+{
+ int res = pfk_kc_clear();
+
+ kc_ready = false;
+ return res;
+}
+
+/**
+ * pfk_kc_load_key_start() - retrieve the key from cache or add it if
+ * it's not there and return the ICE hw key index in @key_index.
+ * @key: pointer to the key
+ * @key_size: the size of the key
+ * @salt: pointer to the salt
+ * @salt_size: the size of the salt
+ * @key_index: the pointer to key_index where the output will be stored
+ * @async: whether scm calls are allowed in the caller context
+ *
+ * If the key is present in the cache, then the key_index will be retrieved
+ * from the cache. If it is not present, the oldest entry from the kc table
+ * will be evicted, the key will be loaded to ICE via QSEE at the index of
+ * the evicted entry, and stored in the cache.
+ * The entry that is going to be used is marked as in use; it will be marked
+ * as not in use when ICE finishes using it and pfk_kc_load_key_end()
+ * is invoked.
+ * As QSEE calls can only be made from a non-atomic context, @async set to
+ * 'false' specifies that it is ok to make the calls in the current context.
+ * Otherwise, when @async is set, -EAGAIN will be returned and the caller
+ * should retry the call from a different context.
+ *
+ * Return 0 in case of success, error otherwise
+ */
+int pfk_kc_load_key_start(const unsigned char *key, size_t key_size,
+ const unsigned char *salt, size_t salt_size, u32 *key_index,
+ bool async)
+{
+ int ret = 0;
+ struct kc_entry *entry = NULL;
+ bool entry_exists = false;
+
+ if (!kc_is_ready())
+ return -ENODEV;
+
+ if (!key || !salt || !key_index) {
+ pr_err("%s key/salt/key_index NULL\n", __func__);
+ return -EINVAL;
+ }
+
+ if (key_size != PFK_KC_KEY_SIZE) {
+ pr_err("unsupported key size %zu\n", key_size);
+ return -EINVAL;
+ }
+
+ if (salt_size != PFK_KC_SALT_SIZE) {
+ pr_err("unsupported salt size %zu\n", salt_size);
+ return -EINVAL;
+ }
+
+ kc_spin_lock();
+
+ entry = kc_find_key(key, key_size, salt, salt_size);
+ if (!entry) {
+ if (async) {
+ pr_debug("%s task will populate entry\n", __func__);
+ kc_spin_unlock();
+ return -EAGAIN;
+ }
+
+ entry = kc_find_oldest_entry_non_locked();
+ if (!entry) {
+ /* could not find a single non-locked entry,
+ * return EBUSY to upper layers so that the
+ * request will be rescheduled
+ */
+ kc_spin_unlock();
+ return -EBUSY;
+ }
+ } else {
+ entry_exists = true;
+ }
+
+ pr_debug("entry with index %d is in state %d\n",
+ entry->key_index, entry->state);
+
+ switch (entry->state) {
+ case (INACTIVE):
+ if (entry_exists) {
+ kc_update_timestamp(entry);
+ entry->state = ACTIVE_ICE_LOADED;
+
+ if (!strcmp(s_type, (char *)PFK_UFS)) {
+ if (async)
+ entry->loaded_ref_cnt++;
+ } else {
+ entry->loaded_ref_cnt++;
+ }
+ break;
+ }
+ /* fall through */
+ case (FREE):
+ ret = kc_update_entry(entry, key, key_size, salt, salt_size);
+ if (ret) {
+ entry->state = SCM_ERROR;
+ entry->scm_error = ret;
+ pr_err("%s: key load error (%d)\n", __func__, ret);
+ } else {
+ kc_update_timestamp(entry);
+ entry->state = ACTIVE_ICE_LOADED;
+
+ /*
+ * In case of UFS only increase ref cnt for async calls,
+ * sync calls from within work thread do not pass
+ * requests further to HW
+ */
+ if (!strcmp(s_type, (char *)PFK_UFS)) {
+ if (async)
+ entry->loaded_ref_cnt++;
+ } else {
+ entry->loaded_ref_cnt++;
+ }
+ }
+ break;
+ case (ACTIVE_ICE_PRELOAD):
+ case (INACTIVE_INVALIDATING):
+ ret = -EAGAIN;
+ break;
+ case (ACTIVE_ICE_LOADED):
+ kc_update_timestamp(entry);
+
+ if (!strcmp(s_type, (char *)PFK_UFS)) {
+ if (async)
+ entry->loaded_ref_cnt++;
+ } else {
+ entry->loaded_ref_cnt++;
+ }
+ break;
+ case(SCM_ERROR):
+ ret = entry->scm_error;
+ kc_clear_entry(entry);
+ entry->state = FREE;
+ break;
+ default:
+ pr_err("invalid state %d for entry with key index %d\n",
+ entry->state, entry->key_index);
+ ret = -EINVAL;
+ }
+
+ *key_index = entry->key_index;
+ kc_spin_unlock();
+
+ return ret;
+}
+
+/**
+ * pfk_kc_load_key_end() - finish the process of key loading that was started
+ * by pfk_kc_load_key_start
+ * by marking the entry as not
+ * being in use
+ * @key: pointer to the key
+ * @key_size: the size of the key
+ * @salt: pointer to the salt
+ * @salt_size: the size of the salt
+ *
+ */
+void pfk_kc_load_key_end(const unsigned char *key, size_t key_size,
+ const unsigned char *salt, size_t salt_size)
+{
+ struct kc_entry *entry = NULL;
+ struct task_struct *tmp_pending = NULL;
+ int ref_cnt = 0;
+
+ if (!kc_is_ready())
+ return;
+
+ if (!key || !salt)
+ return;
+
+ if (key_size != PFK_KC_KEY_SIZE)
+ return;
+
+ if (salt_size != PFK_KC_SALT_SIZE)
+ return;
+
+ kc_spin_lock();
+
+ entry = kc_find_key(key, key_size, salt, salt_size);
+ if (!entry) {
+ kc_spin_unlock();
+ pr_err("internal error, there should be an entry to unlock\n");
+
+ return;
+ }
+ ref_cnt = --entry->loaded_ref_cnt;
+
+ if (ref_cnt < 0)
+ pr_err("internal error, ref count should never be negative\n");
+
+ if (!ref_cnt) {
+ entry->state = INACTIVE;
+ /*
+ * wake-up invalidation if it's waiting
+ * for the entry to be released
+ */
+ if (entry->thread_pending) {
+ tmp_pending = entry->thread_pending;
+ entry->thread_pending = NULL;
+
+ kc_spin_unlock();
+ wake_up_process(tmp_pending);
+ return;
+ }
+ }
+
+ kc_spin_unlock();
+}
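Editorial note: the release path above is a straightforward reference count — the last `pfk_kc_load_key_end()` call flips the entry back to INACTIVE and wakes a thread blocked in invalidation, if one is parked on the entry. A userspace sketch of that state transition (field names mirror the driver; the spinlock and the real `wake_up_process()` are replaced by plain flags):

```c
/* Subset of the driver's entry states needed for this sketch. */
enum kc_state { FREE, INACTIVE, ACTIVE_ICE_LOADED };

struct cache_entry {
	enum kc_state state;
	int loaded_ref_cnt;
	int waiter_pending;	/* stands in for entry->thread_pending */
	int waiter_woken;	/* stands in for wake_up_process() */
};

/* Model of pfk_kc_load_key_end(): drop one reference; on the last
 * drop, mark the entry INACTIVE and signal any waiting invalidator. */
static void load_key_end(struct cache_entry *e)
{
	if (--e->loaded_ref_cnt)
		return;		/* other users still hold the key */

	e->state = INACTIVE;
	if (e->waiter_pending) {
		e->waiter_pending = 0;
		e->waiter_woken = 1;
	}
}
```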
+
+/**
+ * pfk_kc_remove_key_with_salt() - remove the key from cache and from ICE
+ * engine
+ * @key: pointer to the key
+ * @key_size: the size of the key
+ * @salt: pointer to the salt
+ * @salt_size: the size of the salt
+ *
+ * Return 0 in case of success, error otherwise (also in case of a
+ * non-existing key)
+ */
+int pfk_kc_remove_key_with_salt(const unsigned char *key, size_t key_size,
+ const unsigned char *salt, size_t salt_size)
+{
+ struct kc_entry *entry = NULL;
+ int res = 0;
+
+ if (!kc_is_ready())
+ return -ENODEV;
+
+ if (!key)
+ return -EINVAL;
+
+ if (!salt)
+ return -EINVAL;
+
+ if (key_size != PFK_KC_KEY_SIZE)
+ return -EINVAL;
+
+ if (salt_size != PFK_KC_SALT_SIZE)
+ return -EINVAL;
+
+ kc_spin_lock();
+
+ entry = kc_find_key(key, key_size, salt, salt_size);
+ if (!entry) {
+ pr_debug("%s: key does not exist\n", __func__);
+ kc_spin_unlock();
+ return -EINVAL;
+ }
+
+ res = kc_entry_start_invalidating(entry);
+ if (res != 0) {
+ kc_spin_unlock();
+ return res;
+ }
+ kc_clear_entry(entry);
+
+ kc_spin_unlock();
+
+ qti_pfk_ice_invalidate_key(entry->key_index, s_type);
+
+ kc_spin_lock();
+ kc_entry_finish_invalidating(entry);
+ kc_spin_unlock();
+
+ return 0;
+}
+
+/**
+ * pfk_kc_remove_key() - remove the key from cache and from ICE engine
+ * when no salt is available. Only the key part is matched; if several
+ * entries match, all of them will be removed
+ *
+ * @key: pointer to the key
+ * @key_size: the size of the key
+ *
+ * Return 0 in case of success, error otherwise (also for non-existing key)
+ */
+int pfk_kc_remove_key(const unsigned char *key, size_t key_size)
+{
+ struct kc_entry *entry = NULL;
+ int index = 0;
+ int temp_indexes[PFK_KC_TABLE_SIZE] = {0};
+ int temp_indexes_size = 0;
+ int i = 0;
+ int res = 0;
+
+ if (!kc_is_ready())
+ return -ENODEV;
+
+ if (!key)
+ return -EINVAL;
+
+ if (key_size != PFK_KC_KEY_SIZE)
+ return -EINVAL;
+
+ memset(temp_indexes, -1, sizeof(temp_indexes));
+
+ kc_spin_lock();
+
+ entry = kc_find_key_at_index(key, key_size, NULL, 0, &index);
+ if (!entry) {
+ pr_err("%s: key does not exist\n", __func__);
+ kc_spin_unlock();
+ return -EINVAL;
+ }
+
+ res = kc_entry_start_invalidating(entry);
+ if (res != 0) {
+ kc_spin_unlock();
+ return res;
+ }
+
+ temp_indexes[temp_indexes_size++] = index;
+ kc_clear_entry(entry);
+
+ /* let's clean additional entries with the same key if there are any */
+ do {
+ index++;
+ entry = kc_find_key_at_index(key, key_size, NULL, 0, &index);
+ if (!entry)
+ break;
+
+ res = kc_entry_start_invalidating(entry);
+ if (res != 0) {
+ kc_spin_unlock();
+ goto out;
+ }
+
+ temp_indexes[temp_indexes_size++] = index;
+
+ kc_clear_entry(entry);
+
+ } while (true);
+
+ kc_spin_unlock();
+
+ for (i = temp_indexes_size - 1; i >= 0; i--)
+ qti_pfk_ice_invalidate_key(
+ kc_entry_at_index(temp_indexes[i])->key_index,
+ s_type);
+
+ res = 0;
+
+out:
+ kc_spin_lock();
+ for (i = temp_indexes_size - 1; i >= 0; i--)
+ kc_entry_finish_invalidating(
+ kc_entry_at_index(temp_indexes[i]));
+ kc_spin_unlock();
+
+ return res;
+}
+
+/**
+ * pfk_kc_clear() - clear the table and remove all keys from ICE
+ *
+ * Return 0 on success, error otherwise
+ *
+ */
+int pfk_kc_clear(void)
+{
+ struct kc_entry *entry = NULL;
+ int i = 0;
+ int res = 0;
+
+ if (!kc_is_ready())
+ return -ENODEV;
+
+ kc_spin_lock();
+ for (i = 0; i < PFK_KC_TABLE_SIZE; i++) {
+ entry = kc_entry_at_index(i);
+ res = kc_entry_start_invalidating(entry);
+ if (res != 0) {
+ kc_spin_unlock();
+ goto out;
+ }
+ kc_clear_entry(entry);
+ }
+ kc_spin_unlock();
+
+ for (i = 0; i < PFK_KC_TABLE_SIZE; i++)
+ qti_pfk_ice_invalidate_key(kc_entry_at_index(i)->key_index,
+ s_type);
+
+ /* fall through */
+ res = 0;
+out:
+ kc_spin_lock();
+ for (i = 0; i < PFK_KC_TABLE_SIZE; i++)
+ kc_entry_finish_invalidating(kc_entry_at_index(i));
+ kc_spin_unlock();
+
+ return res;
+}
+
+/**
+ * pfk_kc_clear_on_reset() - clear the table without removing the keys
+ * from ICE
+ * The assumption is that at this point we don't have any pending transactions
+ *
+ */
+void pfk_kc_clear_on_reset(void)
+{
+ struct kc_entry *entry = NULL;
+ int i = 0;
+
+ if (!kc_is_ready())
+ return;
+
+ kc_spin_lock();
+ for (i = 0; i < PFK_KC_TABLE_SIZE; i++) {
+ entry = kc_entry_at_index(i);
+ kc_clear_entry(entry);
+ }
+ kc_spin_unlock();
+}
+
+static int pfk_kc_find_storage_type(char **device)
+{
+ char boot[20] = {'\0'};
+ char *match = (char *)strnstr(saved_command_line,
+ "androidboot.bootdevice=",
+ strlen(saved_command_line));
+ if (match) {
+ strlcpy(boot, match + strlen("androidboot.bootdevice="),
+ sizeof(boot));
+ if (strnstr(boot, PFK_UFS, strlen(boot)))
+ *device = PFK_UFS;
+
+ return 0;
+ }
+ return -EINVAL;
+}
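Editorial note: `pfk_kc_find_storage_type()` classifies the boot device by scanning the kernel command line for the `androidboot.bootdevice=` token. A hedged userspace equivalent using plain `strstr` (the command-line strings in the test are invented examples, and -1 stands in for -EINVAL):

```c
#include <string.h>
#include <stddef.h>

/* Userspace sketch of the command-line scan: find the bootdevice
 * token and classify the device as UFS when its name contains "ufs". */
static int find_storage_type(const char *cmdline, const char **device)
{
	static const char tag[] = "androidboot.bootdevice=";
	const char *match = strstr(cmdline, tag);

	if (!match)
		return -1;	/* the driver returns -EINVAL here */

	if (strstr(match + sizeof(tag) - 1, "ufs"))
		*device = "ufs";

	return 0;
}
```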
+
+static int __init pfk_kc_pre_init(void)
+{
+ return pfk_kc_find_storage_type(&s_type);
+}
+
+static void __exit pfk_kc_exit(void)
+{
+ s_type = NULL;
+}
+
+module_init(pfk_kc_pre_init);
+module_exit(pfk_kc_exit);
+
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("Per-File-Key-KC driver");
diff --git a/security/pfe/pfk_kc.h b/security/pfe/pfk_kc.h
new file mode 100644
index 0000000..dc4ad15
--- /dev/null
+++ b/security/pfe/pfk_kc.h
@@ -0,0 +1,33 @@
+/* Copyright (c) 2015-2017, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef PFK_KC_H_
+#define PFK_KC_H_
+
+#include <linux/types.h>
+
+int pfk_kc_init(void);
+int pfk_kc_deinit(void);
+int pfk_kc_load_key_start(const unsigned char *key, size_t key_size,
+ const unsigned char *salt, size_t salt_size, u32 *key_index,
+ bool async);
+void pfk_kc_load_key_end(const unsigned char *key, size_t key_size,
+ const unsigned char *salt, size_t salt_size);
+int pfk_kc_remove_key_with_salt(const unsigned char *key, size_t key_size,
+ const unsigned char *salt, size_t salt_size);
+int pfk_kc_remove_key(const unsigned char *key, size_t key_size);
+int pfk_kc_clear(void);
+void pfk_kc_clear_on_reset(void);
+extern char *saved_command_line;
+
+
+#endif /* PFK_KC_H_ */
diff --git a/security/security.c b/security/security.c
index 6a7b359..e1f9e32 100644
--- a/security/security.c
+++ b/security/security.c
@@ -524,6 +524,14 @@ int security_inode_create(struct inode *dir, struct dentry *dentry, umode_t mode
}
EXPORT_SYMBOL_GPL(security_inode_create);
+int security_inode_post_create(struct inode *dir, struct dentry *dentry,
+ umode_t mode)
+{
+ if (unlikely(IS_PRIVATE(dir)))
+ return 0;
+ return call_int_hook(inode_post_create, 0, dir, dentry, mode);
+}
+
int security_inode_link(struct dentry *old_dentry, struct inode *dir,
struct dentry *new_dentry)
{
@@ -1668,6 +1676,8 @@ struct security_hook_heads security_hook_heads __lsm_ro_after_init = {
.inode_init_security =
LIST_HEAD_INIT(security_hook_heads.inode_init_security),
.inode_create = LIST_HEAD_INIT(security_hook_heads.inode_create),
+ .inode_post_create =
+ LIST_HEAD_INIT(security_hook_heads.inode_post_create),
.inode_link = LIST_HEAD_INIT(security_hook_heads.inode_link),
.inode_unlink = LIST_HEAD_INIT(security_hook_heads.inode_unlink),
.inode_symlink =
diff --git a/security/selinux/include/objsec.h b/security/selinux/include/objsec.h
index c21e135..13011038 100644
--- a/security/selinux/include/objsec.h
+++ b/security/selinux/include/objsec.h
@@ -25,8 +25,9 @@
#include <linux/in.h>
#include <linux/spinlock.h>
#include <net/net_namespace.h>
-#include "flask.h"
-#include "avc.h"
+//#include "flask.h"
+//#include "avc.h"
+#include "security.h"
struct task_security_struct {
u32 osid; /* SID prior to last execve */
@@ -52,6 +53,8 @@ struct inode_security_struct {
u32 sid; /* SID of this object */
u16 sclass; /* security class of this object */
unsigned char initialized; /* initialization flag */
+ u32 tag; /* Per-File-Encryption tag */
+ void *pfk_data; /* Per-File-Key data from ecryptfs */
struct mutex lock;
};
diff --git a/security/selinux/include/security.h b/security/selinux/include/security.h
index 308a286..b8e98c1 100644
--- a/security/selinux/include/security.h
+++ b/security/selinux/include/security.h
@@ -12,7 +12,6 @@
#include <linux/dcache.h>
#include <linux/magic.h>
#include <linux/types.h>
-#include "flask.h"
#define SECSID_NULL 0x00000000 /* unspecified SID */
#define SECSID_WILD 0xffffffff /* wildcard SID */
diff --git a/sound/core/seq/oss/seq_oss_midi.c b/sound/core/seq/oss/seq_oss_midi.c
index aaff9ee..b30b213 100644
--- a/sound/core/seq/oss/seq_oss_midi.c
+++ b/sound/core/seq/oss/seq_oss_midi.c
@@ -612,9 +612,7 @@ send_midi_event(struct seq_oss_devinfo *dp, struct snd_seq_event *ev, struct seq
if (!dp->timer->running)
len = snd_seq_oss_timer_start(dp->timer);
if (ev->type == SNDRV_SEQ_EVENT_SYSEX) {
- if ((ev->flags & SNDRV_SEQ_EVENT_LENGTH_MASK) == SNDRV_SEQ_EVENT_LENGTH_VARIABLE)
- snd_seq_oss_readq_puts(dp->readq, mdev->seq_device,
- ev->data.ext.ptr, ev->data.ext.len);
+ snd_seq_oss_readq_sysex(dp->readq, mdev->seq_device, ev);
} else {
len = snd_midi_event_decode(mdev->coder, msg, sizeof(msg), ev);
if (len > 0)
diff --git a/sound/core/seq/oss/seq_oss_readq.c b/sound/core/seq/oss/seq_oss_readq.c
index 046cb586..06b2122 100644
--- a/sound/core/seq/oss/seq_oss_readq.c
+++ b/sound/core/seq/oss/seq_oss_readq.c
@@ -118,6 +118,35 @@ snd_seq_oss_readq_puts(struct seq_oss_readq *q, int dev, unsigned char *data, in
}
/*
+ * put MIDI sysex bytes; the event buffer may be chained, thus it has
+ * to be expanded via snd_seq_dump_var_event().
+ */
+struct readq_sysex_ctx {
+ struct seq_oss_readq *readq;
+ int dev;
+};
+
+static int readq_dump_sysex(void *ptr, void *buf, int count)
+{
+ struct readq_sysex_ctx *ctx = ptr;
+
+ return snd_seq_oss_readq_puts(ctx->readq, ctx->dev, buf, count);
+}
+
+int snd_seq_oss_readq_sysex(struct seq_oss_readq *q, int dev,
+ struct snd_seq_event *ev)
+{
+ struct readq_sysex_ctx ctx = {
+ .readq = q,
+ .dev = dev
+ };
+
+ if ((ev->flags & SNDRV_SEQ_EVENT_LENGTH_MASK) != SNDRV_SEQ_EVENT_LENGTH_VARIABLE)
+ return 0;
+ return snd_seq_dump_var_event(ev, readq_dump_sysex, &ctx);
+}
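Editorial note: the fix above expands a possibly chained sysex event buffer through a per-chunk callback, which is the pattern `snd_seq_dump_var_event()` implements in the kernel. A minimal userspace model of that callback-driven expansion (all names here are invented for illustration):

```c
#include <stddef.h>

/* A chained buffer: each node holds one chunk of the payload. */
struct chunk {
	const char *data;
	int len;
	struct chunk *next;
};

typedef int (*dump_fn)(void *ctx, const char *buf, int count);

/* Walk the chain and hand each chunk to the callback with an opaque
 * context, stopping on the first error — the same shape as
 * snd_seq_dump_var_event() feeding readq_dump_sysex() above. */
static int dump_var_event(const struct chunk *c, dump_fn fn, void *ctx)
{
	int err;

	for (; c; c = c->next) {
		err = fn(ctx, c->data, c->len);
		if (err < 0)
			return err;	/* propagate the callback's error */
	}
	return 0;
}

/* Example callback: count the total bytes seen. */
static int count_bytes(void *ctx, const char *buf, int count)
{
	(void)buf;
	*(int *)ctx += count;
	return 0;
}
```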
+
+/*
* copy an event to input queue:
* return zero if enqueued
*/
diff --git a/sound/core/seq/oss/seq_oss_readq.h b/sound/core/seq/oss/seq_oss_readq.h
index f1463f1..8d033ca 100644
--- a/sound/core/seq/oss/seq_oss_readq.h
+++ b/sound/core/seq/oss/seq_oss_readq.h
@@ -44,6 +44,8 @@ void snd_seq_oss_readq_delete(struct seq_oss_readq *q);
void snd_seq_oss_readq_clear(struct seq_oss_readq *readq);
unsigned int snd_seq_oss_readq_poll(struct seq_oss_readq *readq, struct file *file, poll_table *wait);
int snd_seq_oss_readq_puts(struct seq_oss_readq *readq, int dev, unsigned char *data, int len);
+int snd_seq_oss_readq_sysex(struct seq_oss_readq *q, int dev,
+ struct snd_seq_event *ev);
int snd_seq_oss_readq_put_event(struct seq_oss_readq *readq, union evrec *ev);
int snd_seq_oss_readq_put_timestamp(struct seq_oss_readq *readq, unsigned long curt, int seq_mode);
int snd_seq_oss_readq_pick(struct seq_oss_readq *q, union evrec *rec);
diff --git a/sound/core/seq/seq_clientmgr.c b/sound/core/seq/seq_clientmgr.c
index c411483..45ef591 100644
--- a/sound/core/seq/seq_clientmgr.c
+++ b/sound/core/seq/seq_clientmgr.c
@@ -663,7 +663,7 @@ static int deliver_to_subscribers(struct snd_seq_client *client,
if (atomic)
read_lock(&grp->list_lock);
else
- down_read(&grp->list_mutex);
+ down_read_nested(&grp->list_mutex, hop);
list_for_each_entry(subs, &grp->list_head, src_list) {
/* both ports ready? */
if (atomic_read(&subs->ref_count) != 2)
diff --git a/sound/core/seq/seq_device.c b/sound/core/seq/seq_device.c
index c4acf17..e40a2cb 100644
--- a/sound/core/seq/seq_device.c
+++ b/sound/core/seq/seq_device.c
@@ -148,8 +148,10 @@ void snd_seq_device_load_drivers(void)
flush_work(&autoload_work);
}
EXPORT_SYMBOL(snd_seq_device_load_drivers);
+#define cancel_autoload_drivers() cancel_work_sync(&autoload_work)
#else
#define queue_autoload_drivers() /* NOP */
+#define cancel_autoload_drivers() /* NOP */
#endif
/*
@@ -159,6 +161,7 @@ static int snd_seq_device_dev_free(struct snd_device *device)
{
struct snd_seq_device *dev = device->device_data;
+ cancel_autoload_drivers();
put_device(&dev->dev);
return 0;
}
diff --git a/sound/core/timer_compat.c b/sound/core/timer_compat.c
index 6a437eb..59127b6 100644
--- a/sound/core/timer_compat.c
+++ b/sound/core/timer_compat.c
@@ -133,7 +133,8 @@ enum {
#endif /* CONFIG_X86_X32 */
};
-static long snd_timer_user_ioctl_compat(struct file *file, unsigned int cmd, unsigned long arg)
+static long __snd_timer_user_ioctl_compat(struct file *file, unsigned int cmd,
+ unsigned long arg)
{
void __user *argp = compat_ptr(arg);
@@ -153,7 +154,7 @@ static long snd_timer_user_ioctl_compat(struct file *file, unsigned int cmd, uns
case SNDRV_TIMER_IOCTL_PAUSE:
case SNDRV_TIMER_IOCTL_PAUSE_OLD:
case SNDRV_TIMER_IOCTL_NEXT_DEVICE:
- return snd_timer_user_ioctl(file, cmd, (unsigned long)argp);
+ return __snd_timer_user_ioctl(file, cmd, (unsigned long)argp);
case SNDRV_TIMER_IOCTL_GPARAMS32:
return snd_timer_user_gparams_compat(file, argp);
case SNDRV_TIMER_IOCTL_INFO32:
@@ -167,3 +168,15 @@ static long snd_timer_user_ioctl_compat(struct file *file, unsigned int cmd, uns
}
return -ENOIOCTLCMD;
}
+
+static long snd_timer_user_ioctl_compat(struct file *file, unsigned int cmd,
+ unsigned long arg)
+{
+ struct snd_timer_user *tu = file->private_data;
+ long ret;
+
+ mutex_lock(&tu->ioctl_lock);
+ ret = __snd_timer_user_ioctl_compat(file, cmd, arg);
+ mutex_unlock(&tu->ioctl_lock);
+ return ret;
+}
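Editorial note: the race fix here serializes the compat ioctl path behind the same per-client mutex the native path uses — take the lock, delegate to the unlocked worker, release. The same `__foo`/`foo` locked-wrapper split, in a generic userspace sketch with pthreads (all names invented):

```c
#include <pthread.h>

struct dev_ctx {
	pthread_mutex_t lock;
	long value;
};

/* Unlocked worker: does the real work, assumes the caller holds
 * the lock — analogous to __snd_timer_user_ioctl_compat(). */
static long __dev_ioctl(struct dev_ctx *ctx, unsigned int cmd, long arg)
{
	if (cmd == 1)
		ctx->value = arg;	/* e.g. a state-changing command */
	return ctx->value;
}

/* Public entry point: owns the lock around the worker, so concurrent
 * callers can never interleave inside __dev_ioctl(). */
static long dev_ioctl(struct dev_ctx *ctx, unsigned int cmd, long arg)
{
	long ret;

	pthread_mutex_lock(&ctx->lock);
	ret = __dev_ioctl(ctx, cmd, arg);
	pthread_mutex_unlock(&ctx->lock);
	return ret;
}
```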
diff --git a/sound/drivers/vx/vx_pcm.c b/sound/drivers/vx/vx_pcm.c
index 1146727..ea7b377 100644
--- a/sound/drivers/vx/vx_pcm.c
+++ b/sound/drivers/vx/vx_pcm.c
@@ -1015,7 +1015,7 @@ static void vx_pcm_capture_update(struct vx_core *chip, struct snd_pcm_substream
int size, space, count;
struct snd_pcm_runtime *runtime = subs->runtime;
- if (! pipe->prepared || (chip->chip_status & VX_STAT_IS_STALE))
+ if (!pipe->running || (chip->chip_status & VX_STAT_IS_STALE))
return;
size = runtime->buffer_size - snd_pcm_capture_avail(runtime);
@@ -1048,8 +1048,10 @@ static void vx_pcm_capture_update(struct vx_core *chip, struct snd_pcm_substream
/* ok, let's accelerate! */
int align = pipe->align * 3;
space = (count / align) * align;
- vx_pseudo_dma_read(chip, runtime, pipe, space);
- count -= space;
+ if (space > 0) {
+ vx_pseudo_dma_read(chip, runtime, pipe, space);
+ count -= space;
+ }
}
/* read the rest of bytes */
while (count > 0) {
diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
index fe1d06d..80c40a1 100644
--- a/sound/pci/hda/patch_realtek.c
+++ b/sound/pci/hda/patch_realtek.c
@@ -338,6 +338,7 @@ static void alc_fill_eapd_coef(struct hda_codec *codec)
case 0x10ec0288:
case 0x10ec0295:
case 0x10ec0298:
+ case 0x10ec0299:
alc_update_coef_idx(codec, 0x10, 1<<9, 0);
break;
case 0x10ec0285:
@@ -914,6 +915,7 @@ static struct alc_codec_rename_pci_table rename_pci_tbl[] = {
{ 0x10ec0256, 0x1028, 0, "ALC3246" },
{ 0x10ec0225, 0x1028, 0, "ALC3253" },
{ 0x10ec0295, 0x1028, 0, "ALC3254" },
+ { 0x10ec0299, 0x1028, 0, "ALC3271" },
{ 0x10ec0670, 0x1025, 0, "ALC669X" },
{ 0x10ec0676, 0x1025, 0, "ALC679X" },
{ 0x10ec0282, 0x1043, 0, "ALC3229" },
@@ -3721,6 +3723,7 @@ static void alc_headset_mode_unplugged(struct hda_codec *codec)
break;
case 0x10ec0225:
case 0x10ec0295:
+ case 0x10ec0299:
alc_process_coef_fw(codec, coef0225);
break;
case 0x10ec0867:
@@ -3829,6 +3832,7 @@ static void alc_headset_mode_mic_in(struct hda_codec *codec, hda_nid_t hp_pin,
break;
case 0x10ec0225:
case 0x10ec0295:
+ case 0x10ec0299:
alc_update_coef_idx(codec, 0x45, 0x3f<<10, 0x31<<10);
snd_hda_set_pin_ctl_cache(codec, hp_pin, 0);
alc_process_coef_fw(codec, coef0225);
@@ -3887,6 +3891,7 @@ static void alc_headset_mode_default(struct hda_codec *codec)
switch (codec->core.vendor_id) {
case 0x10ec0225:
case 0x10ec0295:
+ case 0x10ec0299:
alc_process_coef_fw(codec, coef0225);
break;
case 0x10ec0236:
@@ -4004,6 +4009,7 @@ static void alc_headset_mode_ctia(struct hda_codec *codec)
break;
case 0x10ec0225:
case 0x10ec0295:
+ case 0x10ec0299:
alc_process_coef_fw(codec, coef0225);
break;
case 0x10ec0867:
@@ -4098,6 +4104,7 @@ static void alc_headset_mode_omtp(struct hda_codec *codec)
break;
case 0x10ec0225:
case 0x10ec0295:
+ case 0x10ec0299:
alc_process_coef_fw(codec, coef0225);
break;
}
@@ -4183,6 +4190,7 @@ static void alc_determine_headset_type(struct hda_codec *codec)
break;
case 0x10ec0225:
case 0x10ec0295:
+ case 0x10ec0299:
alc_process_coef_fw(codec, coef0225);
msleep(800);
val = alc_read_coef_idx(codec, 0x46);
@@ -6251,6 +6259,7 @@ static int patch_alc269(struct hda_codec *codec)
break;
case 0x10ec0225:
case 0x10ec0295:
+ case 0x10ec0299:
spec->codec_variant = ALC269_TYPE_ALC225;
break;
case 0x10ec0234:
@@ -7249,6 +7258,7 @@ static const struct hda_device_id snd_hda_id_realtek[] = {
HDA_CODEC_ENTRY(0x10ec0294, "ALC294", patch_alc269),
HDA_CODEC_ENTRY(0x10ec0295, "ALC295", patch_alc269),
HDA_CODEC_ENTRY(0x10ec0298, "ALC298", patch_alc269),
+ HDA_CODEC_ENTRY(0x10ec0299, "ALC299", patch_alc269),
HDA_CODEC_REV_ENTRY(0x10ec0861, 0x100340, "ALC660", patch_alc861),
HDA_CODEC_ENTRY(0x10ec0660, "ALC660-VD", patch_alc861vd),
HDA_CODEC_ENTRY(0x10ec0861, "ALC861", patch_alc861),
diff --git a/sound/pci/vx222/vx222_ops.c b/sound/pci/vx222/vx222_ops.c
index af83b3b..8e457ea 100644
--- a/sound/pci/vx222/vx222_ops.c
+++ b/sound/pci/vx222/vx222_ops.c
@@ -269,12 +269,12 @@ static void vx2_dma_write(struct vx_core *chip, struct snd_pcm_runtime *runtime,
/* Transfer using pseudo-dma.
*/
- if (offset + count > pipe->buffer_bytes) {
+ if (offset + count >= pipe->buffer_bytes) {
int length = pipe->buffer_bytes - offset;
count -= length;
length >>= 2; /* in 32bit words */
/* Transfer using pseudo-dma. */
- while (length-- > 0) {
+ for (; length > 0; length--) {
outl(cpu_to_le32(*addr), port);
addr++;
}
@@ -284,7 +284,7 @@ static void vx2_dma_write(struct vx_core *chip, struct snd_pcm_runtime *runtime,
pipe->hw_ptr += count;
count >>= 2; /* in 32bit words */
/* Transfer using pseudo-dma. */
- while (count-- > 0) {
+ for (; count > 0; count--) {
outl(cpu_to_le32(*addr), port);
addr++;
}
@@ -307,12 +307,12 @@ static void vx2_dma_read(struct vx_core *chip, struct snd_pcm_runtime *runtime,
vx2_setup_pseudo_dma(chip, 0);
/* Transfer using pseudo-dma.
*/
- if (offset + count > pipe->buffer_bytes) {
+ if (offset + count >= pipe->buffer_bytes) {
int length = pipe->buffer_bytes - offset;
count -= length;
length >>= 2; /* in 32bit words */
/* Transfer using pseudo-dma. */
- while (length-- > 0)
+ for (; length > 0; length--)
*addr++ = le32_to_cpu(inl(port));
addr = (u32 *)runtime->dma_area;
pipe->hw_ptr = 0;
@@ -320,7 +320,7 @@ static void vx2_dma_read(struct vx_core *chip, struct snd_pcm_runtime *runtime,
pipe->hw_ptr += count;
count >>= 2; /* in 32bit words */
/* Transfer using pseudo-dma. */
- while (count-- > 0)
+ for (; count > 0; count--)
*addr++ = le32_to_cpu(inl(port));
vx2_release_pseudo_dma(chip);
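Editorial note: the loop rewrites in this hunk (`while (count-- > 0)` → `for (; count > 0; count--)`) also change what the counter holds after the loop: the post-decrement in the `while` test runs one extra time and leaves the counter at -1. A small demonstration of the difference:

```c
/* Post-decrement form: the final failed test still decrements,
 * so a full run ends with the counter at -1, not 0. */
static int drain_while(int count)
{
	while (count-- > 0)
		;		/* transfer one word */
	return count;
}

/* For-loop form: the counter is only decremented after a successful
 * iteration, so a full run ends with it at exactly 0. */
static int drain_for(int count)
{
	for (; count > 0; count--)
		;		/* transfer one word */
	return count;
}
```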
diff --git a/sound/pcmcia/vx/vxp_ops.c b/sound/pcmcia/vx/vxp_ops.c
index 2819729..56aa1ba 100644
--- a/sound/pcmcia/vx/vxp_ops.c
+++ b/sound/pcmcia/vx/vxp_ops.c
@@ -369,12 +369,12 @@ static void vxp_dma_write(struct vx_core *chip, struct snd_pcm_runtime *runtime,
unsigned short *addr = (unsigned short *)(runtime->dma_area + offset);
vx_setup_pseudo_dma(chip, 1);
- if (offset + count > pipe->buffer_bytes) {
+ if (offset + count >= pipe->buffer_bytes) {
int length = pipe->buffer_bytes - offset;
count -= length;
length >>= 1; /* in 16bit words */
/* Transfer using pseudo-dma. */
- while (length-- > 0) {
+ for (; length > 0; length--) {
outw(cpu_to_le16(*addr), port);
addr++;
}
@@ -384,7 +384,7 @@ static void vxp_dma_write(struct vx_core *chip, struct snd_pcm_runtime *runtime,
pipe->hw_ptr += count;
count >>= 1; /* in 16bit words */
/* Transfer using pseudo-dma. */
- while (count-- > 0) {
+ for (; count > 0; count--) {
outw(cpu_to_le16(*addr), port);
addr++;
}
@@ -411,12 +411,12 @@ static void vxp_dma_read(struct vx_core *chip, struct snd_pcm_runtime *runtime,
if (snd_BUG_ON(count % 2))
return;
vx_setup_pseudo_dma(chip, 0);
- if (offset + count > pipe->buffer_bytes) {
+ if (offset + count >= pipe->buffer_bytes) {
int length = pipe->buffer_bytes - offset;
count -= length;
length >>= 1; /* in 16bit words */
/* Transfer using pseudo-dma. */
- while (length-- > 0)
+ for (; length > 0; length--)
*addr++ = le16_to_cpu(inw(port));
addr = (unsigned short *)runtime->dma_area;
pipe->hw_ptr = 0;
@@ -424,7 +424,7 @@ static void vxp_dma_read(struct vx_core *chip, struct snd_pcm_runtime *runtime,
pipe->hw_ptr += count;
count >>= 1; /* in 16bit words */
/* Transfer using pseudo-dma. */
- while (count-- > 1)
+ for (; count > 1; count--)
*addr++ = le16_to_cpu(inw(port));
/* Disable DMA */
pchip->regDIALOG &= ~VXP_DLG_DMAREAD_SEL_MASK;
diff --git a/sound/soc/codecs/adau17x1.c b/sound/soc/codecs/adau17x1.c
index 439aa3f..79dcb1e 100644
--- a/sound/soc/codecs/adau17x1.c
+++ b/sound/soc/codecs/adau17x1.c
@@ -91,6 +91,27 @@ static int adau17x1_pll_event(struct snd_soc_dapm_widget *w,
return 0;
}
+static int adau17x1_adc_fixup(struct snd_soc_dapm_widget *w,
+ struct snd_kcontrol *kcontrol, int event)
+{
+ struct snd_soc_codec *codec = snd_soc_dapm_to_codec(w->dapm);
+ struct adau *adau = snd_soc_codec_get_drvdata(codec);
+
+ /*
+ * If we are capturing, toggle the ADOSR bit in Converter Control 0 to
+ * avoid losing SNR (workaround from ADI). This must be done after
+ * the ADC(s) have been enabled. According to the data sheet, it is
+ * normally illegal to set this bit when the sampling rate is 96 kHz,
+ * but according to ADI it is acceptable for this workaround.
+ */
+ regmap_update_bits(adau->regmap, ADAU17X1_CONVERTER0,
+ ADAU17X1_CONVERTER0_ADOSR, ADAU17X1_CONVERTER0_ADOSR);
+ regmap_update_bits(adau->regmap, ADAU17X1_CONVERTER0,
+ ADAU17X1_CONVERTER0_ADOSR, 0);
+
+ return 0;
+}
+
static const char * const adau17x1_mono_stereo_text[] = {
"Stereo",
"Mono Left Channel (L+R)",
@@ -122,7 +143,8 @@ static const struct snd_soc_dapm_widget adau17x1_dapm_widgets[] = {
SND_SOC_DAPM_MUX("Right DAC Mode Mux", SND_SOC_NOPM, 0, 0,
&adau17x1_dac_mode_mux),
- SND_SOC_DAPM_ADC("Left Decimator", NULL, ADAU17X1_ADC_CONTROL, 0, 0),
+ SND_SOC_DAPM_ADC_E("Left Decimator", NULL, ADAU17X1_ADC_CONTROL, 0, 0,
+ adau17x1_adc_fixup, SND_SOC_DAPM_POST_PMU),
SND_SOC_DAPM_ADC("Right Decimator", NULL, ADAU17X1_ADC_CONTROL, 1, 0),
SND_SOC_DAPM_DAC("Left DAC", NULL, ADAU17X1_DAC_CONTROL0, 0, 0),
SND_SOC_DAPM_DAC("Right DAC", NULL, ADAU17X1_DAC_CONTROL0, 1, 0),
diff --git a/sound/soc/codecs/adau17x1.h b/sound/soc/codecs/adau17x1.h
index bf04b7e..db35003 100644
--- a/sound/soc/codecs/adau17x1.h
+++ b/sound/soc/codecs/adau17x1.h
@@ -129,5 +129,7 @@ bool adau17x1_has_dsp(struct adau *adau);
#define ADAU17X1_CONVERTER0_CONVSR_MASK 0x7
+#define ADAU17X1_CONVERTER0_ADOSR BIT(3)
+
#endif
diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c
index bd19fad..c17f262 100644
--- a/sound/soc/intel/boards/bytcr_rt5640.c
+++ b/sound/soc/intel/boards/bytcr_rt5640.c
@@ -807,7 +807,6 @@ static int snd_byt_rt5640_mc_probe(struct platform_device *pdev)
static struct platform_driver snd_byt_rt5640_mc_driver = {
.driver = {
.name = "bytcr_rt5640",
- .pm = &snd_soc_pm_ops,
},
.probe = snd_byt_rt5640_mc_probe,
};
diff --git a/sound/soc/intel/boards/bytcr_rt5651.c b/sound/soc/intel/boards/bytcr_rt5651.c
index eabff3a..ae49f81 100644
--- a/sound/soc/intel/boards/bytcr_rt5651.c
+++ b/sound/soc/intel/boards/bytcr_rt5651.c
@@ -317,7 +317,6 @@ static int snd_byt_rt5651_mc_probe(struct platform_device *pdev)
static struct platform_driver snd_byt_rt5651_mc_driver = {
.driver = {
.name = "bytcr_rt5651",
- .pm = &snd_soc_pm_ops,
},
.probe = snd_byt_rt5651_mc_probe,
};
diff --git a/sound/soc/sunxi/sun4i-spdif.c b/sound/soc/sunxi/sun4i-spdif.c
index 88fbb3a..048de15 100644
--- a/sound/soc/sunxi/sun4i-spdif.c
+++ b/sound/soc/sunxi/sun4i-spdif.c
@@ -403,14 +403,6 @@ static struct snd_soc_dai_driver sun4i_spdif_dai = {
.name = "spdif",
};
-static const struct snd_soc_dapm_widget dit_widgets[] = {
- SND_SOC_DAPM_OUTPUT("spdif-out"),
-};
-
-static const struct snd_soc_dapm_route dit_routes[] = {
- { "spdif-out", NULL, "Playback" },
-};
-
static const struct of_device_id sun4i_spdif_of_match[] = {
{ .compatible = "allwinner,sun4i-a10-spdif", },
{ .compatible = "allwinner,sun6i-a31-spdif", },
diff --git a/sound/usb/usb_audio_qmi_svc.c b/sound/usb/usb_audio_qmi_svc.c
index 0aeabfe..e2cebf15 100644
--- a/sound/usb/usb_audio_qmi_svc.c
+++ b/sound/usb/usb_audio_qmi_svc.c
@@ -68,6 +68,8 @@ struct intf_info {
unsigned long xfer_buf_va;
size_t xfer_buf_size;
phys_addr_t xfer_buf_pa;
+ unsigned int data_ep_pipe;
+ unsigned int sync_ep_pipe;
u8 *xfer_buf;
u8 intf_num;
u8 pcm_card_num;
@@ -415,6 +417,7 @@ static int prepare_qmi_response(struct snd_usb_substream *subs,
int protocol, card_num, pcm_dev_num;
void *hdr_ptr;
u8 *xfer_buf;
+ unsigned int data_ep_pipe = 0, sync_ep_pipe = 0;
u32 len, mult, remainder, xfer_buf_len, sg_len, i, total_len = 0;
unsigned long va, va_sg, tr_data_va = 0, tr_sync_va = 0;
phys_addr_t xhci_pa, xfer_buf_pa, tr_data_pa = 0, tr_sync_pa = 0;
@@ -531,6 +534,7 @@ static int prepare_qmi_response(struct snd_usb_substream *subs,
subs->data_endpoint->ep_num);
goto err;
}
+ data_ep_pipe = subs->data_endpoint->pipe;
memcpy(&resp->std_as_data_ep_desc, &ep->desc, sizeof(ep->desc));
resp->std_as_data_ep_desc_valid = 1;
@@ -548,6 +552,7 @@ static int prepare_qmi_response(struct snd_usb_substream *subs,
pr_debug("%s: implicit fb on data ep\n", __func__);
goto skip_sync_ep;
}
+ sync_ep_pipe = subs->sync_endpoint->pipe;
memcpy(&resp->std_as_sync_ep_desc, &ep->desc, sizeof(ep->desc));
resp->std_as_sync_ep_desc_valid = 1;
@@ -704,6 +709,8 @@ static int prepare_qmi_response(struct snd_usb_substream *subs,
uadev[card_num].info[info_idx].xfer_buf_va = va;
uadev[card_num].info[info_idx].xfer_buf_pa = xfer_buf_pa;
uadev[card_num].info[info_idx].xfer_buf_size = len;
+ uadev[card_num].info[info_idx].data_ep_pipe = data_ep_pipe;
+ uadev[card_num].info[info_idx].sync_ep_pipe = sync_ep_pipe;
uadev[card_num].info[info_idx].xfer_buf = xfer_buf;
uadev[card_num].info[info_idx].pcm_card_num = card_num;
uadev[card_num].info[info_idx].pcm_dev_num = pcm_dev_num;
@@ -732,6 +739,26 @@ static int prepare_qmi_response(struct snd_usb_substream *subs,
static void uaudio_dev_intf_cleanup(struct usb_device *udev,
struct intf_info *info)
{
+
+ struct usb_host_endpoint *ep;
+
+ if (info->data_ep_pipe) {
+ ep = usb_pipe_endpoint(udev, info->data_ep_pipe);
+ if (!ep)
+ pr_debug("%s: no data ep\n", __func__);
+ else
+ usb_stop_endpoint(udev, ep);
+ info->data_ep_pipe = 0;
+ }
+ if (info->sync_ep_pipe) {
+ ep = usb_pipe_endpoint(udev, info->sync_ep_pipe);
+ if (!ep)
+ pr_debug("%s: no sync ep\n", __func__);
+ else
+ usb_stop_endpoint(udev, ep);
+ info->sync_ep_pipe = 0;
+ }
+
uaudio_iommu_unmap(MEM_XFER_RING, info->data_xfer_ring_va,
info->data_xfer_ring_size);
info->data_xfer_ring_va = 0;
diff --git a/techpack/.gitignore b/techpack/.gitignore
index 58da0b8..829fdf6 100644
--- a/techpack/.gitignore
+++ b/techpack/.gitignore
@@ -1,2 +1,3 @@
# ignore all subdirs except stub
+*
!/stub/
diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c
index 4e778ea..415a9c3 100644
--- a/tools/perf/util/parse-events.c
+++ b/tools/perf/util/parse-events.c
@@ -309,10 +309,11 @@ __add_event(struct list_head *list, int *idx,
event_attr_init(attr);
- evsel = perf_evsel__new_idx(attr, (*idx)++);
+ evsel = perf_evsel__new_idx(attr, *idx);
if (!evsel)
return NULL;
+ (*idx)++;
evsel->cpus = cpu_map__get(cpus);
evsel->own_cpus = cpu_map__get(cpus);
diff --git a/tools/testing/selftests/firmware/fw_filesystem.sh b/tools/testing/selftests/firmware/fw_filesystem.sh
index 5c495ad..d8ac9ba 100755
--- a/tools/testing/selftests/firmware/fw_filesystem.sh
+++ b/tools/testing/selftests/firmware/fw_filesystem.sh
@@ -48,18 +48,18 @@
NAME=$(basename "$FW")
-if printf '\000' >"$DIR"/trigger_request; then
+if printf '\000' >"$DIR"/trigger_request 2> /dev/null; then
echo "$0: empty filename should not succeed" >&2
exit 1
fi
-if printf '\000' >"$DIR"/trigger_async_request; then
+if printf '\000' >"$DIR"/trigger_async_request 2> /dev/null; then
echo "$0: empty filename should not succeed (async)" >&2
exit 1
fi
# Request a firmware that doesn't exist, it should fail.
-if echo -n "nope-$NAME" >"$DIR"/trigger_request; then
+if echo -n "nope-$NAME" >"$DIR"/trigger_request 2> /dev/null; then
echo "$0: firmware shouldn't have loaded" >&2
exit 1
fi
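The added `2> /dev/null` redirections silence writes that are *expected* to fail, so the kernel's error text does not pollute the test log while the script still branches on the exit status. A small sketch of the pattern, using `/dev/full` (writes to it always fail with ENOSPC on Linux) in place of the sysfs trigger files:

```shell
#!/bin/sh
# A write that is supposed to fail: discard its stderr and report
# only our own diagnostic. /dev/full stands in for a sysfs file
# that rejects the write.
if printf '\000' > /dev/full 2> /dev/null; then
	echo "unexpected: write succeeded" >&2
	exit 1
fi
echo "write failed quietly, as expected"
```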
diff --git a/tools/testing/selftests/firmware/fw_userhelper.sh b/tools/testing/selftests/firmware/fw_userhelper.sh
index b9983f8..01c626a 100755
--- a/tools/testing/selftests/firmware/fw_userhelper.sh
+++ b/tools/testing/selftests/firmware/fw_userhelper.sh
@@ -64,9 +64,33 @@
echo "ABCD0123" >"$FW"
NAME=$(basename "$FW")
+DEVPATH="$DIR"/"nope-$NAME"/loading
+
# Test failure when doing nothing (timeout works).
-echo 1 >/sys/class/firmware/timeout
-echo -n "$NAME" >"$DIR"/trigger_request
+echo -n 2 >/sys/class/firmware/timeout
+echo -n "nope-$NAME" >"$DIR"/trigger_request 2>/dev/null &
+
+# Give the kernel some time to load the loading file, must be less
+# than the timeout above.
+sleep 1
+if [ ! -f $DEVPATH ]; then
+ echo "$0: fallback mechanism immediately cancelled"
+ echo ""
+ echo "The file never appeared: $DEVPATH"
+ echo ""
+	echo "This might be a udev rule set up by your distribution to"
+	echo "immediately cancel all fallback requests; it must be removed"
+	echo "before running these tests. To confirm, look for"
+ echo "a firmware rule like /lib/udev/rules.d/50-firmware.rules"
+ echo "and see if you have something like this:"
+ echo ""
+ echo "SUBSYSTEM==\"firmware\", ACTION==\"add\", ATTR{loading}=\"-1\""
+ echo ""
+	echo "If you do, remove this file or comment out this line before"
+ echo "proceeding with these tests."
+ exit 1
+fi
+
if diff -q "$FW" /dev/test_firmware >/dev/null ; then
echo "$0: firmware was not expected to match" >&2
exit 1